As another answer correctly notes, WebSocket average latencies are basically equivalent to raw TCP or UDP, especially on a local network, so it should be fine for your development phase. The WebRTC Data Channel API is designed to be very similar to WebSockets (once the connection is established), so it should be fairly simple to integrate once it is widely available.

Your question implies that UDP is probably what you want for a low-latency game, and there is truth to that. You may be aware of this already since you are writing a game, but for those who aren't, here is a quick primer on TCP vs. UDP for real-time games:

TCP is an in-order, reliable transport mechanism; UDP is best-effort. TCP will deliver all the data that is sent, in the order that it was sent. UDP packets are delivered as they arrive, may be out of order, and may have gaps (on a congested network, UDP packets are dropped before TCP packets). TCP sounds like a big improvement, and it is for most types of network traffic, but those features come at a cost: a delayed or dropped packet causes all the following packets to be delayed as well (to guarantee in-order delivery).

Real-time games generally can't tolerate the kind of delays that can result from TCP sockets, so they use UDP for most of the game traffic and add mechanisms to deal with dropped and out-of-order data (e.g. adding sequence numbers to the payload data). It's not such a big deal if you miss one position update of the enemy player, because a couple of milliseconds later you will receive another one (and probably won't even notice). But if you don't get position updates for 500 ms and then suddenly receive them all at once, that results in terrible gameplay.

All that said, on a local wired network packets are almost never delayed or dropped, so TCP is perfectly fine as an initial development target.
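The sequence-number mechanism mentioned above can be sketched in a few lines. This is a minimal illustration, not code from any particular library: each position update carries a monotonically increasing sequence number, and the receiver simply discards any update older than the newest one it has already applied.

```javascript
// Sketch of the sequence-number mechanism described above (names are
// illustrative): the receiver keeps the highest sequence number seen
// so far and drops anything stale or duplicated.

function createUpdateFilter() {
  let lastSeq = -1;
  return function acceptUpdate(update) {
    if (update.seq <= lastSeq) {
      return null; // stale or duplicate update: ignore it
    }
    lastSeq = update.seq;
    return update; // newest update so far: apply it
  };
}

// Packets arrive out of order; only newer ones get through.
const accept = createUpdateFilter();
accept({ seq: 1, x: 10, y: 20 }); // applied
accept({ seq: 3, x: 12, y: 21 }); // applied (seq 2 was lost; fine)
accept({ seq: 2, x: 11, y: 20 }); // → null: arrived late, discarded
```

Note that a lost update is never waited for, which is exactly the behaviour that distinguishes this scheme from TCP's in-order delivery.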
I would recommend developing your game using WebSockets on a local wired network and then moving to the WebRTC Data Channel API once it is widely available.

On a LAN, you can get round-trip times for messages over WebSocket of around 200 microseconds (from browser JS to a WebSocket server and back), which is similar to raw ICMP pings. On a MAN it's around 10 ms, over a WAN (residential ADSL to a server in the same country) around 30 ms, and so on up to around 120-200 ms via 3.5G. The point is: WebSocket adds virtually no latency to the one you will get anyway, based on the network.

The wire-level overhead of WebSocket (compared to raw TCP) is between 2 octets (unmasked payload of length < 126) and 14 octets (masked payload of length > 64k) per message (these numbers assume the message is not fragmented into multiple WebSocket frames). Very low. For a more detailed analysis of WebSocket wire-level overhead, please see this blog post; it includes analysis covering layers beyond WebSocket as well.

More so: with a WebSocket implementation capable of streaming processing, you can, after the initial WebSocket handshake, start a single WebSocket message and frame in each direction and then send up to 2^63 octets with no further overhead at all. Essentially this renders WebSocket a fancy prelude for raw TCP. Caveat: intermediaries may fragment the traffic at their own discretion. However, if you run WSS (that is, secure WS = WS over TLS), no intermediaries can interfere, and there you are: raw TCP, with an HTTP-compatible prelude (the WS handshake).

WebRTC uses RTP (which is UDP-based) for media transport, but needs a signaling channel in addition (which can be WebSocket, for example). RTP is optimized for loss-tolerant real-time media transport. "Real-time games" often means transferring not media, but things like player positions. Note: WebRTC transport can be over RTP, or secured when over SRTP.
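The 2-to-14-octet range quoted above follows directly from the WebSocket framing rules in RFC 6455: a 2-octet base header, an extended length field (2 octets for payloads of 126-65535 octets, 8 octets beyond that), and a 4-octet masking key on client-to-server frames. A small helper makes the arithmetic concrete (this is an illustration of the frame layout, not a parser):

```javascript
// Per-frame header size under RFC 6455 framing. This only illustrates
// the 2..14 octet overhead range discussed above.
function frameOverhead(payloadLength, masked) {
  let octets = 2;                           // FIN/opcode + mask bit/7-bit length
  if (payloadLength > 65535) octets += 8;   // 64-bit extended length field
  else if (payloadLength >= 126) octets += 2; // 16-bit extended length field
  if (masked) octets += 4;                  // masking key (client-to-server only)
  return octets;
}

frameOverhead(100, false);   // → 2  (small unmasked server-to-client frame)
frameOverhead(100000, true); // → 14 (large masked client-to-server frame)
```

The streaming trick mentioned above is the same idea taken to its limit: pay one header for a single enormous frame, and the per-octet overhead approaches zero.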
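When the Data Channel API does become an option, it can be configured for exactly the UDP-like semantics described in the primer above: unordered delivery with no retransmission of lost messages. The sketch below assumes a standard `RTCPeerConnection`; `createGameChannel` and the `"game"` label are illustrative names, not part of any API.

```javascript
// UDP-like semantics for game traffic over a WebRTC data channel:
// allow out-of-order delivery and never retransmit a lost update
// (a later position update supersedes it anyway).
const GAME_CHANNEL_OPTIONS = {
  ordered: false,    // out-of-order delivery is acceptable, like UDP
  maxRetransmits: 0, // drop lost messages instead of resending them
};

// Illustrative helper: takes any object exposing createDataChannel
// (normally an RTCPeerConnection in the browser).
function createGameChannel(peerConnection) {
  return peerConnection.createDataChannel("game", GAME_CHANNEL_OPTIONS);
}
```

The resulting channel exposes `send`/`onmessage` much like a WebSocket, which is what makes the develop-on-WebSockets-first approach recommended above a reasonably smooth migration path.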