WebSockets, caution required!

almost 9 years ago

When developers hear that WebSockets are going to land in Rails in the near future, they get all giddy with excitement.

But your users don’t care if you use WebSockets:

  • Users want “delightful realtime web apps”.

  • Developers want “delightfully easy to build realtime web apps”.

  • Operations want “delightfully easy to deploy, scale and manage realtime web apps”.

If WebSockets get us there, great, but it is an implementation detail that comes at high cost.

Do we really need ultra high performance, full duplex Client-Server communication?

WebSockets provide a simple API to broadcast information to clients and a simple API to ship information from the clients to the web server.

A realtime channel for sending information from the server to the client is very welcome. In fact, it is already part of HTTP/1.1: chunked transfer encoding lets the server stream data down an open response.
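
To make that concrete, here is a minimal Rack sketch of the server-to-client half over plain chunked encoding (illustrative only, not how MessageBus is implemented; it also parks a thread per connected client):

    # config.ru -- stream data to the client over HTTP/1.1 chunked encoding.
    # Run behind a streaming-capable server such as Puma or Thin.
    require "time"

    run lambda { |env|
      body = Enumerator.new do |out|
        5.times do
          out << "data: #{Time.now.utc.iso8601}\n\n"   # one chunk per second
          sleep 1
        end
      end
      [200, { "Content-Type" => "text/event-stream", "Cache-Control" => "no-cache" }, body]
    }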

However, a brand new API for shipping information to the server from web browsers introduces a new decision point for developers:

  • When a user posts a message on chat, do I make a RESTful call and POST a message or do I bypass REST and use WebSockets?

  • If I use the new backchannel, how do I debug it? How do I log what is going on? How do I profile it? How do I ensure it does not slow down other traffic to my site? Do I also expose this endpoint in a controller action? How do I rate limit this? How do I ensure my background WebSocket thread does not exhaust my db connection limit?

:warning: If an API allows hundreds of different connections concurrent access to the database, bad stuff will happen.
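
If socket- or long-poll-handling threads do need to touch the database, at minimum put a hard cap in front of it. A sketch using the connection_pool and pg gems (the pool size, timeout and query are illustrative):

    # Cap concurrent database access from socket/long-poll threads.
    require "connection_pool"
    require "pg"

    DB_POOL = ConnectionPool.new(size: 5, timeout: 1) do
      PG.connect(dbname: "app_production")
    end

    def unread_count(channel)
      # Blocks for up to 1 second if all 5 connections are busy, instead of
      # letting hundreds of sockets pile onto the database at once.
      DB_POOL.with do |conn|
        conn.exec_params("SELECT COUNT(*) FROM messages WHERE channel = $1", [channel])
            .getvalue(0, 0).to_i
      end
    end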

Introducing this backchannel is not a clear win and comes with many caveats.

I do not think the majority of web applications need a new backchannel into the web server. On a technical level you would opt for such a construct if you were managing 10k interactive console sessions on the web. You can transport data more efficiently to the server, in that the web server no longer needs to parse HTTP headers, Rails does not need to do a middleware crawl and so on.

But the majority of web applications out there are predominantly read applications. Lots of users benefit from live updates, but very few change information. It is incredibly rare to be in a situation where the HTTP header parsing optimisation is warranted; this work takes well under a millisecond. Bypassing Rack middleware, on the other hand, can be significant, especially when full stack middleware crawls are a 10-20ms affair. That, however, is an implementation detail we can optimise, not a reason to rule out REST for client-to-server communication.

For realtime web applications we need simple APIs to broadcast information reliably and quickly to clients. We do not need new mechanisms for shipping information to the server.

What’s wrong with WebSockets?

WebSockets had a very tumultuous ride with a super duper unstable spec during the journey. The side effects of this joyride show in quite a few spots. Take a look at Ilya Grigorik’s very complete implementation. 5 framing protocols, 3 handshake protocols and so on.

At last, today, this is all stable and we have RFC6455 which is implemented ubiquitously across all major modern browsers. However, there was some collateral damage:

  • IE9 and earlier are not supported

  • Many libraries – including the most popular Ruby one – ship with multiple implementations, despite Hixie 75 being flawed.

I am confident the collateral damage will, in time, be healed. That said, even the most perfect implementation comes with significant technical drawbacks.

1) Proxy servers can wreak havoc with WebSockets running over unsecured HTTP

The proxy server issue is quite widespread. Our initial release of Discourse used WebSockets; however, reports kept coming in of “missing updates on topics” and so on. Amongst the various proxy pariahs was my mobile phone network, Telstra, which basically let you keep an open socket but did not let any data through.

To work around the “WebSocket is dead but still appears open” problem, WebSocket implementers usually introduce a ping/pong message. This solution works fine provided you are running over HTTPS, but over HTTP all bets are off and rogue proxies will break you.
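
The usual shape of that workaround, sketched against a hypothetical connection object (connections, ping and awaiting_pong? are illustrative, not any particular gem's API):

    # Heartbeat sketch: ping every open connection periodically and drop the
    # ones that never answered, so "dead but still open" sockets get reaped.
    HEARTBEAT_INTERVAL = 30 # seconds, illustrative

    Thread.new do
      loop do
        connections.each do |conn|
          if conn.awaiting_pong?       # previous ping was never answered
            conn.close                 # assume a proxy silently dropped it
          else
            conn.ping("keepalive")     # client is expected to answer with a pong
          end
        end
        sleep HEARTBEAT_INTERVAL
      end
    end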

That said, “… but you must have HTTPS” is the weakest argument against WebSocket adoption; I want all of the web to be HTTPS, it is the future and it is getting cheaper every day. But you should know that weird stuff will definitely happen if you deploy WebSockets over unsecured HTTP. Unfortunately for us at Discourse, dropping support for HTTP is not an option quite yet, as it would hurt adoption.

2) Web browsers allow huge numbers of open WebSockets

The infamous 6 connections per host limit does not apply to WebSockets. Instead, a far bigger limit holds (255 in Chrome and 200 in Firefox). This blessing is also a curse. It means that end users opening lots of tabs can cause large amounts of load and consume large amounts of continuous server resources. Open 20 tabs with a WebSocket-based application and you are risking 20 connections, unless the client or server mitigates this.

There are quite a few ways to mitigate:

  • If we have a reliable queue driving stuff, we can shut down sockets after a while (or when in a background tab) and reconnect later on and catch up.

  • If we have a reliable queue driving stuff, we can throttle and turn back high numbers of TCP connections at our proxy or even iptables, but it is hard to guess if we are turning away the right connections.

  • On Firefox and Chrome we can share a connection by using a shared web worker, which is unlikely to be supported on mobile and is absent from Microsoft’s offerings. I noticed Facebook are experimenting with shared workers (Gmail and Twitter are not).

  • MessageBus uses browser visibility APIs to slow down communication on out-of-focus tabs, falling back to a 2 minute poll on background tabs.

3) WebSockets and HTTP/2 transport are not unified

HTTP/2 is able to cope with the multiple tab problem much more efficiently than WebSockets. A single HTTP/2 connection can be multiplexed across tabs, which makes loading pages in new tabs much faster and significantly reduces the cost of polling or long polling from a networking point of view. Unfortunately, HTTP/2 does not play nice with WebSockets. There is no way to tunnel a WebSocket over HTTP/2; they are separate protocols.

There is an expired draft to unify the two protocols, but no momentum around it.

HTTP/2 has the ability to stream data to clients by sending multiple DATA frames, meaning that streaming data from the server to the client is fully supported.

Unlike running a socket server, which involves a fair amount of complex Ruby code, running an HTTP/2 server is super easy. HTTP/2 is now in NGINX mainline; you can simply enable the protocol and you are done.

4) Implementing WebSockets efficiently on the server side requires epoll, kqueue or I/O Completion ports.

Efficient long polling, HTTP streaming and Server Sent Events are fairly simple to implement in pure Ruby, since we do not need to repeatedly run IO.select. The most complicated structure we need to deal with is a TimerThread.
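
At its most naive, a pure-Ruby long poll is just a blocking read with a deadline. A sketch (this version parks a thread per waiting client, which is exactly what Rack hijacking, discussed in the comments below, avoids):

    # Naive long-polling sketch: hold the request until a message arrives on
    # the channel or a 25 second deadline passes, then answer.
    require "json"
    require "timeout"

    CHANNELS = Hash.new { |h, k| h[k] = Queue.new }

    class LongPoll
      def call(env)
        channel = env["PATH_INFO"]
        message = begin
          Timeout.timeout(25) { CHANNELS[channel].pop }
        rescue Timeout::Error
          nil
        end

        if message
          [200, { "Content-Type" => "application/json" }, [message.to_json]]
        else
          [204, {}, []]   # nothing happened, the client simply polls again
        end
      end
    end

    # elsewhere in the app: CHANNELS["/chat"] << { text: "Sam said hello" }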

Efficient socket servers, on the other hand, are rather complicated in the Ruby world. We need to keep track of potentially thousands of connections, dispatching Ruby methods when new data is available on any of the sockets.

Ruby ships with IO.select, which allows you to watch an array of sockets for new data; however, it is fairly inefficient because it forces the kernel to keep walking big arrays to figure out whether you have any new data. Additionally, it has a hard limit of 1024 entries (depending on how you compiled your kernel); you can not select on longer lists. EventMachine works around this limitation by using epoll (and kqueue on BSD).
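
For reference, the IO.select loop looks roughly like this; every pass hands the kernel the whole array again (sockets and handle_frame are placeholders):

    loop do
      # The kernel rescans the entire array on every call, and the array is
      # capped (commonly at 1024 descriptors).
      readable, _writable, _errored = IO.select(sockets, nil, nil, 1)
      next unless readable

      readable.each do |sock|
        data = sock.read_nonblock(4096, exception: false)
        handle_frame(sock, data) if data.is_a?(String)
      end
    end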

Implementing epoll correctly is not easy.

5) Load balancing WebSockets is complicated

If you decide to run a farm of WebSockets, proper load balancing is complicated. If you find out that your socket servers are overloaded and decide to quickly add a few servers to the mix, you have no clean way of re-balancing current traffic. Because connections are held open indefinitely, you have to terminate the overloaded servers, at which point you are exposing yourself to a flood of connections (which can be somewhat mitigated by the clients). Furthermore, if “on reconnect” means refreshing the page, restarting a socket server will flood your web servers.

With WebSockets you are forced to run TCP proxies as opposed to HTTP proxies. TCP proxies can not inject headers, rewrite URLs or perform many of the roles we traditionally let our HTTP proxies take care of.

Denial of service attacks that are usually mitigated by front end HTTP proxies can not be handled by TCP proxies; what happens if someone connects to a socket and starts pumping messages into it that cause database reads in your Rails app? A single connection can wreak enormous amounts of havoc.

6) Sophisticated WebSocket implementations end up re-inventing HTTP

Say we need to be subscribed to 10 channels in the web browser (a chat channel, a notifications channel and so on); clearly we will not want to open 10 different WebSocket connections. We end up multiplexing commands for multiple channels over a single WebSocket.

Posting “Sam said hello” to the “/chat” channel ends up looking very much like HTTP. We have “routing”, which specifies the channel we are posting on; this looks very much like HTTP headers. We have a payload, which looks like an HTTP body. Unlike HTTP/2, we are unlikely to get header compression or even body compression.
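
To make the comparison concrete, a multiplexed frame usually ends up looking something like this (the field names are made up, and socket stands in for whatever client you use):

    require "json"

    # "Sam said hello" on the "/chat" channel, once routing and metadata are
    # bolted on: a hand-rolled, uncompressed mini-HTTP request.
    frame = {
      channel: "/chat",               # plays the role of the request path
      headers: { "user" => "sam" },   # plays the role of HTTP headers
      body:    "Sam said hello"       # plays the role of the HTTP body
    }

    socket.send(frame.to_json)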

7) WebSockets give you the illusion of reliability

WebSockets ship with a very appealing API.

  • You can connect
  • You have a reliable connection to the server due to TCP
  • You can send and receive messages

But… the Internet is a very unreliable place. Laptops go offline, you turn on airplane mode when you have had it with all the annoying marketing calls, and the Internet sometimes simply does not work.

This means that this appealing API still needs to be backed by reliable messaging; you need to be able to catch up with a backlog of messages and so on.

When implementing WebSockets you need to treat them just as though they were simple HTTP calls that can go missing, be processed at the server out of order, and so on. They only provide the illusion of reliability.

WebSockets are an implementation detail, not a feature

At best, WebSockets are a value add. They provide yet another transport mechanism.

There are very valid technical reasons many of the biggest sites on the Internet have not adopted them. Twitter use HTTP/2 + polling, Facebook and Gmail use long polling. Saying WebSockets are the only way, and the way of the future, is wrongheaded. HTTP/2 may end up winning this battle due to the huge number of WebSocket connections web browsers allow, and HTTP/3 may unify the protocols.

  • You may want to avoid running dedicated socket servers (which at scale you are likely to want to run, so sockets do not interfere with standard HTTP traffic). At Discourse we run no dedicated long polling servers; adding capacity is trivial and capacity is always balanced.

  • You may be happy with a 30 second delay and be fine with polling.

  • You may prefer the consolidated transport HTTP/2 offers and go for long polling + streaming on HTTP/2

Messaging reliability is far more important than WebSockets

MessageBus is backed by a reliable pub/sub channel. Messages are globally sequenced. Messages are locally sequenced to a channel. This means that at any point you can “catch up” with old messages (capped). API-wise, it means that when a client subscribes it has the option to tell the server what position in the channel it is at:

// subscribe to the chat channel at position 7
MessageBus.subscribe('/chat', function(msg){ alert(msg); }, 7);
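
On the server, the Ruby side of that contract looks roughly like this (a sketch based on the message_bus gem's documented API; exact method and accessor names may vary by version):

    # Publish: MessageBus assigns the message a position in the channel.
    MessageBus.publish("/chat", "Sam said hello")

    # Server-side subscription (also usable for server-to-server messaging).
    MessageBus.subscribe("/chat") do |msg|
      puts msg.data
    end

    # Catch up: fetch everything published to the channel since position 7.
    MessageBus.backlog("/chat", 7).each { |msg| puts msg.data }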

Due to the reliable underpinnings of MessageBus it is immune to a class of issues that affect pure WebSocket implementations.

This underpinning makes it trivial to write very efficient cross-process caches (https://github.com/discourse/discourse/blob/master/lib/distributed_cache.rb), amongst many other uses.

Reliable messaging is a well understood concept. You can use Erlang, RabbitMQ, ZeroMQ, Redis, PostgreSQL or even MySQL to implement reliable messaging.

With reliable messaging implemented, multiple transport mechanisms can be implemented with ease. This “unlocks” the ability to do long-polling, long-polling with chunked encoding, EventSource, polling, forever iframes etc in your framework.

:warning: When picking a realtime framework, prefer reliable underpinnings to WebSockets.

Where do I stand?

Discourse does not use WebSockets. Discourse docker ships with HTTP/2 templates.

We have a realtime web application. I can make a realtime chat room just fine in 200 lines of code. I can run it just fine in Rails 3 or 4 today by simply including the gem. We handle millions of long polls a day for our hosted customers. As soon as someone posts a reply to a topic in Discourse it pops up on the screen.

We regard MessageBus as a fundamental and integral part of our stack. It enables reliable server/server live communication and reliable server/client live communication. It is transport agnostic. It has one dependency on rack and one dependency on redis, that is all.

When I hear people getting excited about WebSockets, this is the picture I see in my mind.

In a world that already had HTTP/2, it is very unlikely we would have seen WebSockets ratified, as HTTP/2 covers the vast majority of the use cases WebSockets solve.

Special thank you to Ilya, Evan, Matt, Jeff, Richard and Jeremy for reviewing this article.

Comments

Sam Saffron almost 9 years ago

One thing some people found confusing was that the article is lacking a bit on the “concrete recommendations” front. What do I recommend you do when you need realtime web apps?

  1. Always prefer frameworks that provide multiple transport protocols with automatic fallback (and manual override). Examples are socket.io, SignalR, Nchan and MessageBus for Ruby (there are many others not listed here; feel free to mention them in the comments).

  2. Always prefer frameworks that support backing your channels with a reliable queue: socket.io, SignalR, Nchan, MessageBus.

  3. Consider avoiding WebSockets altogether and only enabling the following 3 protocols, provided they fall back cleanly: long-polling with chunked encoding, long-polling and polling. EventSource is simply a convenience client API; on the server it is long-polling with chunked encoding.

  4. Always deploy your realtime backend over SSL if possible.

  5. Always prefer having an HTTP/2 backend for your realtime backend. (keeping in mind that setup will get complicated if you also want to enable WebSockets)

  6. For realtime, prefer using a framework / gem / library over building it yourself; there are tons of little edge cases that take many months to nut out.

  7. In some cases WebSockets may be a good fit and the best tool for the job. Games often require ultra fast full duplex communications. Interactive SSH sessions on a web page would also be a great fit.

Thomas OR almost 9 years ago

Good stuff - unfortunately a lot of this knowledge is typically gathered in hindsight. Load balancing/clustering and connecting to application logic are non-trivial, and most developers don’t realize the implications of running an async persistent channel and how it affects application logic…

I’ve implemented web socket and push servers in EventMachine previously, and I realized getting a web socket working is just the tip of the iceberg. So for a new application we took a different strategy and used TorqueBox (for exactly the reasons you point out…)

TorqueBox does a good job, and if you use the JMS bridge, web sockets connect directly into the managed infrastructure (it also automatically passes the web session to the socket session, etc.). Web socket subscription channels become virtual JMS queues and we’re able to leverage a lot of JEE ‘goodness’ (never thought I’d say that!) with queue management, multiplexing message processors, reliable messages, virtual queues, etc. (And it’s all built in - all you do is start it up and it works).

Our upward channel is REST (as you pointed out, you need the dispatch anyway), but the downward channel to the client is web sockets so we can manage client subscriptions and broadcast channels (via STOMP). So far it has scaled really well. We use HTTPS/nginx/wss and Sinatra/MongoDB…

For reliability we number every web socket message on the way down, and if the web client notices a skip, it tosses state and resyncs; not easy, but it works well at the end of the day.
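
In rough terms the downward framing is something like this (just a sketch; the helper names are illustrative):

    # Every message carries a per-channel sequence number; the client keeps
    # the last number it saw and resyncs if it spots a gap.
    require "json"

    def broadcast(channel, data)
      seq = next_sequence_for(channel)   # illustrative helper, e.g. backed by the broker
      payload = { channel: channel, seq: seq, data: data }.to_json
      subscribers(channel).each { |client| client.send(payload) }
    end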

almost 9 years ago

Hey Sam, thanks for the hat tip to Nchan.

One thing that seriously bugs me about Websocket is the complete lack of persistence metadata in the protocol. If you want robustness when faced with unreliable connections, you need to bundle your persistent queue state information within the websocket data.

EventSource gives you Event IDs which are meant for resuming interrupted connections. Long-polling can bundle queue state in the headers. But websocket? Nah. You can’t write an interruption-robust server for websockets while remaining data agnostic.

My feeling is, with http/2 long-polling is going to get pretty interesting again.

almost 9 years ago

MessageBus looks really useful, but I’m having a little trouble understanding how one would manage to get a good number of users on a server. Most typical Ruby applications today are using a forking application server like Unicorn to fork off a handful of worker processes (1 per CPU core or so), or a threading application server like Puma that manages a relatively small thread pool. If you have lots of clients long-polling, won’t you need to greatly inflate the number of processes or threads to avoid having all workers tied up serving long-polling clients? I can see a few issues with doing that, and Puma specifically warns against it:

be careful not to set the number of maximum threads to a very large number, as you may exhaust resources on the system (or hit resource limits).

My biggest concern would be abusive clients making lots of connections to other endpoints that don’t spend most of their time sleeping, essentially DOSing the server.

Sam Saffron almost 9 years ago

Rack’s little secret is Rack Hijack, which is implemented in Puma, Thin, Unicorn and Passenger. Thin has its own little bit of magic that you get access to by throwing :async. MessageBus supports all of these (I know Faye supports hijack; it may also support Thin-specific interfaces).

These interfaces mean that I can take ownership of a socket (and the web server relinquishes ownership) … MessageBus can service thousands of concurrent long polls this way using only 2 threads.
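
In code, a full hijack looks roughly like this (a sketch; LONG_POLL_REACTOR stands in for the small shared pool of threads that baby-sits hijacked sockets):

    # Rack full-hijack sketch. After the hijack the app owns the raw socket
    # and must write the HTTP response bytes itself; the web server's worker
    # thread immediately goes back to serving other requests.
    class HijackedPoll
      def call(env)
        return [500, {}, ["rack.hijack not supported"]] unless env["rack.hijack"]

        env["rack.hijack"].call            # take ownership of the socket
        io = env["rack.hijack_io"]

        LONG_POLL_REACTOR.watch(io)        # a couple of shared threads will write
                                           # to, and eventually close, this socket

        [200, {}, []]                      # ignored by the server once hijacked
      end
    end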

Navaneeth Kn almost 9 years ago

Nice blog, Sam.

rack.hijack is new info to me. Thanks for explaining it.

After the hijack, will the app server release the socket completely and proceed to handling the next request? When do you close the hijacked socket?

For Android & iOS apps, I was planning to use Pusher, but this post made me think again. Do you think the issues mentioned here are only about the way browsers handle WebSockets, or will this be a problem in general for all WebSocket implementations?

Does using MessageBus and long polling make sense for an Android app?

Sam Saffron almost 9 years ago

Out of the box we close long polls after 29 seconds (this works around some proxies being naughty); this is configurable.

Pretty much impossible to answer your question :slight_smile:

  • It is possible a standard TCP server would do the trick for you.
  • It is possible websockets fit best
  • It is possible long polling would work best

What is the problem at hand? Can you describe it with a bit more details.

Tiago Cardoso almost 9 years ago

Your post just reflects a lot of my concerns with this Rails 5 release. It doesn’t do anything remotely inventive.

  • Action Cable: As your post reflects, we (mostly) don’t really need web sockets. Action Cable would be really cool if it implemented fallbacks (SSE, long polling) and added support for plugging in your own pubsub. If my server already has RabbitMQ or ZeroMQ, now I have to install Redis, with an undocumented history in pubsub performance and whose replication and reliability history is bumpy at best. We’re back to full-opinionated mode.
  • A direct by-product of the last point: now I have to install eventmachine AND celluloid, even if I’m running unicorn AND my app is just a CRUD app. Memory footprint alarm set?
  • rails-api: It was fine as it was (an outsider gem), but they had to add it to Rails. In a world where you can use a more API-friendly framework like Grape, why?

Source Maps will be cool, but that’s sprockets. HTTP2 support will be cool, but that’ll be rack. In the end, we will be left with yet another Rails release where the apps are not mountable, like in any other Rack framework. Is it at least faster? Since I know that you had a lot of benchmarks for it, you could be the guy who measures the benefit of upgrading a Rails 4 app to Rails 5.

Navaneeth Kn almost 9 years ago

Sure.

Our Android app shows products from different e-commerce stores like Amazon, Walmart, etc. We fetch the products via the APIs provided by these stores and keep them in our system. The Android app receives data using an HTTP API written in Rails.

The use case is this-

  • When a user is on a product page, we fire a backend message to update the latest price
  • Since this gets processed in the background, it might take a couple of seconds to come back
  • When the BG job comes back, we need to update the client saying the price has refreshed.

Currently this is implemented using short polls.

Sam Saffron almost 9 years ago

I see,

Your options are

  • Implement rack hijack yourself, then you can simply kick off the job and once done stream the info to the socket you hijacked. This is likely to be the best performing option for you.

  • Use MessageBus, which means you need to be listening on MessageBus on a particular channel, then publish to it once job is done. This has slightly higher overhead than implementing hijack yourself.

  • Write a custom TCP server to handle this, which is the most complicated option but probably the best performing.

  • Web sockets, which involve using a web socket library in your application and so on which seems very much overkill here.

If you can get your head around rack hijack, it’s probably the best fit imo, because your Android client code base will be simplest. You simply issue a GET to /latest/price_on_topic … it hijacks and frees up a slot on your server, you funnel it to a threadpool that runs the job, and when the job finishes you ship the data down the socket.

The second option, which would be slightly less code on your side, is to use something like MessageBus.

Sam Saffron almost 9 years ago

Regarding Action Cable, it looks like there is a PR to remove Celluloid, but kicking out EventMachine is significantly harder. I really hope that both the transport and the message queue are adapterized for release, though I am unclear whether the team are interested in reliable messaging.

Regarding source maps, we already have them for Discourse, so you can have a look at how we do it. We noticed sprockets was being REALLY slow compiling our assets, so we shell out to uglifyjs directly, which is much faster and, as a bonus, gives us source maps.

My understanding with rails-api is that it is still an outsider gem unlike action cable.

http://rubybench.org/ is not showing a perf increase in Rails 5, but we do need to add some more comprehensive benches, help in that department is very welcome.

HTTP/2 is very unlikely to find its way into Rack in the near future; it would be a massive undertaking. See “Rack 2 should be HTTP/2 friendly” (tenderlove/the_metal, issue #5 on GitHub).

Tiago Cardoso almost 9 years ago

You’re right about celluloid, and I’ve seen something about adapters regarding pubsub. Kicking EM as a base dependency is possible, but not without creating specific drivers in websocket-driver for puma and celluloid-io (I think reel has something). Still, the “only websockets” choice…

Rails API is indeed a feature of Rails 5. One can use API scaffolding to create Rails apps which remove the views from the equation. If only they had something like that for ActiveRecord…

I think HTTP/2 is indeed a goal of Rack 2; check this presentation. I don’t know how ready it is though, but it seems that the middleware layer will be cleaned up.

I’ll check about source maps and the rubybench, thanks a lot.

Gareth Bult over 8 years ago

Anyone who’s still interested in WebSockets might like to take a look at http://crossbar.io. While I can see the technical validity of all the points listed, as a developer of Crossbar/websockets based applications, in practice I’m not coming up against any of the issues documented here (and my ‘why websockets kick ass’ list is significant …)

Sam Saffron over 8 years ago

Keep in mind crossbar builds on http://autobahn.ws/ which offers long poll fallbacks and so on. Stuff like WAMP prefers websockets but works fine with other transports.

Gareth Bult over 8 years ago

Indeed, although Autobahn is really an integral part of Crossbar and they are developed by the same people as part of the same ‘system’. My point is that if you extend the context of ‘websockets’ to include (something like) Crossbar (and Autobahn), not only does it provide arguments to cancel pretty much all of the points covered, it actually does a 180 on some of them. Take load-balancing, for example: Crossbar is able to load-balance incoming connections over multiple providers using a number of different built-in schedulers, without having to mess with the likes of HAProxy or Varnish. Furthermore, DDOS prevention for normal HTTP applications that ‘expect’ potentially thousands of connections per second can be difficult, whereas WebSocket/WAMP applications that make one persistent connection and stick to it until (for example) the tab is closed might make one per hour … they are very easy to protect with simple ‘linux pset’ firewall rulesets.

At the moment there are some features in web browsers (I’m talking about HTTP caching in particular) that mean you still want to use HTTP for [some] static objects, but when developing websocket-based SPAs, rather than seeing any of the above issues, what I do see are real performance improvements of between 5x and 20x (according to Google dev tools) when it comes to response and latency … and that’s before we consider the applications of two-way real time communications like WebRTC. Seriously, it’s not just me; take a look at StackOverflow and see what people think of long-poll vs websockets. To quote one response with 300+ votes, "WebSockets - is definitely the future … " !!!

Sam Saffron over 8 years ago

This I very much doubt.

Compare HTTP/2 chunked encoding to web sockets. Show me a real benchmark doing equivalent work that is 5x to 20x faster using web sockets. It is technically impossible for this to be true. Stuff is governed by the speed of light. To be 5x faster you are going to have to have a trick that allows you to bypass physics.

Gareth Bult over 8 years ago

Erm, no, not really. There are a number of factors at play …

  1. Packet size … HTTP uses headers that are typically 500 bytes+. WAMP uses a fraction of this, so websocket applications exchange less data, hence are more responsive and, in real terms, much faster. If you’re writing an application that swaps 10 byte packets with the server, probably a LOT more than 20x faster.

  2. Typically web servers can support pipelining but turn it off, so every GET needs to open a new connection, whereas with websockets you’re always working with an open connection. Again, fewer TCP transactions, less data transferred, hence lower latency and, in real terms, much faster.

  3. When you use websockets, it doesn’t really make sense to reconnect for each page, so you tend to implement your application as an SPA. So you only ever load your assets (CSS, IMG etc) once, rather than once per GET. I know the browser will cache stuff, but you still need to clear and reload the DOM for every GET. If you’re just replacing a component within the page, all this work goes away: less processing, more speed. (The framework I use makes it look like you’re navigating around the site, but (relatively) transparently updates stuff “in page”.)

You can “see” the timing difference by looking at Google developer tools while loading a page, then looking at an equivalent transaction for a websocket based page transition. Now I appreciate this isn’t “just” down to websockets, but even without the SPA stuff that (for me) “comes with it”, (1) and (2) still make it notably quicker in my use-cases. I’m sure you could find instances where it’s slower, if you have massive pages for example and use web-server compression, but typically if you’re “thinking” about using websockets, it’s because you want something that’s a little more “interactive”, which means you’re generally swapping lots of relatively small packets.

Maybe technically impossible within your context, but not so when implementing and timing a real web page against the equivalent websocket implementation - physics doesn’t come into it; a change in the way we implement online applications (and/or websites) does.

over 8 years ago

WebSockets is a fairly broken protocol with limited use-cases. Here’s an entertaining take on the topic:

It’s an older article but worth reading for the burn.

over 8 years ago

In some cases WebSockets may be a good fit and the best tool for the job. Games often require ultra fast full duplex communications.

WebSockets uses TCP and you should never use TCP for games. You really should be using UDP, but I think the only way to do that right now is to use WebRTC’s RTCDataChannel, which doesn’t have great cross-browser support.

Steven Gittleson almost 8 years ago

Since WebSockets are not supported in HTTP/2, my question therefore is: in the world of HTTP/2 and Angular 2, are there better alternatives to using SignalR for real-time communication between the server (Azure) and the Chrome browser? We have just started building a large new web application that will only support the Chrome browser.

Thanks,

-Steven

Sam Saffron almost 8 years ago

My understanding is that SignalR is transport agnostic, you should be fine.

almost 8 years ago

I disagree with most of the points mentioned in the article. WebSockets were not intended for 100% reliability; if you are making a game that needs to update at least 30 times every second, HTTP cannot handle that type of load. This is also similar to the way console games connect to their servers using UDP, which is even less reliable than WebSockets. You also only mentioned big WebSocket implementations such as socket.io, leaving out better implementations such as uWebSockets. Even though they are not supported in earlier versions of Internet Explorer, they have been standardized by the W3C and are now in every major browser, even on mobile. The most important part of making a web application is picking your protocol, and if you thought WebSockets were bad for the application you were creating, don’t hate on them; they are very useful for purposes other than the single application you are making. I made Orpe.ga and I use WebSockets every day :slight_smile:

Sam Saffron almost 8 years ago

Sounds to me like your online game is exactly that; websockets are a great fit for Orpe.ga, slither.io and various other highly concurrent real-time games, though yes, you would prefer UDP, so maybe experiments with WebRTC make sense.

Use the right tool for the job.

Alex over 7 years ago

Oh, BTW, another problem we just bumped into is websockets not working with Windows-integrated authentication in Chrome and (iirc) Firefox.

Anurag Singh about 6 years ago

Hi all,

Very useful discussion.

I need a suggestion here.

We are developing a real-time application where users can make changes to an on-screen chart (Kendo UI with Angular 4), and those updates should be shown to all connected users.
We have an N-tier architecture: UI → 4 front-end servers (WebLogic) → 2 back-end servers (WebLogic).
We are using Spring WebSocket for data replication among the servers.
Spring recommends using a STOMP broker like RabbitMQ in a multi-server environment, but they don’t say why.
Currently I am using in-memory STOMP brokers, and every server is a client as well as a server.
Can you please tell me if this solution will work (we will have only 50-100 users working at a time)?
If it doesn’t work, I am thinking about using SSE with a WebLogic JMS queue (event-driven architecture).
Can you please advise and give me some pointers on how to proceed further?

Thanks,
Dheeraj

Sam Saffron about 6 years ago

For the basic problem of “broadcast” vs “two-way comms”, SSE or chunked-encoding HTTP is going to be an easier solution to deploy and maintain than web sockets. You will be able to drive all the updates via HTTP/2 and not need to worry about running a second protocol.

As to how you should structure your JVM backend, I have no idea, I am not a Java expert so I am afraid I can not be of much help there.

almost 6 years ago

Should have read this months ago… now we’re feeling the pain. So what’s going to be the trick then to load-balance properly?

Sam Saffron almost 6 years ago

Give up on HTTP-based load balancing in favour of balancing on the IP address with straight-up TCP balancing. HAProxy allows you to do that; it has limitations but can be made to work.

Zelus almost 4 years ago

Hey Sam, curious to hear if your stance / concrete recommendations have changed since the introduction of uwebsocket in 2019

Sam Saffron almost 4 years ago

Looks like a nice web socket library, but does not really change anything about my article

Steve almost 3 years ago

Hey Sam, wondering if you’ve seen https://anycable.io/ and thoughts there. Looks promising IMO.

Sam Saffron almost 3 years ago

Evil Martians are a great company and have contributed tons of interesting things to the ecosystem. I am sure that what is written on the box is correct: it reduces memory usage over Rails’ native Action Cable and scales better.

That said, the fundamentals here still stand; fascinatingly, if I were to rewrite the article today, not much would change.

