HTTP vs Websockets: A performance comparison


In many web applications, websockets are used to push messages to a client for real-time updates. One of the more interesting and often overlooked features is that most websocket libraries also support directly responding to websocket messages from a client (acknowledgements in message queue-speak). The following example increments a global counter and returns the new value to the client that sent the getCounter message:
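A minimal sketch of such a handler (the names and structure here are assumed, not the article's original listing): a Socket.io-style getCounter message whose acknowledgement callback carries the new value back to the sender.

```javascript
// Sketch of a Socket.io-style request/response handler. In a real server
// this would be registered inside io.on('connection', socket =>
// socket.on('getCounter', onGetCounter)); here only the handler is shown.
let counter = 0;

function onGetCounter(reply) {
  counter += 1;
  reply(counter); // the acknowledgement goes back only to the requesting client
}
```

The acknowledgement callback is what turns a one-way push channel into a request/response mechanism comparable to HTTP.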

Feathers uses this to expose its APIs both ways: via a traditional HTTP REST API and completely through websockets, in which case it also sends real-time updates. You can use either transport or both at the same time. Since it is just a small and fast abstraction that wraps Express, it does not add much overhead and performs about the same as implementing the routes and websocket handlers by hand (but without having to write all the repetitive glue code).

Usually we recommend using a websocket connection when getting started with Feathers because you get real-time updates for free and it is faster than a traditional HTTP connection. Until now I never had any real numbers to put behind the “faster” claim, so it was time to run some benchmarks. This post compares the performance of request/response round trips between HTTP and equivalent websocket calls. You can find the repository with the server, test website and benchmark scripts at daffl/websockets-vs-http. For a more detailed analysis of real-time message formats, also see A comparison between WebSockets, server-sent events, and polling by Alexis Abril.

Test setup

The server used for testing is pretty much the same as the example on the homepage. It has one simple service that implements the get method and replies with the id it got passed. This means a request to /messages/testing will return a JSON object like { "id": "testing" }. The equivalent websocket message is socket.emit('get', 'messages', 'testing', function(error, message) {});.
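The service behind this can be sketched in a few lines (a simplified stand-in for the homepage example, not the exact server code from the benchmark repository):

```javascript
// Simplified stand-in for the test server's messages service:
// the get method just echoes back the id it was called with.
const messagesService = {
  async get(id) {
    return { id };
  }
};

// Registered with a Feathers app, this one object is served both as
// GET /messages/:id over HTTP and as the 'get' websocket message above.
```

The point of the abstraction is that the same service method answers both transports, which is what makes the two benchmarks below directly comparable.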

To get a more realistic environment, the server is hosted on Heroku using a 1x standard Dyno. The benchmarks were run on a 2014 MacBook Pro with a 2.8 GHz i7 and 16 GB of RAM in the latest Chrome browser.

Browser client

For the browser I set up a small web page that allows running and timing a fixed number of (parallel) requests and a (sequential) series of requests within a certain timeframe.

The browser websocket vs http test page
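The timing logic amounts to little more than the following (a sketch of the approach, not the test page's actual code; makeRequest stands in for either a fetch call or a promisified socket.emit):

```javascript
// Time how long it takes to complete n requests fired in parallel.
// makeRequest is any function returning a promise, e.g. an HTTP fetch
// or a promisified websocket emit.
async function timeParallel(makeRequest, n) {
  const start = Date.now();
  await Promise.all(Array.from({ length: n }, () => makeRequest()));
  return Date.now() - start;
}
```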

None of the browser tests include the time (~190ms) it takes to establish the websocket connection. The times for a single HTTP request and the equivalent websocket request look like this:

On average a single HTTP request took about 107ms and the equivalent websocket request about 83ms. For a larger number of parallel requests things started to look quite different. 50 requests via websockets took ~180ms, while completing the same number of HTTP requests took around 5 seconds. Overall HTTP managed about 10 requests per second while websockets could handle almost 4000 in the same time. The main reason for this large difference is that the browser limits the number of concurrent HTTP connections (6 per host by default in Chrome), while there is no limit on how many messages a single websocket connection can send or receive. We will look at some benchmarks without that restriction a little later.
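The effect of that connection cap can be modeled with a small concurrency pool (a sketch only; Chrome's limit of 6 per host is enforced by the browser itself, not by application code):

```javascript
// Run async tasks with at most `limit` in flight at once, mimicking the
// browser's per-host HTTP connection cap.
async function runWithLimit(tasks, limit = 6) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (synchronous, so safe)
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With 50 requests and a limit of 6, the batch effectively runs as roughly nine sequential rounds, which is why the parallel HTTP numbers degrade so sharply.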

Data transfer

Another interesting number to look at is the amount of data transferred by each protocol. Once established, a websocket connection does not have to send headers with its messages, so we can expect the total data transfer per message to be less than for an equivalent HTTP request. Establishing the connection takes one HTTP request (~230 bytes) and one 86-byte websocket frame. Excluding this initial setup, the data transfer for actual requests looked like this:

One HTTP request and response took a total of 282 bytes while the request and response websocket frames weighed in at a total of 54 bytes (31 bytes for the request message and 24 bytes for the response). This difference will be less significant for larger payloads, however, since the HTTP header size doesn't change.

Load benchmarks

Now that we have some insight into how a single browser client behaves and how much data is transferred, I was also curious about load from multiple clients and ran some benchmarks. For benchmarking HTTP requests I used Autocannon, and for websockets I settled on websocket-bench, which has similar options and good support for Socket.io. The only common data point both tools support is the total runtime of the benchmark, which is what we will compare here by charting the total (fixed) number of requests per connection over the time it took to make them (as requests per second). The benchmarks each run 100 concurrent connections and, unlike the browser numbers above, include the time it takes to establish the websocket connection. Here are the results for 1, 10 and 50 requests per connection:

Benchmark results for 100 concurrent connections and 1, 10 and 50 requests each

As we can see, making a single request per connection is about 50% slower using websockets, since the connection has to be established first. This overhead is smaller but still noticeable for ten requests. At 50 requests from the same connection, websockets are already 50% faster. To get a better idea of peak throughput I ran the same benchmark with a larger number (500, 1000 and 2000) of requests per connection:

Benchmark results for 100 concurrent connections and 500 to 2000 requests each

Here we can see that the HTTP benchmark peaks at about ~950 requests per second while the websocket benchmark serves about ~3900 requests per second.


An important thing to note is that even when used via websockets, the communication with the Feathers server is still RESTful. Although most often used in the context of HTTP, Representational State Transfer (REST) is an architectural design pattern and not a transport protocol. The HTTP protocol is just one implementation of the REST architecture. What this means for web and real-time APIs in general is the topic for another post.

When it comes to scalability, one advantage of a REST architecture is statelessness which means that any server can handle any request and there is no need to synchronize any shared state other than the database. Feathers applies the same concept to its websocket connections. The only information attached to the socket is the authenticated user (to decide what real-time events to send). As long as real-time events are synchronized across multiple instances, this architecture can be scaled across multiple servers similar to a traditional HTTP setup.


The benchmarks I ran are by no means a comprehensive analysis of all the nuances of the performance difference between HTTP and websockets. They did, however, confirm my initial impression that in many cases websockets can be faster than a traditional HTTP API.

Enabling different communication protocols and being able to transparently switch between them without changing your application logic was one of the key design goals of Feathers. There is nothing wrong with web frameworks that help handle HTTP requests and responses with newer language features or different design patterns, or that are simply faster. However, I still believe that a protocol-independent architecture and the ability to dynamically choose the most appropriate transport will be crucial for the future of connected APIs and applications. In the case of our benchmark, we got a 400% performance boost by using a different protocol, without having to change anything in our actual application.

About Nguyễn Viết Hiền

Passionate, Loyal