Long polling is the simplest way of having a persistent connection with the server. It doesn't use any specific protocol like WebSocket or Server-Sent Events.
Being very easy to implement, it’s also good enough in a lot of cases.
The simplest way to get new information from the server is periodic polling. That is, regular requests to the server: “Hello, I’m here, do you have any information for me?”. For example, once every 10 seconds.
In response, the server first takes notice that the client is online, and second, sends a packet of the messages it has received up to that moment.
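For illustration, here's a minimal client-side sketch of such periodic polling. The `/poll` endpoint and the `showMessage` helper are assumptions made for this sketch, not part of the article's demo:

```js
// A sketch of periodic polling: ask the server for news every 10 seconds.
// The "/poll" endpoint and showMessage() are hypothetical placeholders.
async function poll() {
  let response = await fetch("/poll");
  if (response.ok) {
    let messages = await response.text();
    if (messages) showMessage(messages);
  }
}

// Repeat the request every 10 seconds, whether or not there's anything new
setInterval(poll, 10000);
```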
That works, but there are downsides:
1. Messages are passed with a delay of up to 10 seconds (the time between requests).
2. Even if there are no messages, the server gets a request every 10 seconds, even if the user switched somewhere else or is asleep. That's quite a load to handle, performance-wise.
So, if we’re talking about a very small service, the approach may be viable, but generally, it needs an improvement.
So-called “long polling” is a much better way to poll the server.
It’s also very easy to implement, and delivers messages without delays.
The flow:
1. A request is sent to the server.
2. The server doesn't close the connection until it has a message to send.
3. When a message appears, the server responds to the request with it.
4. The browser makes a new request immediately.
The situation where the browser has sent a request and keeps a pending connection with the server is standard for this method. The connection is reestablished only after a message is delivered.
If the connection is lost, because of, say, a network error, the browser immediately sends a new request.
A sketch of a client-side `subscribe` function that makes long requests:
```js
async function subscribe() {
  let response = await fetch("/subscribe");

  if (response.status == 502) {
    // Status 502 is a connection timeout error,
    // may happen when the connection was pending for too long,
    // and the remote server or a proxy closed it
    // let's reconnect
    await subscribe();
  } else if (response.status != 200) {
    // An error - let's show it
    showMessage(response.statusText);
    // Reconnect in one second
    await new Promise(resolve => setTimeout(resolve, 1000));
    await subscribe();
  } else {
    // Get and show the message
    let message = await response.text();
    showMessage(message);
    // Call subscribe() again to get the next message
    await subscribe();
  }
}

subscribe();
```
As you can see, the `subscribe` function makes a fetch, then waits for the response, handles it and calls itself again.
Server should be ok with many pending connections
The server architecture must be able to work with many pending connections.
Certain server architectures run one process per connection, so there are as many processes as there are connections, and each process consumes quite a bit of memory. Too many connections will simply consume it all.
That’s often the case for backends written in languages like PHP and Ruby.
Servers written using Node.js usually don’t have such problems.
That said, it isn't a programming language issue. Most modern languages, including PHP and Ruby, allow you to implement a proper backend. Just please make sure that your server architecture works fine with many simultaneous connections.
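For instance, an event-driven server can hold many pending responses in memory without spawning a process per connection. Here's a minimal Node.js sketch of the server side of the flow above. It's an assumption made for illustration, not the article's demo code: the `/publish` endpoint, the "broadcast to all waiting clients" behavior, and port 8080 are choices made just for this sketch (only `/subscribe` matches the client code above):

```js
// A minimal Node.js sketch (an assumption, not the article's demo):
// pending /subscribe responses are kept in memory and answered
// only when a message arrives via /publish.
const http = require('http');

// Responses that are currently waiting for a message
let waiting = [];

http.createServer((req, res) => {
  if (req.url == '/subscribe') {
    // Don't respond yet: keep the connection open until a message appears
    res.setHeader('Content-Type', 'text/plain');
    waiting.push(res);

    // If the client disconnects, forget this pending response
    req.on('close', () => {
      waiting = waiting.filter(r => r !== res);
    });
  } else if (req.url == '/publish' && req.method == 'POST') {
    // Read the request body and deliver it to every waiting client
    let body = '';
    req.on('data', chunk => body += chunk);
    req.on('end', () => {
      for (let pendingRes of waiting) pendingRes.end(body);
      waiting = [];
      res.end('ok');
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```

A production server would also need per-client message queues and timeouts, but the core idea matches the flow above: the response simply isn't sent until there's something to deliver.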
Long polling works great in situations when messages are rare.
If messages come very often, then the request-and-receive pattern described above becomes saw-like.
Every message is a separate request, supplied with headers, authentication overhead, and so on.
So, in this case, another method is preferred, such as WebSocket or Server-Sent Events.