
Sticky Sessions & jsessionid

Load Balancing

When building software applications that scale, numerous technological challenges begin to pop up. One of these is sessions. In your application, sessions keep user-specific information (things like the username) in memory so that the server doesn't have to query the database for the same information repeatedly. Sessions are also powerful for securing your database queries and keeping them scoped to the currently logged-in user.


updateCurrentUserEmail(newEmail) {
  // The logged-in user is already held in the session, so no lookup or client-supplied id is needed.
  self.currentUser.email = newEmail;
  self.currentUser.save();
}

Without sessions, this operation might be more complicated: you would need a secret user token, which adds more database queries overall. Or, if you're not thinking about security as much as you should, it might look like this:


updateCurrentUserEmail(user, newEmail) {
  // The client tells the server which user to modify, so any caller can change any user's email.
  user.email = newEmail;
  user.save();
}

This operation is NOT advised and could easily be exploited by a client spoofing the HTTP request. The real problem, though, is scaling the app server: with vertical scaling, costs rise exponentially, but if we could scale horizontally, our costs would scale roughly linearly, which is far more ideal. If we are using Heroku, we can magically tell Heroku to use 20 dynos and we are now running 20 dynos, but Heroku doesn't have sticky sessions, so we can't use in-memory sessions for our Heroku applications.

Instead, what I recommend is setting up our own Load Balancer in front of our application servers that forwards all of our traffic to them, so that we can run multiple application servers and scale horizontally. The basic formula for this is something called Round Robin, which is really simple. If you have 3 servers A, B, & C, all requests are fed to the servers in rotation, so request 1 goes to A, 2 -> B, 3 -> C, 4 -> A, 5 -> B, …


a % n = number of the server that handles the request
a = request number
n = number of servers
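
As a rough illustration (the server list and counter here are made up for the example, not taken from any particular balancer), round robin is just a modulo over the request count:


// Minimal round-robin picker: request a goes to server a % n.
const servers = ["A", "B", "C"]; // n = 3
let requestNumber = 0;           // a

function pickServer() {
  const server = servers[requestNumber % servers.length];
  requestNumber++;
  return server;
}

// Requests 1..6 land on A, B, C, A, B, C
for (let i = 0; i < 6; i++) {
  console.log(pickServer());
}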

This works really well, however sessions become completely unusable, because sessions are stored in memory on a single server. We can store sessions in Redis, or something similar, but that greatly increases the number of Redis queries we have to make, which slows down requests and limits how many requests our servers can handle before becoming maxed out.
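
As a rough sketch of that cost (an Express-style middleware; redisClient is assumed to be an already-connected Redis client whose get returns a promise, and all names here are illustrative), storing sessions only in Redis means every single request pays a Redis round trip before doing any real work:


// Without sticky sessions: every request must pull the session from Redis,
// adding one network round trip per request.
// Assumes redisClient is an already-connected Redis client (e.g. node-redis).
async function loadSessionFromRedis(req, res, next) {
  const sid = req.cookies.jsessionid;
  if (sid) {
    const stored = await redisClient.get('session:' + sid); // one Redis call on every request
    req.session = stored ? JSON.parse(stored) : null;
  }
  next();
}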

What’s the solution?
Sticky Sessions!

Imagine if a user's first request hits Server A, and all further requests from that user still hit only Server A. This would mean we could store that user's session in memory on Server A and limit our Redis calls. We do this by setting a JSESSIONID. I use numerous systems for load balancing, including HAProxy and custom systems written in Nginx and Go, but we need to set up the JSESSIONID on all of them, because JSESSIONID is just an arbitrary name for the cookie that tells the Load Balancer which server your requests should go to.
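
As a rough sketch of what this looks like in an HAProxy config (server names and addresses here are invented for the example), the cookie directive tells HAProxy to prefix the application's existing JSESSIONID cookie with the name of the server that handled the first request, so later requests stick to it:


backend app_servers
    balance roundrobin                 # or leastconn
    cookie JSESSIONID prefix           # piggyback on the app's existing JSESSIONID cookie
    server app_a 10.0.0.1:8080 check cookie app_a
    server app_b 10.0.0.2:8080 check cookie app_b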

In fact, depending on what language you're programming in, you might have seen the jsessionid referred to as PHPSESSID (PHP) or ASPSESSIONID (Microsoft ASP).

So setting the Sticky Session forces the Load Balancer to feed a user's requests to the same server, which greatly decreases overall server work and keeps all the security benefits of using sessions. But what happens if the server the user is connected to crashes? This is an interesting problem, but it is solved fairly easily by also storing sessions in Redis, so sessions live both in memory on the app server and in your Redis instance/cluster. We then need to build a handshake system on each of your app servers so that when a request comes in with a jsessionid the server doesn't recognize, the app server knows to retrieve that user's current session data from the Redis instance/cluster, store it in its own memory, and tell the client to update its existing jsessionid, and you're good to go.
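
A minimal sketch of that handshake, assuming an Express-style middleware, an in-memory localSessions map, and an already-connected redisClient whose get returns a promise (these names are illustrative, not from a specific library):


// Sticky-session handshake: only hit Redis when this server has never seen the session.
const localSessions = new Map(); // in-memory sessions held by this app server

async function recoverSession(req, res, next) {
  const sid = req.cookies.jsessionid;

  if (sid && !localSessions.has(sid)) {
    // Unknown jsessionid: the user probably failed over from a crashed server,
    // so pull their session out of Redis and cache it in local memory.
    const stored = await redisClient.get('session:' + sid);
    if (stored) {
      localSessions.set(sid, JSON.parse(stored));
    }
  }

  req.session = localSessions.get(sid) || null;
  next();
}


Session writes would go to both places, the local map and Redis, so that any other app server can pick the session up later if it has to take over.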

Scaling your application servers should be easy. By implementing sticky sessions you can greatly increase your server performance without losing the ability to use in-memory sessions. This becomes extremely powerful, and necessary, when you want to scale applications that use WebSockets. Many systems like Socket.io require Sticky Sessions in order to run fluidly, and this was my primary reason for migrating from Heroku to my own cloud and first building my own Load Balancer on HAProxy, which allows for Sticky Sessions and custom load-balancing algorithms including Round Robin and LeastConn. LeastConn can be preferable when WebSockets are involved because it keeps an equivalent number of users connected to each app server at any point in time.

  1. Rashmi Samantaray

    Hello, the example with JSESSIONID and Redis is exactly what I have on hand. I want to implement your suggested solution, but can't find an example where I can have JBoss (my app server) retrieve the non-existent ID from Redis and use it. In case the request from the browser/client does not have a JSESSIONID, let JBoss create the JSESSIONID and store it in Redis. One example implementation on JBoss would be very helpful.
