Enable configurable session affinity at the load balancer.
I would like each request from a given user to go to the same web role instance. The motivation is better performance from cached data.
Affinity based on IP address, form data, or query-string data would all be useful. I believe this could be configured at the load balancer.
In my case, this is a Facebook app, so affinity based on the fb_sig_user parameter in the POST data would send the same Facebook user to the same VM instance.
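As a rough illustration of how parameter-based affinity could work (this is a hypothetical sketch, not an actual Azure load-balancer feature; the instance names are made up): the balancer hashes the chosen POST field and maps it deterministically to one instance, so the same fb_sig_user always lands on the same VM.

```python
import hashlib

def pick_instance(affinity_key: str, instances: list[str]) -> str:
    """Deterministically map an affinity key (e.g. fb_sig_user) to one instance."""
    digest = hashlib.sha256(affinity_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(instances)
    return instances[index]

instances = ["web_0", "web_1", "web_2"]  # hypothetical role instances
user = "100004123456789"                 # example fb_sig_user value

# The same user always maps to the same instance:
assert pick_instance(user, instances) == pick_instance(user, instances)
```

Note that a simple modulo scheme reshuffles users when the instance count changes; a real implementation would likely use consistent hashing to limit that churn.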
Thank you for the feedback.
I’ve moved this suggestion to the ‘Cloud Services’ forum, instead of the ‘Traffic Manager’ forum. Note that Traffic Manager provides DNS-level distribution of traffic between cloud services, typically deployed in different data centers. This suggestion is about network/application-level load-balancing within a data center.
The load balancer should offer options:
Sticky, Round Robin, Least Busy.
It should also have the ability to act as a firewall, meaning you could import Windows Firewall rules. This is a lot cleaner than running Windows Firewall on each machine. Being able to block China, Russia, and other countries that have no business being on our sites (US-based audience) is a distinct and powerful corporate advantage. Although you can have firewall rules in both your Linux and Windows deployments, why not follow the CloudStack model and allow rules directly in the load balancer?
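A block rule at the load balancer boils down to a CIDR membership check before any traffic reaches the backends. A minimal sketch (the deny list here is a made-up placeholder using documentation IP ranges; real country-level GeoIP data would be far larger):

```python
import ipaddress

# Hypothetical deny list, expressed as CIDR blocks, as a load balancer
# might store country-level rules.
DENY_BLOCKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client IP falls inside any denied CIDR block."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in block for block in DENY_BLOCKS)

assert is_blocked("203.0.113.42")
assert not is_blocked("192.0.2.10")
```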
Brandon Warlick commented
Our apps must have session affinity. At this point we are going with Rackspace. We would rather stick with Azure, but we can't at this point.
Mark Hildreth commented
So after about two weeks of R&D, we reached a workable solution that lets us keep the in-process session state provider by using ARR. Unfortunately, this also requires a worker role to coordinate the redirection of requests between instances, so it is a more expensive option for the long term. Apparently (and this may just be a rumor), the Java SDK for Azure has a simple checkbox for sticky sessions, so why not support that in the .NET SDK?
Nariman Haghighi commented
I think it's important to note how difficult it is to achieve H/A deployments on Azure today; this is arguably its largest shortcoming. Let's look at how the H/A issue is related to affinity. First, note that the cookie used by the RDP file to reach a specific instance in a given role (mstshash) doesn't work for HTTP connections. If it did, affinity would be simple to achieve. But what's more important than affinity (and closely related) is the fact that we can't reach a specific instance from the outside, and hence can't monitor it for app-specific errors. This makes it next to impossible to have fault-tolerant deployments, as the NLB has zero awareness of application status. The workaround for the moment seems to be on-VM monitoring using a PowerShell script and the Set-RoleInstanceStatus call. This deserves a real solution.
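The on-VM monitoring idea described above amounts to a local watchdog: probe the app's own health endpoint and, on failure, mark the instance busy so the NLB stops routing to it. A language-neutral sketch of the probe half (the URL is a placeholder; the status-setting call itself would be the PowerShell Set-RoleInstanceStatus step mentioned above):

```python
import urllib.request

def probe_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the local app answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# In the on-VM script, a failed probe would trigger the equivalent of
# marking the instance Busy so the load balancer takes it out of rotation.
```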
Mark Hildreth commented
Just a checkbox to allow sticky IP would be nice. Forcing a round-robin scheme does not work well for sites that rely on in-process session state. And for the naysayers who say everything in the session should be serializable, keep in mind that in-process session state is the DEFAULT, so there are many, many apps out there that unknowingly rely on this behavior. Although the AppFabric Caching provider is good, it still requires all session objects to be serializable, so it's not exactly a silver bullet.
Session affinity is an important option for a number of scenarios, including rolling application updates with no downtime. I would like to be able to run two concurrent versions of my application (that is, two hosted services) behind a single VIP, with session affinity tying a user to a specific version. I should be able to migrate a user's data from v.old to v.new and flip their session affinity flag to the new service programmatically.
Benjamin Guinebertière commented
This could be done by installing and configuring Application Request Routing (ARR) on a worker role. More explanation at http://go.archims.fr/hW54Xz
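For reference, ARR's client affinity is cookie-based and is configured per web farm. A sketch of roughly what the relevant applicationHost.config fragment on the ARR instance might look like (farm name and server addresses are placeholders, and the exact schema should be checked against the ARR documentation):

```xml
<webFarms>
  <webFarm name="AzureFarm" enabled="true">
    <server address="instance0.internal" enabled="true" />
    <server address="instance1.internal" enabled="true" />
    <applicationRequestRouting>
      <!-- Pin each client to one server via an affinity cookie -->
      <affinity useCookie="true" cookieName="ARRAffinity" />
      <loadBalancing algorithm="WeightedRoundRobin" />
    </applicationRequestRouting>
  </webFarm>
</webFarms>
```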
Vengi Mutthineni commented
I would recommend that Microsoft come up with an F5-style sticky-session configuration on the load balancer. Compared to storing state in data blobs and writing extra lines of code to access them, this would save time and hassle and improve performance.
Even Amazon offers the ability to configure sticky sessions: http://aws.amazon.com/elasticloadbalancing/
Please implement sticky sessions! In WCF, InstanceContextMode=InstanceContextMode.PerSession fails when the client is not routed to the same server on each request.
This is indeed a highly needed feature! Consider a web site that holds several pieces of information about the currently signed-in user in session state, plus several database results held in the HTTP cache. Let's say this amounts to hundreds of KB, or in some cases several MB, per user; call the average 1 MB. Now deploy the application on Small Azure instances. I then hit a RAM bottleneck at roughly 1,700 users, i.e. 1.7 GB of session/cache data, which equals the total RAM of a Small instance. Even with multiple instances (say 10 Small instances, about 17 GB of RAM in total), I have to account for the possibility that any user can hit any server. This effectively limits the usable RAM to that of a single instance, because every user may end up storing session/cache data on every instance. So we hit the limit at roughly 1,700 users instead of the 17,000 we could serve if each user (by IP or session) were pinned to one instance. That severely limits the scaling possibilities.
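The arithmetic in that comment can be checked with a quick calculation, using its own assumptions (1 MB per user, ~1.7 GB of usable RAM per Small instance, 10 instances):

```python
MB_PER_USER = 1
RAM_PER_INSTANCE_MB = 1700
INSTANCES = 10

# Without affinity any user can hit any instance, so each instance must be
# able to hold every active user's data: capacity is bounded by one
# instance's RAM, no matter how many instances exist.
users_without_affinity = RAM_PER_INSTANCE_MB // MB_PER_USER

# With affinity each user is pinned to one instance, so the pool's RAM adds up.
users_with_affinity = INSTANCES * RAM_PER_INSTANCE_MB // MB_PER_USER

assert users_without_affinity == 1_700
assert users_with_affinity == 17_000
```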
Shaun Tonstad commented
Instead of doing the work to add affinity, could we have an option to make an endpoint external and addressable? i.e. so we can hit the load balancer if we choose, or address the endpoint directly? There are many architectures and patterns that require many servers which maintain state. I'd be willing to give up the SLA uptime benefits for this functionality. It is very important.