It has traditionally been assumed that load testing is done for the sake of fine-tuning performance, and that's true. But it's also done to fine-tune scalability, and that's a whole different ball game: the next CapCal customer or prospect I meet whose site can actually handle the kinds of loads they expect will be the first. Not because they lack a first-class, multi-tiered infrastructure with all the latest hardware (or even better, a load-balanced, autoscaling cloud deployment), but because there are umpteen million "gotchas" lying in wait to surprise you at the worst possible moment - everything from load balancer settings to database, web server, OS or network settings and application configuration parameters; the list is endless. So instead of showing a gradual performance drop at a certain load, a server will begin spewing out errors indicating that an invisible boundary was crossed somewhere.
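To make one of those gotchas concrete: on most Unix systems every open connection consumes a file descriptor, and the default per-process cap is often as low as 1,024 - well below the concurrency a busy site sees. Here is a minimal Python sketch (one hypothetical example among the many settings mentioned above) for inspecting and raising that limit:

```python
import resource

# One common invisible boundary: the OS cap on open file descriptors.
# Every in-flight connection consumes one, so a server that looks fine
# at a few hundred concurrent users can start refusing connections
# the moment it hits the default limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"Open-file limit: soft={soft}, hard={hard}")

# A process may raise its own soft limit up to the hard cap without
# special privileges; raising the hard cap itself requires root.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

That is just one knob out of dozens, and each tier (load balancer, web server, database, kernel) has its own.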
Have a look at the two-minute CapCal test above: it attempts to ramp up to 1,000 virtual users, but at about 700 it begins generating thousands of 503 (Service Unavailable) errors, indicating the server has run out of resources. The green bars showing the ever-increasing bandwidth also drop precipitously when the errors start, because only error headers are being returned instead of content.
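For a feel for what a ramp test like this does under the hood, here is a minimal, hypothetical Python sketch (not CapCal itself) that steps up concurrency against a target URL and tallies status codes, watching for the point where the 5xx errors appear:

```python
import urllib.request
import urllib.error
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"  # hypothetical target; point this at your own site

def hit(url: str) -> int:
    """Issue one GET and return the HTTP status code (-1 on network failure)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code   # 4xx/5xx responses arrive here as exceptions
    except Exception:
        return -1       # timeouts, connection resets, refused connections

# Step from 100 to 1,000 concurrent users and watch for the error cliff.
for users in range(100, 1001, 100):
    with ThreadPoolExecutor(max_workers=users) as pool:
        statuses = Counter(pool.map(hit, [URL] * users))
    errors = sum(n for code, n in statuses.items() if code >= 500 or code == -1)
    print(f"{users:5d} users -> {dict(statuses)}  ({errors} errors)")
```

Even this toy version hints at why a single machine tops out quickly: a thousand threads on one box strains the load generator nearly as much as the target, which is exactly the problem with lab testing at scale.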
Is this a site that expects to have more than 700 people online at any given time? Try 7,000, or even 70,000! Could this test be run in the lab using a tool like JMeter? Maybe at 1,000 virtual users, but at higher loads it just isn't practical because of the number of load-generating machines that would be required.
This is a scalability test run against a single static page - one that doesn't even begin to stress the servers - and yet it exposed a scalability limit in all of two minutes. Proof once again that performance rocks but scalability rules!