This is the second time on this blog that the words "airline" and "crash" have appeared together, but fortunately we are referring to websites, not airplanes (see Pet Airways Crashes on Opening Day). The graph above shows the results of a CapCal Crash Test on a new promotional site for one of the major airlines (sorry, I can't use their name without legal approval).
There are only two pages in this test: the first goes to the main page and the second submits a registration form. The objective of a crash test is not actually to crash the site but to find the point at which performance starts taking a nosedive (there's that airplane thing again). "Good" response time is less than a second, while "bad" starts at about 2 seconds and goes up. (These aren't subjective measures as much as the result of having run thousands of tests and seeing the same pattern over and over - once response time reaches 2 seconds, it usually climbs rapidly from there.)
This site was able to retain good response times until it reached around 400 users. At that point you can see the "hockey stick" effect in the red bars on the right, which reach 6 seconds at about 1,340 users. At 2,000 users, most people would think the site was offline and go somewhere else. For all intents and purposes it has crashed, even though it is still handling requests. And since the average response time is 6 seconds, some of the pages are probably taking more than 10. Life is much too short for slow web apps.
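To make the "knee" idea concrete, here is a minimal sketch (not CapCal's actual analysis) of how you might scan ramp-test results - pairs of concurrent users and average response time - for the first load level that crosses the 2-second "bad" threshold. The sample numbers are hypothetical, loosely echoing the test described above.

```python
# Hypothetical ramp-test samples: (concurrent_users, avg_response_seconds).
# These values are illustrative only, roughly mirroring the test in this post.
SAMPLES = [
    (100, 0.6),
    (200, 0.7),
    (400, 0.9),    # still "good": under a second
    (700, 1.8),
    (1000, 2.5),   # past the knee - and errors start showing up around here
    (1340, 6.0),
    (2000, 9.5),
]

BAD_THRESHOLD_S = 2.0  # the ~2-second mark where response time starts climbing fast


def find_knee(samples, threshold=BAD_THRESHOLD_S):
    """Return the first user count whose average response time exceeds
    the threshold, or None if the site never degrades in the samples."""
    for users, resp in samples:
        if resp > threshold:
            return users
    return None


if __name__ == "__main__":
    knee = find_knee(SAMPLES)
    print(f"Response time goes 'bad' at about {knee} users")
```

With this data the sketch reports roughly 1,000 users as the point where the site tips from "good" into "bad" - the spot a crash test is designed to find before real customers do.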
However, have a look at the Site Errors column to the left of the green bars - at a little over 1,000 users the server starts generating errors, and those can be even worse than slow response times. Don't you just hate it when you get a "500 server error" after you've patiently filled in all the fields in a form? Wouldn't you hate it even more if you were the airline or the company that built the web site? That's why scalability and performance go hand in hand - but scalability is king!
The site we tested was a staging site that mirrors the production site, so this is what they can realistically expect to see in production. The test used five small Amazon instances, so the cost was negligible. Not too shabby for something that could prevent a million-dollar disaster: the promotional campaign flopping because of a glitch in the web site!