Automated functional testing, for someone watching it for the first time, is quite amazing. What would take a person hours to do by hand flashes across the screen in minutes. That phenomenal time savings and increased test coverage are what make it worthwhile, but the "while" part is the rub: if you end up spending all your time maintaining the test scripts as the application changes, the time savings erode to the point of diminishing returns.
So making the tests easier to create and maintain is obviously a huge leap forward, but if you then have to turn around and create performance tests using a DIFFERENT tool, you face the same dilemma, if not a worse one. Take the HP Mercury suite as an example: QuickTest Professional uses a scripting engine based on VBScript, while LoadRunner uses the C language as its basis. In other words, maintaining a load testing script can often require more technical expertise than it took to write the application itself!
One of the major differences between functional and performance testing is that functional testing can be done manually, while doing performance testing that way is extremely difficult and unwieldy. That doesn't mean people haven't tried: the thought of coordinating dozens of people to perform a load test would be funny if it weren't so absurd, yet only last year I heard of a major corporation doing exactly that for lack of a suitable load testing tool.
But even with a load testing tool at your disposal, if you've already invested in functional test automation, doesn't it seem bizarre that you would have to turn around and duplicate all that cost and effort just to do performance testing? Sure, there are some major differences between functional and performance testing, but that doesn't mean an entirely different tool with its own language is necessary. Things evolved that way for historical reasons that have a lot more to do with test tool vendors' search for revenue than with the search for more efficient testing methodologies.
If you could use a subset of your functional tests for performance testing, not only would it save time and money, but you could be many times more productive and efficient in your testing cycle. That's what we call "integrated performance testing": capturing an automated functional test while it is running and then immediately using that capture to create and execute a performance test. Not only do you save the time and money involved in writing and maintaining a separate set of tests, but you also ensure that both kinds of testing get done at the same time.
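To make the capture-and-replay idea concrete, here is a minimal sketch in Python of the pattern described above: record the requests a functional test issues once, then replay that recorded script concurrently across many virtual users. All names here (`CapturedRequest`, `capture`, `replay_load`) are illustrative assumptions, not the actual CapCal or Certify API, and the `send` callback stands in for a real HTTP call.

```python
# Sketch of "capture once, replay many" integrated performance testing.
# Hypothetical names; not a real CapCal/Worksoft API.
import concurrent.futures
import time
from dataclasses import dataclass


@dataclass
class CapturedRequest:
    """One request observed while the functional test ran."""
    method: str
    url: str
    body: str = ""


def capture(functional_steps):
    """Record the requests a functional test run would issue (stubbed here)."""
    return [CapturedRequest(m, u, b) for (m, u, b) in functional_steps]


def replay_load(script, virtual_users, send):
    """Replay the captured script once per virtual user, concurrently.

    Returns the elapsed wall-clock time for each virtual user's pass
    through the script, which is the raw material for response-time stats.
    """
    def run_one(_):
        start = time.perf_counter()
        for req in script:
            send(req)  # in a real tool this would be an actual HTTP request
        return time.perf_counter() - start

    with concurrent.futures.ThreadPoolExecutor(max_workers=virtual_users) as pool:
        return list(pool.map(run_one, range(virtual_users)))
```

A usage example: `replay_load(capture([("GET", "/login", ""), ("POST", "/order", "qty=1")]), virtual_users=5, send=my_http_send)` yields five timings, one per virtual user, without anyone having written or maintained a separate load script.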
Here you can see a screen recording of an automated Worksoft Certify test being run with CapCal Integrated Performance Testing. The demo shows batch command execution because that is how these tests will be integrated into the nightly build cycle. We've also tested with HP QuickTest Pro, Compuware QA Tester, and AutomatedQA TestComplete. For a proof of concept in your own environment with whatever tool you are using, just write to firstname.lastname@example.org and someone will get right back to you!