SOA brings change. It is a change to the way we do IT business. No question of it. Anyone who has tried to ‘tack’ SOA onto the side of an organization has seen the resistance that this generates. “We’ve always done it that way before… why change?”
One place where SOA has an impact, though few people speak openly about it, is the change it brings to the world of software testing.
There are some huge changes here:
1) Regression testing: The only hope you have for ensuring that your service can handle itself in a changing environment is to create automated regression tests (a sketch follows this list). This was optional in the past. Now it is both required and quite feasible. Since services have no user interface, there is no need to worry about whether a control has moved. Maintaining an automated regression test script is much easier.
2) Boundary testing: If the intended use of a service is a good thing, what about the unintended use of a service? Can the service survive being hit with FAR more requests than it was designed for? Does it throttle itself? Does it protect itself? Can private data leak out? (The second sketch after this list shows one such test.)
3) Integration testing: A LOT of the capability of a service-oriented business app will move to the composition layer. Testing the services establishes a baseline showing that defects below the service line should be avoidable, but many of the bugs will occur because a composed service assumed that an underlying service would have a side effect that it does not have (or vice versa). As defects are found in integration testing, the test cases need to be updated at the service level to ensure that assumptions are constrained and tested.
4) Stub testing: you may need to test the composition layer before a service is available, or while it is only ‘local’ instead of ‘enterprise’. For that reason, the test team needs to be able to generate ‘stub services’ that apps can call, services that behave in a manner compatible with the service definition but with far fewer side effects (the sketch after this list includes such a stub). Otherwise, integration testing is a joke.
5) New service validation and service compatibility validation: if a new service is entering an environment, and the goal is for the users of the existing service to transition over to it, then there has to be a way to test the new service to ensure that it is compatible with the existing one. Automated regression tests that were designed to test the existing service need to be pointed at the new service and the failures noted (this is why the sketch below reads its endpoint from configuration). Note that the team developing the new service is not likely to be the same team that developed the prior one, so the source code for the regression tests must be available, shared, documented, and stable. This requires a level of ‘test team integration’ that many organizations will find challenging.
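To make a few of these concrete: below is a rough sketch of what items 1, 4, and 5 can look like in code, followed by a second sketch for item 2. Everything here is invented for illustration (the ICustomerLookup contract, the stub, the endpoint address, and the CustomerLookupUrl setting), and it assumes a WCF service tested with NUnit, but the same shape applies to any service stack: the stub satisfies the same contract as the real service, and the regression tests read their endpoint from configuration so they can be pointed at the existing service, a candidate replacement, or the stub without touching the test code.

```csharp
using System;
using System.Configuration;
using System.ServiceModel;
using NUnit.Framework;

// Hypothetical contract, shared by the existing service, any proposed
// replacement, and the local stub used for testing.
[ServiceContract]
public interface ICustomerLookup
{
    [OperationContract]
    string GetCustomerName(string customerId);
}

// Stub implementation: honors the contract but returns canned data and
// causes no side effects, so the composition layer can be exercised
// before the enterprise service is available.
public class CustomerLookupStub : ICustomerLookup
{
    public string GetCustomerName(string customerId)
    {
        return customerId == "C-1001" ? "Contoso Ltd." : null;
    }
}

// Hosts the stub locally; point the CustomerLookupUrl setting at this
// address to run the regression suite against the stub.
public static class StubHost
{
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(CustomerLookupStub),
            new Uri("http://localhost:8731/CustomerLookupStub"));
        host.AddServiceEndpoint(typeof(ICustomerLookup),
            new BasicHttpBinding(), string.Empty);
        host.Open();
        return host;
    }
}

// Automated regression suite. The endpoint comes from configuration, so
// the same tests can be run against the current service, a candidate
// replacement, or the stub, and failures noted.
[TestFixture]
public class CustomerLookupRegressionTests
{
    private ICustomerLookup client;

    [SetUp]
    public void CreateClient()
    {
        string address = ConfigurationManager.AppSettings["CustomerLookupUrl"];
        var factory = new ChannelFactory<ICustomerLookup>(
            new BasicHttpBinding(), new EndpointAddress(address));
        client = factory.CreateChannel();
    }

    [Test]
    public void KnownCustomerReturnsExpectedName()
    {
        Assert.AreEqual("Contoso Ltd.", client.GetCustomerName("C-1001"));
    }

    [Test]
    public void UnknownCustomerReturnsNull()
    {
        Assert.IsNull(client.GetCustomerName("NO-SUCH-ID"));
    }
}
```

The second sketch addresses item 2: hit the service with far more concurrent requests than it was designed for, and require that every call either succeeds or fails with a clean, declared fault rather than hanging or exposing internal details. Again, the names and the thread count are placeholders.

```csharp
using System;
using System.Configuration;
using System.ServiceModel;
using System.Threading;
using NUnit.Framework;

// Hypothetical boundary test against the same ICustomerLookup contract.
[TestFixture]
public class CustomerLookupBoundaryTests
{
    [Test]
    public void SurvivesTwoHundredConcurrentCallers()
    {
        string address = ConfigurationManager.AppSettings["CustomerLookupUrl"];
        int unexpectedFailures = 0;
        var threads = new Thread[200];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                var factory = new ChannelFactory<ICustomerLookup>(
                    new BasicHttpBinding(), new EndpointAddress(address));
                try
                {
                    factory.CreateChannel().GetCustomerName("C-1001");
                }
                catch (TimeoutException)
                {
                    // A throttled, clearly signalled refusal is acceptable.
                }
                catch (CommunicationException)
                {
                    // Likewise a clean fault; anything else counts against us.
                }
                catch (Exception)
                {
                    Interlocked.Increment(ref unexpectedFailures);
                }
            });
        }

        foreach (Thread t in threads) t.Start();
        foreach (Thread t in threads) t.Join();

        Assert.AreEqual(0, unexpectedFailures);
    }
}
```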
This is, I’m sure, a subset of the changes that SOA brings to the world of testing. I encourage those who are involved in testing to share other ideas and concerns that they have come across with respect to SOA development.
As a developer, I’m well aware of one inescapable fact regarding software in complex systems: you can only test it so much, and then you have to live with it. That is, it is almost impossible to create software that never fails. As a developer of services, I am also aware of the special requirements of services. When an exception occurs, it should never bring the service down. All exceptions must be handled gracefully, even if that means restarting the service. So, getting back to my earlier mention of the model of DNA, in a complex biological organism, exceptions are handled by the messaging system, which is the nervous system.
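One rough sketch of that discipline (hypothetical code, not the actual services described here): wrap the service’s work loop in a top-level guard, so that any exception is reported to a human and the loop resumes, rather than letting the process die.

```csharp
using System;
using System.Threading;

// Hypothetical sketch only: a work loop guarded so that no unhandled
// exception can take the service down. Failures are reported to a human
// (for example by email) and the loop resumes after a short pause.
public class GuardedWorker
{
    private readonly Action doOneUnitOfWork;   // the service's real work
    private readonly Action<Exception> notify; // e.g. send an email
    private volatile bool running = true;

    public GuardedWorker(Action doOneUnitOfWork, Action<Exception> notify)
    {
        this.doOneUnitOfWork = doOneUnitOfWork;
        this.notify = notify;
    }

    public void Stop()
    {
        running = false;
    }

    public void Run()
    {
        while (running)
        {
            try
            {
                doOneUnitOfWork();
            }
            catch (Exception ex)
            {
                // Never let the exception escape: report it, wait, carry on.
                notify(ex);
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}
```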
I write services that send emails when exceptions occur. This is not entirely different from the way that many Microsoft applications send a message to Microsoft when a critical problem occurs (with the user’s permission, of course). To that end, I have been building an entire .Net Provider-based mechanism for doing this, as one of the most important questions is, who should be notified?
It revolves around a class called "MailJob," which encapsulates a configurable set of recipients, as well as the other characteristics of an email. The "MailJob" class is similar to the System.Net.Mail.MailMessage class (and can be easily converted to one), but is designed to work within the MailProvider structure. The MailProvider follows the Provider pattern, and as such, the configuration may be stored in any medium, according to the needs of the service or application.
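The MailJob and MailProvider code itself is not shown here, so the following is only a guess at the shape of the design being described: a job object that carries a configurable recipient list and converts to a standard System.Net.Mail.MailMessage, plus a provider base class that leaves “who should be notified” to whatever configuration store a concrete provider uses.

```csharp
using System.Collections.Generic;
using System.Configuration.Provider;
using System.Net.Mail;

// Hypothetical sketch of the design described above; the actual MailJob
// and MailProvider code is not shown in the comment.
public class MailJob
{
    private readonly List<string> recipients = new List<string>();

    public string JobName { get; set; }
    public string From { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
    public List<string> Recipients { get { return recipients; } }

    // A MailJob carries its own recipient list and message fields, and can
    // be converted to a standard MailMessage for sending via SmtpClient.
    public MailMessage ToMailMessage()
    {
        var message = new MailMessage();
        message.From = new MailAddress(From);
        message.Subject = Subject;
        message.Body = Body;
        foreach (string recipient in recipients)
        {
            message.To.Add(recipient);
        }
        return message;
    }
}

// Provider-pattern base class: concrete providers decide where the
// recipient configuration lives (app.config, a database, etc.) and how
// the mail actually gets sent.
public abstract class MailProvider : ProviderBase
{
    // Build the MailJob (including who should be notified) for a named job.
    public abstract MailJob CreateJob(string jobName);

    // Send the job, typically by converting it to a MailMessage.
    public abstract void Send(MailJob job);
}
```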
We have a service-oriented application that was built almost 2 years ago, which fetches weather data from AWOS (Automated Weather Observing System) stations and provides it to the National Weather Service via FTP. After 6 months to a year of dealing with the various unexpected exigencies, of which we were informed immediately via email, the system is rock-solid, and is trusted by the National Weather Service and the FAA (Federal Aviation Administration).
So, while I could regale you with stories about the testing mechanisms we have developed (and we have), I wanted to emphasize that prevention is only half the battle in an SOA world, and perhaps the smaller half. No matter how robust an individual service component may be, it still has to live with the vagaries of external dependencies, and as systems evolve and become more complex, Murphy is inescapable. Getting humans involved in issues ASAP is of paramount importance.
Sage advice, UC.
I will take that to heart in my own work.
— N
Hi Nick,
Shrini here. I hope you remember me; it has been a long time since we were in touch.
Here is a thought on boundary testing (on a different note), what I call "boundary value exploration" (link below) …
Your observations on testing in SOA apps are interesting … I will come back with my comments.
http://shrinik.blogspot.com/2007/03/boundary-value-exploration-bve.html
Shrini Kulkarni
Principal Consultant Testing
iGATE Bangalore
Trying to squeeze this in before yet another gym workout. As I have been talking about on my personal