I’ve been looking at different ways to implement the ATAM method these past few weeks. Why? Because I want to evaluate software architecture, and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University. Along the way, I’ve realized that there is a flaw that seems difficult to address.
Different lists of criteria
The ATAM method is not a difficult thing to understand. At its core, it is quite simple: create a list of “quality attributes” and rank them, highest to lowest, according to the priority that the business wants. Get the business stakeholders to sign off. Then evaluate the ability of the architecture to perform according to that priority. An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.
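To make the idea concrete, here is a minimal sketch of that stack-ranking step. The attribute names, architecture names, and scores are all illustrative, not part of ATAM itself; the point is just that comparing candidates in the business’s priority order can flip the outcome.

```python
# Sketch of ATAM-style stack ranking: the business signs off on a priority
# order, and candidate architectures are compared attribute by attribute
# in that order. All names and scores below are hypothetical.

business_priority = ["Throughput", "Robustness", "Maintainability"]

# Hypothetical evaluation scores (0-10) for two candidate architectures.
architectures = {
    "event_driven": {"Throughput": 9, "Robustness": 5, "Maintainability": 6},
    "layered":      {"Throughput": 6, "Robustness": 9, "Maintainability": 8},
}

def pick(priority, candidates):
    """Prefer the candidate that scores best on the highest-priority
    attribute, breaking ties with the next attribute down the list."""
    return max(candidates,
               key=lambda name: [candidates[name][attr] for attr in priority])

print(pick(business_priority, architectures))  # event_driven wins on Throughput
```

Reversing the priority order (Robustness first) makes the “layered” candidate win, which is exactly the point of getting the business to sign off on the ranking before the evaluation starts.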
So where do we get these lists of attributes?
A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called “Implementing System Quality Attributes.” I’ve referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we could use for evaluating system-level architecture as is required by the ATAM method. Gabriel got his list of attributes from “Software Requirements” by Karl Wiegers.
Of course, there are other possible lists of attributes. The ISO defined a set of system quality attributes in the standards ISO 25010 and ISO 25012. They use different terms. Instead of System Quality Attributes, there are three high-level “quality models,” each of which presents “quality characteristics.” For each quality characteristic, there are different quality metrics.
Both the list of attributes from Wiegers and the list of “quality characteristics” from the ISO are missing a key item: “time to release” (or time to market).
The missing criteria
One of the old sayings from the early days of Microsoft is: “Ship date is a feature of the product.” The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time. If your time is shorter, the number of features is smaller.
I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well. In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.
How is that quality?
I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security. After all, shouldn’t we measure the quality of the product independently of the date on which it ships?
In a perfect world, perhaps. But look at the method that ATAM proposes. The method suggests that we should create a stack-ranked list of quality attributes and get the business to sign off. In other words, the business has to decide whether “Flexibility” is more, or less, important than “Maintainability.” Try explaining the difference to your business customer! I can’t.
However, if we create a list of attributes and put “Time to Release” on the list, we are empowering the development team in a critical way. We are empowering them to MISS their deadlines if there is a quality attribute that is higher on the list that needs attention.
For example: let’s say that your business wants you to implement an eCommerce solution. In eCommerce, security is very important. Not only can the credit card companies shut you down if you don’t meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customers’ credit card data and uses that information for identity theft. Security matters. In fact, I’d say that security matters more than “going live” does.
So your priority may be, in this example:
- Security,
- Usability,
- Time-to-Release,
- Flexibility,
- Reliability,
- Scalability,
- Performance,
- Maintainability,
- Testability, and
- Interoperability.
This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable. On the other hand, if the code is not particularly maintainable, we will ship anyway.”
Now, that’s something I can sink my teeth into. Basically, the “Time to Release” attribute is a dividing line. Everything above the line is critical to quality. Everything below the line is good practice.
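The dividing-line idea can be sketched in a few lines of code. This is just an illustration of the partition, using the example priority order from the list above; the function name is my own.

```python
# Sketch of the "dividing line": attributes ranked above "Time-to-Release"
# are ship-blockers; attributes below it are good practice but will not
# delay the release. The ranking mirrors the example in the post.

ranked_attributes = [
    "Security", "Usability", "Time-to-Release",
    "Flexibility", "Reliability", "Scalability", "Performance",
    "Maintainability", "Testability", "Interoperability",
]

def split_on_release(attributes, line="Time-to-Release"):
    """Partition a ranked list into ship-blocking attributes (above the
    line) and best-effort attributes (below it)."""
    cut = attributes.index(line)
    return attributes[:cut], attributes[cut + 1:]

blockers, best_effort = split_on_release(ranked_attributes)
print(blockers)  # ['Security', 'Usability'] -- worth slipping the date for
```

Everything in `blockers` justifies delaying the ship date; everything in `best_effort` ships as-is, imperfect or not.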
As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one. Not only can I tell if an architecture is any good based on the criteria that rise “above” the line, but I can also argue that the business is making an unacceptable sacrifice for any attribute that actually falls “below” the line.
So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list. It may offer insight into the mind, and expectations, of your customer and improve your odds of success.
You point out that "time to release" is not found on typical lists of software quality attributes. That is because this is not a characteristic of *product* quality, but rather a characteristic of the *project* or of business objectives. Traditional attributes such as usability, maintainability, efficiency, availability, and the others are focused on properties of the product itself. I agree that "time to release" is an important success factor for a project, but it should not be lumped in with quality attributes that deliberately describe characteristics of the product being developed. Instead, I would use "time to release" expectations (based on some sound business analysis) to make appropriate trade-offs with features, quality, staff available, and budget available. It doesn't do you much business good to release a perfect product well after its effective market window, but nor does it usually do you much business good to release a piece of crap as quickly as you can.
The SABSA methodology has the concept of "Business Attributes Profile" and has a taxonomy of business attributes. It's fairly well-rounded and extensive, even though it started from a security perspective. I have a copy of the book "Enterprise Security Architecture: A Business-Driven Approach" that discusses it, but I suspect you can find some or all of it online.
Good article. I too agree that time-to-release is important. Balancing tradeoffs is much more than trading off on technical attributes. If internal and external business-related attributes and context are not considered, then the value proposition of a proposed architecture or solution goes out the window.
On a related note, I remember taking an SEI architecture course and the instructor saying that while you can propose quality attributes to a customer/stakeholder, always accept any quality attribute they dream up as long as they can tell you how they would measure it. The challenge then is to understand the push and pull of that attribute against other attributes. Eliciting requirements, quality attributes included, is always a tough job, because you will never get all of them identified and prioritized, and even if you do, some stakeholders will say they don't care about some of them. But as the saying goes, they don't care until they do. Process (such as architecture reviews) plus experience will take you furthest in the right direction.
First off, I want to say that I'm honored to get a comment from someone as prominent and influential as Karl Wiegers. Thank you, sir, for taking the time to read my humble blog.
I expected to get some criticism for suggesting that we include "time to release" on a list of quality attributes. You are correct in saying that it is not an attribute of quality. It is an indication of the amount of time and resources that a business is willing to commit to developing that product.
My intent was not to consider "time to release" as a quality attribute, per se, but to place this "non-attribute" on the same list as the other quality attributes when performing the prioritization with the business.
The goal, as I hope I explained, is simple: to empower the developers by making it clear, to EVERYONE, that some quality attributes are important enough to slip the ship date, while others are not. I cannot count the number of times I would have found that information useful.
In short, with respect to the ATAM method, I would say that the list of attributes should not be limited to descriptions of quality. Adding this one element would make a very big difference in the ability of the developers to prioritize their work and focus on the "stuff that matters the most" while, at the same time, resisting the often irrational pressure to "ship crap on time."
@Gerry,
Thank you for your pointer to the SABSA profile. Very thorough. In many ways, it is more complete than the profile available in the current ISO standards.
That said, it is as intellectually pure as the ISO list. The SABSA profile describes the product (as it was intended to) from different perspectives.
My post suggests that the project team needs to include a single "non-quality-attribute" in the list when performing the prioritization with the business customer. That single attribute, which I call "time to release," is a kind of litmus test. Stuff "above" it on the priority list is sacred. Stuff below it is important, but not sacred. That defines the line between "we will not ship crap" and "we will ship imperfect software."
There is no perfect software.