
EA Debt

June 24th, 2014 | Enterprise Architecture

(Note: I’ve added an addendum to this post)

We have lived with the concept of technical debt for many years now. Martin Fowler did a good job of describing the concept in a 2003 article on his wiki. Basically, the idea is that you can make a quick-and-dirty change to software, but you will have to spend time fixing that software later. That additional time is like an interest payment on the debt you incurred when you made the change. Fixing the code to make it more elegant and clean "pays down the debt" so that future changes cost less to make.
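The interest metaphor can be made concrete with a toy model. All of the numbers here are illustrative assumptions, not figures from Fowler's article:

```python
# Toy model of technical debt: a quick-and-dirty change leaves "principal"
# behind that inflates the cost of every later change until it is paid down.
# All figures are invented for illustration.

def change_cost(base_cost: float, debt: float, interest_rate: float = 0.2) -> float:
    """Cost of one change = its base cost plus interest on outstanding debt."""
    return base_cost + debt * interest_rate

# Five identical changes against a clean codebase vs. one carrying debt of 30
clean_total = sum(change_cost(10, debt=0) for _ in range(5))
dirty_total = sum(change_cost(10, debt=30) for _ in range(5))

print(clean_total, dirty_total)  # the indebted codebase costs more per change
```

The point of the sketch is only that the "interest" compounds across every future change, which is why paying the debt down can be cheaper than continuing to pay it.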

I’d like my fellow practitioners to consider joining me in extending this idea to the enterprise.

Organizations change, and sometimes the changes have to be made quickly.  Processes that are implemented “quick and dirty” may have poorly defined roles, overlapping responsibilities, and duplicate accountabilities.  It may work, and it may be necessary to hit a deadline.  However, these “dirty” processes have the same problem that dirty code does – it costs more later to fix them. 

In a sense, partial process or accountability “fixes” in an organization create “enterprise architecture debt” or “EA debt.”  We run into it all the time in organizations.  Here are some examples that I’ve personally seen:

  • One team is responsible for checking the quality of all manufactured products and making sure that they get to distributors.  However, products that are custom developed have their own quality check and distribution function.  Effectively, two different teams duplicate a couple of functions.  This could be simplified, and doing so would likely reduce costs and improve consistency in quality checks. 
  • The marketing team uses data mining techniques to identify potential customers (prospects) and enters them into a system along with attributes like segment, predicted value, and targeting within specific marketing programs.  When a new customer reaches out to actually purchase a product, however, the customer record is created in a CRM system that is not linked to the marketing record.  Consistently linked customer information could provide valuable information about the effectiveness of marketing programs as well as enriching customer information for the sale of service and ongoing sales.
  • An outpatient specialty radiology department in a hospital requires patients to be registered separately from other hospital services before they can be seen.  For most patients, this is not a problem.  However, for patients already within the hospital, the separate registration requirement creates opportunities for errors as information is hand-transcribed from one system to another.
  • A retailer sets up an e-commerce division to sell its wares online.  However, inventory and warehousing for the new e-commerce site are not integrated into existing store systems.  The e-commerce "store" is treated as just another physical store.  This works, but any attempt to let customers purchase online and pick up at a store becomes problematic, because the retailer has no way to handle a purchase made in one store being fulfilled by another.

These, and a thousand more situations, are the result of "partial" or "messy" implementation of organizational changes.  They are a form of "EA debt" because any future change that touches these capabilities will be more expensive, as complexity slows the organization down.  In effect, EA debt is like taking a Lego set and gluing the pieces together.  The parts will remain just as they are, but they will be very difficult to change if something needs to change.  (Apologies to "The Lego Movie" for the metaphor).

Why call this "EA debt?"  Because it is not a financial term.  It is nearly impossible to accurately measure all of the EA debt in an organization, while it is fairly straightforward to measure monetary debt.  So we have to be careful not to use terms like "enterprise debt" or "organizational debt," as these may be confused with general accounting concepts.  Just as technology teams sometimes twist the concept of an "asset" to apply to an information system, enterprise architects are using the metaphor of debt to refer to one of the root causes of difficulty in making organizational change.

Addendum: I guess I shouldn't be surprised that this idea is not novel.  It's fairly self-evident.  It was my mistake that I didn't go looking for other references to the idea before writing the above post.  Laziness.  No excuse.  While the concept of technical debt does in fact trace back to Ward Cunningham (inventor of the wiki), as discussed by Martin Fowler in the referenced blog post, the concept was first applied to EA in 2008 in the Pragmatic EA Framework, and is part of the current version as well.  I'd give a link to that presentation if I could, but the best I'm able to do at this time is a general link to PEAF at http://www.pragmaticea.com.  Kevin directly responded below with links into his material.  It is no disgrace to be in the shadow of Kevin Smith (author of PEAF).  It is an error, however, to appear to originate the idea.  For that, my apologies.

Placing Architecture Properly into Scrum Processes

June 11th, 2013 | Enterprise Architecture

As I’m about to complete my share of a longer engagement on using Lean principles to improve the processes at an online services firm, it occurred to me that the efforts we undertook to properly embed Architecture practices into their Scrum process were novel.  I haven’t seen much written about how to do this in practice, and I imagine others may benefit from understanding the key connection points as well.  Hence this post.

First off, let me be clear: Agile software development practices are not at all averse to software architecture.  But let’s be clear about what I mean by software architecture.  In an agile team, most decisions are left to the team itself.  The team has a fairly short period of time to add a very narrow feature (described as a user story) to a working base of code and demonstrate that the story works.  The notion of taking a couple of months and detailing out a document full of diagrams that explains the architecture of the system: pretty silly.  (more…)

Time-to-Release – the missing System Quality Attribute

March 9th, 2012 | Enterprise Architecture

I’ve been looking at different ways to implement the ATAM method these past few weeks.  Why?  Because I’m looking at different ways to evaluate software architecture and I’m a fan of the ATAM method pioneered at the Software Engineering Institute at Carnegie Mellon University.  Along the way, I’ve realized that there is a flaw that seems difficult to address. 

Different lists of criteria

The ATAM method is not difficult to understand.  At its core, it is quite simple: create a list of "quality attributes" and sort them in priority order, highest to lowest, according to what the business wants.  Get the business stakeholders to sign off.  Then evaluate the ability of the architecture to perform according to that priority.  An architecture that places a high priority on Throughput and a low priority on Robustness may look quite different from an architecture that places a high priority on Robustness and a low priority on Throughput.
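The ranking-and-evaluation step can be sketched in a few lines. The attribute names, scores, and rank-based weighting below are invented for illustration; they are not part of the ATAM specification:

```python
# Sketch of the ATAM ranking step: stack-rank quality attributes by business
# priority, then judge a design attribute by attribute, counting the
# high-priority attributes most heavily.  Names, scores, and the weighting
# scheme are invented assumptions.

business_priority = ["Throughput", "Security", "Robustness"]  # highest first

# Hypothetical reviewer scores (0-10) for how well one design supports each attribute
design_scores = {"Throughput": 9, "Security": 6, "Robustness": 4}

def weighted_fitness(priority: list[str], scores: dict[str, int]) -> int:
    """Weight each score by its rank: the top-priority attribute counts most."""
    n = len(priority)
    return sum((n - rank) * scores[attr] for rank, attr in enumerate(priority))

print(weighted_fitness(business_priority, design_scores))
```

Two designs scored against the same signed-off ranking can then be compared with a single number, which is the spirit of evaluating "the ability of the architecture to perform according to that priority."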

So where do we get these lists of attributes?

A couple of years ago, my colleague Gabriel Morgan posted a good article on his blog called "Implementing System Quality Attributes."  I've referred to it from time to time myself, just to remind myself of a good core set of System Quality Attributes that we could use for evaluating system-level architecture, as the ATAM method requires.  Gabriel got his list of attributes from "Software Requirements" by Karl Wiegers.

Of course, there are other possible lists of attributes.  The ISO defined sets of system quality attributes in the standards ISO 25010 and ISO 25012.  They use different terms: instead of System Quality Attributes, there are three high-level "quality models," each of which presents "quality characteristics."  For each quality characteristic, there are different quality metrics.

Both the list of attributes from Wiegers and the list of "quality characteristics" from the ISO are missing a key attribute: "time to release" (or time to market).

The missing criterion

One of the old sayings from the early days of Microsoft is: "Ship date is a feature of the product."  The intent of this statement is fairly simple: you can only fit a certain number of features into a product in a specific period of time.  If your time is shorter, you ship fewer features.

I’d like to suggest that the need to ship your software on a schedule may be more important than some of the quality attributes as well.  In other words, “time-to-release” needs to be on the list of system quality attributes, prioritized with the other attributes.

How is that quality?

I kind of expect to get flamed for making the suggestion that “time to release” should be on the list, prioritized with the likes of reliability, reusability, portability, and security.  After all, shouldn’t we measure the quality of the product independently of the date on which it ships? 

In a perfect world, perhaps.  But look at the method that ATAM proposes.  The method suggests that we should create a stack-ranked list of quality attributes and get the business to sign off.  In other words, the business has to decide whether "Flexibility" is more, or less, important than "Maintainability."  Try explaining the difference to your business customer!  I can't.

However, if we create a list of attributes and put "Time to Release" on the list, we are empowering the development team in a critical way.  We are empowering them to MISS their deadlines if there is a quality attribute higher on the list that needs attention.

For example: let's say that your business wants you to implement an eCommerce solution.  In eCommerce, security is very important.  Not only can the credit card companies shut you down if you don't meet strict PCI compliance requirements, but your reputation can be torpedoed if a hacker gets access to your customers' credit card data and uses that information for identity theft.  Security matters.  In fact, I'd say that security matters more than "going live" does.

So your priority may be, in this example:

  • Security,
  • Usability,
  • Time-to-Release,
  • Flexibility,
  • Reliability,
  • Scalability,
  • Performance,
  • Maintainability,
  • Testability, and
  • Interoperability.

This means that the business is saying something very specific: “if you cannot get security or usability right, we’d rather you delay the release than ship something that is not secure or not usable.  On the other hand, if the code is not particularly maintainable, we will ship anyway.”

Now, that’s something I can sink my teeth into.  Basically, the “Time to Release” attribute is a dividing line.  Everything above the line is critical to quality.  Everything below the line is good practice.

As an architect sitting in the “reviewer’s chair,” I cannot imagine a more important dividing line than this one.  Not only can I tell if an architecture is any good based on the criteria that rises “above” the line, but I can also argue that the business is taking an unacceptable sacrifice for any attribute that actually falls “below” the line.
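The dividing-line reading of the ranked list can be sketched directly. The attribute names mirror the example above; treating everything above "Time-to-Release" as critical-to-quality is my interpretation of the idea:

```python
# Sketch: treat "Time-to-Release" as a dividing line in the ranked list.
# Attributes above the line are critical to quality (worth slipping the date
# for); attributes below it are good practice (ship anyway).

ranked = ["Security", "Usability", "Time-to-Release",
          "Flexibility", "Reliability", "Scalability",
          "Performance", "Maintainability", "Testability", "Interoperability"]

line = ranked.index("Time-to-Release")
critical_to_quality = ranked[:line]   # delay the release for these
good_practice = ranked[line + 1:]     # ship even if these fall short

print(critical_to_quality)  # ['Security', 'Usability']
```

A reviewer can then ask two separate questions of a design: does it satisfy everything above the line, and is the business knowingly accepting the tradeoffs below it?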

So, when you are considering the different ways to stack-rank the quality attributes, consider adding the attribute of “time to release” into the list.  It may offer insight into the mind, and expectations, of your customer and improve your odds of success.

Introducing: Ecosystem Quality Attributes

August 31st, 2010 | Enterprise Architecture

There are benefits to taking an idea from one domain and applying it to another.  We all know of the famous case of "software patterns," which emerged from the concept of architectural patterns developed by Christopher Alexander for the world of building design.  Similarly, we have recently seen the emergence of checklists in medicine, an idea borrowed from other complex domains (like airplane piloting).

I’m going to follow in that long path of “cross-domain pollination” to take an idea from software architecture and apply it to business architecture.  Not a big stretch for an Enterprise Architect, I know, but I’ve not seen this idea discussed elsewhere.  (Just because an idea is obvious, that doesn’t mean people will think of it: witness the length of time it took to add wheels to luggage!)  That said, I leave open the possibility that prior art exists, and that I’m simply not aware of it.  If that is the case, please don’t hesitate to point it out.

The concept I’m going to borrow from software architecture is that of System Quality Attributes.  They are the various “-ities” that a software system exhibits. These include Scalability, Reliability, Maintainability, and many others.  System Quality Attributes can be used to measure the ability of the system to meet business needs.  There are lots of ways to use SQAs but I am going to focus on one specifically valuable practice.  In my opinion, the best thing you can do with software quality attributes, during system planning, is to prioritize them.

Prioritizing the relative importance of a system’s quality attributes, early in system planning, can have a dramatic impact on the design of the system.  The design is simpler, and more intentional, because the goals of the system are more clear.  A prioritized list of System Quality Attributes provides “guard rails” for the design of a system.  Creating this priority, and then driving a design to meet it, is the domain of the solution architect.

Now for the new concept: Ecosystem quality attributes

Ecosystem quality attributes are the specific measurable attributes of a coherent ecosystem of business processes, information systems, and human actors constructed to deliver the required capabilities demanded by an organization’s business model.

At the highest level, Ecosystem Quality Attributes may be applied across an entire operating model.  (For the sake of this approach, an operating model is the widest example of a business ecosystem.)  EQAs may also be applied at the level of end-to-end business processes within an operating model. 

Hypothesis: The relative priority of Ecosystem Quality Attributes can have the same dramatic effect on ecosystem design as we’ve observed at the system level through the prioritization of System Quality Attributes.

Managing the relative priority of these attributes for each business model, influencing the emergence of the operating model to deliver it, and then governing the systems within that ecosystem to ensure that it comes into being, is the domain of the Enterprise Architect.

Attribute Definitions

In this section, I will outline a relatively useful set of Ecosystem Quality Attributes (EQAs) that an Enterprise Architect can use to measure their business ecosystem.

Note that Ecosystem Quality Attributes measure a business ecosystem, and therefore must include information that is not available unless you work outside the “boundaries” of IT.  In other words, using a system of EQAs to measure a business ecosystem is a business method, not an IT method.

  • Operating Model Alignment: A measure of how well the ecosystem of processes, information, systems, and roles aligns to meet the business model and operating model requirements of the enterprise.  The business model places specific requirements on the ecosystem, requirements which may change as external influences, opportunities, customers, and markets change.  Systems that do a poor job of keeping up with the changing requirements of the market incur a "tax" on customer loyalty, top-line revenue, customer service costs, and operational efficiency that is difficult to address without systemic change.
  • Federation Consistency: A measure of how well the ecosystem supports, defends, and enforces the vertical division of duties, responsibilities, and accountabilities demanded by a federated decision structure.  Federated decision structures are very important in each of the four CISR operating models, but they apply in different ways with different amounts of federated control.  The ability of the system to keep "true" to the principles of federation intended for the ecosystem, with a minimum of unnecessary stress, is reflected by this measure.
  • Collaborative Maturity: A measure of how well the people and processes within an ecosystem support collaboration, ranging from information sharing at the low end, through continuous measurement as a normal practice, to continuous improvement at the high end.  Core contributors to collaboration effectiveness, like shared commitment, consistency, culture, and expected level of rigor, all play into the level of collaborative maturity of a business ecosystem.
  • Dependency Risk: A measure of the relative strength of the ecosystem as indicated by the number of dependencies taken on immature capabilities, information sources, systems, and shared processes.  A business process that takes a dependency on a weak supporting system accepts an increased level of risk as a result of that dependency.  Business leaders may decide to reduce the number of dependencies or increase the maturity of the supporting systems, based on their preferences for business efficiency and effectiveness.
  • Information Consistency: A measure of the consistency of the core information elements across the breadth of an operating model.  A low level of consistency drives a "hidden tax" on efficient operations and business intelligence that is difficult to quantify.  Higher consistency reduces this tax, improves information velocity, and can have dramatic impacts on the ongoing costs of maintaining a business intelligence infrastructure.
  • Interaction Complexity: A weighted measure of the number of interaction points between independently managed business elements (business units, information systems, master information stores) needed to perform core business processes within the ecosystem.  Weight is applied to interaction points according to the level of standardization and isolation in the interactions themselves, so that factors that lead to increased cost of ownership drive up the measure.
  • Readiness Complexity: A measure of how difficult it is for stakeholders in the ecosystem to learn to behave within the expectations of the business processes and culture of the ecosystem.  This includes readiness with respect to key scenarios, customer requirements, conceptual ontologies, key decisions, governance mechanisms, implementation details, systemic problems, planned roadmaps, change project status, and the outstanding list of incidents.
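As one illustration of how an EQA could be operationalized, an attribute like Interaction Complexity could be approximated computationally. The interaction data and the weighting scheme below are invented assumptions, not a defined standard:

```python
# Illustrative calculation of an "Interaction Complexity" score: count the
# interaction points between independently managed business elements and
# weight each one by how non-standard and poorly isolated it is, so that
# costly-to-own interactions drive the measure up.
# The elements, interactions, and weights are all invented assumptions.

# (element_a, element_b, standardized, isolated) for each interaction point
interactions = [
    ("CRM", "Billing", True, True),
    ("CRM", "Marketing DB", False, True),
    ("Store systems", "E-commerce", False, False),
]

def interaction_weight(standardized: bool, isolated: bool) -> int:
    """Base weight of 1, with penalties for non-standard or leaky interactions."""
    return 1 + (0 if standardized else 2) + (0 if isolated else 2)

complexity = sum(interaction_weight(s, i) for _, _, s, i in interactions)
print(complexity)
```

A score like this is only useful for comparing alternatives within one ecosystem over time; the absolute number means nothing outside the chosen weighting scheme.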


Open question: is this the “right” list?  Am I missing elements?  Are there attributes that are not important from an ecosystem standpoint, and if so, why?  Answering these questions will require research, and is beyond the scope of this article.  I am more than willing to consider this list to be an “initial draft” for the use and refinement of the EA community. 

Prioritization and Tradeoff Method

It is a well-accepted premise, in software architecture, that there is no such thing as a perfect system.  A system has to be optimized for its use, and in doing so, the designer of the system has to apply tradeoffs.  We accept this idea without thinking in our daily lives: that a car can be designed for speed, or towing capacity, or gas mileage, but you cannot optimize for all three.  You have to set up your priorities, and optimize the system on the basis of those priorities.  A Sport-Utility Vehicle may put a priority on towing capacity first, acceleration second, and gas mileage last.  A small commuter’s car may reverse those priorities.  Both are acceptable.

I posit that the same is true for business ecosystems.  There is no perfect operating model or end-to-end business process.  Each is designed to suit the specifics of the business that demands it, and the business culture that drives it.  Each business ecosystem has to be optimized for specific quality attributes, and priorities must be set.

So the challenge of this level of business architecture is to illustrate the importance of these attributes and host a discussion, among your business stakeholders, about the relative priority of each.  The deliverable is a simple document illustrating that priority and commenting on the intentional tradeoffs that result. 

Here are some of the questions that your business leaders will consider:

  • How important is it that your business be constantly improving?  Is it more or less important than the need for information consistency? 
  • Does your business culture foster interdependency or would you prefer federation? 
  • Do you need to be able to create a copy of yourself, where readiness matters, or can the business take on large-scale interaction complexity in order to create market differentiation for its products and services?


The process works like this:

  1. Present the ecosystem quality attributes (in concrete terms, with example tradeoffs) to a set of senior execs and host a discussion.
  2. Create a simple document resulting from the discussion that outlines the “decision criteria” that managers should use when improving their processes and systems.
  3. Run that document past the execs to make sure that they agree with your distillation of their ramblings, and with your plan to communicate the results.
  4. Communicate the resulting priorities to mid-level managers, key subject matter experts, and the folks involved in making changes to the business (especially Quality, BPM, and IT professionals who are frequently involved in tradeoff decisions).


Value proposition

So why should we host this discussion?  What is the value of taking the time of senior business leaders to make them consider the relative importance of complexity, consistency, readiness, (etc) in their business? 

The value is in building a company that knows what it wants, knows how it will go get what it wants, and wastes as little time as possible debating the relative merits of obscure decision points or fighting political battles based on competing business goals.  The value is in clarity.  If you WANT a company that is able to replicate itself at the drop of a hat, say so.  If, on the other hand, you want a company that uses a complex set of interactions to create a wide array of different services in the marketplace, saying so will reduce the internal "churn" that occurs when smart people are left to their own guidance about what is the "right thing to do."

Example (Help Wanted… Looking for victims volunteers)

When I outlined this blog post, I thought “I will describe this concept with an example, here.”  However, as I got to this section, I decided that it would be more effective to ASK for your input than to tell you what an example should look like.  So… I’m looking for volunteers.  If you are an EA or a Business Architect, and you’d like to submit yourself to a two-hour telephone call where I ask you probing questions about your business model, business culture, and business behavior, then I’d like to hear from you.  Taking your input, I’d like to produce a couple of sample “ecosystem quality attribute prioritization documents,” and using that experience, refine the method. 

So if you are game for a phone call, drop me a line.  I will be in the New York/New Jersey area in about two weeks, and in the Chicago area for a few more days, in mid September.  If you’d prefer to meet in person and it works out to be convenient, perhaps we can do this face to face.  Respond by clicking the link labeled Email Blog Author in the upper right hand corner of the page.


I don’t find it “startling” or “novel” to say that Enterprise Architecture should directly impact the structure, function, or responsibilities of various business units.  My readers know that this is not a new proposition for me.  To the contrary: I believe that an EA that does NOT have the intent of impacting the business itself (in terms of business unit alignment, processes, roles, scope, funding, or vision) is not performing the role of an Enterprise Architect. 

The EQA concept and method is very much in the domain of Enterprise Architecture, not IT.  This method is concerned with the measurement of the structure and function of the business itself.  Where that structure and function is influenced by software systems, then IT folks would provide input.  Information Technology is not, however, a predominant concern. 

That said, the business method I described borrows shamelessly from the concept of System Quality Attributes and the ATAM architectural method.  I believe that the concept is sound and that a rigorous approach to business architecture, along the lines outlined here, has the potential for providing clear, simple, and elegant guidance to the army of business people (including IT folks) that are charged with taking the often-nebulous intent of business leaders and translating it into an effective, intentional, enterprise.

As always, dear reader, I’d love to hear your feedback.

Design for Business Agility: Can Mathematical Models be developed?

June 17th, 2010 | Enterprise Architecture

Sometimes, if you multi-task two different activities, your mind finds connections between them that would not be otherwise obvious.  I’m listening to a podcast on Design for Six Sigma and Design for Innovation in Manufacturing.  At the same time, I’m writing out e-mail on an integrated systems design that I’m working through with my internal organization.

So what occurs to me?  There are about a dozen different possible system designs that I could consider. I have built one design, but how do I know that it is right?  I can use principles and practices, but in reality, those principles are based on the experiences of other people, not science or math.  Just empirical observation.  My principles are based on good guesswork.

In a manufacturing environment, I could create a mathematical model, in software, that shows how changes to processes would impact the speed and quality of manufactured products.  I can consider different designs, and then introduce disruptive events, and watch the results.  But I cannot really do that in a software system design.

Here’s what I want to do.

I want to model a set of software services, with responsibilities and message composition built up, in a reasonable model of a solution.  I’d like to create four or five different models of the same solution, using different services (different responsibilities, different message composition).  Then I’d like to turn on simulation of each model to make sure that it performs reasonably well.  I can tell you now that all of these designs will probably work.

Then I want to introduce disruptive events like business change.  I’d like to know which system designs are most able to stand up to changes in business strategies.  That is: which system designs will necessitate patterns of deployment behavior that create an obstacle to agility?

In other words: Which designs, once implemented, drive people to make slow changes when fast changes are needed?
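A very rough sketch of what such a simulation might look like, assuming we model the services as a dependency graph and treat a business change as a ripple through it. The service names, graphs, and propagation rule are all invented; a real model would need message composition and effort estimates, not just reachability:

```python
# Rough sketch of the proposed simulation: model each candidate design as a
# dependency graph, inject a business change into one service, and count how
# far the ripple spreads.  Designs with smaller ripples should be cheaper to
# change.  Graphs and the propagation rule are invented assumptions.

from collections import deque

# service -> services that depend on it (and would need rework if it changes)
design_a = {"Pricing": ["Orders", "Quotes"], "Orders": ["Billing"],
            "Quotes": [], "Billing": []}
design_b = {"Pricing": ["Orders"], "Orders": [], "Quotes": [], "Billing": []}

def ripple(graph: dict[str, list[str]], changed_service: str) -> int:
    """Breadth-first count of services touched by one business change."""
    seen, queue = {changed_service}, deque([changed_service])
    while queue:
        for dependent in graph[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return len(seen)

print(ripple(design_a, "Pricing"), ripple(design_b, "Pricing"))  # prints: 4 2
```

Running the same disruptive event against four or five candidate designs, as proposed above, would at least rank them by blast radius, which is a crude but measurable proxy for agility.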

Principles and Patterns are based on experience.  That’s great.  But science and math should be considered as well.  Any ideas?