
Feedback requested: Information driven process design

December 17th, 2008 | Enterprise Architecture

An esteemed associate of mine asked me recently if I believe that a conceptual information model, created and delivered independently from a process model, can be considered useful when attempting to improve a business.  In other words, if you have a conceptual information model, can you use it directly, or do you need to produce a process model as well?

The answer, as is typical of EA answers, is buried in the question.  If the goal is to improve a business measurable (like customer satisfaction, or average dollars per order, or customer acquisition cost), then the information model is not useful by itself.  A process model that illustrates how the information is generated and managed must also exist.

So we will often need to develop both a conceptual model of a business and a process model for the business… but which comes first?  Must they be done in parallel?  Or should an architect create one before the other?

Personally, I know of cases where a process model existed long before a conceptual model did, and vice versa, so clearly neither effort is contingent upon the other.  In fact, in the situation I am in right now, the business has defined a rich process model that has grown out of date.  I have separately developed a conceptual information model that includes concepts considered important by the stakeholders.

Now comes an interesting question: how do we take an updated conceptual information model and use it to improve an existing (but dated) process model?

I have my ideas, but I’m wondering if you, gentle reader, have specific ideas to share as well?  I’ll outline my thinking, but I invite a discussion: is there a better way?

Situation: a project team finds that they have a conceptual information model, and/or business vocabulary, that is not in sync with the processes that the business says they want to standardize upon.  How do we use one to improve the other?

Nick’s method:

  • Step 1: ensure the conceptual model reflects the complete breadth of the process model.  This requires going through the process model, identifying all elements referenced, and ensuring that they are correctly represented in the information model.  Capturing nouns, verbs, and relationships is key to this step, as are the negotiation skills needed to get everyone to agree on the resulting diagram.
     
  • Step 2: identify entities on the information model that are key entities.  Indicators of a key entity:  (a) many different stakeholders define the entity as important to their work, (b) the entity is necessary to model the primary relationship between two other key entities, or (c) the entity is part of a key business measure.  An example of the third indicator: if the business scorecard includes a measure of “number of open incidents” then the term ‘incident’ is a key entity.
     
  • Step 3: establish dependency relationships for key entities.  It is common for one data entity to depend upon another.  The ‘order’ entity depends upon the ‘product’ entity, for example (in most businesses, it is difficult to order a product that the business does not have in their catalog). 
     
  • Step 4: define a loose process model that describes each of the lifecycle events of the key entities on a timeline: when is the entity data created?  When is it used?  When is it updated?  When is it archived? When is it deleted?  Drill down on the steps to identify where specific information must enter the process in order to manage the information.
     
  • Step 5: compare the newly generated “loose” process model to the out of date process model in existence.  Use the new one as a guide to making incremental changes to the existing process model. 
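Step 4 can be sketched in code.  This is a minimal, hypothetical illustration (the entity names, event names, and the `loose_process_model` function are my invention, not part of any real methodology): given a set of lifecycle events per key entity, we can generate a first-cut “loose” process model by ordering the events on a timeline.

```python
# Hypothetical sketch of Step 4: derive a "loose" process model from
# the lifecycle events of key entities.  Names are illustrative only.

LIFECYCLE_ORDER = ["created", "used", "updated", "archived", "deleted"]

def loose_process_model(entities):
    """Order each entity's lifecycle events on a single timeline.

    entities: dict mapping entity name -> set of lifecycle events.
    Returns a list of (event, entity) steps, grouped by lifecycle stage.
    """
    steps = []
    for stage in LIFECYCLE_ORDER:
        for name, events in entities.items():
            if stage in events:
                steps.append((stage, name))
    return steps

key_entities = {
    "product": {"created", "used", "updated", "archived"},
    "order": {"created", "used", "updated", "deleted"},
}

model = loose_process_model(key_entities)
# The dependencies from Step 3 ('order' depends on 'product') would
# further constrain this ordering before comparing it to the existing
# process model in Step 5.
```

The output is only a skeleton; the drill-down in Step 4 (where specific information must enter the process) is where the real analysis happens.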

OK… that’s a swag.  Does anyone have a reference to a well documented and sound methodology for taking a conceptual information model and using it to improve an existing, and potentially out of date, process model?

Adopting a new technology like Oslo

December 3rd, 2008 | Enterprise Architecture

Sometimes, when something new comes along, the best way to see it being useful is to see it being used.  Think about it.  If I went back to 1960 and visited a family somewhere in the Midwest of the USA, and explained a “computer chip” to them, would they see value?  Maybe.  Probably not.  Life is just fine as it is, thank you. 

But if I showed them how I could use a computer chip to make a simple and useful device, that could do the trick.

Oslo is a new technology for modeling, and many Microsoft-platform developers are unfamiliar with model-driven development in general.  I don’t think the best thing is to say “it’s cool” but to say “here is how you use it to solve a problem.”

Microsoft IT is looking to adopt Oslo in a big way, and along the way, we will be going through all of those same growing pains.  We use modeling tools in many areas, and some teams are quite sophisticated in their use of modeling, but Oslo is a major step forward for the Microsoft platform, and we are excited to be adding this new tool to the arsenal.

As we do, I hope to be able to come back to you, in this forum or in some other one, to talk about the useful problems we were able to solve using Oslo.  I believe that “showing” is better than “telling.”

But for those of you who are still curious, please jump over to the Oslo Developer site and download the CTP or read up on some excellent material.  I especially like this blog post (Oslo == 42) for helping to put Oslo into context.

Creating a distinction between business services and SOA services

November 30th, 2008 | Enterprise Architecture

I’m always a bit dismayed when I hear the following terms mixed up, or combined: SOA service and business service.  In my mind, these things are different.  In one sense, they are related, but indirectly.

A business service is a function (or capability) of the business that is offered to one or more customers.  Those customers are often  internal, because this scenario is often applied to corporate supporting functions. For example, the accounting business unit may provide “accounts payable” services to every business division of an enterprise.  Those divisions are internal customers.  The business unit is accounting, and the business service is “accounts payable.”

In some cases, the customers of the function may be both internal and external.  Many years ago, the Carlson company took their marketing division and not only made it into a shared function that their various internal divisions could use, but also enabled that division to offer its services to the general market.  They provide a list of shared business services used by both internal and external customers.

The people who use shared business functions are “businesspeople” of all stripes.  They have work to do, and a business service is simply a way to do it.   A shared business service includes responsibilities, and therefore people who are responsible.  It is a kind of “sub-business” that has customers, and processes, and capabilities, and information.  In many companies, IT is run as a shared business service, providing technology services to many areas of the business. 

A SOA service is a different animal altogether.  Service Oriented Architecture (SOA) is an architectural style.  That means it is a set of software design patterns.  These patterns are united in their support of a basic set of principles.  The people who use SOA are people who write software.  (If you compose an application, even if it is simple to do, you are writing software.)

The logical data model that encapsulates this concept is below.  This is a very tiny part of the data model derived from our traceability model, which allows us to recognize the interdependencies between business processes, applications, and business units.  At the top of the image you see business services.  SOA services are on the lower right.

A business unit may provide zero or more business services.  Not all of the capabilities required by a business unit may be involved in a business service. 

SOA provides the ability to share features.  Those features may provide information, or calculations, or data manipulation.  They may also include the limited automation of some elements of a business process.  SOA services are provided by “installed software” (we use the term “application” many times for this entity… a different blog post someday…).

[image: Business Service vs. SOA Service data model]

(note: I updated the image about 12 hours after posting this blog, due to an error in the original image -ANM)
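The distinction can also be sketched as a toy data model.  To be clear, these class and field names are my own simplification for illustration; the actual traceability model is much richer.

```python
# A minimal sketch of the business-service vs. SOA-service distinction.
# Class and field names are illustrative, not the real traceability model.
from dataclasses import dataclass, field

@dataclass
class SOAService:
    name: str                 # a shared software feature

@dataclass
class InstalledSoftware:
    name: str
    provides: list = field(default_factory=list)  # SOA services it exposes

@dataclass
class BusinessService:
    name: str                 # a business function offered to customers

@dataclass
class BusinessUnit:
    name: str
    business_services: list = field(default_factory=list)  # zero or more

# A business unit offers business services to (internal or external)
# customers; SOA services are provided by installed software.
accounting = BusinessUnit("Accounting", [BusinessService("Accounts Payable")])
erp = InstalledSoftware("ERP system", [SOAService("GetInvoiceStatus")])
```

Note that nothing links a `SOAService` directly to a `BusinessService` here: the connection runs indirectly, through the business processes that call on shared features.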

The point of this post is to provide sufficient context to challenge the notion that SOA provides shared business services.  It does not.  SOA provides shared features that many business units call upon.  Those features are required by the business processes within those business units. 

Note to responders: before you flame me, take the time to try to map your concepts to the diagram above.  You may find that if you look for your concepts, and not your words, that you are simply using different words than I am to refer to the same concepts.  Disagree with me about concepts and I’m interested.  Disagree with me because I don’t use a word in the same way that you do, and we will probably not have a very interesting discussion.

Software Reflects The Process That Creates It

November 26th, 2008 | Enterprise Architecture

Of all the ‘laws of software’ that I subscribe to, this one is one of the most fundamental, and unwavering.  I cannot find an exception to it, and years of experience reinforce it for me.  I can look at a chunk of source code, or an operations manual, or even a build script, and see the effects of the software development process used to create the artifact.

Process affects architecture.  If you use agile techniques instead of iterative spiral techniques, you will not only get your results in a different amount of time, with features appearing in a different sequence, but the software itself will have a different structure, different patterns, and different interfaces.

Just making an observation.  Probably not even a controversial one, but one that bears making. 

Software reflects the process that creates it. 

Corollary:

If you want to improve the quality of the software you produce (regardless of how you measure quality), you can change tools, and you can change information, and you can change training, to your heart’s content… but the big effects will come from changing the process.

Using the PMO to measure the behavior of the customer

November 22nd, 2008 | Enterprise Architecture

There are a great many products on the market these days that provide information about a set of projects.  The idea is to let the stakeholders know how well their money is being spent.  Information Technology departments often get criticized for "always asking for money" but never showing value, so Project Management Offices (PMOs) have been adopting these tools at an increasing rate.

Most tools capture basic statistics, and then let the IT group add whatever project stats they want.  Today, I want to examine those additional statistics: what measures should the project management office be tracking?

What logic leads to these measurements anyway?  Plenty of reasons.  Here’s my take:

[image]

The key to understanding the metrics is to look at the outcome.  We want to improve the success of IT projects.  The measurements are there to encourage the practices that lead to project success.

Are we measuring the right practices?  What are the practices that lead to project success? 

We can guess, or we can go find projects that are successful and ask the project leaders what they did.  We can do this for dozens of projects, and find common actions.  We can look for the "critical behaviors" that led to success, and measure them.

Some of those things are in the typical scorecard. 

  • Ensure that the requirements are stable and well described
  • Ensure that the direction of the result is chosen, understood, and agreed to by the customer
  • Ensure that the project team is making steady progress toward delivering the final solution
  • Ensure that emerging risks are recognized and reported as soon as possible.

But is this enough?  Are these all of the behaviors that account for success?

If you ask a successful project manager about the things that lead to project success, you may hear things like this:

  • "We had a good rapport with the customer.  When we needed something, he went all out to get it for us."
  • "The customer was part of the team.  Her door was always open, and she made decisions quickly."
  • "The customer really backed us up.  If he had a tough call to make, he’d go get the support from other stakeholders."
  • "When we needed to start user testing, our project sponsor organized all the business resources and made sure they ran the tests they committed to."

The project scorecard is measuring the success of IT team behavior, but not the success of business team behavior, and as a result, the scorecard cannot possibly predict the success or failure of the project. 

If building a system requires a partnership, then we need to measure the customer’s behavior as well.  Assuming that we do, who will look at the numbers that show a customer that is not being responsive?

Customers are business people.  They have managers too. 

Think about it.  The project scorecard can be used to demonstrate that the right behaviors are happening on both sides.  After all, if a project fails because the business sponsor was unwilling to buy in to the approach, or wouldn’t sign off on the interface design, or because the business users wouldn’t participate in the test process, why should the IT team take the rap for missing the dates or overruns in cost?

Here’s another benefit: if your project team resents the PMO, because they seem like the "project police," then adding the customer’s behavior to the metrics can get the project team to sign up.  After all, a complete scorecard is a fair scorecard.  If the project team can point to the scorecard to demonstrate that the business sponsor is being lazy or uncooperative, then they are far more likely to support the PMO.
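As a sketch, a “complete” scorecard might track both sides.  The metric names and thresholds below are entirely hypothetical; the point is only that customer behaviors sit next to IT-team behaviors in the same report.

```python
# Hypothetical project scorecard measuring both IT-team and customer
# behaviors.  Metric names and threshold values are illustrative.

def scorecard_health(metrics, thresholds):
    """Return the metrics that fall below their agreed thresholds."""
    return sorted(name for name, value in metrics.items()
                  if value < thresholds[name])

metrics = {
    # IT-team behaviors
    "requirements_stability_pct": 90,
    "milestones_on_time_pct": 85,
    # customer behaviors
    "sponsor_decision_turnaround_score": 40,   # slow decisions
    "business_test_participation_pct": 95,
}
thresholds = {
    "requirements_stability_pct": 80,
    "milestones_on_time_pct": 80,
    "sponsor_decision_turnaround_score": 70,
    "business_test_participation_pct": 80,
}

at_risk = scorecard_health(metrics, thresholds)
# Here the scorecard flags the unresponsive sponsor, not the IT team.
```

A scorecard shaped like this gives the customer’s managers something to look at, too: the business-side metrics have owners on the business side.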

The business value of elegant design

November 3rd, 2008 | Enterprise Architecture

In my last post, I highlighted the design process, suggesting that designers and architects should consider using creativity, in addition to methods and patterns, to build a truly useful system.  In this one, I’d like to talk about the business value of this idea.  What does the business get by adopting good design practices?

Before I go too far, I’d like to pass along a recommendation for a book on the subject, "Sketching User Experiences: Getting the Design Right and the Right Design" by Bill Buxton (link).  I have been told that this book will eloquently explain what it means to use good design principles and why every business will benefit.  I have not read the book (yet), so my opinions are unfiltered.  I speak from personal experience of 28 years in the software business, including my focus on the field of "human-computer interaction" (HCI) while attending university and years of passion around creating simple, effective, easy to use systems. 

I’m also taking a page from a friend and trusted colleague, Peter Moon, who has been sharing his passion for design with me over the course of the past year.  He inspired me to write these posts.  Thank you, Peter.

Cycles of innovation

First off, I’d like to clarify what I mean by "using wild creativity."  The process of design, IMHO, is a creative one, but not a crazy one, and we are not seeking ‘perfection.’  You can use creativity without blowing the budget or going into ‘analysis paralysis.’  First thing is to understand the process itself, and then to understand when, and how, to apply it. 

When I’m talking about using creativity, I’m talking about a creative process, the result of which is to expand the number of design choices available.  You take a problem and brainstorm out different possibilities in what I call an “expansion cycle.”  That gives you many choices to choose from.  Then you evaluate each one, dropping some of the choices for good reasons like feasibility, cost, alignment, schedule, and risk.  This happens at a ‘reduction point’.

Each time you do this, your number of design choices is more constrained, and your reduction cycle brings you to a narrower range of choices.  After a few cycles, you get a choice that you can live with and you commit to using it.

[image]

The amount of time that this process takes does not have to be any longer than the normal design cycle, especially if you are using agile principles and you have the customer close by.  You don’t commit to expensive and time-consuming technical prototypes until about the third cycle. 

The first expansion cycle is done on paper and white boards.  Same for the second one.  Sketch.  Scribble.  Be creative.  Wave arms.  Use the cheapest, quickest, most flexible tool that will work.  Paper is good.  Some folks have adapted tablet input devices for sketches.  That’s a pretty good idea, IMHO.  Just keep it creative.
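The expand-then-reduce rhythm can be sketched as a loop.  This is a toy illustration: `brainstorm` and `survives` are placeholders for the real creative and evaluative work, and the numbers stand in for design ideas.

```python
# Sketch of the expansion/reduction design cycle.  brainstorm() and
# survives() stand in for real brainstorming and evaluation.

def design_cycle(seed_ideas, brainstorm, survives, rounds=3):
    """Alternate expansion (brainstorm) and reduction (filter)."""
    candidates = list(seed_ideas)
    for _ in range(rounds):
        expanded = []
        for idea in candidates:
            expanded.extend(brainstorm(idea))            # expansion cycle
        candidates = [i for i in expanded if survives(i)]  # reduction point
    return candidates

# Toy example: "ideas" are numbers; each brainstorm doubles the options,
# and the reduction point keeps only the "feasible" ones.
result = design_cycle(
    seed_ideas=[1],
    brainstorm=lambda i: [i * 2, i * 2 + 1],
    survives=lambda i: i % 3 != 0,
)
```

The structural point survives the toy: each round widens the field before narrowing it, so the surviving choices were actually compared against alternatives rather than being the first idea anyone had.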

Design is not only for user interfaces

One beef I have with many discussions of design is the notion that this cycle of creativity is useful only for user interfaces, with little discussion of how to apply the concept to system architecture.  The reality is that the architecture of the system is a construct built through the creative use of various architectural and design patterns. 

When sketching out design choices for system architecture, you can consider different patterns for integration, data management, logical representation, rules management, flexibility, cross-cutting concerns, etc.  It is just as creative, but the effect on the final product is not visual, but rather a quality effect.   Your system quality attributes benefit: flexibility, reliability, scalability, security, throughput, etc.  So don’t take the things I’m saying as "applying only to user interface design."  I include U/X but do not limit the use of design to U/X concerns.  It’s a good method.  Use it everywhere it works. 

Understanding what customers value

When you are looking for business value, you have to look for any changes in measures of value… things that our six-sigma friends call "CTQ" or "Critical to Quality."  These are "the things that are important to the users."  When you listen to your customers, you find out what is important to them.  Don’t assume you know.

This is more than collecting requirements.  This is about finding out what the customers think is important… what they value.  Look at the decisions they have made, not just the things they say.  Listen to their language, not just their words.  If someone is effusive about using "simple software with limited choices" but they use really complex software on their desktop, then drill in… there’s more there. 

Understanding the customer is the first step in designing a solution, because only when you know how to measure your success in the terms that the customer would recognize, only then can you be effective in selecting a good design.

The business value of meeting customer value

Customers don’t share all of their requirements with IT, even when it is in their best interests to do so.  (Obvious, right?)  But who is to blame for failing to capture requirements?  Both of us.  We get so wrapped up in functional requirements (the things the system has to do) that both customers and software folks can lose track of the intangible yet important things that drive purchase and use decisions: feel, crispness, comfort, friendliness, ease, and a connection to the metaphors that the customer is familiar with.

This is what Apple got right with the iPhone and what Google is chasing with their personal device.  This is why Amazon’s Kindle is pretty cool… not just because these devices are simple, but because they are appealing.

Example 1: Here is what happens when you deliver software that works wonderfully well, but no attempt was made to create elegant design.  Note the milestones: how frequently does the user have to request an app?  Also note that I indicate the time between funding a new version and getting it. Are they happy with the app when they are waiting for the next version?  Maybe, maybe not.  IMHO, the answer is quite often "no."  This is the unhappiness that drives cost. 

How much money does the enterprise spend on this app over its lifecycle? 

[image]

Example 2: Here is what happens when you deliver software that works well but feels great too.  Some things to note: fewer requests for change, and further apart.

Consider the cost argument: how much does the enterprise spend on this app over its lifecycle?  More or less than above?

[image]

The total cost of ownership (TCO) includes costs incurred to maintain and update an application for many cycles.  The longer an application goes between cycles, the lower the total cost.  And an investment in good design can dramatically stretch out the time between maintenance cycles on an application.

Therefore, it is cost effective to spend a bit of time using creativity in developing new applications, not only in user experience, but also in the structure and patterns of the application’s architecture.  The cost of any one project may be affected (or not) but the TCO will go down… and that is what we all pay for.
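A back-of-the-envelope calculation makes the TCO argument concrete.  Every number below is invented purely for illustration; the shape of the arithmetic is what matters.

```python
# Invented numbers, purely to illustrate the TCO argument: longer gaps
# between maintenance cycles mean fewer cycles over a fixed lifetime.

def tco(initial_cost, cycle_cost, years_between_cycles, lifetime_years):
    """Total cost of ownership: build cost plus recurring update cycles."""
    cycles = lifetime_years // years_between_cycles
    return initial_cost + cycles * cycle_cost

# App delivered without design investment: users demand changes yearly.
plain = tco(initial_cost=100, cycle_cost=60,
            years_between_cycles=1, lifetime_years=6)

# App with an up-front design investment: three years between cycles.
elegant = tco(initial_cost=130, cycle_cost=60,
              years_between_cycles=3, lifetime_years=6)
```

Even with a higher up-front cost, the well-designed app comes out cheaper over the same lifetime, because the expensive part is the number of maintenance cycles, not the initial build.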

The bizarre assumption of functional decomposition

October 28th, 2008 | Enterprise Architecture

I ran into a friend today and, as friends often do, we let our conversation wander over the different "broken things" in IT in general (and a few in Microsoft in specific).  One thing that I’d like to share from that conversation: a truly bizarre assumption that we teach, over and over, to new programmers… the assumption that, simply by following a "functional decomposition" process, a well-trained programmer will naturally end up with a good design.

Now, I’m no great expert on product design or graphic design or industrial design… but one thing I can say for certain: creating a good design is not the natural outcome of a simple process.  Only an excellent design process can produce excellent design.

Let me provide an example of good design from the world of products.  This picture is a picture of a footrest.  You read that right: a place to rest your feet.  Mundane, right? 

You tell me.  Does this product LOOK mundane to you?  How about the fact that it promotes movement and blood flow while serving its primary function?  (Special call out to the design firm, Humanscale, for creating this beautiful product.)

[image: Humanscale footrest]

Design is not just a process of decomposing a problem into its constituent parts.  Nor is it a process of creating objects that contain functionality and data… yadda-yadda-yadda.  There are dozens of techniques.  Don’t read my blog post as a slam on any one of them.  I’m slamming anyone who thinks that they should reach a conceptual architecture, derive one solution, and go… without considering alternatives.

Design is a process where you consider the problem and then propose multiple, competing, wildly creative solutions.  You then narrow down your brainstorm and weed out the bits that won’t work… and then you propose multiple, competing, wildly creative refinements… and the cycle continues.  This happens a couple of times, until you have refined your solution by being creative, then logical, then creative, then… you get the picture.

When was the last time you were in a design review and the team being reviewed came in with multiple solutions, asking for help to narrow it down?  Really?

In no other field is the word ‘design’ so misused as it is in software development.

I have not met many folks that use a process whereby they design multiple, competing, wildly creative solutions in software and then evaluate them, select one or two, and go after the problem again at a finer level of abstraction. 

Not many folks at all.

Why is that?  We are living with the myth: that you can follow a simple process, produce a simple and "pretty good" solution architecture that represents a "good design".  No alternatives.  Very little creativity.

How’s that working for ya?

Non-Functional Requirements: the "All-Other" classification

October 14th, 2008 | Enterprise Architecture

I’ve seen various taxonomies of requirements.  Like all taxonomies, any set of requirement types exists to classify or partition requirements into coherent groups for further analysis.  Most break down the list of requirements into things reminiscent of "who or where the requirement comes from."

For example, one taxonomy I’ve seen recently described:

  • Business Requirements – high level business goals
  • User Requirements – the user experience needs
  • Functional Requirements – business process or functionality needs
  • Non Functional Requirements – all other requirements (like quality attributes)

Another taxonomy may be:

  • Information requirements – needs for information storage
  • Functional Requirements – needs for functionality
  • Non-functional requirements – all other requirements

I’ve seen others as well.  Most will have a category that contains "non-functional" requirements.  And that’s where my heartburn lies. 

[image: steel bucket]

When creating classifiers of a type, whether in OO, or in taxonomy efforts, it is a very good idea to avoid creating a type called "All-Other."  If you create a type called "All-Other," that tells me that you don’t really know enough about your domain, and you don’t know why you have things in your domain that you cannot classify, but you do, so you create a category for "everything I cannot classify" and throw all elements in. 

How do you know you have one of these types in your taxonomy?  If the definition of the class contains a negative, as in Non-Functional or Non-Testable or Non-useful.

Basically, the category of ‘non-functional requirements’ is an "all-other" category.
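As a toy heuristic, you can even scan a taxonomy’s category names for the tell-tale negative prefix.  This is only a sketch of the idea, not a serious tool:

```python
# Toy heuristic: flag taxonomy categories whose names are defined by a
# negative ("non-...").  Such names hint at an "All-Other" bucket.

def all_other_buckets(categories):
    """Return category names that start with a negative prefix."""
    return [c for c in categories if c.lower().startswith("non")]

taxonomy = ["Business", "User", "Functional", "Non-Functional"]
suspect = all_other_buckets(taxonomy)
```

A name defined by what it is *not* tells you nothing about what belongs in it, which is precisely the problem with "non-functional requirements."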

Over the years, software development has matured to the point where we have categories for most requirements, and they are well understood, so the stuff that falls into the "non-functional requirements" category is very constrained.  We have a coherent set because we have identified all of the elements that don’t belong there.  Yet the name remains.

I’d like to suggest that we kill off the classification of "non-functional" requirements, and replace the name with "quality metric requirements."  Basically, that’s all that is left in the modern "All-Other" requirements class: those requirements that reflect a measurable goal of system quality, usually expressed as a metric.  For example: "the online store must be available for browsing of the product catalog 24 hours per day, reliably 99.99% of the time."  Availability is a notorious ‘non-functional requirement.’

But if we replace the category of ‘non-functional requirements’ and call it a quality metric requirement, then we get three benefits:

  1. We can make the statement that all ‘quality metric’ requirements are actually derived from a measurable goal, not a fiction.  The business should not say ‘I want 2 second response time’ without explaining why that is important.  A reasonable requirement, like a 2 second response time, can be connected to the customer expectations or the business competitive strategy. 
  2. A less obvious relationship may be drawn when the business says "I need this system to be operating 99.999% of the time."   Anyone who has seen a requirement like this one knows that a "5-nines" requirement will definitely affect the cost of the solution, and probably the amount of time needed to test and deliver it.  If the customer needs this kind of reliability, they should be asked "why."  By classifying this requirement as a quality metric, and by requiring that each quality metric must be defined, it should be much easier to catch those situations where the business has gold-plated their list of requirements.
  3. By removing the ‘All-Other’ classification, we lose the temptation to use it to toss in "other" requirements that we have no real understanding of.  This forces a level of quality into the requirements gathering process.  
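Benefits 1 and 2 can even be enforced mechanically.  Here is a sketch in which every quality metric requirement must carry both a measurable metric and a business rationale; the class and field names are my invention, not a standard:

```python
# Sketch: a quality-metric requirement is valid only if it names a
# measurable metric AND the business goal it derives from.  Field
# names are illustrative.
from dataclasses import dataclass

@dataclass
class QualityMetricRequirement:
    name: str
    metric: str       # e.g. "available 99.99% of the time for browsing"
    rationale: str    # the business goal or customer expectation behind it

    def is_valid(self):
        return bool(self.metric.strip()) and bool(self.rationale.strip())

good = QualityMetricRequirement(
    "catalog availability",
    "browsing available 24 hours per day, reliably 99.99% of the time",
    "customers shop outside business hours; lost sessions are lost sales",
)
gold_plated = QualityMetricRequirement("five nines", "99.999% uptime", "")
# The gold-plated requirement fails validation: no stated 'why'.
```

Requiring the `rationale` field to be filled in is exactly the catch for gold-plating: a "5-nines" demand with no business reason behind it simply does not validate.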

So my suggestion for a requirements type taxonomy would be:

  • a) Business Ability Requirements — high level or "one liner" requirements that identify the statements of functionality we can expect to come directly from a business user.
     
  • b) Data Relationship Requirements — the understanding of logical data entities and their relationships, expressed as software requirements to model and store information that matches those data entities.
     
  • c) Reporting requirements – the understanding of the contents of documents, reports, and artifacts that are either generated by or consumed by the business processes themselves, often in the form of process artifacts.  Basically, any time your software facilitates communication between two people, or outward from a person to an external party, you would capture reporting requirements.
     
  • d) Functional interaction requirements – the requirements most easily drawn from an understanding of the processes that a user or customer will use when interacting with the software.  Functional interaction requirements specify conditions and behaviors that must be met.
     
  • e) Quality Metric Requirements – the requirements that are drawn directly from business strategy or goals, including those that recognize customer expectations for software of a particular type, and those that establish or recognize a competitive position for the company in the marketplace.  This includes the software quality attributes like reliability, availability, usability, and flexibility.

It is time to get rid of the ‘all-other’ category of software requirements. 

"Correct" is a point of view

October 11th, 2008 | Enterprise Architecture

My friends in the Agile community have succeeded in drilling a concept into my thick skull so deeply that the concept shows up in other things I do.  What is that concept?  Don’t try to build the perfect app.  Build the least complicated app that will do the job.  Let the customer tell you when you are done. 

Makes sense.  Too bad there are so many developers who still insist on making this bit or that bit of code "really solid" or "reusable" when no one is paying for anything more than "functional" and "bug free." 

So that bit of agile philosophy tends to get repeated a lot, even by me.  There are a lot of people to reach, so we hammer home the concept: do the simplest thing possible… to the point where I use it even when I’m doing the ultimate BDUF exercise: Enterprise Architecture.

We tend to say things, in EA, like this: "we are not just about building apps right.  We are about building the right apps."

But to be really honest, no one really knows what the "right" apps are.  There are no tablets of stone that contain a perfect list of applications that should be funded or that should remain in the portfolio.  We are humans and we make human judgements, using the best tools we have. 

So the trick is to remember this: "Correct" is a point of view.  If you think that a particular list of applications should exist, or should be funded, it doesn’t matter if you think you are correct.  That is your point of view.  Another person, with another point of view, may believe him or herself to be just as correct.  You have to sell your concepts (and yourself) to be impactful. Help others to see your principles and how you used them to pick your list.  Help them to share your point of view. 

The challenge is not to do the "right" or "perfect" things, but to do a good job… and not to ‘gold plate’ the decision process with tons of special justifications or long meetings.  Make a ‘functional’ decision, dare I say, ‘agile’ decision.  Use the minimum amount of fuss that produces a good result. 

That, my friends, is Agile Enterprise Architecture.