Should Enterprise Architecture rock your world?

By |2006-10-29T09:17:00+00:00October 29th, 2006|Enterprise Architecture|

In many organizations, EA is a sidelined process or an afterthought.  It is hard to be effective in that case.  In other organizations, EA is a core part of IT planning and delivery.  It is difficult to imagine EA having anything less than a pivotal role there.

The benefits of an Enterprise Architecture program are clear:

  • Fewer applications
  • Simpler applications
  • Fewer cases where the same data is mastered in multiple locations

The longer-term benefits are the really compelling part: 

  • Drop in the cost of ownership (not the cost of development)
  • More rapid development of business capabilities
  • Better business intelligence

Sounds good, doesn’t it?  If you are still in the “I’m thinking about it” stage, then consider adding a step, about a year or two in, that looks at whether you are having the effect you should be having. 

If your program is established and running, consider a process every 18 months or so to ask the same questions:

  1. Is Enterprise Architecture, as it is currently practiced in the organization, producing the benefits that it should produce? 
  2. Has the existence of Enterprise Architecture fundamentally changed the way people do business in the IT teams?
  3. Have the technologies developed or consumed changed in a fundamental way?
  4. Has the organization actually adopted, and rolled out, a major change in the technical platform that was overdue but needed EA to exist before it could occur?
  5. If things go the way they are going today, will the enterprise end up with a simpler portfolio, with fewer, better integrated, more configurable applications? 
  6. Will it happen soon enough?

Why add this step?  Because, as a change agent, you have to, first and foremost, believe your own story.  You have to convince others to change by first believing that change is necessary.  This is an elemental part of the conversation. 

However, if you believe your own story, day in and day out, you are very likely to miss one aspect of critical thinking: self-reflection.  So, you need to force yourself, and your team, to self-reflect.  Otherwise, you may think you have the best program in the world, only to wake up one day to find your program marginalized, reorganized, or dispersed.

If EA is a new bird at your company, or has been tried before with limited success, you must know that your ‘hall pass’ has an expiration date.  You cannot expect to spend a decade ‘experimenting’ with changes to the processes needed to get EA off the ground.  You don’t have that long. 

Focus on some key areas:

  • Your architects are trained and bought in.  Spend heavily but wisely.  Spend not one red cent on technical training.  Spend on process improvement, leadership, collaboration, communication, and how to drive for results. 
  • You have control and ownership over the planning and development processes that need to change.  Make sure that you can measure the adoption of process changes. 
  • You have templates and procedures in place to get the most important activities done.  You have a mechanism to change them, and you don’t wait until the last minute to do so.
  • Use Six Sigma on your own program.  Ask “What must we improve to change the results?”

I know that this advice is general, but everyone’s problems are different.  At the end of the day, a truly effective EA program, added to an average IT organization, should ROCK THE WORLD.  It can happen gradually.  It can even happen in fits and starts.  But know this: if you are running an effective EA program, the impact will be huge.

What is your impact?  If it isn’t huge, in terms of the kinds of systems you have or will have, it’s because you aren’t being effective. 

Architecture as eye candy

By |2006-10-28T03:36:00+00:00October 28th, 2006|Enterprise Architecture|

Sometimes, the architect is not really relevant.

For architectural purists, this is a shocking thing to say.  To them, I’d say that teams can choose to ignore their architect.  In fact, in some situations, they are REQUIRED to ignore their architect.  (case in point to follow).  If that is the case, then creating a model, or delivering a review, is pretty pointless.

When are teams required to ignore an architect?

I’ve come across two situations.  Perhaps there are more.

1. When the process used to select projects, fund projects, collect requirements, etc., has no mechanism for involving an architect. 

If an architect delivers a model in this environment, it MUST be ignored because there is no team that is expecting to receive the model, or trained to understand it, or governed by alignment with it. 

2. When the architect has stated goals that conflict with the desires and preconceived notions of powerful people.  If a person is used to funding IT projects without regard to anyone else’s wishes, and he finds out that an architect is saying things like “build for the enterprise” or “create fewer, simpler, more consistent solutions,” he can simply refuse to cooperate with the architect.  The architect is outside the meeting loop, outside the process. 

Delivering a model in this environment is pointless, because the team dynamics are oriented to listen only to the “customer” even though the money comes from the corporation, and the customer may not represent the corporate interests. 

In both cases, the folks involved in development are not being mean or malicious by ignoring their architect.  They are following the ‘rules’ that exist in the environment.  They have no choice.

So if you find yourself in this situation, don’t create models.  You will be wasting your valuable time.  Work on the real problem.  Find the support you need to connect the process up, so that architecture has ties into project funding, and a recognized voice in project delivery. 

Architecture makes lousy (and expensive) shelf-ware.

Alas, We must differ…

By |2006-10-27T04:09:00+00:00October 27th, 2006|Enterprise Architecture|

Udi Dahan posted an interesting reply to a recent posting of mine.  In my post, I go into detail to present a scenario where two services are coupled because the business itself is coupled.  He disagreed with my design and offered an alternative.  I will discuss his alternative and show that our designs are similar but that mine is more stable and more appropriate to the specific example I described.

I’ll see if I can add real diagrams tomorrow, when I’m on my own PC.  Right now, we will have to make do with ‘text diagrams’.  [Addendum: I did my best to create diagrams.  The large one may get cut off in your browser.  Open it in a separate window to see all of it.]

From what I can tell of Udi’s model, it looks like this:


[create-co-op-partner service]

    — writes –> [co-op master db]

    — subscribes –> [partner change events] — generated by –> [partner-master service]

    — writes –> [local partner data cache]

    — calls async –> [insert-partner-master]

    — notifies –> [original caller]

[partner-master service]

    — writes –> [partner master db]

    — publishes //change events// –> to all subscribers

My model was a bit different.  Here’s mine in the same goofy notation. 

[composite orchestration: create-co-op-partner]

    — calls sync –> [partner-master] (retrieve partner id)

    — calls sync –> [add-to-co-op-master] (retrieve co-op id)

    — notifies –> [original caller]


[partner-master]

    — writes –> [partner master db]

[add-to-co-op-master]

    — writes –> [co-op master db]


Addendum: I created this diagram to show my viewpoint, in the context of some callers.  You may need to open it in a separate window to see all of it.

Some interesting comparisons: my message exchange pattern (MEP) is less reliant on async calls.  The assumption I made is that the orchestration itself is reliable, so if it cannot call one of the downstream services, the orchestration engine retries later (perhaps dehydrating as needed).  This probably lowers scalability.  On the other hand, my orchestration has two advantages: it is simpler to build, and in the 99% sunny-day flow, it performs far better. 

So there are architectural tradeoffs between the two designs: Udi wins for scalability, while I get performance. 

What is the cost of the scalability?  Udi’s design is far more complex and thus more expensive to build and own.  Do we get so many new co-op partners every day that we need the added cost of Udi’s design?  I doubt it.  Perhaps if the example were dealing with orders, but it isn’t.  It is dealing with co-op partner agreements… negotiated legal documents.  Even very large companies may create a handful of these in a month.  So the added complexity (and cost) produces no return on investment.

The most important difference, however, is not the use of sync or async services.  In fact, both models assume that the orchestration lives in an async container, and if you started with my model, it would be a trivial change to move to async services and pub-sub.  So, while I can chide Udi’s design on the basis of cost, that isn’t my disagreement with it.  In fact, I quite like the notions of publish-subscribe and local distributed data cache.  However, his model is not “elegant” in my opinion.

The source of my discomfort is the coupling.  In my model, the ‘create-co-op-partner’ service performs ONLY orchestration.  It makes no attempt to call a local database or store cache records.  It calls only other services.  This allows the fine-grained services to be called directly by other consuming applications.  In effect, my model allows the business process to be encapsulated and separated from the fine-grained services that are called by it. 

Udi binds them tightly together.  In his model, a change to the business process affects all systems that call ‘create-co-op-partner’, while in my model, any systems that are consuming the fine-grained services would not be affected by the change.  These are three different things: two fine-grained services and one (composite) process service.  Tying the process service to one of the data services just doesn’t feel right to me. 
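As a rough sketch of that separation, here is the orchestration-only shape in Python.  All names are invented for illustration, and plain functions stand in for what would really be service calls over the wire; the point is only that the composite touches no database of its own:

```python
# Hypothetical sketch: the composite service performs ONLY orchestration.
# It holds no database connection and stores nothing locally; it just
# sequences calls to the two fine-grained services.

def partner_master_find_or_create(partner_doc):
    """Stub for the fine-grained 'partner-master' service (simulated id)."""
    return "P-" + partner_doc["name"].upper()

def add_to_co_op_master(partner_id, partner_doc):
    """Stub for the fine-grained 'add-to-co-op-master' service (simulated id)."""
    return "C-" + partner_id

def create_co_op_partner(partner_doc):
    """Composite orchestration: calls only other services, then notifies."""
    partner_id = partner_master_find_or_create(partner_doc)  # sync call 1
    co_op_id = add_to_co_op_master(partner_id, partner_doc)  # sync call 2
    return {"partner_id": partner_id, "co_op_id": co_op_id}  # notify caller
```

Because the composite owns no storage, either fine-grained service can be consumed directly by other applications without dragging the business process along.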

Which one is better?

That’s not an easy question.  I sat at my desk for an hour before coming up with a situation where my model is better, but I don’t think it is all that common a situation.  I will say that, generically, I believe that decoupling these three things from one another feels better.  That said, the best I could do to prove it is an odd case.  Here goes:

Let’s say that we are implementing the following process in our orchestration (both models have an orchestration; the same process for both): the ‘create-co-op-partner’ service is called and passed a data document that describes a co-op partner.  There is no ‘master partner id’, so we search against the master partner service (or local partner cache) to see if the partner already exists.  It does, so we get the partner id.  We then create the co-op partner with the partner id we found. 

Along comes a change to the requirements.  (No… that NEVER happens. ;-)

Our business wants another business process for the ‘spy toys’ division.  In this process, ‘create-co-op-partner’ will get a data document that describes a co-op partner.  The difference is that in this model, we don’t search for the partner first.  If no master partner id arrives on the data document, we always create a new partner first, and then create the co-op partner record.  Two different processes: two databases that need to be coordinated.

With my model, you simply create a new composite service that performs the new orchestration, calls both fine-grained services, and move on.  No changes to the existing services.  No regression testing.

With Udi’s model, you have two choices: either change an existing service to support both processes (and incur regression testing costs) or create a new service that performs the new orchestration rules as well as the fine-grained database work to add the co-op partner.

Let’s say we have Udi’s model in production and we assign this new requirement to Tom, our very cool support developer.  Of course, our good support programmer chooses to create a new service.  He doesn’t want to regression test thousands of lines of code that he didn’t change.  So he copies the existing service, changes the process code, and puts it out on the test server.  Of course, Tom would also realize that he has copied the code for the fine-grained database work to two places.  That code is not different between the services, but it could get there.  Fixing a bug would have to happen twice.  Bad.  So Tom promptly creates a fine-grained service that both orchestration services will call and refactors out the common code.  Voilà!  Udi’s model just migrated and morphed into mine. 

So with all due respect to an excellent architect, I say this: just as water seeks its level, design seeks stability.  If you build a design that, when kicked with a change, immediately folds into another design, the second design may be more stable than the first.  Why not start there? 

The sync/async point is not meaningful.  It is a tradeoff.  I maintain that I made the more appropriate one given the tangible example at hand. 

SOA and database coupling

By |2006-10-25T09:18:00+00:00October 25th, 2006|Enterprise Architecture|

What do you do if two enterprise services share the same database?

I am running into this all the time.  As we work to break apart legacy applications, we need to recognize that ‘stovepipe’ applications are written from the perspective of ‘put everything we need in the local database.’  That means that we can get really lazy, and require the database to do some things that it does well, like referential integrity, but which complicate integration.

I’ll walk through the logic of breaking this apart by way of an example.

Example: our company (Contoso) stores a list of all customers in Dynamics CRM, but we have an order entry system that is a custom app running on our extranet.  The app is used by our field sales team to enter customer orders.  Let’s say that we successfully work out the problems of getting basic domain data to the extranet, so we share the same lists of countries, states/provinces/locales… data that changes rarely.

However, our legacy app not only writes customers to the local database, but also reads customers from it.  So we built in operations like the following:

Joe is a sales rep for Oregon and Idaho.  When Joe logs in to the legacy order entry system, we look up his region and display a list of customers in the region. 

That assumes that we HAVE a list of customers for his region. 

Clearly, the list of customers belongs to another service.  We have written a ‘customer lookup’ service and a ‘get-customer-details’ service that connect to Dynamics CRM.  So how, when changing our legacy app, do we break these up… or do we?

There are two design choices to consider here: 

1) How much data from another domain is REQUIRED to maintain local functionality?  (Corollary: is local functionality actually required or helpful?)

2) What is the cost and complexity of acquiring dynamically-changing non-local domain data?

Question 1 is really a business requirements question, but it leads straight to a technical challenge.  Let’s say that the business wants to keep the following function: Joe logs in and sees customers from his territory. 

Let’s say that they also want to add the following capability: Joe can look up a customer from anyone else’s territory and either create a local subsidiary or book a sale to the “non local” company (commission rules will apply later).  He CANNOT create a new customer.  (business rule).

So, in order to look up companies local to Joe, we have to know a couple of things:

a) For every salesman, what territories do they have?  [[Note: This information is probably also managed in the CRM solution or perhaps another system.  It shouldn’t be managed in the order entry system, but it is possible.  Let’s say that in the past, it was managed locally, but we want to move to using the data out of CRM.]]

b) For every customer, what territory are they in?

c) What customers are subsidiaries of other customers?

d) We want to find any customer by name (text search).

e) What customer data needs to appear on the actual order itself?

f) What customer data is needed to allow or disallow specific products, or marketing programs, from the order?

I would suggest that the local data that we need to accomplish this is a subset of customer information.  Not all the customer data is really needed to find a customer.  I would break up the list above into two use cases: find-customer and enter-order.

For find-customer, I’d want local data that includes the names of customers, their customer id, their territory id, a parent customer id (for subsidiaries) and very little else. 

In order to get this information, we could set up an event-driven master data management pattern.  When the CRM system updates a customer, it sends an event to an event handler running in Biztalk or some other free-standing component.  (If your CRM system cannot send an event, then have your CRM system update a separate table in a SQL Server database, and then wire up SQL Server Notification Services to detect the change and send the event… about 200 lines of XML.)

Once the event handler gets the notification that data has changed, it passes it along to the subscribers.  In this case, a web service running on top of our order entry system.  That web service asks the CRM system for details about the customer: customer id, territory id, parent id, and customer name.  Once it gets this data, it stores it locally.  Nothing else.  We have our data feed.
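The subscriber side of that pattern can be sketched in a few lines.  This is purely illustrative: all names are invented, a dict stands in for the local table, and a stub function stands in for the CRM query service:

```python
# Hypothetical sketch of the event-driven master data pattern: the CRM side
# publishes a 'customer changed' event; the order-entry subscriber asks CRM
# for just the fields it needs and stores that subset locally.  Nothing else.

local_customer_cache = {}  # stands in for the order-entry local table

def crm_get_customer_summary(customer_id):
    """Stub for the CRM query service: returns only the fields we cache."""
    return {"customer_id": customer_id, "territory_id": "OR-ID",
            "parent_id": None, "name": "Fabrikam"}

def on_customer_changed(event):
    """Subscriber's event handler: pull the subset, store it, nothing more."""
    summary = crm_get_customer_summary(event["customer_id"])
    local_customer_cache[summary["customer_id"]] = summary

# A change event arrives from the publisher:
on_customer_changed({"customer_id": "CUST-42"})
```

The handler never copies full customer records; it caches only the four fields the find-customer use case needs.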

When Joe logs in, he can see a list of all customers for his territory because we can look locally for the list.  He can even see other customers, subsidiaries, etc.  Now, if he wants to search for companies by a field that is not in our local database, say number of employees, then our local app would call a service on the CRM system.  That system would return a list containing (you guessed it) customer id, territory id, parent id, and customer name.

When Joe selects a company from the list, the app looks to the CRM system for actual customer details needed for the order header and for rules enforcement.  There is no point in storing this data locally for every customer, although it may be OK to store it locally for this customer now that we have it, along with a ‘cache date’ so that the local system can use local data when it is not too old, and look up remote data when the local data is ‘old enough’ (configurable).
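The ‘cache date’ idea can be sketched like this (again, invented names, a stub for the CRM call, and an in-memory dict for the local store; the freshness threshold is an assumed configurable value):

```python
# Hypothetical sketch: serve customer details from the local cache when the
# cached copy is fresh enough, otherwise fetch from CRM and re-stamp the
# cache date.
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=4)  # 'old enough' threshold; configurable
_details_cache = {}           # customer_id -> (details, cached_at)

def crm_get_customer_details(customer_id):
    """Stub for the remote CRM detail lookup."""
    return {"customer_id": customer_id, "credit_limit": 50000}

def get_customer_details(customer_id, now=None):
    now = now or datetime.utcnow()
    cached = _details_cache.get(customer_id)
    if cached and now - cached[1] <= MAX_AGE:
        return cached[0]                             # fresh enough: use local
    details = crm_get_customer_details(customer_id)  # too old or missing
    _details_cache[customer_id] = (details, now)     # re-stamp the cache date
    return details
```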

What about creating a subsidiary?  The order entry system will need to allow Joe to enter data about the subsidiary, and to pick the parent company from the list of customers (as above).  Then, it can either use a synchronous call to the CRM solution to create the subsidiary first, copying the customer id (created remotely) to the local database, or the app can create the subsidiary locally (using a temporary primary key) and pass an async call to an event handler to add the subsidiary to CRM.  That event handler can then call back to the local system with the official customer id.

The cost of decoupling the databases, and removing a data feed from one system to another is the cost of setting up this Master Data Management mechanism that informs subscriber systems when a publisher system has changed data. 

Note that once you set it up, it can be reused by any number of publishers and subscribers. 

The key here is to get the remotely mastered data out of your local database.  Let the services bring data to you.  Don’t go get it with SQL jobs.  This allows you tremendous flexibility.  Services can keep a solid interface without regard to underlying changes.  You can move your CRM data from one CRM system to another without breaking connections to order entry, call center, billing, and many other integrated functions. 

The downstream savings in integration rewrites more than pay for keeping these services decoupled and for using events and messages to move data around, as opposed to direct database feeds.

I hope this discussion helps.  Any questions?


declared, visible, and open coupling

By |2006-10-23T00:50:00+00:00October 23rd, 2006|Enterprise Architecture|

Recently, I blogged that two coupled services should have declared, visible, and open coupling.  I was promptly asked how.

First off, when you have two services, why would they be coupled?  Isn’t the POINT that your services are decoupled?  Sure.  That’s the point.  But the business doesn’t always work that way.  Sometimes, your services are coupled because the business is coupled.

Here’s an example. 

Let’s say that our company (Contoso) has a couple of co-op marketing programs.  This kind of program is fairly common these days.  You make an arrangement with a business partner.  They include a reference to your product in their advertising, and you provide some money to pay for the ad.  A good example that most folks will recognize: Intel has a co-op marketing program.  Any ad that specifically mentions that a machine has an Intel chip in it, and displays the Intel logo, earns a reimbursement from Intel for the costs of producing and running the ad.  As a result, we have all seen (and heard) the “Intel Inside” brand quite frequently.  The program works. 

Now, let’s look at this from the business side.  I’m going to pick on two closely related use cases: signing up to be a partner of any kind, and signing up to be a member of the co-op partner programs.

Contoso has lots of relationships with partners.  We have partners in the supply chain, in manufacturing, as well as in distribution and retail.  Each of our partners needs to be uniquely identified, so we have a system where we record the basic information about a partner: legal entity name, DUNS number,  Government Tax Id, Street Address, and other information needed to correctly identify this company.  Attached to this record, we may have information about contacts, directors/owners, and people that we recognize as contract signatories.

Separately, we will have a system that records the information specific to the co-op programs.  It will contain the list of programs, eligibility rules, claim constraints, claim history, and measures of visibility and effectiveness of the co-op programs.  The partner identified in this system has to also exist in the core partner system, but not the other way around.  In other words, in order to be eligible, you have to be a partner in some way, but not all partners are eligible.  The supplier of cardboard boxes should not be able to claim co-op marketing funds for our product, for example.

So, what kind of service is “create co-op partner”?  First off, it is a composite service.  It needs to orchestrate between two lower-level services.  For the sake of argument, I’ll call the lower level services “partner-master” and “add-to-co-op-master”.

If I send data to a service that creates a co-op partner, and I don’t know the id in the partner master system, then I need to look the partner up, and if they are not found, create them in the partner master system, then take their partner id and add them to the co-op system.  I may not want to do any of that if they are not eligible to join the co-op system. 

Adding data to the co-op system itself is done in the ‘add-to-co-op-master’ service.  This service simply requires the existence of a ‘partner master’ id.  Of course, to be defensive, it will need to check to see if that id is valid or have some other way to trust the caller. 

And here’s where we get to the notion of a declared coupling.  The ‘create co-op partner’ service needs to have a way to declare, even if it is just in text, that it is responsible for checking the ‘partner-master’ service, looking for duplicates and creating the partner if necessary.  There needs to be data that shows the coupling of this composite service.  In the best of all possible worlds, this data would be available in the contract header information (although I don’t know of any standard way to do that).

In addition, there needs to be a way for the ‘add-to-co-op-master’ service to trust the partner id that comes in on a call, without actually calling the ‘partner-master’ service.  Perhaps it will only accept calls that have an ID if the call comes from a known composite service or from a known IP address?  Perhaps digital signatures are required?  Trust has to be established.

It is important that the fine-grained ‘add-to-co-op-master’ service NOT directly call the ‘partner-master’ service.  This would be unnecessary coupling.  

So, to recap, the rules for making this coupling open and visible:

— In the header of any composite service, declare the child services that will be called, with sufficient reference data for the system that is asking for data to drill down and find out about the called service.

—  If a service has data reference requirements, it can limit the list of eligible callers to a known collaborator (like a known composite service).  It should not call other fine-grained services to validate inbound data.

— The only services that can call other services are composites, and composites can only call other services in a rules-defined order.  They should not perform their own data storage or data validation operations, relying only on underlying services to provide these capabilities.
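One hedged way to picture these rules is a simple service registry.  This is purely illustrative (there is no standard contract-header mechanism for this, and every identifier below is invented): the composite declares its child services, and the fine-grained service trusts a known caller rather than calling other services to re-validate inbound data:

```python
# Hypothetical sketch of 'declared, visible, and open' coupling.
# The composite's coupling is stated as data; the fine-grained service
# limits its callers instead of calling 'partner-master' to validate ids.

SERVICE_REGISTRY = {
    "create-co-op-partner": {
        "kind": "composite",
        "calls": ["partner-master", "add-to-co-op-master"],  # declared coupling
    },
    "add-to-co-op-master": {
        "kind": "fine-grained",
        "trusted_callers": ["create-co-op-partner"],  # trust, not re-validation
    },
}

def add_to_co_op_master(partner_id, caller):
    """Fine-grained service: accepts ids only from a known collaborator."""
    entry = SERVICE_REGISTRY["add-to-co-op-master"]
    if caller not in entry["trusted_callers"]:
        raise PermissionError("unknown caller: " + caller)
    return "COOP-" + partner_id  # trusted caller already validated partner_id
```

In a real deployment the trust decision would rest on something stronger than a string (known IP addresses, digital signatures), but the shape of the rule is the same.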

It’s not dissimilar to the notions of ‘good practice’ that evolved out of object-oriented programming when it started to get off the ground.  Things like naming rules and style considerations, all of which were designed to make code easier to read and use.  The same goes for service coupling.  If we follow these basic rules, we can minimize coupling and keep it contained in an open and controlled location.

—  Addendum

In response to a question about application-level authentication, I am updating this post to include a link to an MSDN article that discusses this aspect in rich detail.  I highly recommend this ‘Patterns and Practices’ article.


What would you say are the requirements that all services should meet?

By |2006-10-18T22:03:00+00:00October 18th, 2006|Enterprise Architecture|

I was called on, today, to justify a ‘smelly’ technical decision that resulted from one of my goals.  I’m not particularly surprised.  If I were to see a really odd implementation, I would first question the design, and then the requirements that fed it.  In this post, I will share the situation and my response.  I hope it helps others implementing SOA applications.

Backstory: we have a legacy system.  Works well.  However, we are integrating a series of applications and one of the things we are trying to do is to remove ‘multiple masters’ of data.  That means breaking up legacy systems to find overlaps, where two apps master the same data, and require one to consume from the other.  Fewer masters, cleaner data, better integration.  The SOA promise.

The challenge is that someone has to go first.  Someone has to break up their app into services and deliver those services even if (a) the only one who will use the service is the app’s user interface, and (b) we plan to “version” the service so that instead of pointing to the local database, we will consume another system’s service… one that either doesn’t exist or isn’t ready for us yet.  Effectively, we have to create a service that we are planning to kill off.

Of course, breaking up an app isn’t easy.  One of the tasks is to break up the database.  You cannot have two services that behave in a decoupled manner if they are wound up tightly in the database and stored procedures.  So I asked for two “logical” databases where one now exists, because I have two services that are being delivered by the legacy app, one of which is likely to move later. 

Time for the challenge: the question I got was this: why do we need to break up the database into two databases?  Doesn’t make sense!  Inefficient!  No Referential Integrity!  What gives?  (I paraphrase to make it sound more hysterical than it was.  I’m in that kind of mood.)

My response was careful.  Instead of dictating the design (I’m an architect, remember), I dictated the REQUIREMENTS that I will put on the services design, allowing the software team to actually create a structure that works. 

So here are my requirements for the services.  I’ll call the system ZIPPO to keep from quoting the project name.  I’ll relabel the two services to say that they provide Gadget information and Gadget Supplier information.  The rest of this post is my response.

I see value in:

  • Delivering ZIPPO in such a way that it consumes services that WILL exist somewhere else, even if they don’t YET exist somewhere else.  This is largely done by creating a service (locally) with the expectation that the service may move or redirect in the future.
  • Delivering a service that our user interface will consume with the expectation that the service could be consumed by other systems in the future.  Note that many of our products, including SQL Server’s management tools and Sharepoint Portal Server’s management tools, have the exact same design idea.  The APIs that they expose are the exact same ones that their own tools use.  No exceptions.  This is brilliant and a model for us to copy.
  • Keeping the services decoupled to the most rational extent possible.  Changes in one service need to have KNOWN impacts on other services.  If two services are tightly coupled in terms of business functionality, then we need a declared, visible, and open mechanism for describing that coupling.  There is no such thing as ‘perfectly decoupled.’  What this means:
    • Services are responsible for exposing the data that they master at the service level (via both event publication and query response).
    • Master Data Management patterns should leverage the service interface to collect and distribute changes that have occurred in master data tables.
    • The implementation of one service needs to have no “back door” interaction with the implementation of another service. 

So, in your question, I was hearing you ask if we should move tables from one database to two.  I’d like to clarify that by saying that, first and foremost, the design is up to you, as long as you can align to the above concepts.  Secondly, it is appropriate for a table that exists as “master” in one database to be copied as “read only” in another.  We do this all the time.  The copy process itself is being gradually moved towards an eventing model and away from SQL jobs, but the net result is the same.

On the other hand, I don’t want you to leave the MASTERING of gadget supplier information in the same store as the MASTERING of new gadgets unless you can demonstrate that you have no back-end interactions between these tables (including referential integrity, cascading deletes, etc.).  That objective is more easily met with different databases, but one db is fine if you can pull it off.

We do Scrumbut

By |2006-10-13T18:28:00+00:00October 13th, 2006|Enterprise Architecture|

EricGu has a great post on something he calls scrumbut.  It rings very true.  One of the teams I was formerly on did exactly this:

  • Trained everyone on Scrum
  • Used “Scrum, but” with changes that work against agile principles, like no customer on the project and wildly long deliverable cycles
  • Called it Scrum
  • Blamed Scrum and Agile when it failed

Certified Scrum Masters should be derided if they allow a process to be called Scrum if it doesn’t stick with some basic practices:

  • Scope managed as a backlog
  • Customer decides priority for any items on the backlog
  • Sprints not to exceed 30 days
  • Team (individual contributors only, no PM, no chickens) picks the items off the backlog that they can do in a sprint
  • A daily burndown to track progress, not Project or Primavera
  • Monthly demonstration of progress directly to the customer or a customer representative
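Since the daily burndown is the one practice in that list teams most often skip, here is an illustrative sketch of what it actually tracks (all numbers invented): remaining estimated work per sprint day, compared against the ideal straight line to zero.

```python
# Illustrative burndown math; hours and sprint length are made up.
sprint_days = 10
remaining_by_day = [80, 74, 70, 66, 58, 50, 41, 30, 18, 6]  # hours left

# Ideal line: start at the full estimate, burn evenly to zero.
ideal = [80 - 80 * d / sprint_days for d in range(sprint_days)]

for day, (actual, plan) in enumerate(zip(remaining_by_day, ideal), 1):
    flag = "behind" if actual > plan else "on track"
    print(f"day {day:2}: remaining {actual:3}h (ideal {plan:5.1f}h) {flag}")
```

The whole point is that the team updates the remaining estimates daily; the chart is a by-product, not a status report someone assembles in a PM tool.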



What SHOULD a sequence diagram show?

By |2006-10-13T10:18:00+00:00October 13th, 2006|Enterprise Architecture|

For most folks, a UML sequence diagram is either (a) unnecessary or (b) clearly required and essential.  There is rarely a middle ground.  So when you create a diagram (whether by force or by choice), I’d like you to consider the audience, first and foremost.  What do you want to say to them?

Personally, I don’t believe that it makes sense to create a diagram (of any kind) unless you are trying to communicate a thought that may not be clear any other way.  Diagrams are great for that, especially UML, because there is a rigorous way to create the diagram and a clear meaning for each of the symbols.

Therefore, first off, if you are going to create a sequence diagram, and you don’t do this on a daily basis, brush up on the standard.  Look over the description on Wikipedia or one of the many sites on the web.  (Ignore the agile sites… many agilists have no fondness for modeling as anything more than whiteboard art, and I’ve seen a few play fast and loose with the diagram standards, which lowers the effectiveness of the communication.)

But more important, try to consider this question: “who am I communicating with and what do they need to understand?” 

Let’s say that you are trying to describe a scenario where five systems communicate in a distributed system flow.  Three of them communicate through sync calls to one another.  The other two communicate through async event notifications.

Why would you want to communicate anything at all?  Really.  You can describe things in text, right?  Well, what if the teams that will develop or update or maintain these systems are not familiar with async calls?  What if they’ve never done anything async before?

If that is the case, then you want to make sure that you illustrate the object lifelines.  You want to make sure that you show the “ACK” (aka “thank you”) messages that go from an event notification to the event handler.  You want to show that the lifeline of the caller ENDS before the operations of the async partner are complete.  You want to show that a message is sent back to the collaborator at a later time with information about the processing, and you want to make it clear if that return notification is async as well.
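The collaboration those lifelines describe can be sketched in code as well. This is a hypothetical in-process version (the post names no transport, so an asyncio queue stands in for the event bus): the caller fires a notification, gets only an ACK back, and its "lifeline" ends long before the handler finishes; the result arrives later as a separate async message.

```python
# Sketch of async event notification with an ACK and a later reply.
# The queue-as-bus and the message names are illustrative only.
import asyncio

async def event_handler(payload: str, reply: asyncio.Queue) -> None:
    await asyncio.sleep(0.01)                # long-running processing
    await reply.put(f"processed:{payload}")  # later async notification

async def caller(bus: asyncio.Queue) -> str:
    await bus.put("new-order")  # fire the event notification
    return "ACK"                # transport acknowledges receipt only;
                                # the caller's lifeline ends here

async def main() -> tuple:
    bus: asyncio.Queue = asyncio.Queue()
    reply: asyncio.Queue = asyncio.Queue()

    async def dispatcher() -> None:
        payload = await bus.get()
        await event_handler(payload, reply)

    task = asyncio.create_task(dispatcher())
    ack = await caller(bus)     # caller is done before the work is
    result = await reply.get()  # the return notification arrives later
    await task
    return ack, result

ack, result = asyncio.run(main())
print(ack, result)
```

A diagram for a team new to async would show exactly these two facts: the ACK is not the result, and the result comes back on a second, later message.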

It’s a lot to illustrate.  But the point is not to show messages moving.  It is to educate: what will the reader LEARN by reading your diagram?  What do they not already know? 

If you think like an educator, you often find that you remove excess detail, while clearly showing the things that you need the reader to understand. 

So, to answer the question, what should a sequence diagram show?  It should show the information that justifies its existence in the clearest possible manner, with a minimum of excess detail.


What is the scope of a governance project?

By |2006-10-12T02:49:00+00:00October 12th, 2006|Enterprise Architecture|

One thing that I get to consider: what is the right way to govern large IT projects?  I’m in the fortunate position of asking that question because I’m trying to figure out the correct and most useful role for Enterprise Architecture in providing input, ensuring quality, and measuring progress towards goals through the process improvement lifecycle.

Governance is an odd thing.  To some, it means “getting roadblocks cleared.”  To others it means “preventing goofy mistakes by assigning responsibilities.”  Still others care about “keeping cost in focus” and “leveraging the portfolio.”  Like any sufficiently complex project, the project for creating a uniform governance model has many stakeholders.

At the end of the day, the governance project will decide “who is responsible” for many if not most of the key decisions that need to be made.  The governance model doesn’t say “here’s your answer.”  It says “here’s how to get your problem answered.” 

There are two natural problems with defining any project that has the responsibility for assigning responsibility.  First, it’s an opportunity for rearranging the deck chairs to your liking: everyone will be tempted to move ‘hard stuff’ onto someone else’s plate.  Second, every problem that needs solving will be tossed into the bucket.  Without a filter, the governance project will accept these problems as part of its scope.  That is nuts.

So, as with any multi-headed hydra, one clear method stands out for filtering the scope: start with the strategy that the CIO wants to accomplish, and figure out the specific tactical approach that includes “oversight,” “review,” and “alignment.”  From there, you can more easily see which of the various needs actually belong in the ‘bucket’ marked ‘governance’ and which more rightfully belong in project management or software quality control.

For example, is governance a mechanism by which the portfolio comes together in a rational manner without excessive cost?  If so, then things like “buy vs. build” belong in the bucket, but things like “Agile vs. Waterfall” are clearly outside.  (Before I’m flamed: being outside the bucket doesn’t mean there isn’t a pressing need to solve the problem… it just means that this isn’t the project that will solve it.)

So in order to solve this conundrum, and build a project with a snowball’s chance of success, I need to start with the IT strategies and work my way down. 

Otherwise, the train will be laden with so much junk that it will never leave the station.


Speaking to "yes"

By |2006-10-01T13:29:00+00:00October 1st, 2006|Enterprise Architecture|

About ten years ago, a salesman used an old trick on me.  He asked a series of questions designed to elicit a ‘yes’ response.  He did this in front of a room of carefully selected prospects.  Gradually, one or two folks started responding to his questions, and finally, by the time his carefully crafted speech was over, he had the whole room nodding and responding.  He snuck about $1,000 out of my wife and me for a bogus travel and vacation plan. 

I saw the same technique this week when a corporate leader stood in front of a room and asked a series of questions.  At first, I felt that they were condescending.  After all, we knew the answers.  It’s not like we needed to be convinced.  But after a few minutes, some folks were answering ‘yes’ and nodding.  It occurred to me that this leader was using the same technique.

Selling?  Perhaps.  Selling is just a set of techniques used to get people motivated to do something that you want them to do.  It doesn’t just mean money.  It means support, loyalty, obedience… whatever it is you want people to do.  You can get there through selling.

I don’t know if you can sustain the ‘sell’ but you may not have to.  This leader probably doesn’t have to maintain the sell.  This leader got the support that mattered at the time that mattered.  And perhaps that’s just as important.

So, as much as I admit to feeling a little offended by this person’s choice of speaking technique, I have to also admit that it was effective.  It worked.  This person is a leader for a good reason and is likely to succeed.

Interesting, the things you learn by watching others lead.