
Canonical Model, Canonical Schema, and Event Driven SOA

June 12th, 2007 | Enterprise Architecture

One thing I’ve been thinking and talking about for the past few weeks is the relationship between four different concepts, a relationship that I didn’t fully grasp at first but have become more convinced of as time wears on.  Those terms are:

  • Enterprise Canonical Data Model
  • Canonical Message Schema
  • Event Driven Architecture
  • Business Event Ontology

I understood a general relationship between them, but as time has passed and I’ve been placing my mind directly in the space of delivering service oriented business applications, the meanings have crystallized and their relationship has become more important.  First, some definitions from my viewpoint.

  • Enterprise Canonical Data Model – The data we all agree on.  This is not ALL the data.  This is the data that we all need to agree on in order to do our business.  This is the entire model, as though the enterprise had one and only one relational database.  Of course, it is impossible for the enterprise to function with a single database.  So, in some respect, creating this model is an academic exercise.  Its usefulness doesn’t become apparent until you add in the following concepts, so read on.
     
  • Canonical Message Schema – When we pass a message from one application to another, over a Service Oriented Architecture or in EDI or in a batch file, we pass a set of data between applications.  Both the sender and the receiver have a shared understanding of each field’s (a) data type, (b) range of values, and (c) semantic meaning.  The first two we can handle with the service tools we have.  The third one is far and away the hardest to do, and this is where most of the cost of point-to-point integration comes from: creating a consistent agreement between two applications for what the data MEANS and how it will be used.  (See the sketch after this list.)
     
  • Event Driven Architecture – a style of application and system architecture characterized by the development of a set of relatively independent actors who communicate events amongst themselves in order to achieve a coordinated goal.  This can be done at the application level, the distributed system level, the enterprise level, and the inter-enterprise level (B2B and EDI).  I’ve used this at many levels.  It’s probably my favorite model.  At the application level, I once helped code a component of an event-driven application that ran in firmware on a high-speed modem.  At the system level, I helped design a system of messages and components that controls the creation of enterprise agreements.  At the enterprise level, I worked for numerous agencies, in my consulting days, to set up EDI transactions to share business messages between different business partners.
     
  • Business Event Ontology — A reasonably complete list of business events, usually in a hierarchy, that represents the points in the overall business process where two “things” need to communicate or share.  I’m not referring to a single event, but rather to the entire list.  Note that a business event is not the same as a process step.  An event may trigger a process step, but the event itself is a “notification of something that has occurred,” not the name of the process we follow next.
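
To make the first two definitions concrete, here is a minimal sketch in Python.  All of the entity names, fields, and region codes are hypothetical; a real canonical model would be a governed artifact, not a code file.

```python
from dataclasses import dataclass, fields

# Hypothetical slice of an Enterprise Canonical Data Model: the fields
# the whole enterprise has agreed on for a "Prospect".
@dataclass
class ProspectCDM:
    prospect_id: str       # agreed identifier
    legal_name: str        # agreed meaning: legal name, not display name
    region_code: str       # agreed range: one of the enterprise region codes
    annual_revenue: float  # agreed unit: US dollars
    internal_notes: str    # in the model, but not in this message schema

# Canonical Message Schema: a subset of the CDM that applications
# exchange when a prospect event occurs.
@dataclass
class ProspectMessage:
    prospect_id: str
    legal_name: str
    region_code: str

VALID_REGIONS = {"NA", "EMEA", "APAC"}  # hypothetical agreed range of values

def validate(msg: ProspectMessage) -> None:
    """Checks (a) data type and (b) range of values.  (c) semantic
    meaning is the agreement documented in the CDM -- no tool checks it."""
    for f in fields(msg):
        if not isinstance(getattr(msg, f.name), f.type):
            raise TypeError(f"{f.name} is not a {f.type.__name__}")
    if msg.region_code not in VALID_REGIONS:
        raise ValueError(f"unknown region code: {msg.region_code}")
```

The point of the sketch is the division of labor: tools can check (a) and (b), but (c) lives only in the shared agreement that the canonical model documents.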

I guess what escaped me, until recently, was how closely related these concepts really are.

The way I’m approaching this starts from the business goal: use data to drive decisions.  Therefore, we need good data.  In order to have good data, we need to either integrate our applications or bring the data together at the end.   Either way, if the data is used consistently along the way, we will have a good data set to report from at the end. 

To create that consistency, we need the Enterprise Canonical Data Model.  Creating this bird is not easy.  It requires a lot of work and executive buy-in.  Note that the process of creating this model can generate a lot of heated discussions, mostly about variations in business process.  Usually the only way to mitigate these discussions is to create a data model that contains either none of the variations between processes or all of them.  Neither direction is “more correct” than the other.

However, in order to integrate the applications, either along the way or at the end of the data-generation processes, we need a particularly constrained definition of Canonical Schema: the Enterprise Canonical Message Schema is a subset of the Enterprise Canonical Data Model, representing the data, broadly agreed to be useful, that we will pass between systems.  Note that we added a constraint over the definition above.  Not only are we sharing the data, but we are sharing the data from the Enterprise CDM.

By constraining our message schema to the elements in the Enterprise Canonical Data Model, we radically reduce the cost of producing good data “at the end” because we will not generate bad data along the way.  The key word is “subset.”  If you create a canonical schema without a canonical data model, you are building a house on sand.  The CDM provides the foundation for the schema, and creating the schema first is likely to cause problems later.
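
As a rough illustration of that “subset” constraint, a build-time check could assert that every field of a message schema appears in the canonical model with the same type.  This reuses the hypothetical classes from the sketch above.

```python
from dataclasses import fields

def is_subset_of_cdm(message_cls, cdm_cls) -> bool:
    """True if every field of the message schema appears in the
    canonical data model with the same declared type."""
    cdm_fields = {f.name: f.type for f in fields(cdm_cls)}
    return all(
        f.name in cdm_fields and cdm_fields[f.name] == f.type
        for f in fields(message_cls)
    )

# e.g., assert is_subset_of_cdm(ProspectMessage, ProspectCDM)
```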

Therefore, for my friends still debating whether we should do SOA as a “code first” or “schema first” approach, I will say this: if you want to actually share the service, you have no choice but to create the service “schema first,” and even then, only AFTER a sufficiently well understood part of the canonical data model is described and understood.

And for my friends creating schemas that are not a subset of the overall model, time to resync with the overall model.  Let’s get a single model that we all agree on as a necessary foundation for data integration.

The next relationship is between the Canonical Message Schema and the Event Driven Architecture approach.  If you build your application so that you are sending messages, and you want to create autonomy between the components (goodness), you need to send data that has a well understood interpretation and as little “business rule baggage” as you can get away with.  What better place than the Canonical Data Model to get that understanding?  Now, this is no longer an academic exercise.  Creating the enterprise level data model provides common understanding, so that these messages can have clear and consistent meaning.  That is imperative to the notion of Event Driven Architecture, where you are trying to keep the logic of one component from bleeding over into another.
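
Here is a minimal sketch of that style, assuming an in-process event bus (a real system would use a message broker) and the hypothetical canonical fields from earlier.  Each component knows the canonical message, not the internals of the other components.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process event bus: components stay independent by
    communicating only through named events and canonical messages."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, event_name: str, handler: Callable) -> None:
        self._subscribers[event_name].append(handler)

    def publish(self, event_name: str, message: dict) -> None:
        for handler in self._subscribers[event_name]:
            handler(message)

bus = EventBus()

# Two independent components reacting to the same business event.
bus.subscribe("prospect.assigned", lambda m: print("CRM updates owner:", m))
bus.subscribe("prospect.assigned", lambda m: print("Reporting logs event:", m))

# The message carries only canonical fields -- no business-rule baggage.
bus.publish("prospect.assigned", {"prospect_id": "P-42", "region_code": "NA"})
```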

The business event ontology defines the list of events that will occur that require you to send data.  Creating an ontology requires that you understand the process well enough to generalize the process steps into commonly held, sharable events.  To make that sharing work, the data exchanged at the point of an event should be in the form of an Enterprise Canonical Message Schema.

Therefore, to summarize the relationship:

   Business Events occur in a business, causing an application to send a Canonical Message to another application.  The Canonical Message Schema is a subset of the Canonical Data Model.  Event Driven Architecture is most efficient when the messages sent between components conform to the Canonical Message Schema.  This provides you with more consistent data, which is better for creating a business intelligence data warehouse at the end.

Some agility notes:

The list of business events in a prospect ontology may include things like “receive prospect base information”, “receive prospect extended information”, “prospect questionnaire response received”, “prospect (re)assigned”, “prospect archived”, “prospect matched to existing customer”, “prospect assigned to marketing program,” etc. It is not a list of process steps.  Just the events that occur as inputs or outputs.
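
One lightweight way to capture such an ontology is a hierarchy of dotted event names.  The names below are hypothetical renderings of the list above, and they also give the event bus sketch earlier something to route on.

```python
# Hypothetical prospect event ontology: a hierarchy of business events,
# not process steps.  Each entry is a "notification of something that
# has occurred."
PROSPECT_EVENTS = {
    "prospect.information.base_received",
    "prospect.information.extended_received",
    "prospect.questionnaire.response_received",
    "prospect.assignment.assigned",
    "prospect.assignment.reassigned",
    "prospect.lifecycle.archived",
    "prospect.lifecycle.matched_to_customer",
    "prospect.marketing.program_assigned",
}

def events_for(branch: str) -> set[str]:
    """All events under one branch of the hierarchy."""
    return {e for e in PROSPECT_EVENTS if e.startswith(branch + ".")}

# events_for("prospect.assignment")
#   -> {"prospect.assignment.assigned", "prospect.assignment.reassigned"}
```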

Clearly, this list can be created in iterations, but if it is, you need to make sure that you capture all of the events that surround a particular high level process and not just focus on the technology.  In other words, the business processes of “qualify prospect” or “validate order” may have many business events associated with them, and those events may need to touch many applications and people.  If you decide to focus on “qualify prospect” first, then understand all of the events surrounding “qualify prospect” before moving on to “validate order,” but if both processes hit your Customer Relationship Management system, focus on the process, not the system.

 

Showing up can be the hardest part

June 12th, 2007 | Enterprise Architecture

Not an architecture post, so if you are looking for technical content, skip this post.

This week, I am in Nashville, Tennessee at the Gartner Application Architecture, Development and Integration conference and the Gartner Enterprise Architecture conference.  I’ll post separately on content, and ideas, that I’m going to adopt.  I may even disagree with an analyst or two (yikes!) but I’m really enjoying this content.  For those folks who work in Enterprise Architecture or in any derivation of strategic architecture, I heartily recommend this conference.

Travel to get here is a story that I am compelled to tell, for the sheer red tape of it.

Last year, I was going to come to the Gartner conference.  It was in San Diego and I had purchased tickets on Alaska Air.  I didn’t get to go, so my ticket from Alaska Air was just sitting on my desk, waiting to be used.  This year, with the conference in Nashville, I called the travel agent and asked to pay the change fee to use it.  No go.  Alaska doesn’t fly to Nashville, and their codeshare partner, American Airlines, wasn’t going to accept the ticket.  The agent told me that to use last year’s ticket would cost $1,300.  To buy a new one was less than $500.  Clearly, it was cheaper to throw away last year’s ticket!  That was 90 minutes I’m not getting back.

So I booked my flight on American Airlines.  It was not a direct flight.  I would change planes in Houston.  Fortunately, I had only a 60-minute layover.  The travel site failed to register my frequent flier number, but I figured I’d take care of that at the airport.

So I got to Seattle-Tacoma airport about 75 minutes before flight time, normally plenty of time to catch a flight.  Except that this was Sunday, and the cruise ships had let off a huge group of travelers all wishing to return home.  The airport was packed.  It took nearly 45 minutes to check my bag and another 15 minutes to get through security.  I got to the gate just as they were due to begin boarding.  Whew.

No boarding.  We just sat.  After a few minutes, the gate agent announced that the flight time was delayed by two hours.  There was a part not working in the cockpit of the plane.  The airline was calling other airlines to see if one of them had the part on hand (not kidding… they went begging for parts).  Many passengers just sat.  I decided not to sit.  I went to the gate agent and asked to move the connecting flight to a later flight.  That way, if I got to Houston late, I wouldn’t miss my connection.  No problem.

The agent promised to make an announcement in 20 minutes.  After 30 minutes, I figured they were going to cancel the flight and, wanting to get a jump on all the passengers who were now waiting in line at the gate desk, I called my travel agent and asked for another flight.  Had to cancel the entire round trip and rebook on Northwest Airlines.  Turns out the flight on Northwest was going to be cheaper anyway.  While I was on the phone, the American Airlines flight was cancelled.  100 cell phones lit up at once.  I already had my alternate ticket.  Good call.

However, I had to get my bags from baggage claim and go check in again with Northwest.  The flight was two hours away.  It would be close.

Baggage claim didn’t take long.  Maybe 20 minutes.  So I go back up to check in with Northwest.  Cruise traffic was even heavier, and since Northwest flies international out of Seattle, there were a LOT of folks in line.  The line was HUGE.  Almost an hour in that line.  My plane was about to board and my bags were finally on the belt.  Time to sprint to the flight…

Oh, wait… security.  Again.

This time, I had purchased the flight that day.  This time, I got the special treatment.  I got to be patted down and have my bag inspected.  So five minutes before boarding begins, and I’m pleading with the TSA agent to let me skip through the frequent flier line to get around an hour-long security line.  She takes one look at my boarding pass, sees the SSSSSS that says “he’s in for a fine time” and sends me through.

TSA is great.  I love these guys.  I don’t care what anyone else says.  They are professional, quick, thorough, and they keep me safer, by a long shot, than the patchwork quilt of security that was in place five years ago.  Thank God the Democrats didn’t back down when Bush opposed creating the TSA.

As efficient as they were, I got out of there in 10 minutes.  Flight was boarding… at the South gate.  I needed to ride a subway to get to the plane.  So I’m sprinting to the subway station.  (Not a pretty sight.)  I had a Pepsi in my bag.  It leaked.  On my paperback book.  So here I was, running through the terminal, dripping brown soda in a steady stream behind me.

Got to the gate and checked in.  Got on the flight, panting and sweaty. 

And then sat.  This flight had a mechanical problem too.  We sat for 40 minutes at the gate, in a hot plane, before they got it fixed.  Great.  I still had a connection, this time in Memphis.  The layover was, once again, an hour, and there were no later flights.  If I missed the connection, I’d be spending the night in Memphis.

Got to Memphis.  I bolt out of the plane (leaving behind my windbreaker), and head for the other flight at top speed, once again tearing through the terminal.  Got to the other gate… and no need to rush… that flight had been delayed for TWO HOURS.  The plane hadn’t arrived in Memphis yet.

AAARRGH!

The next flight arrived and we got to Nashville fine, but a trip that was supposed to last a few hours turned into an odyssey I won’t soon forget. 

What is the REST high-order bit?

June 6th, 2007 | Enterprise Architecture

Harry Pierson asks a great question in his post on REST (A REST Question).  I’ll summarize his excellent post this way: what makes something RESTful?  Is it the protocol or is it the constraints in the architectural style?

My take.

REST is succeeding where SOAP has had a hard time.  Clearly, the REST folks are doing something right.  We want to bring some of that “right thinking” into SOA initiatives.

The thing is this: there is an interrelationship between the REST architectural style and the REST protocol and mechanisms.  In a sense, each has had some influence on the other.  But I’m going to take a stand and pick the most important one:

I believe that the REST IFaP is the high-order bit.

In case you are not a regular reader of my blog, an IFaP is a grouping of attributes (Identifier, Format, Protocol) that, when viewed as a unit, forms the basis for Middle Out Architecture.  Each of the successful Internet standards, from HTTP to SMTP, has an IFaP at the heart of it.  IFaP is the generalization that allows for adoption, and in this business, adoption is the key indicator of success.

The question that Harry asked was this: if we use the REST style but we drop HTTP, is it RESTful? 

No.

The HTTP request and response mechanism is part of the core IFaP for REST.  Therefore, if we want to maintain the adoption, and therefore the success, of REST, we cannot do it without using URI and HTTP.  It is not clear to me whether the REST world is more aligned with JSON or XML for format, but it is clear that these are the top two standards.
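
To make that concrete: the identifier is the URI, the format is JSON or XML, and the protocol is HTTP’s uniform verbs.  Here is a minimal sketch using Python’s standard library, with a hypothetical /prospects resource and JSON as the format.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

PROSPECTS = {"42": {"legal_name": "Contoso"}}  # hypothetical resource state

class ProspectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Identifier: the URI names the resource, e.g. /prospects/42.
        key = self.path.rsplit("/", 1)[-1]
        if key in PROSPECTS:
            body = json.dumps(PROSPECTS[key]).encode()  # Format: JSON
            self.send_response(200)                     # Protocol: HTTP
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_PUT(self):
        # The uniform interface: PUT replaces the resource at this URI.
        key = self.path.rsplit("/", 1)[-1]
        length = int(self.headers.get("Content-Length", 0))
        PROSPECTS[key] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

# HTTPServer(("localhost", 8000), ProspectHandler).serve_forever()
```

Swap the URIs and the uniform verbs for a custom protocol and you may keep the constraints of the style, but you lose the IFaP that drove adoption.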

My opinion, of course, is mine alone.

Waterscrum vs. Scrummerfall

June 4th, 2007 | Enterprise Architecture

We love to make up words. 

First, we got Scrummerfall.  This is the negative term coined by Brad Wilson for what happens when Scrum is combined with Waterfall to produce an unsustainably poor process, quickly abandoned.  As Brad coined the term:

“The worst case scenario, in my experience, is embedding Waterfall inside of Scrum. This often manifests in what I call the One-Two-One pattern: one week of design, two weeks of coding, one week of test and integration. I’ve yet to see a team that was long term successful with such a system, especially if they are strongly rooted in historical Waterfall. As often as not, they will abandon Scrum after just a few sprints, claiming that it failed to provide anything but pain. Worse, that’s often the extent of their foray into agile. They “tried that agile stuff” and failed, so they’re sour on it.”

Now, we get Waterscrum.  This term, coined by Kevin Neher, is something a bit more positive.  This refers to the notion of using Scrum as a process in an organization that uses waterfall-based checkpoints to manage risk.  As Kevin defines it:

WaterScrum: co-dependent Waterfall and Scrum projects.  Similar to trying to play (American) football and soccer on the same field at the same time, requiring extra levels of coordination and timing.

Both terms define different aspects of the same problem.  When an organization moves from Waterfall to Agile processes, they have to change more than just how they track the delivery of code.  They have to change the practices of coding as well as the methods used to govern investment and manage risk.  It is not an easy transition, and in the modern age of metrics and scorecards, it is doubly difficult, since scorecards demand a number, and that demand sometimes constrains creativity away from processes that don’t produce the number in the same way (or at all).

For example, in our IT setting, we have a number of Scrum projects.  Our normal risk management process requires that the business sign off on the design for the application before coding begins.  In an agile process, this is somewhat silly, since there is a much more blended delivery of both code and design much earlier in the process.  The working mechanism we’ve come up with is that the Scrum team still has to have a ‘baseline,’ but that it can occur at the end of the first or second sprint, where the high level design is largely understood for a near term release (a few months away at most) and proof of feasibility can be demonstrated in functioning code (and hopefully a small number of architectural models).

That accomplishes the goal of managing and reducing risk while still allowing the teams to proceed using an agile approach.  The challenge is not getting the governance folks to accept the notion of allowing coding to proceed without approval.  The challenge is getting the agile teams to step up and get that approval after two sprints so they can continue on their path.

I caution this much: if your team wants to go the route of using agile, make certain you do not shrink from the responsibility of providing feedback to the risk management and governance folks.  Otherwise, you may deliver value but manage to get a black eye anyway, for failing to cross the right benchmarks at the right time.