I’m renewing my call, now over a year old, for creating a single model for integrating all open, shared services.

I’ll talk about what this is, and then what benefits we get.

A Shared Global Integration Model

The idea behind a shared model is that we can take an abstract view of the way systems "should" interact.  We can create the idea of a bus and a standard approach to using it.

Shared Framework

If we have a standard model, then we can allow a customer, say an Enterprise IT department, to purchase the parts they need in a best of breed manner.

So let’s say that in Contoso, we have two packaged systems that provide the parts.  In the diagram below, the Customer, Product, and Marketing functions are provided by one package, while the Accounting, Manufacturing, and Order Management functions are provided by another.  They are integrated using a message bus.

Sample Framework

The advantage of a standard approach comes when change occurs.  (Yes, Virginia, change does occur in IT. 😉)

Let’s say that a small upstart Internet company creates a set of services that provides some great features in Customer management.  This is healthy competition.  (See the benefits below.)

Let’s say that the CIO of Contoso agrees that the newcomer’s offering is better and wants to move the company to that SaaS product for Customer management.

Sample Framework 2

That’s the vision. 

Of course, we can do this without standards.  So why would we want a standard?

The benefits of a standard approach

1. Increased Innovation and Investment

We lower the economic barriers for this product to exist in the first place.  We want a small upstart company to emerge and focus on a single module.  That allows innovation to flourish.

We, as an industry, should intentionally lower the "barrier to entry" for this company.  We need to encourage innovation.  To remove these barriers, that young company should not have to create all of the modules of a system in order to compete on any one part.  They should be able to create one module, and compete on the basis of that module. 

2. Quality Transition tools and reduced transition costs

A standard approach allows the emergence of tools to help move and manage data for the transition from one system to another.  The tools don’t have to come from the company that provides the functionality.  This allows both innovation and quality.  A great deal of the expense of changing packages comes from the data translation and data movements required.  Standard tools will radically reduce these costs.

3. Best of breed functionality for the business

We want our businesses to flourish.  However, these are commodity services.  Providing accounting services to a business rarely differentiates that business in its market.  On the other hand, the failure to do supporting functions well can really hurt a company.  There is no reason for the existence of an IT department that cannot do this well.  By using standards, we create a commodity market that allows IT to truly meet the needs of the business by bringing in lower-cost services for non-strategic functions.

4. Accelerate the Software-as-a-Service revolution

We, in Microsoft, see a huge shift emerging.  Software as a Service (SaaS) will change the way that our customers operate.  We can sit on the sidelines, like the railroad industry did in the early 1900’s, as the emergence of the automobile eventually replaced their market proposition in the US and many other countries.  Or we can invest in the revolution, and give ourselves a seat at the table.  We plan to have a seat at that table. 

A shared set of service standards can radically accelerate the transition to the SaaS internet.  That is what I’d like to see happen.

A dependency on a shared information model

This movement starts with a shared information model, but not a single canonical schema or shared database.  We need to know the names of the data document types, the relationships between them, and how we will identify a single data document.  (I use the vague term "data document" intentionally, to allow me to avoid "defining myself into a corner" at this early stage.)
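
To make the "data document" idea slightly more concrete, here is a minimal sketch (in Java, with entirely hypothetical names, and not a proposal for the actual standard) of what the shared information model would pin down: the document type names, a global way to identify a document, and the relationships between documents, while leaving each document’s internal schema alone.

    // Hypothetical sketch only: the shared information model names the document
    // types, identifies each document globally, and records relationships between
    // documents.  It deliberately says nothing about a document's internal schema.
    import java.util.List;
    import java.util.UUID;

    enum DataDocumentType { CUSTOMER, ORDER, INVOICE, EMPLOYEE, TRAVEL_EXPENSE_REQUEST }

    record DataDocumentId(DataDocumentType type, UUID id) { }

    record DataDocumentLink(DataDocumentId target, String relationship) { }

    // The payload stays opaque: whatever the owning system chooses to store.
    record DataDocument(DataDocumentId id, List<DataDocumentLink> related, byte[] payload) { }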

By having a shared information model, we can create the "thin middle" that forms the foundation for an IFaP and a middle-out architecture.

I care about this.  I believe that IT folks should lead the way, not stand by and let vendors define the models and then leave us to run around like crazy people to figure out how to integrate them.  I’d LOVE to see the "integration consulting industry" become irrelevant and unnecessary. 

It is time.

By Nick Malik

Former CIO and present Strategic Architect, Nick Malik is a Seattle-based business and technology advisor with over 30 years of professional experience in management, systems, and technology. He is the co-author of the influential paper "Perspectives on Enterprise Architecture" with Dr. Brian Cameron that effectively defined modern Enterprise Architecture practices, and he is a frequent speaker at public gatherings on Enterprise Architecture and related topics. He co-authored a book on Visual Storytelling with Martin Sykes and Mark West titled "Stories That Move Mountains".

21 thoughts on “Towards a shared global integration model”
  1. Interesting post. The NGOSS and OSS/J initiatives by the TeleManagement Forum show that it is actually possible (though not easy) to construct such models, and I find them really helpful in my everyday job despite being part of the "integration consulting industry" 🙂

  2. Hi Uros,

    In MS, we use the NGOSS and TM Forum information in our own planning and development cycles, even though we are not a Telecom.  Part of my renewed call comes from recent experience with these models.

    But TM Forum is focused on telecom only.  There is scant documentation of the commodity services that would form the ‘sweet spot’ for shared internet services and SaaS.  

    It’s a different approach, but NGOSS proves that it is possible.

    — N

  3. Isn’t this what various standards organisations have tried (are still trying) to establish:

    * UBL

    * CCTS (UN/CEFACT)

    — and many other domain-specific standards such as FIXML.

    What about things like RDF, SDO – how do they fit/ not fit here?

    Also – how would a shared "Integration Model" differ from a shared "Composition Model" ala SCA? I note that JJ Dubray has been calling for Microsoft to jump on board with SCA. What say you Nick?

    Note: I’m pretty neutral about SCA – and I’m also skeptical about these efforts to standardize all business data (have you ever read Bill Kent’s classic "Data and Reality"?). But it would be interesting to get your perspective on all these existing and previous attempts to solve these problems and why you think something else is needed.

  4. Hi Murray,

    Interesting set of questions.  Are you asking me to write a white paper comparing every one of the DATA standards that you mentioned?

    I’m talking about a standard architecture, with component responsibilities, semantics, and interfaces… not to stop at the data model, but to start there.  

    SCA is a deployment architecture… not a standard business architecture.  SAP is closer to the vision I’m talking about than the Java world is.

    The fact that there is variation doesn’t mean that there should be, or that variation makes sense.  100 years ago, bicycles were all hand-made, so if you wanted to repair your bicycle, you took it to a person who, essentially, created a new part from scratch to replace the broken one.

    Now, we use standardized parts.  Perhaps we lost something… it’s possible, but it’s also possible that we are better off without that particular bit of custom variation.  Economics certainly say that we are, as evidenced by the relative number of custom bicycles to manufactured ones.

    It is time for a standardized architecture to take the next level of sophistication, to sit on top of standardized information models.  It is time to create the commodity market that will remove a LOT of variation and choice, and will drop the cost, and price, of IT business software to a tiny fraction of what it costs today, while increasing the opportunities for individual innovation to astronomical heights.

    It is time, my friends, to leave the craft halls behind.

  5. Nick:

    I would like to side with you on:

    >> It is time for a standardized architecture to take the next level of sophistication, to sit on top of standardized information models.

    However, I would hope you would agree that this cannot be achieved by going through the Caudine Forks of standardization. It can only be achieved through evolution.

    The fittest will be reused and become a de facto standard. In order to achieve such a mechanism, you need to establish a programming model that enables you to factor your code into "reusable" parts on one side and allows you to compose them into solutions on the other side.

    Today, neither Java/C# nor inbound-only SOAP frameworks will allow you to do that. This is why it is not happening.

    I have explained in this article a strategy for the factoring of truly reusable components: http://www.infoq.com/articles/seven-fallacies-of-bpm

    Even in mechanical engineering they have interface patterns that result in composition. Without composition there, as anywhere else, we would still be building bikes one at a time.

    JJ-

  6. Hi JJ,

    Thank you.

    I agree that evolution is needed, but I’m siding with Tim Berners-Lee on the value of standards on this one. I’m taking the long view, and if you do, then a 15-year standard-adoption-revision cycle is not only quite rational, but clearly needed.  We haven’t started the wheel rolling… that’s all.

    Neither of us has ever been the kind of person to duck from a problem because it is difficult.

    I also think that we cannot evolve the programming models unless we put evolutionary pressure on them.  Nothing evolves without serious evolutionary pressure.  An effort to develop the interfaces that allow such revolutionary change would provide one impetus.  

    Mechanical engineers did not develop composition patterns because they were fun to think about.  They developed them because industry needed to use them.  If we work to create the demand, vendors (including my own company) will have something to drive their cycles of innovation.

    By calling for this process to start, we begin to create that demand.

  7. Then the part that I don’t understand is that, for the most part, this/these standard(s) already exist. What you are describing looks a lot like the Open Applications Group, the OMG, and lots of industry-specific groups like ACORD,…

    There are also some industry models that exist: IAA for the insurance industry, IFW for banking…

    These standards, even though they are well designed (IMHO), have not spurred any kind of reuse like the one you are talking about. Let me be clear, these standards have delivered a lot of value, but they have failed to deliver a set of "reusable" components.

    I would argue that this is a problem of programming model more than it is a problem of standardization. It is not until you can compose components in your programming model that you can reuse them.

    The Synchronous Client/Server factoring of business logic is the most detrimental to reuse (IMHO). We will not see the kind of reuse that you are talking about until we set components free to "inter-act".

    However, I think we are in a catch-22 situation with respect to the evolution of the programming model. A lot of people in our industry are drumming that "re-use" does not work, and therefore we need to charge ahead on improving productivity to keep building similar parts at a cheaper price.

    Except for IBM, I don’t know any company that has really embraced (as in put their money where their mouth is) a composite programming model like the one SCA offers. All the other software vendors, including Microsoft, are somehow dismissing the need to change the programming model towards a process/service/resource-centric model.

    I mean, you have almost all the components: WF/WCF/MEF (Entity Framework); why don’t you put the pieces together with an assembly model? You must have some feedback from your customers that they want to assemble several WF state machines together. That is not rocket science. Why not send a strong message across the industry that "re-use" is indeed possible, that productivity can be greatly improved not by paving the cow path but by being innovative at the programming model level …

    Frankly I don’t get it. All I can imagine is old grumpy architects and developers who are holding us back in antiquated programming models because they spent all their lives building them and they can’t swallow that they were wrong, so wrong.

    When will this stop?

    Why do you think we have a better chance now at creating such standards? That’s the point I am missing I guess.

    JJ-

  8. Hi JJ,

    First off, ACORD, if I understand correctly, is similar to RosettaNet.  The organization is focused mostly on the value chain, and therefore on B2B, although you certainly CAN use the data standards within the enterprise.

    This approach puts the business transaction first.  Since a business transaction, in an integrated environment, must derive from a consistent data model, and since most standards organizations do not start with a standard data model, the business transactions produced, in most cases, are of limited value within the company.  That is because the business will communicate, between departments, using data models that reflect its own information architecture.  Once communication crosses the enterprise boundary, the business translates its information into a business transaction.

    Of course, this is tedious.  Without a common data model, every transaction has to be carefully hand-crafted, and it takes a long time to create each one.  That is why standards bodies exist… because they must.

    Since these bodies focus on the B2B opportunities, the areas they tackle are the areas of key value to their value proposition.  In insurance, they share claims.  In supply chain, they share shipments and bill-of-materials.  These are the stuff of the value-edge.

    They are NOT the stuff of the commodity center, where SaaS vendors get their biggest value story.  The commodity center is the place where commodity transactions matter.  Things like "human resource" and "travel expense request" are the interesting messages for the commodity hub at the center, because they are the transactions that should be outsourced.

    So I’m not calling for some new industry consortium to create yet-another-data-communication standard.  I’m talking about a base information model and base integration model, and system descriptions for 100 different business application categories, complete with details of how each and every one will integrate with its partners.  It’s a shared architecture, one that any enterprise can use to outsource core (commodity) transactions.
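
    To give a flavor of what one of those system descriptions might contain, here is a purely illustrative sketch (hypothetical Java types and names, not an existing standard): each application category would be catalogued with the data documents it masters and the standard transactions it must accept and emit at its boundary.

    // Purely illustrative sketch of one catalogue entry for a standard business
    // application category; all names are hypothetical.
    import java.util.List;

    record SystemDescription(
            String category,                    // e.g. "Personnel", "Order Management"
            List<String> masteredDocumentTypes, // data documents this system is the master of
            List<String> acceptedTransactions,  // standard transactions it must accept
            List<String> emittedTransactions) { // standard transactions it must emit
    }

    class CatalogueExample {
        public static void main(String[] args) {
            var personnel = new SystemDescription(
                    "Personnel",
                    List.of("Employee", "Position"),
                    List.of("SyncEmployee", "GetEmployee"),
                    List.of("EmployeeChanged"));
            System.out.println(personnel.category() + " masters " + personnel.masteredDocumentTypes());
        }
    }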

    That represents a level of pre-integration that cannot be done with the standards as they exist today.

    And I’m withholding my opinion on SCA vs. .NET.  I will say this: a packaging model will solve some problems, but it won’t solve this one.

  9. Nick,

    sorry, I should have made my comments clearer. The first two examples of standards bodies I used were the Open Applications Group and the OMG (domain models), which are clearly not B2B.

    So let me rephrase what I said: what you are suggesting to do looks a lot like the Open Applications Group specification: OAGIS. Have you looked at the OAGIS integration scenarios? the WSDLs? the schemas?

    Incidentally, for me B2B is an arbitrary boundary. A retailer can buy a supplier and yet all the information flows will remain similar; the transition from B2B to EAI will be a legal one, not a technical one (there is of course some difference in quality of service when you compare B2B and EAI information interchange, but that’s not the point we are discussing).

    Again, in the article that I wrote on the Seven Fallacies of Business Process Execution, I explain that Resource Lifecycle Services are extremely reusable; this is really what you are talking about that needs to be standardized (for the most part it is already done in OAGIS). The architecture blueprint I propose enables an organization to keep control of its human tasks (workflow) and decision services, and to outsource systems of record and master data management.

    Now if you look at the latest article from Mike Edwards on SCA: http://www.infoq.com/articles/async-sca

    and you spend some time looking at how a bidirectional interface is handled in a traditional programming model:

    (I could have picked WCF duplex model just as well)

    <interface.java interface="services.invoicing.ComputePrice"
                    callbackInterface="services.invoicing.InvoiceCallback"/>

    and you look at how a Java implementation compares to a BPEL implementation to manage the long-running interactions between resources, then it should be clear that without a dramatic change in the programming model, nothing will change and reuse will not happen.

    So again, my question is: what do you intend to do differently from the Open Applications Group? And assuming you would do something different, could you explain why re-use will now occur?

    (and of course IBM’s IAA or IFW are not B2B either).

    JJ-

  10. Hi JJ,

    I am familiar with both OAGIS and SCA.  The fact that I’m not responding to your questioning on SCA does not imply that I fail to understand.  

    OAGIS, if you look closely, provides many models for integrating systems.  You are encouraged to "pick the model that is the closest to what you need" and then use the transactions according to that model.

    I like the envelope from OAGIS.  I like the verb-noun model.  I like the separation of data from command (a pattern also used in REST, BTW).  I am not going to suggest otherwise.
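
    For readers who haven’t seen the verb-noun idea, here is a tiny sketch (hypothetical Java types, not actual OAGIS schema artifacts) of what keeping the command separate from the data looks like: the same noun can travel under different verbs.

    // Tiny illustration of the verb-noun envelope pattern; hypothetical types,
    // not OAGIS schema elements.
    enum Verb { PROCESS, SYNC, GET, SHOW, CONFIRM }

    record PurchaseOrder(String orderId, String customerId, double total) { }

    // The envelope carries the command (verb) separately from the data document (noun).
    record Envelope<N>(Verb verb, String senderSystem, N noun) { }

    class VerbNounExample {
        public static void main(String[] args) {
            var po = new PurchaseOrder("PO-1001", "CUST-42", 129.95);
            var processIt = new Envelope<>(Verb.PROCESS, "OrderCapture", po);
            var syncIt = new Envelope<>(Verb.SYNC, "OrderCapture", po);
            System.out.println(processIt.verb() + " vs " + syncIt.verb());
        }
    }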

    However, if Joe picks one model for Order to Cash for his business, and Mary picks another model for Order to Cash for her business, then a service provider that wants to write a service that both Joe and Mary can purchase has to support both models.

    And therein lies the rub.

    OAGIS is a transactional standard.  It is a nice one.  But it is not an architectural model.

    What I am talking about will define the boundaries of the system… say the personnel system and the ERP system.  One standard interface.  Async, of course (especially since you like SCA :-).

    One model to integrate them.  One definition of the boundaries.  Clear, consistent, and standard service names.  

    There will be a HUGE amount of flexibility WITHIN the system boundary, but no flexibility at all on the system boundary.  It is defined.  It is done.  It is standard.
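
    As a rough illustration (hypothetical names, not a real specification), the boundary of the personnel system might be pinned down with one standard, async interface like this, while everything behind it remains free to vary:

    // Rough, hypothetical sketch of a standardized system boundary: the service
    // and message names are fixed by the standard; the implementation behind the
    // boundary is entirely up to the vendor.
    import java.util.concurrent.CompletableFuture;

    record EmployeeRecord(String employeeId, String name, String departmentId) { }

    interface PersonnelSystemBoundary {
        CompletableFuture<EmployeeRecord> getEmployee(String employeeId);        // query
        CompletableFuture<Void> syncEmployee(EmployeeRecord employee);           // async update
        CompletableFuture<Void> publishEmployeeChanged(EmployeeRecord employee); // event to the bus
    }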

    If any of you think this is draconian, dear reader, consider this: the electrical system in your house is exactly this.  Completely standard, and yet you can plug that new 42-inch flat-screen TV into the same system that was designed almost 100 years ago.

    Standardization of the system did not prevent or obstruct innovation at all.  On the contrary, it empowered it.  

    So, JJ, I’m not talking about a transaction model like OAGIS, and I’m not talking about an async callback to proxy mechanism like SCA.  They are interesting and useful, but insufficient by themselves to meet the needs of the next generation of software.  

    They are small, evolutionary innovations.

    I’m calling for a large, revolutionary, disruptive shift.

    Nothing less will do.

  11. Nick:

    sorry, you know I am stubborn. But IMHO, OAGIS is exactly what you are talking about.

    >> OAGIS is a transactional standard.  It is a nice one.  But it is not an architectural model.

    On the contrary. OAGIS is very close to the REST style, except for URIs, because it is "component" based. It has a uniform interface, and it has a uniform envelope (the equivalent of APP).

    >> What I am talking about will define the boundaries of the system… say the personnel system and the ERP system.  One standard interface.

    This is exactly what OAGIS has done (again, from what I understand of what you are saying)

    Oracle is using OAGIS extensively as their interface to their ERP, CRM, SCM… OAGIS is organized in components rather than solutions (order management, manufacturing…)

    BTW, the uniform interface of the OAGIS does not support … actions or events…

    >> Standardization of the system did not prevent or obstruct innovation at all.  On the contrary, it empowered it.  

    Nick, I don’t like this kind of trivial comparison. I can argue to the contrary that this interface was poorly designed. When you think about it, when new devices were brought into the home, the "copper" interface could not adapt: the telephone signal had to have its own wire, cable (for the TV) had to have its own wire…

    It is actually an even worse analogy at the "power" level: it could not adapt to the "low voltage" revolution. How many adapters do you have in your home today? At what cost to society? All these incompatible adapters are "little" hacks that had to be implemented because the "general" interface was inappropriate.

    I could continue on and on. For instance, this interface is not "bi-directional"; they never thought people could produce power (:-). Yet today, one of the challenges for solar power is getting the flexibility of a bi-directional interface. It costs a lot of money to do that.

    I think your analogy proves exactly the opposite of your point.

    Don’t you think, again, that the technology with which you define and operate this interface is a key success factor?

    >> I’m calling for a large, revolutionary, disruptive shift.

    Me too, but you need to build the road, before you can build the cars (I like this analogy a lot better).

  12. Hi JJ,

    Roads and cars have evolved together (at least in the US).  The standards for roads today are considerably different than they were when I was a kid, and from what I understand, are quite a bit different from what they were when cars were first invented to replace the horse and buggy.

    I like OAGIS. They do create interfaces that support components.  The problem is that they don’t constrain the components.  SAP defined their components and then uses OAGIS.  Another company defines their components and uses OAGIS.  But if a customer integrates with OAGIS, that doesn’t mean that they can swap out their other system for SAP… because they may have used a DIFFERENT OAGIS INTERFACE than SAP did.  

    For every problem, OAGIS has more than one answer.

    And that is why it doesn’t work to meet the needs I’m describing.  OAGIS is a great thing… step one on a long road.  I’m saying that it is time to take step two… to work with vendors to constrain the components, define their boundaries as standards, and create OAGIS-style transactions around them.

    What matters more is not the transaction.  

    It’s the boundary.

    (Not buying the cable TV and telephone argument, nor am I all that worried about the need to adapt the system to meet changes.  Everything needs to have versions… even the partitioning I’m suggesting.  The reason the power system has not adapted has more to do with the politics of electricity and the personalities in the industry than with the limitations of the technology.  BTW: in my area, every home can sell power back to the power company.)

    If you want to focus on making the power company more adaptable, while I focus on getting standardized electric plugs, that’s fine.  We are not in conflict.  
