I was called on today to justify a technical decision that looked ‘smelly’ and that resulted from one of my goals.  I’m not particularly surprised.  If I were to see a really odd implementation, I would first question the design, and then the requirements that fed it.  In this post, I will share the situation and my response.  I hope it helps others implementing SOA applications.

Backstory: we have a legacy system.  Works well.  However, we are integrating a series of applications, and one of the things we are trying to do is to remove ‘multiple masters’ of data.  That means breaking up legacy systems to find the overlaps, the places where two apps master the same data, and requiring one to consume from the other.  Fewer masters, cleaner data, better integration.  The SOA promise.

The challenge is that someone has to go first.  Someone has to break up their app into services and deliver those services even if (a) the only one who will use the service is the app’s user interface, and (b) we plan to “version” the service so that instead of pointing to the local database, we will consume another system’s service… one that either doesn’t exist or isn’t ready for us yet.  Effectively, we have to create a service that we are planning to kill off.

Of course, breaking up an app isn’t easy.  One of the tasks is to break up the database.  You cannot have two services that behave in a decoupled manner if they are wound up tightly in the database and stored procedures.  So I asked for two “logical” databases where one now exists, because I have two services that are being delivered by the legacy app, one of which is likely to move later. 

Time for the challenge.  The question I got was this: why do we need to break up the database into two databases?  Doesn’t make sense!  Inefficient!  No Referential Integrity!  What gives?  (I paraphrase to make it sound more hysterical than it was.  I’m in that kind of mood.)

My response was careful.  Instead of dictating the design (I’m an architect, remember), I dictated the REQUIREMENTS that I will put on the services design, allowing the software team to actually create a structure that works.

So here are my requirements for the services.  I’ll call the system ZIPPO to keep from quoting the project name.  I’ll relabel the two services to say that they provide Gadget information and Gadget Supplier information.  The rest of this post is my response.

I see value in:

  • Delivering ZIPPO in such a way that it consumes services that WILL exist somewhere else, even if they don’t YET exist somewhere else.  This is largely done by creating a service (locally) with the expectation that the service may move or redirect in the future.
  • Delivering a service that our user interface will consume with the expectation that the service could be consumed by other systems in the future.  Note that many of our products, including SQL Server’s management tools and SharePoint Portal Server’s management tools, have the exact same design idea.  The APIs that they expose are the exact same ones that their own tools use.  No exceptions.  This is brilliant and a model for us to copy.
  • Keeping the services decoupled to the most rational extent possible.  Changes in one service need to have KNOWN impacts on other services.  If two services are tightly coupled in terms of business functionality, then we need a declared, visible, and open mechanism for describing that coupling.  There is no such thing as ‘perfectly decoupled.’  What this means:
    • Services are responsible for exposing the data that they master at the service level (via both event publication and query response); see the sketch after this list.
    • Master Data Management patterns should leverage the service interface to collect and distribute changes that have occurred in master data tables.
    • The implementation of one service needs to have no “back door” interaction with the implementation of another service. 
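
To make these requirements a bit more concrete, here is a minimal sketch, in Java, of a service contract along these lines.  The names (GadgetSupplierService, SupplierRecord, and so on) are illustrative stand-ins, not the actual ZIPPO contracts; the point is that consumers see only a query interface and an event subscription, and the implementation behind them can start out local and move later.

```java
import java.util.List;
import java.util.Optional;

/** Query side: the only sanctioned way for consumers to read supplier data. */
interface GadgetSupplierService {
    Optional<SupplierRecord> findSupplier(String supplierId);
    List<SupplierRecord> searchSuppliers(String nameFragment);
}

/** Event side: the service announces changes to the data it masters. */
interface GadgetSupplierEvents {
    void subscribe(SupplierChangeListener listener);
}

interface SupplierChangeListener {
    void onSupplierChanged(SupplierChangeEvent event);
}

record SupplierRecord(String supplierId, String name, String region) {}

record SupplierChangeEvent(String supplierId, String changeType) {}

/**
 * Today's implementation sits in front of the local (logical) supplier
 * database.  When the "real" supplier service exists somewhere else, a remote
 * client implementing the same interfaces replaces this class, and neither
 * the ZIPPO user interface nor any other consumer has to change.
 */
class LocalGadgetSupplierService implements GadgetSupplierService {
    @Override
    public Optional<SupplierRecord> findSupplier(String supplierId) {
        return Optional.empty(); // placeholder for a lookup against the local store
    }

    @Override
    public List<SupplierRecord> searchSuppliers(String nameFragment) {
        return List.of(); // placeholder for a query against the local store
    }
}
```

The only thing that changes when the service “moves” is which implementation gets wired in behind the interfaces; consumers never gain a back door to the tables underneath.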

So, in your question, I was hearing you ask if we should move tables from one database to two.  I’d like to clarify that by saying that, first and foremost, the design is up to you, as long as you can align to the above concepts.  Secondly, it is appropriate for a table that exists as “master” in one database to be copied as “read only” in another.  We do this all the time.  The copy process itself is being gradually moved towards an eventing model and away from SQL jobs, but the net result is the same.
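
For the “master here, read-only copy there” arrangement, the eventing model we are moving toward could be sketched roughly as follows.  The in-memory bus and all of the names here are made-up stand-ins for the real messaging infrastructure; the pattern is the point: the mastering service publishes a change, and the consuming service keeps its read-only copy current without a SQL job.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

class SupplierReplicationSketch {

    /** Stand-in for the real messaging infrastructure (ESB, queue, and so on). */
    static class EventBus {
        private final List<Consumer<SupplierChanged>> subscribers = new CopyOnWriteArrayList<>();
        void subscribe(Consumer<SupplierChanged> handler) { subscribers.add(handler); }
        void publish(SupplierChanged event) { subscribers.forEach(s -> s.accept(event)); }
    }

    /** Change event published by the service that masters supplier data. */
    record SupplierChanged(String supplierId, String name) {}

    /** Consumer side: a read-only copy of data this service does not master. */
    static class ReadOnlySupplierCopy {
        private final Map<String, String> supplierNamesById = new ConcurrentHashMap<>();
        void apply(SupplierChanged event) {
            supplierNamesById.put(event.supplierId(), event.name());
        }
        String nameOf(String supplierId) { return supplierNamesById.get(supplierId); }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        ReadOnlySupplierCopy copy = new ReadOnlySupplierCopy();
        bus.subscribe(copy::apply);

        // The mastering service publishes; the copy stays current with no back-door SQL.
        bus.publish(new SupplierChanged("SUP-42", "Acme Gadget Supply"));
        System.out.println(copy.nameOf("SUP-42")); // prints: Acme Gadget Supply
    }
}
```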

On the other hand, I don’t want you to leave the MASTERING of gadget supplier information in the same store as the MASTERING of new gadgets unless you can demonstrate that you have no back-end interactions between these tables (including referential integrity, cascading deletes, etc.).  That objective is easier to meet with different databases, but one db is fine if you can pull it off.

By Nick Malik

Former CIO and present Strategic Architect, Nick Malik is a Seattle-based business and technology advisor with over 30 years of professional experience in management, systems, and technology. He is the co-author, with Dr. Brian Cameron, of the influential paper "Perspectives on Enterprise Architecture," which effectively defined modern Enterprise Architecture practices, and he is a frequent speaker at public gatherings on Enterprise Architecture and related topics. He co-authored a book on Visual Storytelling with Martin Sykes and Mark West titled "Stories That Move Mountains".

4 thoughts on “What would you say are the requirements that all services should meet?”
  1. Nick,

    Excellent treatment of a much neglected topic. I have a question about one line:

    If two services are tightly coupled in terms of business functionality, then we need a declared, visible, and open mechanism for describing that coupling.

    I agree wholeheartedly with that statement, but I’m not sure how you are addressing the issue. I’ve been thinking a lot about how to describe coupling that is driven by business requirements. Not only are there no tools that support this (that I’m aware of), I’m not aware of an appropriate vocabulary for this discussion. I think this is a key (perhaps "the" key) to building effective services. I’m hoping you can expand on how you’re dealing with this issue.

  2. Perhaps it’s a good idea to "inverse" your total design. Yes, the promise of SOA is better integration and cleaner data. But the promise of SOA is also decoupling. The problem with the current SOA hype is that we are tightly coupling our systems instead of decoupling them. It is all about reuse of functionality, so "calling" foreign services, and that is a form of tight coupling.

    When you inverse your grand design, the initiative for data exchange is not the consuming application but the producing application. You leave your data persistency redundant and concentrate on synchronization. Have your master publish its changes and have your interested slaves subscribe to them. It decouples your applications, and it reduces the load on the data supplier and the network with no scalability issues. You can add and remove slaves as much as you like without affecting any of the other applications, and the same applies to replacing the master.

    The slave’s database can be viewed as a kind of cache which is automatically synchronized with the master in real time and, at the same time, is completely independent of the availability of the master. A SOAP-oriented ESB infrastructure gives you full reliability and security for the process. On top of that, the data flows are a source of real-time business activity monitoring, if you like that.

    My vision is that you can obtain decoupling, reliability and performance at the (low) cost of redundant data persistency. When you stick to "calling services" you gain benefits at the level of application construction, but at the cost of higher loads, less predictable performance and scalability.  

    What do you think?  

  3. Hello Jack,

    My design does not need to be inverted to meet your requirements, because my design and yours are identical.  I’m taking a single legacy application and breaking it into multiple services with a consuming SOBA (service-oriented business application).  At the system level, it is very simple, since there is very little interaction across services.  In other words, there is no need for publish-subscribe because, in this simple viewpoint, each data entity only really exists in one place.

    However, when you start to expand the field of view a bit (imagine you are looking through a camera, and you press the WIDE ANGLE button… more comes into view), you can see that the criteria I gave in my "decoupling" requirement above exactly describe what you are saying:

    1. I asked that services remain decoupled, with no back door interactions.  

    2. I asked that services INFORM others of changes in their data. This goes to exactly what you are saying: publish changes and have others subscribe to them…

    3. Master data (both consumed and exposed) needs to work through a Master Data Management scheme (which, in a wide variety of cases, is done well as a messaging infrastructure using SOA).

    Basically, we have described the same thing.  My prior posts do a better job of explaining my stand on MDM.

    Hope this helps,

    — Nick

  4. Hi JohnCJ,

    I think your question merits another blog post, rather than a simple reply.  I’ll try to put some words around this idea.

    — Nick
