I was called on, today, to justify a ‘smelly’ technical decision that resulted from one of my goals. I’m not particularly surprised. If I were to see a really odd implementation, I would first question the design, and then the requirements that fed it. In this post, I will share the situation and my response. I hope it helps others implementing SOA applications.
Backstory: we have a legacy system. Works well. However, we are integrating a series of applications and one of the things we are trying to do is to remove ‘multiple masters’ of data. That means breaking up legacy systems to find overlaps, where two apps master the same data, and require one to consume from the other. Fewer masters, cleaner data, better integration. The SOA promise.
The challenge is that someone has to go first. Someone has to break up their app into services and deliver those services even if (a) the only one who will use the service is the app’s user interface, and (b) we plan to “version” the service so that instead of pointing to the local database, we will consume another system’s service… one that either doesn’t exist or isn’t ready for us yet. Effectively, we have to create a service that we are planning to kill off.
Of course, breaking up an app isn’t easy. One of the tasks is to break up the database. You cannot have two services that behave in a decoupled manner if they are wound up tightly in the database and stored procedures. So I asked for two “logical” databases where one now exists, because I have two services that are being delivered by the legacy app, one of which is likely to move later.
Now for the challenge. The question I got was this: why do we need to break up the database into two databases? Doesn’t make sense! Inefficient! No Referential Integrity! What gives? (I paraphrase to make it sound more hysterical than it was. I’m in that kind of mood.)
My response was careful. Instead of dictating the design (I’m an architect, remember), I dictated the REQUIREMENTS that I will put on the services design, and allowed the software team to actually create a structure that works.
So here are my requirements for the services. I’ll call the system ZIPPO to keep from quoting the project name. I’ll relabel the two services to say that they provide Gadget information and Gadget Supplier information. The rest of this post is my response.
I see value in:
- Delivering ZIPPO in such a way that it consumes services that WILL exist somewhere else, even if they don’t YET exist somewhere else. This is largely done by creating a service (locally) with the expectation that the service may move or redirect in the future.
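The “create it locally, move it later” idea above can be sketched as a simple service abstraction. This is a hypothetical illustration (the names `GadgetService`, `LocalGadgetService`, and `RemoteGadgetService` are mine, not from any real project): consumers code against a contract, so pointing the service at another system later is a wiring change, not a rewrite.

```python
# Hypothetical sketch: front the local data store with a service contract so the
# implementation can later move behind a remote endpoint without touching consumers.
from abc import ABC, abstractmethod


class GadgetService(ABC):
    """The contract every consumer codes against -- local or remote."""

    @abstractmethod
    def get_gadget(self, gadget_id: str) -> dict:
        ...


class LocalGadgetService(GadgetService):
    """Phase 1: backed by the legacy app's own database (a dict stands in here)."""

    def __init__(self, db: dict):
        self._db = db

    def get_gadget(self, gadget_id: str) -> dict:
        return self._db[gadget_id]


class RemoteGadgetService(GadgetService):
    """Phase 2: delegates to the system that will eventually master gadget data.
    Stubbed here; in practice it would make an HTTP or SOAP call."""

    def __init__(self, endpoint: str):
        self._endpoint = endpoint

    def get_gadget(self, gadget_id: str) -> dict:
        raise NotImplementedError(f"would call {self._endpoint}")


# The UI only ever sees GadgetService; swapping implementations is a config change.
service: GadgetService = LocalGadgetService({"g1": {"id": "g1", "name": "Widget"}})
print(service.get_gadget("g1")["name"])  # Widget
```

The point of the indirection is that “killing off” the local service later means retiring `LocalGadgetService`, while every consumer keeps calling the same contract.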
- Delivering a service that our user interface will consume with the expectation that the service could be consumed by other systems in the future. Note that many of our products, including SQL Server’s management tools and Sharepoint Portal Server’s management tools, have the exact same design idea. The APIs that they expose are the exact same ones that their own tools use. No exceptions. This is brilliant and a model for us to copy.
- Keeping the services decoupled to the most rational extent possible. Changes in one service need to have KNOWN impacts on other services. If two services are tightly coupled in terms of business functionality, then we need a declared, visible, and open mechanism for describing that coupling. There is no such thing as ‘perfectly decoupled.’ What this means:
- Services are responsible for exposing the data that they master at the service level (via both event publication and query response).
- Master Data Management patterns should leverage the service interface to collect and distribute changes that have occurred in master data tables.
- The implementation of one service needs to have no “back door” interaction with the implementation of another service.
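The first of those sub-requirements, exposing mastered data “via both event publication and query response,” can be sketched as follows. All names here are illustrative (`SupplierService` is my own stand-in, not part of ZIPPO): every write goes through the service, which publishes a change event to subscribers and also answers point queries, so no other system ever needs a back door into the tables.

```python
# Hypothetical sketch of a service that masters supplier data and exposes it the
# two sanctioned ways: event publication (push) and query response (pull).
from typing import Callable


class SupplierService:
    def __init__(self) -> None:
        self._suppliers: dict[str, dict] = {}
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        """Other services register for change events instead of reading our tables."""
        self._subscribers.append(handler)

    def update_supplier(self, supplier: dict) -> None:
        """All writes go through the service, which publishes the change as an event."""
        self._suppliers[supplier["id"]] = supplier
        event = {"type": "SupplierChanged", "data": supplier}
        for handler in self._subscribers:
            handler(event)

    def get_supplier(self, supplier_id: str) -> dict:
        """Query response: the only sanctioned read path for other systems."""
        return self._suppliers[supplier_id]
```

With both channels in place, the coupling between services is exactly the declared event and query contracts, which is the “declared, visible, and open mechanism” the requirement asks for.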
So, in your question, I was hearing you ask if we should move tables from one database to two. I’d like to clarify that by saying that, first and foremost, the design is up to you, as long as you can align to the above concepts. Secondly, it is appropriate for a table that exists as “master” in one database to be copied as “read only” in another. We do this all the time. The copy process itself is being gradually moved towards an eventing model and away from SQL jobs, but the net result is the same.
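The “master here, read-only copy there” pattern under an eventing model might look like the sketch below (again, hypothetical names): the consuming service holds a local snapshot of data it does not master, and that snapshot is refreshed only by events from the mastering service, never by a direct write or a shared SQL job.

```python
# Hypothetical sketch: a read-only replica of master data, updated only by
# change events from the service that masters it.
class SupplierReadReplica:
    def __init__(self) -> None:
        self._copy: dict[str, dict] = {}

    def on_event(self, event: dict) -> None:
        """Handler wired to the mastering service's event stream."""
        if event["type"] == "SupplierChanged":
            row = event["data"]
            self._copy[row["id"]] = dict(row)  # store a local snapshot

    def lookup(self, supplier_id: str) -> dict:
        # Local reads are fast; there is no write path -- only events update the copy.
        return self._copy[supplier_id]
```

The net result is the same as the old SQL-job copy, but the coupling now runs through the service interface, where it is declared and visible.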
On the other hand, I don’t want you to leave the MASTERING of gadget supplier information in the same store as the MASTERING of new gadgets unless you can demonstrate that you have no back-end interactions between these tables (including referential integrity, cascading delete, etc). That objective is more easily met with different databases, but one db is fine if you can pull it off.