September 2006

If you don't lead, others will

September 23rd, 2006 | Enterprise Architecture

You can smell it in the air.  When a good idea’s time has come, many people will start working on it, almost spontaneously.  That’s what happens when smart people have the right to make choices and decisions.  Good things happen.

This happens at Microsoft.  More often than I can explain.  Not everyone sees it.  Not everyone knows where to look, but I’ve seen it… more than once.

I’ve seen an idea just ‘arrive,’ get tested, and fall away.  If the idea is a good one, it comes back and comes back and comes back until more and more people start to BELIEVE in it.  The idea takes over.  After a while, no one can remember what it was like before that idea was reality.  This was the way it was before code analysis tools appeared.  The idea appeared and was tried many times: automatically search code for defects and weak practices (think LINT on steroids).  After a while, it caught on, and now, a decade later, I can’t find anyone who remembers what it was like to write a major product without code analysis tools.

Yes, in a healthy culture of innovation, a good idea will keep coming back. 

However, when the time comes, if your team has a role to play, jump in.  Don’t hang out and wait to see if the idea was any good.  If you fail to take the lead, others will take it away from you.

Case in point: Enterprise Architecture is a fairly new thing in Microsoft IT. While there have been architects for a long time, the current model, where architects actually have a say in large projects and can direct the adoption of expensive but valuable infrastructure… this is relatively new.  Shortly after EA started to gain traction, one of the embedded architects began to form an Architectural Review Board (ARB) in his area.  He backed off because he didn’t have the governance support, but the idea was right.

Now, other folks want to create Architectural Review Boards.  The idea has arrived.  It is time for the Enterprise ARB to take shape.  It is time to develop the decision rights, processes, and pro forma models that produce value by creating and enforcing excellence in architecture.

Now is the time for EA to step in and take a leadership role in creating these boards.  If we don’t, the project management team will.  Now, don’t get me wrong.  Our PM team is staffed with very smart birds.  But it is our leadership that is needed.  If we fail to act, the board that they create could be weak and potentially ineffective.  Or worse, it could make decisions that aren’t architecturally sound or based on principles.

As they say, timing is everything.  It is time to lead.

Lead and harness the passion in those around you

September 22nd, 2006 | Enterprise Architecture

Architects don’t write code.  That’s the first thing that a developer notices when he or she moves into this job.  But there is another change… substantial yet subtle… Architects don’t accomplish anything without having someone else ‘doing the real work.’

This is not that different from project managers and development managers.  You have to move from the individual contributor role to that of leader and, to an extent, manager.  However, this is not leadership by “I said so.”  Very few developers report directly to an architect.  This is leadership by, well, leadership.  You have to influence the decisions of others without having direct control over them.

Since you have no direct control, leadership skills matter a great deal to the architect.  You have to show that the team has common goals, and sell those goals.  You have to share ideas, build credibility, set a direction, and help each person see how they can help the team reach it.

Find the passionate among you.  Most people are passionate about something.  There is some aspect of their job that they love.  Find it.  Speak with them.  Go to lunch.  Share ideas.  Brainstorm.  Listen.

Find their passion and harness it.  Show them how their passion can become their job.  They will follow you into battle if they believe in the goal, believe in their role in it, and are passionate about that role.  Developers will follow if they believe you can lead.

Definition of an architectural model

September 17th, 2006 | Enterprise Architecture

Definition: An architectural model is a rich and rigorous diagram, created using available standards, in which the primary concern is to illustrate a specific set of tradeoffs inherent in the structure and design of a system or ecosystem.  We use architectural models to communicate with others and to seek peer feedback.

Let’s look at that definition a little:

  • rich – for the topic you are describing, there should be sufficient information to describe the area in detail.  The information should not be lacking or vague.  Your goal is to minimize misunderstandings, not perpetuate them.  This can be taken too far, of course.  See my notes below on ‘primary concern.’
  • rigorous – you have applied a specific methodology to create this particular model, and the resulting model ‘looks’ a particular way.  Here’s the test of rigorousness: If two architects, in different cities, were describing the same thing, the resulting diagrams would be nearly identical (with the possible exception of visual layout, to a point).
  • diagram – I know that many folks will use the word “model” to refer to any abstraction that simplifies something for the sake of addressing a particular viewpoint.  I find that definition to be useful, but I’m specifically subclassing it to architectural diagrams.  In my humble opinion, if you cannot draw a picture to communicate the idea, you have a poor understanding of what it is you are trying to communicate.
  • standards – I can’t count the times I’ve seen a diagram and asked myself “what does that person mean?”  Standards work when everyone knows them and everyone uses them.  That said, the standards we have are (a) not well known, (b) widely misused, and (c) still being challenged with new attempts at describing the same things, because the existing standard isn’t “good enough.”  I’m tired of this.  I want there to be a comprehensive set of diagrams that we can all learn about and leverage.  UML is a start, but it misses large areas of modeling needs.
  • primary concern – It is easy to be too detailed by including many different needs in a single diagram.  This should be avoided.  It is better to draw multiple diagrams, one for each way the model will be used, than to draw a ‘mega diagram’ that is so rich in content that it requires a two-year course of study to understand it.  Remember this: when building houses, the architect delivers many different diagrams.  Each is used differently.  Frequently, the final package of plans will repeat the floor plan many times: framing plan, electrical plan, heating plan, plumbing plan, etc.  They don’t just say: it’s a floor plan, so 100% of the information that CAN go on a floor plan should be put there.  The plumbing subcontractor doesn’t need the details that the electrician cares about.  It is time for us, as software architecture professionals, to learn and convey this notion: create a different diagram for each audience.
  • illustrate – we are communicating, and we are looking for feedback.  The goal of the diagram should be to answer a specific question and to share that answer with others to (a) see if they agree, and (b) guide their work.  So, know what it is you want to say, and whose work you intend to influence with it.
  • specific set of tradeoffs – the ATAM methodology describes a process whereby software architecture can be peer-reviewed for appropriateness.  It does this by starting with a basic notion: there is no such thing as a ‘one-size-fits-all’ design.  We can create a generic design, but then we need to adapt it to specific situations based on the business requirements.  In effect, we make tradeoffs.  The diagram should make those specific tradeoffs visible.  Therefore, before you create the diagram, be prepared to describe, in words, which tradeoffs you are attempting to illustrate in this model.  If you cannot make a list of the tradeoffs you are illustrating, close your modeling tool.  You’ve jumped ahead.
  • tradeoffs inherent in the structure and design – a component is not a tradeoff.  You won’t normally get to put a tradeoff in a box and put that box on your model.  Tradeoffs are the first principles that produced your design models, and therefore, when you describe or defend a particular tradeoff, you should be able to refer to your models to defend your position.
  • system or ecosystem – modeling in general can be done at different levels of abstraction.  It is reasonable to talk about the architecture of a specific application.  It is also reasonable to talk about the system of applications needed to deliver a complete business process (like order-to-cash).  It is not reasonable, however, to talk about architecture within a single component.  That is design, not architecture.

 

I don’t know why I felt compelled to write this definition today.  I guess it’s just Sunday.  Alas.

A distributed system’s logical data model

September 14th, 2006 | Enterprise Architecture

There are lots of different ways to describe data.  I’ve seen data models that attempt to describe, conceptually, all of the data relationships for lines of business, marketing programs, fulfillment programs, etc.  Conceptual data models are useful, primarily because they give you a starting point to work with the business to first understand, and then communicate, how the data can represent the business’ requirements.

Normally, when creating a system, we drop down to a logical data model for that system.  We indicate the “data on the inside” and the “data on the outside”.  Effectively, the diagram starts with a large ‘box’.  Inside the box are entities needed by the application.  Outside are the entities that come from somewhere else but are referenced by the application.

One challenge, however, that appears to be stumping one of my teammates is how to create the conceptual model when there is not one system, but two or three systems that communicate.  Effectively, we are talking about a distributed system, with distributed data.  The data is not distributed because of geography, but rather in order to foster loose coupling.

This is a different way to look at the design of a system than is typically seen, but I feel pretty strongly that it is an important aspect, and one that we need to be fairly formal about.

I approach the systems from the standpoint of the business processes first, and the use cases second.  For example, if you are creating a system that facilitates the creation of a standard business contract, it is entirely reasonable to break down the process into steps, where each step is performed by different roles. 

The first step would be to define a marketing or fulfillment program that the contract will be tied to.  The second would be to create legal clauses that can be fit into the document.  The third would be to create a template with rules for how the clauses are to be assembled for the particular contract type, and the fourth would be to create the contract itself.  Different people perform each step.  Each step has distinct responsibilities.  You could, if you wish, create a separate system for each.  In a SOA world, I think that you would create a set of services for each.

Each set of services is, in itself, an independent system.  In order to remain decoupled, the data may be referential, but not coupled.  Therefore, you may need to add a customer before you add an invoice, but there is NO reason that adding a customer should create data records directly in the order management database (I’m being a purist… Master Data Management is the ‘reality’ behind this situation).

So, if you are a developer who is used to creating a database with every bit of data that you think you will need in it, it can be quite a change to create not one, but many databases, bound together by master data that is copied locally on demand, and kept up to date by a cache engine (MDM).
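To make that shift concrete, here is a minimal sketch of the idea, in Python for brevity.  All of the names are hypothetical, and `fetch_from_mdm` stands in for whatever interface your MDM hub actually exposes; the point is the shape: the order system references customer data through a local, on-demand copy rather than writing into another system’s database.

```python
# Sketch: master data copied locally on demand (cache-aside against an MDM hub).
# All names are illustrative; fetch_from_mdm stands in for whatever interface
# your Master Data Management service actually exposes.

class LocalReferenceStore:
    """Read-only local copy of master data that another system owns."""

    def __init__(self, fetch_from_mdm):
        self._fetch = fetch_from_mdm   # callable: (entity, key) -> record dict
        self._cache = {}               # (entity, key) -> locally copied record

    def get(self, entity, key):
        """Return the local copy, pulling it from the MDM hub on first use."""
        if (entity, key) not in self._cache:
            self._cache[(entity, key)] = self._fetch(entity, key)
        return self._cache[(entity, key)]

    def refresh(self, entity, key):
        """Called by the cache engine when the master record changes."""
        self._cache[(entity, key)] = self._fetch(entity, key)


# The order system references customers but never writes customer records:
# adding an invoice reads the local copy instead of touching the CRM database.
def add_invoice(orders_db, customers, customer_id, lines):
    customer = customers.get("customer", customer_id)  # referential, not coupled
    orders_db.insert("invoice", {"customer_id": customer["id"], "lines": lines})
```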

Now, take one of those developers and ask him or her to create a data model that illustrates not “data on the inside” but “data in each room”.  That requires a different kind of thinking… because now, the problem of ‘master data’ becomes visible (and a little painful).

In this model, the Product data is brought across both for invoices and for shipments, but is it really the Product data that is in the shipments, or is it product and lot data?  In other words, it is one thing to ask “who did we ship soap to,” and another thing altogether to ask “who did we ship Lot 41 of tainted beef to?”

This distinction, between product and lot, becomes particularly visible when you model your systems this way.  More importantly, you can see the lines that cross the boundary between systems, and you can place services on each line: get product, get lot, get invoice, get shipment.

When designing the database, you will need to use a replication, cache, or transactional store to ensure referential integrity.

Claiming credit

September 14th, 2006 | Enterprise Architecture

I’m a collaborative person, and most of the time, I’m quite content to make sure that other folks get the credit for little victories that I participate in, especially when that improves my relationship with them.

However, every now and then, it is important to claim credit for a success, especially if I can leverage it into “making my manager look good.” 

Enterprise Architecture walks a tightrope.  We are seen as obstructionist and difficult by some, ineffective and pointless by others.  The key is to stay close to the middle: involved, valuable, empowering.  To be seen there, it is important… nay, critical… that the big successes Enterprise Architecture really did affect are visible to senior staff and the CIO.

That’s a challenge, because our culture is odd this way.  In the Microsoft culture, a failure is always one person’s fault, but a success is shared by all.  I don’t know if this is intentional, but in an environment filled with competitive, intelligent, shrewd business people, it is somewhat inevitable.  That makes it hard to hit that ‘balanced success’ point without looking like you are trying to steal someone else’s good press.

The answer is to say “we did this together… and this was OUR ROLE, and it was important.”  You can’t make one of the other folks look bad in that, because you need them to keep working with you, even if they fought against you every step of the way.  So success is still shared, but the role of the organization is recognized.

And valued.

EA needs all the positive press it can get.

Enterprise Architecture Interview Questions

September 10th, 2006 | Enterprise Architecture

I was reading some of the newsgroups and, for some reason, I’ve seen a LOT of requests lately about “interview questions for ‘blah-de-blah’ position.”  Just saw another for .Net developers.  Made me wonder: what would I consider a good set of interview questions for an Enterprise Architect?

Let’s see… what would I look for in an enterprise architect?

  • Visual Thinking – the ability to communicate with pictures rather than words
  • The ability to communicate complex ideas to widely different audiences.  Excellent written communication skills as well as the ability to speak to both small and large audiences.
  • A firm grasp of process engineering, lean or six sigma.
  • A reasonable grounding in the notions of business capability modeling and application-to-capability mapping (needed for simplification and redundancy review exercises).
  • The ability to lead architectural review sessions using the ATAM method of application architectural evaluation.
  • A firm foundation in current ideas in software architecture, including SOA, MDA, EDA, and basic OOD.  An understanding of the concept of pattern languages as well as deep knowledge of OO design patterns, architectural patterns, and messaging patterns.
  • A solid understanding of software development processes and methodologies: Agile, RUP, Spiral, Waterfall… and the ability to describe actual situations that may be appropriate for each one (yes, including Waterfall).
  • Reasonable experience in network infrastructure, including TCP networking, Firewalls, Routing, and Load Balancing. 
  • Solid understanding of encryption, authorization, authentication, and security mechanisms, especially the foundational elements of the Public Key Infrastructure.
  • Excellent knowledge of data management, including operational uses of RDBMS, Extract-Translate-Load operations, business intelligence data management, and data distribution / caching strategies.

Of course, you can’t ask questions about every area, so you want to pick the particular areas that you think are indicative of the experience you need in the particular position, or which you think can lead to a tangent that may cover other areas.  For example, a discussion of Business Intelligence enablement in an application infrastructure will hit on ETL, BI data management, and operational data management, but may also tangent into a good discussion of authorization at the data level, which can segue into a general discussion of authentication and authorization.

Similarly, a discussion of networking and load balancing can lead to a tangent on HTTPS and right into the Public Key Infrastructure.

Public speaking is probably the hardest to wrangle in an interview.  You can pull off ‘small group presentation’ by having them draw, on a whiteboard, the conceptual application architecture of a system that they are familiar with and walk you through it, discussing tradeoffs, strengths, weaknesses, and things they’d do differently if they could rewind time.  (Note: if they fail to reflect on any regrets, flip the ‘bozo bit.’  No one needs an arrogant or self-righteous Enterprise Architect.)

Probably the least necessary is the ability to describe software development methodologies.  Architects can be interested in, even passionate about, Scrum or RUP or TDD, but they have little to do with the ability to do the job. 

So, while I didn’t describe any specific questions, I hope that someone looking to interview an Enterprise Architect can benefit from this profile and notes.  At the end of the day, one company’s ideal candidate is a poor fit for another.   At this level, the interview needs to be driven by proven abilities and past experiences, and not book learning.

multiple architectures – many ghosts in the machine

September 10th, 2006 | Enterprise Architecture

I got an interesting comment to my post about a persistent data grid… that the idea is interesting when considered in context with an ESB.  (I assume this particular TLA stands for Enterprise Service Bus.)  I don’t know if the person leaving the comment meant to say that they are essentially the same, or just complementary.  If he thought that I meant the same thing, then I failed to be clear.

The thing about the ESB is that it places the messages “into the cloud.”  The persistent data grid places cached data “into the cloud.”  Different, but complementary.

When I was describing this idea to two other architects the other day, one asked “what happens on update?  Does the cache update from the message?”  The answer is no.  The message may intend for data to be updated.  It may even command that data be updated, but until the data is actually updated in the source system, it has no place in the cache.

In a very real sense, while the data grid may leverage an ESB as a portion of its architecture, it is separate from it.  The distributed data, which behaves in a manner that should allow very fast data performance, even at great distances, is not a message.  Intelligent and seamless routing and distribution is essential but does not deliver large datasets at great distances.

While I cannot know for certain if my idea would, I can tell you that ESB, of and by itself, does not.  So, in this situation, at least two architectures are needed.

Add to that the need for business intelligence.  In a BI world, the data needs to be delivered “as of a particular date” in order to be useful for the recipient analytic systems.  This ‘date relevance’ is needed to get proper roll-ups of data and create a truly valid snapshot of the business.

For example, if you have one system recording inventory levels, another recording shipments in transit, and another showing sales in stores, you need to know that your data in the analytics represents “truth” as of a particular time (say Midnight GMT).  Otherwise, you may end up counting an item in inventory at 1am, in a shipment at 7am, and sold by 10am.  Count it thrice… go ahead.  Hope you don’t value your job, or your company’s future. 

That requires data pulls that represent data as of a particular time, even if the pull happens a considerable time later.  For example, we may only be able to get our data from the inventory system at midnight local time, let’s say Pacific Standard Time, when the server is not too busy.  That’s about eight hours off of GMT.  The query has to pull for GMT.
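Here is a minimal sketch of what that looks like in practice (Python, with illustrative table and column names): the extract job may run at midnight Pacific, but it filters on the midnight-GMT boundary, so every source rolls up to the same instant.

```python
# Sketch: pulling "data as of midnight GMT" from a source whose quiet window
# is midnight Pacific. Table and column names are illustrative.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def snapshot_cutoff_utc(run_date):
    """The boundary every source must honor: midnight GMT on the run date."""
    return datetime(run_date.year, run_date.month, run_date.day,
                    tzinfo=timezone.utc)

def local_run_time(run_date):
    """The extract itself runs hours later, at midnight Pacific time."""
    return datetime(run_date.year, run_date.month, run_date.day,
                    tzinfo=ZoneInfo("America/Los_Angeles"))

def extract_inventory(conn, run_date):
    cutoff = snapshot_cutoff_utc(run_date)
    # The pull happens late, but it filters on the GMT boundary, so inventory,
    # shipments, and sales all roll up to the same instant.
    return conn.execute(
        "SELECT sku, qty FROM inventory_history WHERE effective_ts <= ?",
        (cutoff,),
    ).fetchall()
```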

This type of query is not well suited for a data-grid style cache, and while the message can travel through the ESB, the actual movement of the data is probably best handled by an ETL (Extract Translate Load) process using an advanced system like SQL Server Integration Services (the replacement for SQL DTS).

Alas, in our data architecture, I’ve described no less than three different data movement mechanisms.  Yet I still have not mentioned the local creation of mastered data.  If the enterprise architecture indicates that a centralized CRM system is the actual ‘master’ system for customer data, then the CRM will use local data access to read and write that data.  That is a fourth architecture.

OK… so where do reports get their data?  That’s a fun one.  Do they pull directly from the source system?  If so, that’s a direct connection.  What if the source system is 10,000 miles away?  Can we configure the cache system to automatically refresh a set of datasets for the timely pull of operational reporting data?  That would be a variation on my persistent data cache: the pre-scheduled data cache refresh, sketched below.  This would require a separate data store from the active cache itself.  This amounts to data architecture number five.
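As a sketch of that fifth mechanism, here is roughly what a pre-scheduled refresh might look like (Python; the source interface, the store, and the daily interval are all assumptions for illustration):

```python
# Sketch: a pre-scheduled refresh of reporting datasets into a store kept
# separate from the active cache. Names and the interval are illustrative.
import sched
import time

def refresh_reporting_store(source, reporting_store, datasets):
    """Bulk-copy each dataset from the remote source into the local store."""
    for name in datasets:
        reporting_store[name] = source.pull(name)  # one bulk pull per dataset

def nightly_refresh(scheduler, source, reporting_store, datasets, interval=86400):
    refresh_reporting_store(source, reporting_store, datasets)
    # Re-arm for the next window, so operational reports always read a recent
    # local copy instead of reaching across the wire to the source system.
    scheduler.enter(interval, 1, nightly_refresh,
                    (scheduler, source, reporting_store, datasets, interval))

# Usage (illustrative):
#   s = sched.scheduler(time.time, time.sleep)
#   s.enter(0, 1, nightly_refresh, (s, source, {}, ["invoices", "shipments"]))
#   s.run()
```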

Recap… how many data architectures do we need, all running at once?

  • Message-based data movement
  • Cached data ‘in the cloud’
  • Business Intelligence data through large ETL loads
  • Direct data connections for locally mastered data
  • Prescheduled data cache refresh for operational reporting

That’s a lot.  But not unreasonably so.  Heck, my Toyota Prius has a bunch of different electric motors in it, in addition to the one engaged in the powertrain.  Sophisticated systems are complex.  That is their nature.  

So when I go off on ‘simplification’ as a way to reduce costs, I’m not talking about an overly simplistic infrastructure.  I’m talking about reducing unneeded redundancy, not useful sophistication.  It is just fine to have more than one way to move data.

On Work Life Balance, burnout, and EA

September 9th, 2006 | Enterprise Architecture

James McGovern’s post on Work Life Balance got me thinking.  What is the value proposition for companies to spend time on this issue?  Many do, including my own employer.  I asked a person I respect, a couple of months ago, what the real ROI behind work-life balance was, and he said ‘preventing or slowing down the process of burnout.’

I’ve seen lots of techies burn out over the years.  I’ve burned out of a few jobs myself.  Question to consider: did I burn out because I failed to keep to some elusive notion of work-life balance?

in a word: no.

Each time I burned out, it was because I expected something different from the job than I actually got from the job.  It was not because I was spending too much time, or because I was addicted to working day and night.  I’ve had jobs where I spent 24 hours a day (literally) and did not burn out, and jobs where I spent eight hours a day and started to burn out on the first day.

I’m tough to please: I expect the chance to be creative.  I expect teamwork.  I expect folks around me to listen to my opinions, even if they don’t act on them.  These are steep expectations.  If I’m in an environment where these expectations are not met, I burn out.  It’s as simple as that.

Burnout has many symptoms.  One is stress.  Another is passive-aggressive behavior or self-defeating behavior.  Another is loss of motivation.  I’ve seen them all.  In situations where I burned out, I’ve done them all.  Not this time, but in the past.  It’s hard when it comes on.  You feel pretty helpless.

Harder still, what to do when a friend burns out…

I work in a department that is rapidly changing. The work that we all thought we were going to do, a year or more ago, when most of us joined the team, has changed radically. 

Some folks expected the department not to change.  Others expected it to change in some way, but it changed in another way.  These folks are showing signs of stress, or of checking out.  Those who have stayed upbeat, and rolled with the punches, are showing signs of wear but are still standing tall.  That said, we’ve lost a few seriously good architects this year.  That is hard.

In conclusion, I don’t buy the notion of ‘work-life balance’ if it is supposed to prevent burnout.  Burnout is caused not by the imbalance between work and life… but by the imbalance between expectations and reality.

The event driven Persistent Data Grid

September 8th, 2006 | Enterprise Architecture

Not a web control, I’m talking about the notion of applying grid computing to large scale distributed data provisioning.  I’d like to suggest a pattern and see if anyone can tell me if a product provides this, or if this is described elsewhere.  I’d like to buy it.

The data grid is not a new concept.  (see http://www.gemstone.com/solutions/gridcomputing.php and http://www.gigaspaces.com/pr_ce.html )  This idea allows you to create very fast delivery of data across a distributed infrastructure.  This is useful for grid computing applications that allow massively parallel execution in a simplified environment, where data bottlenecks can starve your virtual supercomputer and completely screw up your ability to deliver.

One assumption of the data grid is that the memory is large enough to store and retrieve the data.  What if there is a LOT of data… gigabytes?  What if it is not feasible to keep it in memory?  For example, if someone were to query the Microsoft customer database, and ask for all customers in Kansas, they’d get millions of records.  Simply creating an infrastructure that can respond to such a request prevents us from using in-memory structures… but only for requests for very large amounts of data.

Requests for fairly small amounts of data can easily be served from memory.

Therefore, small data domains should be served from memory.  They can be preloaded and made ready by distributing the domains to many servers “in the cloud.”  However, the in-memory data grid is not enough.

Let’s assume that I have customers all over the world, and I need to deliver gigabytes of fresh, real-time data to all of them.  The source systems can live anywhere.  The consuming systems should not need to know where.  Data grids are good, but don’t cover the need for large data stores.

I need to combine the data replication of RDBMS systems with the speed and distributed nature of the Data Grid.  Add to that: I’d prefer for it to be event driven (although I can write an event adapter for a source system that cannot, of and by itself, generate events).

So the notion works like this: I distribute, around the world, a set of database servers, highly redundant and reliable.  On top of them, I place data grid servers (one, two, twenty, whatever).  That creates a data grid cluster.  I put in a directory service that allows an app to start up anywhere and find the nearest data grid cluster.

When a source application creates a new data element, it sends an event to the nearest data grid cluster informing it of the primary keys and some base data.  That element is replicated around the world, first to memory and then to persistent storage.  Depending on policies, the grid clusters can request full data for the data item from the source system, or they can wait until full data is requested by an app.
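A minimal sketch of that event path, in Python, might look like the following.  Everything here is an assumption for illustration; in particular, `fetch_full` stands in for a callback to the source system of record, and the eager-versus-lazy choice is the policy knob described above.

```python
# Sketch of the event path. Everything is illustrative: fetch_full stands in
# for a call back to the source system of record.

class GridCluster:
    def __init__(self, fetch_full, eager=False):
        self._fetch_full = fetch_full  # callable: (entity, key) -> full record
        self._eager = eager            # policy: pull full data now, or on demand
        self._memory = {}              # (entity, key) -> {"base": ..., "full": ...}

    def on_create_event(self, entity, key, base_data):
        """A source system announces a new element: primary keys plus base data."""
        self._memory[(entity, key)] = {"base": base_data, "full": None}
        if self._eager:
            self._memory[(entity, key)]["full"] = self._fetch_full(entity, key)

    def get(self, entity, key):
        """Serve from memory, completing the record on the first full read."""
        item = self._memory[(entity, key)]
        if item["full"] is None:
            item["full"] = self._fetch_full(entity, key)
        return item["full"]
```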

The local grid cluster is highly redundant and persistent.  Members all contribute memory to storing different data elements, but all of the data is stored in persistent storage as well.  That way, if a data request needs a large amount of data, the data grid can force an ETL process between its persistent data store and the requestor’s database system, potentially moving millions of rows of data without having to package each row in an XML transaction, send it across the wire, and interpret it into a local database.  This Database-Refresh style request is what really differentiates this pattern from a ‘standard’ Data Grid.
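Here is a rough sketch of those two read paths (Python; the threshold, the query interface, and the bulk-copy call are all hypothetical): small domains come straight from memory, while large ones are satisfied by a database refresh into the requestor’s own store.

```python
# Sketch of the two read paths: small domains from the memory tier, large ones
# through a bulk database refresh. Threshold, query interface, and bulk_copy
# are all hypothetical.

BULK_THRESHOLD = 10_000  # above this, row-at-a-time XML delivery is wasteful

def serve_request(memory_tier, persistent_store, query, requestor_db):
    keys = query.matching_keys()
    if len(keys) < BULK_THRESHOLD:
        # Small domain: answer straight from the grid's in-memory tier.
        return [memory_tier[k] for k in keys]
    # Large domain: ETL from the cluster's persistent store directly into the
    # requestor's database, then tell the app to read the data locally.
    persistent_store.bulk_copy(keys, target=requestor_db)
    return {"refreshed": True, "rows": len(keys)}
```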

OK.  I want one.  I’m working to understand and define this, and figure out how much of this is out of the box with SQL Server 2005.

ROI: every app gets rapid access to millions of rows of data, worldwide, without needing to know the source for the data, or the parameters of the source system’s ability to feed that data to the data cache infrastructure.  Basically, every app loses its data access layer “into the cloud.”

Do you know of a product that provides a persistent, distributed, in-memory data cache based on event-driven data propagation models, preferably using canonical schemas, that makes its data available over web services?

When to stay redundant

September 8th, 2006 | Enterprise Architecture

Jack Van Hoof asked, in reply to yesterday’s post, if it is always a good idea to reduce redundancy.  His question centered on business flexibility: if the business wants to grant independence to a division, or sell it off, doesn’t it make sense to keep things separate?

As Jack correctly points out, this is a business concern.  Rightly so.  There are excellent business reasons for both redundancy and simplification. 

I would add, however, my humble opinion.  (It is my blog, after all.)  The majority of duplication and overlap is not in place because IT was so well aligned with the needs of the business that it intentionally kept the systems separate.  Redundancy is usually a function of organizational boundaries, funding sources, or both.  Granted, this could end up serving the needs of the business, but sometimes it does not.

Microsoft has not been broken up.  We are still growing in revenue and expanding into new markets.  In the markets we compete in, we are usually pretty successful, although not always, and we are far from ‘number one’ in a wide array of competitive spaces.  We need to innovate and succeed in each one to serve the stockholders.  So we need flexibility.

But to Innovate and Succeed, we need to know more about who our customers are, why they buy our products, what they think about them, and what we can do to better meet their needs.  A great deal of ‘information’ is encoded as ‘data’ in literally hundreds of incompatible systems.  If Microsoft IT wants to do the company a favor, we should start by getting the heck out of the way.

That means reducing redundancy in information about our customers, partners, products, services, and programs.  That’s our big fight.  But that isn’t the big fight of every corporation.

If your company makes bread mix and grain-based industrial additives (I know one that does), it makes sense to keep the divisions separate.  The customers are different, the marketing is different, and the future of each division is likely to be different.

To be fair, in this organization, I’d expect to see different IT teams reporting to different company presidents or vice presidents that literally don’t speak to one another very much (except in the financial integration needed to be a public company).

On the other hand, if you are working at the government of a state or province (like Washington State or Manitoba), you have a single set of customers (your residents and taxpayers) that you need to reach out to.  I’d expect to see a good bit more integration and a good bit less redundancy.  That is rarely the case.  Normally, entities at the provincial government level are extremely independent and share only basic financial systems, not ‘customer’ data, ‘service’ data, or even ‘location’ data.  I’d say that this situation is ripe for simplification.

Another thing often overlooked: I’d like to see less redundancy in the federated systems between local and state government.  Integration should allow a local government agency to access data from state databases through real-time, message-oriented, transactional service interfaces.  Basic data.  Primary key kinds of stuff.  I’m not saying to make this data ‘public,’ but I am saying that our governments act so independently that it increases costs to us, the taxpayers.  This has happened in the USA in justice-oriented data streams, and is happening in health-related areas to a limited extent, but not in other areas like revenue collection and social services.  It’s a mess, and it costs money to keep it broken.

The same goes for IT in a large corporation.  The balanced scorecard needs to reflect a balance that makes sense for each company.  There is no “universal” balanced scorecard.

So, Jack, I agree.  Each company needs to make strategic choices about how to keep their marketing, sales, support, services, and customer data.  The point is to make sure that the choices are actually strategic, and not just a function of organizational structure or funding sources.  Sometimes, organizational structure works against the strategies, not in favor of them.  As long as we are aware of this possibility, and keep our eyes open for it, we can make good decisions on when to consolidate functions, systems, and data.