Is placing the assembly in the database next?

By | September 14th, 2005 | Enterprise Architecture

In SAP implementations, the ABAP code that performs the functions of various business processes is stored in the database, or so I’m told.  I was having a discussion a few days back with an architect who works closely with SAP, and has for years.  His take: now that SQL Server Yukon can call .NET code easily, why not start placing bytecode directly into the database?

The advantages are interesting:  the deployment of a system is a matter of placing data into a database.  Version control takes on a whole new meaning for folks not used to this concept.  You can create a utility for placing a version into production, and if there is a problem, rolling back to a prior version is a matter of flagging records in the database as “inactive” or deleting them.  Database backups are also system backups.  Deployment requires data movement utilities, but can happen to multiple locations easily.

This doesn’t mean that the assemblies have to run on the database server.  SAP uses application servers extensively.  The code stored in the database could be installed to the app server and could run there without difficulty.  A kind of “database-driven, one-click deployment”.
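For what it’s worth, the raw mechanics are already in Yukon.  A minimal sketch, with hypothetical assembly, class, and procedure names:

```sql
-- Place a compiled .NET assembly into the database (names are hypothetical).
CREATE ASSEMBLY OrderRules
FROM 'C:\builds\OrderRules.dll'
WITH PERMISSION_SET = SAFE;

-- Expose one of its static methods as a stored procedure.
CREATE PROCEDURE dbo.ApproveOrder @OrderId int
AS EXTERNAL NAME OrderRules.[Fabrikam.Orders.Approval].ApproveOrder;
```

Rolling back a version would then be a matter of dropping or re-pointing these objects, which is exactly the kind of data-driven deployment described above.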

It’s an interesting idea.  I’m sure there’s an article in there for someone who wants to write it up for one of the dev magazines: how to put your assemblies into SQL Server and call them. 

Developer accountability? How about PM accountability!

By | September 10th, 2005 | Enterprise Architecture

There’s some current talk in the Agile community about making developers more accountable in the agile process.  Apparently, the problem is that developers will commit to delivering too many features in an iteration, and if they slip on the features, they say “so what… we’ll do that in the next iteration.”  That is laziness, plain and simple.

I’m actually going to side-step that issue completely.  That issue happens, and folks are talking about it.  Good.  However, I want to add another issue to the table: project managers who are incompetent, and then blame the dev team for failure.

I’m ranting here folks.  This is a reaction to the notion that we should hold developers accountable (which I agree with) in a community that doesn’t recognize the role of a PM.  However, agile evangelists aside, the rest of us have to live with project managers, and they can be a tremendous asset or a massive liability. 

Here are some of the counter-productive behaviors I’ve observed. 

Focusing on tasks, not functionality – where do I start?  I could rant for days on this self-defeating pattern, yet I have seen nearly every project manager fall into this mentality during project work, some for a day or so, while others stay there for the duration of the project.  Even if you shake the really egregious ones by the collar and scream, it may not help (except that you may feel better for 20 seconds or so), because this is a mentality.  Some PMs think that focusing on tasks is the RIGHT thing to do, and it is not.  The customer doesn’t want a task to be completed.  They want functionality to be delivered. 

The net effect of this focus on tasks: marking a task complete (and rewarding folks for it) when a developer says “it is done” without demonstrating functionality, quality, unit tests, or compliance to standards and interfaces.  The PM is the enforcer who must understand that the customer buys these things, and if the PM doesn’t ensure that these things are in the product, they will not be.

Focusing on process instead of people – This is a novice mistake, but I’ve seen it a lot in high-ceremony environments like PSP/TSP and XP.  The process is important.  It is how everyone comes together on the problem space.  But the people are important too.  Leave room to bend the rules if the people will benefit.  Make a connection to the individuals.  Listen to their needs.  Understand their schedules.  If a person needs to leave at 4:30 to pick their kids up from daycare, don’t schedule the team meeting at 4pm! 

Counting yourself outside or above the delivery team – People are not perfect.  When the list of tasks is estimated at the beginning of a project, it will not be correct.  There will be tasks missed from the plan.  There will be questions that need to be answered “out of order.”  There will be times when you really need to get the customer in the room, even if it makes you look bad, because the developers don’t have what they need.  Swallow your pride.  You are the servant of the project.  The developers are not your master, but if the developers need something, and the alternative is a sacrifice of quality or time or design, then jump.  That’s your cue to really shine.  Be a part of the team.  It’s your delivery too.

Celebrating completion instead of correctness – You are responsible for a large part of the culture of a project team.  You can set the tone for how people communicate, how they share, and how they feel when a release goal is hit.  Change your outlook away from “milestones of activity” and towards “user acceptance” by throwing a celebration for the team when the user is satisfied with something.  In one shop I was in, we would put a stuffed monkey in the cube of the person who had achieved acceptance.  It would move frequently.  In another, we decorated the doors, or handed out colorful banners.  Building joy around correctness and acceptance will build a culture of quality and team camaraderie. 

Failing to understand the mathematics of human achievement – I saw a project team that was filled with young developers commit to achieving 8 productive hours in a day on a highly-visible, time-critical, three-month project.  When I heard this, I yelled.  I went to dev management (I wasn’t on the team) and complained.  I went to the PM.  I even went to the customer and said that this commitment was absurd.  All simply said “the developers believe it is possible and we will hold them accountable if they don’t meet it.”  WRONG.  If you agree to do the impossible (say, jump onto a train moving at 90 miles per hour), I would be a FOOL to say “I’ll hold you responsible if you don’t.”  The impossible is still the impossible.  A 200% improvement in productivity over the average, sustained for a three-month period, is impossible.  You have to know enough to know when someone is making an absurd commitment, and you have to stop them.  It is your project too.  Blame cannot be allowed to roll downhill when this kind of mistake is made. 

So, folks, if we want to hold a conversation about holding developers accountable, let’s also hold a conversation about holding project managers accountable.  Let’s find a way to measure the PM as well, and let’s hold them accountable for failing these points.  It is their responsibility too.

Ajax and SOAP, again

By | September 9th, 2005 | Enterprise Architecture

I’m flattered by all the attention my statements are getting on comparing Ajax with SOA web services.  Another one popped up overnight: Dare Obasanjo, with the statement: “This is probably one of the most bogus posts I’ve ever seen written by a Microsoft employee.”

So first off, a disclaimer: I’m an employee of Microsoft, but I do not speak for the company in any official capacity.  That said… my turn…

With all due respect to Mr. Obasanjo, a service that delivers data to a front end (whether it is for use by an Ajax page or a small rich-client app) is not a SOA web service.  I hate to have to point out the obvious, but alas, I must.  That is my point.  The fact that Mr. Obasanjo missed that point is what led to the statement above.  I am not saying that Ajax cannot use SOAP.  I am not saying that Ajax should use WS-*.  I am not saying that lightweight services as used by front ends are “bad” or “not really important.”  I am simply saying that they have nothing to do with SOA.

His example is that, on his site, there is a web service that he uses to display movies in the Seattle area.  It returns XML that his Ajax page formats and displays.  Kudos. 

Now let’s look at Service Oriented Architecture.  SOA is not really an Application-level concept.  It is an EAI-level concept.  SOA is not used to make front ends talk to their back ends.  Web services can be used for this, but as I have pointed out before, simply using web services does not mean you are using Service Oriented Architecture. 

Take a moment and actually read the paper I reference.  You’ll notice some statements that completely contradict the view that Ajax plays in the SOA space.  Excerpts below:

  • Precise, published specification of the functionality of the service interface, not the implementation.
  • Formal contract between endpoints places obligations on provider and consumer.
  • Functionality presented at a granularity recognized by the user as a meaningful service.

From their description, it is clear that a service that is so finely-tuned as to be useful for a front end is unlikely to be useful as a SOA service.  My statement is that, in fact, it would not be useful.  This is because, in a SOA environment, the transactions that pass between systems need to be encapsulated, fully self-describing, secure, reliable, and related to the business process in which they are involved.  This is simply too much overhead for consumption by a front-end.

Therefore, Ajax interfaces, while useful from the front end standpoint, do not need to be managed from the standpoint of discoverability, transaction management, workflow, business rules, routing, or any of the other aspects of enterprise architecture that must be applied in a SOA environment.  The original post that I objected to maintained that Ajax services would need to be managed in this way and, in fact, would tax IT departments because these services will be frequently used.  That was the disagreement that Mr. Obasanjo failed to recognize.

My position remains unchanged: Ajax interfaces escape from this level of scrutiny because they are not used to connect systems to each other… they are used to connect the front-end to the back-end. 

And that isn’t SOA.

Is SQL XML a bad idea?

By | September 8th, 2005 | Enterprise Architecture

I worry that we may have created a monster.

The fact that you can now submit an XML document to SQL Server and it will break it apart for you, storing the parts into tables… it’s a pretty compelling idea.  However, it also ties the persistence mechanism (SQL) to the communication mechanism (the XML document).

As an Architect, I spend my life trying to SEPARATE these two things, and then we introduce a technology that joins them by its very nature.  In fact, it is not all that easy to separate them even if you really, really want to.

Net effect: in a distributed system (with one central data store and N child systems that feed it), it is now easy to justify coupling the databases together by placing the same table into both the central system and the feeder systems, and just using SQL XML to both generate and consume the transmission document.  Except that now, any change to the center has to be made to the children… even if the change doesn’t pertain to them.
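The coupling is easy to see in code.  A sketch of the consuming side (the table, element, and column names are hypothetical):

```sql
-- Shred an inbound XML transmission document straight into a table.
DECLARE @h int;
EXEC sp_xml_preparedocument @h OUTPUT, @doc;

INSERT INTO CentralOrders (OrderId, CustomerId, Amount)
SELECT OrderId, CustomerId, Amount
FROM OPENXML(@h, '/Orders/Order', 1)   -- attribute-centric mapping
WITH (OrderId int, CustomerId int, Amount money);

EXEC sp_xml_removedocument @h;
```

Notice that the shape of the document is mirrored in the WITH clause: the persistence schema and the communication schema have become the same thing.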

This kind of coupling is bad.  It violates the loose coupling that SOA is engineered to provide. 

Maybe there is a way to do this better.  I’ve got some studying to do…

Killing the Helper class, part two

By | September 7th, 2005 | Enterprise Architecture

Earlier this week, I blogged on the evils of helper classes.  I got a few very thoughtful responses, and I wanted to try to address one of them.  It is far easier to do that with a new entry than trying to respond in the messages.

If you didn’t read the original post, I evaluated the concept of the helper class from the standpoint of one set of good principles for Object Oriented development, as described by Robert Martin, a well-respected author and speaker.  While I don’t claim that his description of OO principles is “the only valid description as anointed by Prophet Bob”, I find it extremely useful and one of the more lucid descriptions of fundamental OO principles available on the web.  That’s my caveat.

The response I wanted to address is from William Sullivan, and reads as follows:

I can think of one case where helper classes are useful… Code re-use in a company. For instance, a company has a policy on how its programs will access and write to the registry. You wouldn’t want some products in the company saving its data in HKLM/Software/CompanyName/ProductName and some under …/Software/ProductName and some …/”Company Name”/”Product Name”. So you create a “helper class” that has static functions for accessing data in the registry. It could be designed to be instantiatable and extendable, but what would be the advantage? Another class could implement the companies’ policy on encryption, another for database access;[clip]

If you recall, my definition of a helper class is one in which all of the methods are static.  It is essentially a set of procedures that are “outside” the object structure of the application or the framework.  My objections were that classes of this nature violate two of the principles: the Open Closed Principle and the Dependency Inversion Principle.

So, let’s look at what a company can do to create a set of routines like William describes. 

Let’s say that a company (Fabrikam) produces a dozen software systems.  One of them, for our example, is called “Enlighten”.  So the standard location for accessing data in the registry would be under HKLM/Software/Fabrikam/Enlighten.  Let’s look at two approaches: one using a helper class and one using an instantiated object:

class HSettings
{
    public static String GetKey(String ProductName, String Subkey)
    {   // — interesting code
    }
}

class FSettings
{
    private String _ProductName;
    public FSettings (String ProductName)
    {   _ProductName = ProductName;
    }
    public String GetKey(String Subkey)
    {   // nearly identical code
    }
}

Calling the FSettings object may look like a little more effort:

public String MyMethod()
{   FSettings fs = new FSettings("Enlighten");
    string Settingvalue = fs.GetKey("Mysetting");
    // Interesting code using Settingvalue
}

as compared to:

public String MyMethod()
{   string Settingvalue = HSettings.GetKey("Enlighten", "Mysetting");
    // Interesting code using Settingvalue
}

The problem comes in unit testing.  How do you test the method “MyMethod” in such a way that you can find defects in the ‘Interesting Code’ without also relying on any frailties of the code buried in our settings object?  And how do you test this code when there is no setting in the registry at all?  Can we test on the build machine?  This is a common problem with unit testing: how to test the UNIT of functionality without also testing its underlying dependencies. 

When a function depends on another function, you cannot easily find out where a defect is causing you a problem.  A defect in the dependency can cause a defect in the relying code.  Even the most defensive programming won’t do much good if the dependent code returns garbage data.

If you use dependency injection, you can get code that is a lot less frail, and is easily testable.  Let’s refactor our “FSettings” object to implement an interface.  (This is not something we can do for the HSettings class, because it is a helper class.)


interface ISettings
{
    String GetKey(String Subkey);
}

class FSettings : ISettings // and so on

Now, we refactor our calling code to use Dependency Injection:

public class MyStuff
{
    private ISettings _fs;
    public MyStuff()
    {   _fs = new FSettings("Enlighten");
    }
    public void SetSettingsObject(ISettings ifs)
    {   _fs = ifs;
    }
    public String MyMethod()
    {   string Settingvalue = _fs.GetKey("Mysetting");
        // Interesting code using Settingvalue
    }
}

Take note: the code in MyMethod now looks almost identical to the code that we proposed for using the static methods.  The difference is important, though.  First off, we separate the creation of the dependency from its use by moving the creation into the constructor.  Secondly, we provide a mechanism to override the dependent object.

In practical terms, the code that calls MyMethod won’t care.  It still has to create a ‘MyStuff’ object and call the MyMethod method.  No parameters changed.  The interface is entirely consistent.  However, if we want to unit test the MyMethod method, we now have a powerful tool: the mock object.

class MockSettings : ISettings
{
    public MockSettings (String ProductName)
    {   if (ProductName != "Enlighten")
            throw new ApplicationException("invalid product name");
    }
    public String GetKey(String Subkey)
    {   return "ValidConnectionString";
    }
}

So, our normal code remains the same, but when it comes time to TEST our MyMethod method, we write a test fixture (a method in a special class that does nothing but test the method). In the test fixture, we use the mock object:

class MyTestFixture
{
    public void Test1 ()
    {   MyStuff ms = new MyStuff();
        MockSettings mock = new MockSettings("Enlighten");
        ms.SetSettingsObject(mock);
        // now the code will use the mock, not the real one.
        ms.MyMethod();   // call the method… any exceptions?
    }
}

What’s special about a test fixture? If you are using NUnit or Visual Studio’s unit testing framework, any exception that escapes the method is caught by the framework and reported as a test failure.

This powerful technique is only possible because I did not use a static helper class when I wanted to look up the settings in my registry.

Of course, not everyone will write code using unit testing. That doesn’t change the fact that it is good practice to separate the construction of an object from its use. (See Scott Bain’s article on Use vs. Creation in OO Design.)  It also doesn’t change the fact that this useful construction, simple to do if you started with a real object, requires far more code change if you had started with a helper class.  In fact, if you had started with a helper class, you may be tempted to avoid unit testing altogether. 

I don’t know about you, but I’ve come across far too much code that needed to be unit tested, but where adding the unit tests would involve restructuring the code.  Do yourself, and the next programmer behind you, a huge favor and simply use a real object from the start; you will earn “programmer’s karma” and may inherit some of that well structured code as well.   If everyone would simply follow “best practices” (even when you can’t see the reason why it’s useful in a particular case), then we would be protected from our own folly most of the time.

So, coming back to William’s original question: “it could be designed to be instantiable and extendable, but what’s the advantage?”

The advantage is that when it comes time to prove that the calling code works, you have not prevented the use of good testing practices by forcing the developer to use a static helper class, whether he wanted to or not. 

Coding Dojo concept: one kata for each common design pattern

By | September 7th, 2005 | Enterprise Architecture

Time to combine two basic ideas: the idea of the coding dojo and the idea of design patterns as an essential element of development training.

For those of you who haven’t seen my previous posts on a coding dojo, the concept is that a karate dojo is a safe place to come and practice elemental skills in a supportive but corrective environment.  The karate master presents problems and assists as each student practices and demonstrates their skills at solving the problem repeatedly.  These “problems” to be solved repeatedly, formally, are the katas.

So, you come to a meeting once or twice a month to get together with other developers.  You work in a pair.  You get a problem statement and a set of unit tests to start.  Your job: meet the needs of the app by getting the unit tests to pass.

One pair works on the projector. 

I also believe that we are well served by practicing the basic design patterns.  Things like strategy, facade, decorator, bridge, observer, and chain of responsibility.  These basic structures are worth practicing.  We improve our understanding of OO code simply by following the kata.  Practice.  Hone.  Concentrate.

So, if we combine the two, perhaps that would be better.  What if we create 10 kata for each of the basic design patterns and a couple of architectural patterns?  Order them at random.  Practice.  Hone.  Concentrate.  Improve.
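To make the idea concrete, a kata for the strategy pattern might hand each pair a failing test and this much of a skeleton (the names and the pricing scenario here are mine, purely for illustration):

```csharp
// Strategy kata skeleton: the pair's job is to add new pricing strategies
// without ever modifying the Checkout class.
public interface IPricingStrategy
{
    decimal Price(decimal baseAmount);
}

public class RegularPricing : IPricingStrategy
{
    public decimal Price(decimal baseAmount) { return baseAmount; }
}

public class HolidayPricing : IPricingStrategy
{
    public decimal Price(decimal baseAmount) { return baseAmount * 0.9m; }
}

public class Checkout
{
    private readonly IPricingStrategy _strategy;
    public Checkout(IPricingStrategy strategy) { _strategy = strategy; }
    public decimal Total(decimal baseAmount) { return _strategy.Price(baseAmount); }
}
```

The starting unit tests would assert, for example, that a Checkout built with a HolidayPricing strategy discounts a base amount of 100 to 90, while one built with RegularPricing does not.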

This idea could have some legs.  Hmmmmm……

Why Ajax can be safely ignored for a SOA adoption program

By | September 6th, 2005 | Enterprise Architecture

While it is interesting that a wide variety of consulting and product companies have tried to brand themselves as “the” experts on Service Orientation, there are a few examples of good sites that, although sharing corporate sponsorship, managed to describe SOA principles in a way that is fairly neutral.  The important thing to remember, even when using these sites, is that the opinions expressed in them are not standard, even if well described. 

Therefore, when a recent exchange between myself and Dion Hinchcliffe got rolling, Mr. Hinchcliffe pointed to a nice site at serviceorientation.org and stated that interoperability is not one of the SOA principles, and that therefore my argument could be dismissed.  The two problems with this argument are, of course, (a) the principles on the site do not represent consensus, and (b) interoperability is specifically required by one of the principles on the site (the service contract).

The core disagreement is on this point: does an enterprise that is implementing a SOA environment need to be concerned about the use of Ajax tools?  Mr. Hinchcliffe asserts that Ajax tools will use services, and therefore will drive the implementation of an SOA environment.  My assertion is that Ajax tools will use fine-grained application interfaces, not re-usable services, and therefore will not have any effect, positive or negative, on the implementation of a SOA environment.

The reason for this is simple: Ajax is too lightweight to play in the SOA world.  Ajax controls cannot meet or enforce a contract.  Ajax controls cannot use discovery protocols.  They must be tightly coupled with their services due to many considerations, including browser-enforced data security, in addition to the lack of discovery capabilities.  An Ajax front end cannot compose a request to a composable service.  All Ajax requests will be simple, by nature. 

The requirements for an Ajax interface are speed of execution, small size of response, and very specific interaction behavior.  Loose coupling is not a requirement for Ajax services.  I would state that loose coupling is nearly an impossibility for Ajax interfaces.

The requirements for a web service are reliability, compliance to contract, loose coupling (in the sense of coding to contract and service discoverability) and services provided at the level of composability.  This last one is the most important point.  A composable service is one that can be understood by the business to be composed of atomic units of functionality.  The problem with the notion of an Ajax site consuming an enterprise web service is that the atomic units are TOO BIG to be useful at the front end.  Therefore, in order to create a composable service, the smallest unit of composition is not appropriate for the use of the Ajax site. 

In conclusion: it is completely safe to assume that Ajax sites will not consume enterprise web services.

Are Helper Classes Evil?

By | September 6th, 2005 | Enterprise Architecture

First off, a definition: A helper class is a class filled with static methods.  It is usually used to isolate a “useful” algorithm.  I’ve seen them in nearly every bit of code I’ve reviewed.  For the record, I consider the use of helper classes to be an antipattern.  In other words, an extraordinarily bad idea that should be avoided most of the time.

What, you say?  Avoid Helper Classes!?!  But they are so useful!

I say: they are nearly always an example of laziness.  (At this point, someone will jump in and say “but in Really Odd Situation ABC, There Is No Other Way” and I will agree.  However, I’m talking about normal IT software development in an OO programming language like C#, Java or VB.Net.  If you have drawn a helper class in your UML diagram, you have probably erred).

Why laziness?  If I have to pick a deadly sin, why not gluttony? 🙂

Because most of us in the OO world came out of the procedural programming world, and the notion of functional decomposition is so easy that we drop to it when we come across an algorithm that doesn’t seem to “fit” into our neat little object tree, and rather than understand the needs, analyze where we can get the best use of the technology, and place the method accordingly, we just toss it into a helper class.  And that, my friends, is laziness.

So what is wrong with helper classes?  I answer by falling back on the very basic principles of Object Oriented Programming.  These have been recited many times, in many places, but one of the best places I’ve seen is Robert Martin’s article on the principles of OO.  Specifically, focus on the first five principles of class design. 

So let’s look at a helper class on the basis of these principles.  First, to knock off the easy ones: 

Single Responsibility Principle — A class should have one and only one reason to change — You can design helper classes where all of the methods relate to a single set of responsibilities.  That is entirely possible.  Therefore, I would note that this principle does not conflict with the notion of helper classes at all.  That said, I’ve often seen helper classes that violate this principle.  They become “catch all” classes that contain any method that the developer can’t find another place for.  (e.g. a class containing a helper method for URL encoding, a method for looking up a password, and a method for writing an update to the config file… This class would violate the Single Responsibility Principle).

Liskov Substitution Principle — Derived classes must be substitutable for their base classes — This is kind of a no-op, in that a helper class cannot have a derived class.  (Note: my definition of a helper class is that all members are static.)  OK.  Does that mean that helper classes violate LSP?  I’d say not.  A helper class loses the advantages of OO completely, and in that sense, LSP doesn’t matter… but it doesn’t violate it.

Interface Segregation Principle — Class interfaces should be fine-grained and client specific — another no-op.  Since helper classes do not derive from an interface, it is difficult to apply this principle with any degree of separation from the Single Responsibility Principle. 

Now for the fun ones:

The Open Closed Principle — classes should be open for extension and closed for modification — You cannot extend a helper class.  Since all methods are static, you cannot derive anything that extends from it.  In addition, the code that uses it doesn’t create an object, so there is no way to create a child object that modifies any of the algorithms in a helper class.  They are all “unchangeable”.  As such, a helper class simply fails to provide one of the key aspects of object oriented design: the ability for the original developer to create a general answer, and for another developer to extend it, change it, make it more applicable.  If you assume that you do not know everything, and that you may not be creating the “perfect” class for every person, then helper classes will be anathema to you.
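To illustrate (with hypothetical names): even though you can technically derive from a class full of static methods, nothing useful comes of it.

```csharp
class UrlHelper
{
    public static string Encode(string s) { /* interesting code */ return s; }
}

class MyUrlHelper : UrlHelper
{
    // Legal to write, but pointless: static methods are not virtual, so this
    // class cannot override Encode, and every caller already bound to
    // UrlHelper.Encode will never see an "extended" algorithm.
}
```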

The Dependency Inversion Principle — Depend on abstractions, not concrete implementations — This is a simple and powerful principle that produces more testable code and better systems.  If you minimize the coupling between a class and the classes that it depends upon, you produce code that can be used more flexibly, and reused more easily.  However, a helper class cannot participate in the Dependency Inversion Principle.  It cannot derive from an interface, nor implement a base class.  With a helper class, no object is ever created, so there is nothing to substitute or extend.  This is the “partner” of the Liskov Substitution Principle, but while helper classes do not violate the LSP, they do violate the DIP. 

Based on this set of criteria, it is fairly clear that helper classes fail to work well with two out of the five fundamental principles that we are trying to achieve with Object Oriented Programming. 

But are they evil?  I was being intentionally inflammatory.  If you read this far, it worked.  I don’t believe that software practices qualify in the moral sphere, so there is no such thing as evil code.  However, I would say that any developer who creates a helper class is causing harm to the developers that follow. 

And that is no help at all.

Whose name is in the namespace?

By | August 22nd, 2005 | Enterprise Architecture

There’s more than one way to group your code.  Namespaces provide a mechanism for grouping code in a hierarchical tree, but there is precious little discussion about the taxonomy that designers and architects should use when creating namespaces.  This post is my attempt to describe a good starting place for namespace standards.

We have a tool: namespaces.  How do we make sure that we are using it well?

First off: who benefits from a good grouping in the namespace?  I would posit that a good namespace taxonomy benefits the developers, testers, architects, and support teams who need to work with the code.  We see this in the Microsoft .Net Framework, where components that share an underlying commonality of purpose or implementation will fall into the taxonomy in logical places. 

However, most IT developers aren’t creating reusable frameworks.  Most developers of custom business solutions are developing systems that are composed of various components, and which use the common shared code of the .Net Framework and any additional frameworks that may be adopted by the team.  So, the naming standard of the framework doesn’t really apply to the IT solutions developer. 

To start with, your namespace should start with the name of your company.  This allows you to easily differentiate between code that is clearly outside your control (like the .Net framework code or third-party controls) and code that you stand a chance of getting access to.  So, starting the namespace with “Fabrikam” makes sense for the employees within Fabrikam that are developing code.  OK… easy enough.  Now what?

I would say that the conundrum starts here.  Developers within a company do not often ask “what namespaces have already been used” in order to create a new one.  So, how does the developer decide what namespace to create for their project without knowing what other namespaces exist?  This is a problem within Microsoft IT just as it is in many organizations.  There are different ways to approach this.

One approach would be to put the name of the team that creates the code.  So, if Fabrikam’s finance group has a small programming team creating a project called ‘Motor’, then they may start their namespace with: Fabrikam.Finance.Motor.  On the plus side, the namespace is unique, because there is only one ‘Motor’ project within the Finance team.  On the down side, the name is meaningless.  It provides no useful information.

A related approach is simply to put the name of the project, no matter how creatively or obscurely that project was named.  Two examples: Fabrikam.Explan or, even less instructive, Fabrikam.CKMS.  This is most often used by teams who have the (usually incorrect) belief that the code they are developing is appropriate for everyone in the enterprise, even though the requirements are coming from a specific business unit.  If this includes you, you may want to consider that the requirements you get will define the code you produce, and that, despite your best efforts, the requirements are ALWAYS going to reflect the viewpoint of the person who gives them to you.  Unless you have a committee that reflects the entire company providing requirements, your code does not reflect the needs of the entire company.  Admit it.

I reject both of these approaches. 

Both of these approaches reflect the fact that the development team creates the namespace, even though the team is not its chief beneficiary.  First off, the namespace quickly fades into the background when developing an application.  Assuming the assembly was named correctly or the root namespace was specified, the namespace is applied automatically when a class is created in Visual Studio (and I would assume similar functionality exists in other professional IDEs).  Since folders introduced to a project create child levels within the namespace, it is fairly simple for the original development team to ignore the root namespace and simply look at the children.  The root namespace is simply background noise, to be ignored.

I repeat: the root namespace is not useful or even important for the original developers.  Who, then, can benefit from a well named root namespace?

The enterprise.  Specifically, developers in other groups or other parts of the company who would like to leverage, modify, or reuse code.  The taxonomy of the namespace can be very helpful when they attempt to find and identify the functional code that implements the rules for a specific business process.  Add to that the support team that knows a function needs to change and has to find out where that function is implemented.

So, I suggest that it is wiser to adopt an enterprise naming standard for the namespaces in your code, one that individual developers can easily apply, and that developers in other divisions will find useful for locating code by functional area.

I come back to my original question: whose name is in the namespace?  In my opinion, the ‘functional’ decomposition of a business process starts with the specific people in the business that own the process.  Therefore, instead of putting the name of the developer (or her team or her project) into the namespace, it would make far more sense to put the name of the business group that owns the process.  Even better, if your company has an ERP system or a process engineering team that has named the fundamental business processes, use the names of the processes themselves, and not the name of the authoring team.

Let’s look again at our fictional finance group creating an application they call ‘Motor.’ Instead of the name of the team or the name of the project, let’s look to what the application does.  For our example, this application is used to create transactions in the accounts receivable system to represent orders booked and shipped from the online web site.  The fundamental business process is the recognition of revenue. 

In this case, it would make far more sense for the root namespace to be: Fabrikam.Finance.Recognition (or, if there may be more than one system for recognizing revenue, add another level to denote the source of the recognition transactions: Fabrikam.Finance.Recognition.Web)

So a template that you can use to create a common namespace standard would be:

  CompanyName.ProcessArea.Process.Point

where:

  • CompanyName is the name of your company (or your division, if you are part of a very large company),
  • ProcessArea is the highest-level group of processes within your company.  Think Manufacturing, Sales, Marketing, CustomerService, Management, etc.
  • Process is the name of the basic business process being performed.  Use a name that makes sense to the business.
  • Point could be the name of the step in the process, the source of the data, or the customer of the interaction.  Avoid project names.  Definitely avoid the name of the group that is writing the code.
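As a sketch, here is the template applied to the fictional revenue-recognition example.  TypeScript namespace syntax stands in for the C# declaration (`namespace Fabrikam.Finance.Recognition.Web { ... }`), and all member names are invented for illustration:

```typescript
// Illustrative only: CompanyName.ProcessArea.Process.Point applied to the
// fictional Fabrikam revenue-recognition example. The class and method
// names are hypothetical.
namespace Fabrikam.Finance.Recognition.Web {
  // Records a booked-and-shipped web order as a receivables transaction.
  export class RevenueRecognizer {
    recognize(orderId: string, amount: number): string {
      return "Recognized " + amount + " for order " + orderId;
    }
  }
}

const recognizer = new Fabrikam.Finance.Recognition.Web.RevenueRecognizer();
console.log(recognizer.recognize("WEB-1001", 250));
```

Note that the root namespace answers the question “what business process does this code implement?” rather than “who wrote it?”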

In IT, we create software for the business.  It is high time we take the stand that putting our own team name into the software is a lost opportunity at best, and narcissistic at worst.

On Atlas/Ajax and SOA

By |2005-08-20T16:38:00+00:00August 20th, 2005|Enterprise Architecture|

I ran across a blog entry that attempts to link Atlas/Ajax to SOA.  What absolute nonsense!

The technology, for those not familiar, is the use of XMLHTTP to link fine-grained data services on a web server to the browser in order to improve the user experience.  This is very much NOT a part of Service Oriented Architecture, since the browser is not a consumer of enterprise services.
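For readers who have not seen the pattern, here is a minimal sketch of that kind of fine-grained, UI-facing data service call; the endpoint path, parameter, and element names are all hypothetical.  The URL-building helper is kept pure so the browser-only part can stay a comment:

```typescript
// Builds the query URL for a small, UI-facing data service lookup.
// Endpoint and parameter names are hypothetical.
function buildLookupUrl(endpoint: string, params: { [key: string]: string }): string {
  const parts: string[] = [];
  for (const key in params) {
    parts.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
  }
  return endpoint + "?" + parts.join("&");
}

// In the browser, this URL would feed an XMLHttpRequest whose response
// updates one part of the page in place, with no full postback:
//
//   const xhr = new XMLHttpRequest();
//   xhr.open("GET", buildLookupUrl("/ui-data/customerName", { id: "42" }));
//   xhr.onreadystatechange = () => {
//     if (xhr.readyState === 4 && xhr.status === 200) {
//       document.getElementById("customerName")!.textContent = xhr.responseText;
//     }
//   };
//   xhr.send();

console.log(buildLookupUrl("/ui-data/customerName", { id: "42" }));
// prints /ui-data/customerName?id=42
```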

So what’s wrong with having a browser consume enterprise web services?  The point of providing SOA services is to be able to combine them and use them in a manner that is consistent and abstracted from the source application(s).  SOA operates at the integration level… between apps.  To assume that services should be tied together at the browser is to assume that well-formed, architecturally significant web services are so fine-grained that they would be useful for driving a user interface.  That is nonsense.

For an Atlas/Ajax user interface to use the data made available by a good SOA, the U/I will need to have a series of fine-grained services that access cached or stored data that may be generated from, or will be fed to, an SOA.  This is perfectly appropriate and expected.  However, you cannot pretend that this layer doesn’t exist… it is the application itself!

In a nutshell, the distinction is in the kinds of services provided.  An SOA provides coarse-grained services that are self-describing and fully encapsulated.  In this environment, the WS-* standards are absolutely essential.  On the other hand, the kinds of data services that a web application would need in an Atlas/Ajax environment would be optimized to provide displayable information for specific user interactions.  These uses are totally different. 
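To make the granularity contrast concrete, here is a hypothetical sketch; every name is invented, and the function bodies just stand in for the real calls:

```typescript
// Coarse-grained SOA operation: a complete, self-describing business
// document travels between applications in one message. In a real SOA
// this would be a WS-* web service; here we only acknowledge the batch.
interface RecognitionTransaction {
  orderId: string;
  amount: number;
}
function recognizeRevenueBatch(batch: RecognitionTransaction[]): { accepted: number } {
  return { accepted: batch.length };
}

// Fine-grained UI data service: one small, display-ready value per user
// interaction -- the kind of call an Atlas/Ajax page makes via XMLHTTP.
function customerDisplayName(customerId: string): string {
  return "Customer #" + customerId;
}

console.log(recognizeRevenueBatch([{ orderId: "A1", amount: 100 }, { orderId: "A2", amount: 50 }]).accepted);
console.log(customerDisplayName("42"));
```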

If I were to describe the architecture of an application that uses both Atlas/Ajax and SOA, I would name each enterprise web service.  All of the browser services would be named as a single component that provides user interface data services.  They are at different levels of granularity.

Atlas/Ajax, for better or worse, is an interesting experiment in current U/I circles.  Perhaps XMLHTTP’s time has finally come.  However, Atlas/Ajax will have NO effect on whether SOA succeeds or fails.  Suggesting otherwise demonstrates an amazing lack of understanding of both.