Tag: C# programming advice

Should the name of a department be encoded in a namespace?

November 29th, 2006 | Enterprise Architecture

One thread of discussion going through our internal community is this: should the .Net namespace include the name of the IT team that created it?  There are two camps:

Camp 1: Declare the Owners of the Code

We have a structure with about ten different IT teams, each assigned to different areas of the Microsoft business.  Each team has a unique identity, and for the most part, unique standards.  This camp wants the name of the IT team included in the namespace. 

So if a project in the Services IT team (SIT, connected to Microsoft Consulting Services) creates an employee object (tied to the HR database), it may have the namespace of:  MS.IT.SIT.Employee

If the Human Resources IT (HRIT) team were to create similar code, it would have the namespace of: MS.IT.HRIT.Employee
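A minimal sketch of what the Camp 1 layout implies in code (the class name `EmployeeData` is invented for illustration; the post names only the namespaces): the same concept compiles twice, once under each team's branch of the taxonomy.

```csharp
// Camp 1 sketch: the owning team's acronym is part of the namespace,
// so two teams end up with parallel, near-duplicate types.
namespace MS.IT.SIT.Employee
{
    public class EmployeeData          // hypothetical class name
    {
        public string Alias { get; set; } = "";
    }
}

namespace MS.IT.HRIT.Employee
{
    public class EmployeeData          // near-duplicate of the above
    {
        public string Alias { get; set; } = "";
    }
}
```

A caller has to fully qualify `MS.IT.SIT.Employee.EmployeeData` versus `MS.IT.HRIT.Employee.EmployeeData`; the duplication is baked right into the taxonomy.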

The reasoning goes like this: no matter how much we want to create code for the enterprise, the fact remains that a specific team will create the code and will continue to maintain it. Therefore, when using someone else's code, developers must be able to quickly and easily find out whose code they are using in the event of a bug or the need for extension.  For that reason, the name of the owner should be in the code.

Camp 2: Declare the Business Process but not the owner

We have a centrally defined ‘business process framework’ that identifies a hierarchy of overall business processes, both primary and supporting.  Primary process families are things like “Create Product, Market, Sell, Fulfill, Support” while secondary process families are things like “HR, Legal, IT, Finance”.

This camp says: put the process family name into the namespace, but not the name of the team.  This will allow code developed by different groups, but supporting the same processes, to come together in the same hierarchy.

Back to our example.  If the Services IT team was using the Employee objects to encapsulate services-specific rules, then perhaps the namespace for those classes would be: MS.IT.Support.Employee.  On the other hand, if they were creating base code to access the HR database, those classes should be in MS.IT.HR.Employee.

The Human Resources IT team would use MS.IT.HR.Employee most of the time, since presumably, the rules they are implementing would cross all of the corporate employees.

The reasoning goes like this: The point of shared corporate code is that one team can rely on another for their knowledge.  A single namespace tied to process families allows a more natural grouping of the functionality that we all have to rely upon.  The ownership of the code is managed in a separate tool.  (Note: the tool already exists for managing ‘who owns the code in what part of the namespace hierarchy.’  The .Net Framework team uses it extensively.)

So, the challenge is, which namespace approach is better?

Personally, I think that Camp 2 is correct. Reasons:

  • As long as we place the name of IT teams into namespaces, we encourage the development of duplicate code to do the same things.  If I see my team name in the namespace, but no code to do what I want, I’ll feel free to add it, even if the same code exists somewhere else.
  • Another downside to Camp 1:  We would be encouraging the notion that “someone else’s code” is to be avoided at all costs.  Developers will feel less confident about using code from another team if they see that team’s name in the namespace.
  • Organizationally, we won’t develop the needed muscles for managing a namespace of functionality that crosses multiple teams’ needs.  The product groups do this, and MS IT should as well.

Of course, I’m just one opinionated SOB among a long list of opinionated peers.  Convincing people of the value of one approach over another is going to take time.  Whatever compromise comes out, I’ll support (assuming it allows healthy practices to grow). 

What is your opinion?  Should teams put their names in a namespace?


Should our next generation of languages require us to declare the application's architecture?

November 14th, 2006 | Enterprise Architecture

As languages ‘improve’ over time, we see a first principle emerge:

Move responsibility for many of the ‘good practices’ into the language itself, allowing the language (and therefore the people who use it) to make better and more consistent use of those practices.

With assembler, we realized that we needed a variable location to have a consistent data type, so in comes variable declaration.  We also want specific control structures like WHILE and FUNCTION.  As we moved up into C and VB and other 3GLs, we started wanting the ability to encapsulate, and then to create objects.  OO languages emerged that took objects into account.

Now that application architecture is a requirement of good application design, why is it that languages don’t enforce basic structural patterns like ‘layers’ and standard call semantics that allow for better use of tracing and instrumentation?  Why do we continue to have to ‘be careful’ when practicing these things?

I think it may be interesting if applications had to declare their architecture.  Classes would be required to pick a layer, and the layers would be declared to the system, so that if a developer accidentally broke his or her own rules (say, by having the UI call the data access objects directly instead of going through the business objects), he or she could be warned.  (With constructs to allow folks to override these good practices, of course, just as today you can create a static class, which gives you, essentially, global variables in an OO language.)
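As a thought experiment, here is a hedged sketch of what a declared architecture could look like today, using only existing C# features. Nothing below exists in the .NET Framework; the `LayerAttribute`, the rule table, and the class names are all invented for illustration. A real language feature would do this at compile time; this sketch only shows the shape of the idea.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical: classes declare their architectural layer with an
// attribute, and a registry records which layers may call which.
[AttributeUsage(AttributeTargets.Class)]
public class LayerAttribute : Attribute
{
    public string Name { get; private set; }
    public LayerAttribute(string name) { Name = name; }
}

public static class ArchitectureRules
{
    // Declared layering for this sketch: UI -> Business -> DataAccess.
    private static readonly Dictionary<string, string[]> Allowed =
        new Dictionary<string, string[]>
        {
            { "UI",         new[] { "Business" } },
            { "Business",   new[] { "DataAccess" } },
            { "DataAccess", new string[0] }
        };

    // True if a class in 'from' may legally call a class in 'to'.
    public static bool CanCall(Type from, Type to)
    {
        string f = GetLayer(from);
        string t = GetLayer(to);
        return f != null && t != null && Array.IndexOf(Allowed[f], t) >= 0;
    }

    private static string GetLayer(Type type)
    {
        var attr = (LayerAttribute)Attribute.GetCustomAttribute(
            type, typeof(LayerAttribute));
        return attr == null ? null : attr.Name;
    }
}

[Layer("UI")]         public class OrderPage { }
[Layer("Business")]   public class OrderRules { }
[Layer("DataAccess")] public class OrderTable { }
```

Here `ArchitectureRules.CanCall(typeof(OrderPage), typeof(OrderTable))` comes back false: the UI reaching past the business layer, which is exactly the mistake the warning would catch.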

What if an application had to present its responsibilities when asked, in a structured and formal manner?  What if it had to tie to a known hierarchy of business capabilities, as owned by the organization, allowing for better maintenance and lifecycle control? 

In other words, what would happen if we built into a modern language the ability of the application to support, reflect, and defend the solution architecture?

Maybe, just maybe, it would be time to publish the next seminal paper: “Use of unconstrained objects considered harmful!”

Just how to best describe an interface

August 22nd, 2006 | Enterprise Architecture

Our team has a pretty good solution for a portal code interface.  We’ve been using a home-grown portal for about a half-dozen years, and it has grown to be quite sophisticated: role-based security, page auth, object auth, data-item auth, row-level auth, and user attributes.

Now, we have to describe it to another team that wants to understand it.  Problem is… it’s homegrown and most folks just use the modules that someone else wrote to do most of the basic things.  So it is not that easy to find a concise and descriptive document already in existence.

So we write one… right?

OK… what do we put in it?  I’m serious. 

If you have a code interface and you need to describe it from a simple API standpoint, it may be simple enough to extract the documentation using .Net.  But if you want to actually ‘describe’ it to a dev team, so that they can consider the features of your interface and perhaps put a few into their product, you need a much richer description than MSDN.

Interaction diagram.  Class diagram.  Some text describing the use cases.  Notes on data structures, dependencies, and type inheritance. 

That’s about 6 pages of description for a simple interface… 50 for a complex one.  Is that the right level of detail?  If you were a developer evaluating someone else’s interface, trying to figure out how hard it might be to add its features into your own library… how much detail would you want to see?

Interesting tool for schema-first design

March 5th, 2006 | Enterprise Architecture

I guess it goes without saying that you cannot communicate in a language unless at least two people are using it.  That was always the problem with Esperanto… interesting to learn, hard to find someone to converse with.  WSDL is kinda like that.

One of the four tenets of SOA is that we share contract and not class… but most developers attempting to make services don’t really do that.  They develop a class interface, abstract it into a WSDL description, and share it, without ever making the MENTAL distinction that they are turning it into a contract.

From a design standpoint, I know that a best practice is only a practice if someone is practicing it.   If the tools prevent us, at the design stage, from describing our interfaces in purely abstract terms, then we aren’t practicing. 
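To make the distinction concrete, here is a minimal sketch in plain C# terms (the service and type names are invented for illustration, and a real service would publish WSDL rather than an assembly): consumers compile against the contract alone, and the implementing class never crosses the boundary.

```csharp
// "Share contract, not class": the interface is the only thing published.
// IOrderService and OrderService are hypothetical names, not from the post.
public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// The private implementation; consumers never reference this type directly.
class OrderService : IOrderService
{
    public string GetOrderStatus(int orderId)
    {
        return orderId > 0 ? "Shipped" : "Unknown";
    }
}
```

Designing the interface first, before any implementing class exists, is the mental shift that contract-first tooling is trying to encourage.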

So, the following tool shows up linked from another blog.  (I’ll skip the intermediary) 


This tool is a Visual Studio add-in that allows us to better use WSDL as a design tool, not just something that is output from VS after the class is created.

Maybe, now, it will be just a little easier to convince folks to actually practice the creation of the contract.

Killing the Helper class, part two

September 7th, 2005 | Enterprise Architecture

Earlier this week, I blogged on the evils of helper classes.  I got a few very thoughtful responses, and I wanted to try to address one of them.  It is far easier to do that with a new entry than trying to respond in the messages.

If you didn’t read the original post, I evaluated the concept of the helper class from the standpoint of one set of good principles for Object Oriented development, as described by Robert Martin, a well respected author and speaker.  While I don’t claim that his description of OO principles is “the only valid description as anointed by Prophet Bob”, I find it extremely useful and one of the more lucid descriptions of fundamental OO principles available on the web.  That’s my caveat.

The response I wanted to address is from William Sullivan, and reads as follows:

I can think of one case where helper classes are useful… Code re-use in a company. For instance, a company has a policy on how its programs will access and write to the registry. You wouldn’t want some products in the company saving its data in HKLM/Software/CompanyName/ProductName and some under …/Software/ProductName and some …/”Company Name”/”Product Name”. So you create a “helper class” that has static functions for accessing data in the registry. It could be designed to be instantiatable and extendable, but what would be the advantage? Another class could implement the companies’ policy on encryption, another for database access;[clip]

If you recall, my definition of a helper class is one in which all of the methods are static.  It is essentially a set of procedures that are “outside” the object structure of the application or the framework.  My objections were that classes of this nature violate two of the principles: the Open Closed Principle and the Dependency Inversion Principle.

So, let’s look at what a company can do to create a set of routines like William describes. 

Let’s say that a company (Fabrikam) produces a dozen software systems.  One of them, for our example, is called “Enlighten”.  So the standard location for accessing data in the registry would be under HKLM/Software/Fabrikam/Enlighten.  Let’s look at two approaches: one using a helper class and one using an instantiated object:

class HSettings
{   public static String GetKey(String ProductName, String Subkey)
    {   // -- interesting registry-reading code
        return "";
    }
}

class FSettings
{   private string _ProductName;
    public FSettings(String ProductName)
    {   _ProductName = ProductName;
    }
    public String GetKey(String Subkey)
    {   // nearly identical code, using _ProductName
        return "";
    }
}
Calling the FSettings object may look like a little more effort:

public String MyMethod()
{   FSettings fs = new FSettings("Enlighten");
    string Settingvalue = fs.GetKey("Mysetting");
    // Interesting code using Settingvalue
    return Settingvalue;
}

as compared to:

public String MyMethod()
{   string Settingvalue = HSettings.GetKey("Enlighten", "Mysetting");
    // Interesting code using Settingvalue
    return Settingvalue;
}

The problem comes in unit testing.  How do you test the method “MyMethod” in such a way that you can find defects in the ‘Interesting Code’ without also relying on any frailties of the code buried in our settings object?  How do you test this code without there being any setting at all in the registry?  Can we test on the build machine?  This is a common problem with unit testing: how to test the UNIT of functionality without also testing any underlying dependencies.

When a function depends on another function, you cannot easily find out where a defect is causing you a problem.  A defect in the dependency can cause a defect in the relying code.  Even the most defensive programming won’t do much good if the dependent code returns garbage data.

If you use dependency injection, you can get code that is a lot less frail and is easily testable.  Let’s refactor our “FSettings” object to inherit from an interface.  (This is not something we can do for the HSettings class, because it is a helper class.)


interface ISettings
{   String GetKey(String Subkey);
}

class FSettings : ISettings // and so on

Now, we refactor our calling code to use Dependency Injection:

public class MyStuff
{   private ISettings _fs;
    public MyStuff()
    {   _fs = new FSettings("Enlighten");
    }
    public void SetSettingsObject(ISettings ifs)
    {   _fs = ifs;
    }
    public String MyMethod()
    {   string Settingvalue = _fs.GetKey("Mysetting");
        // Interesting code using Settingvalue
        return Settingvalue;
    }
}

Take note: the code in MyMethod now looks almost identical to the code that we proposed for using the static methods. The difference is important, though. First, we separate the creation of the dependency from its use by moving the creation into the constructor. Second, we provide a mechanism to override the dependent object.

In practical terms, the code that calls MyMethod won’t care. It still has to create a ‘MyStuff’ object and call the MyMethod method. No parameters changed. The interface is entirely consistent. However, if we want to unit test the MyMethod method, we now have a powerful tool: the mock object.

class MockSettings : ISettings
{   public MockSettings(String ProductName)
    {   if (ProductName != "Enlighten")
            throw new ApplicationException("invalid product name");
    }
    public String GetKey(String Subkey)
    {   return "ValidConnectionString";
    }
}

So, our normal code remains the same, but when it comes time to TEST our MyMethod method, we write a test fixture (a method in a special class that does nothing but test the method). In the test fixture, we use the mock object:

class MyTestFixture
{   public void Test1()
    {   MyStuff ms = new MyStuff();
        MockSettings mock = new MockSettings("Enlighten");
        ms.SetSettingsObject(mock);
        // now the code will use the mock, not the real one.
        ms.MyMethod();  // call the method... any exceptions?
    }
}

What’s special about a test fixture? If you are using NUnit or Visual Studio’s unit testing framework, then any exceptions are caught for you.

This powerful technique is only possible because I did not use a static helper class when I wanted to look up the settings in my registry.

Of course, not everyone will write code using unit testing. That doesn’t change the fact that it is good practice to separate the construction of an object from its use. (See Scott Bain’s article on Use vs. Creation in OO Design.)  It also doesn’t change the fact that this useful construction, simple to do if you started with a real object, requires far more code change if you had started with a helper class.  In fact, if you had started with a helper class, you may be tempted to avoid unit testing altogether. 

I don’t know about you, but I’ve come across far too much code that needed to be unit tested, but where adding the unit tests would involve a restructuring of the code.  If you do yourself, and the next programmer behind you, a huge favor and simply use a real object from the start, you will earn “programmer’s karma” and may inherit some of that well structured code as well.   If everyone would simply follow “best practices” (even when you can’t see the reason why it’s useful in a particular case), then we would be protected from our own folly most of the time.

So, coming back to William’s original question: “it could be designed to be instantiatable and extendable, but what would be the advantage?”

The advantage is that when it comes time to prove that the calling code works, you have not prevented the use of good testing practices by forcing the developer to use a static helper class, whether he wanted to or not. 

Are Helper Classes Evil?

September 6th, 2005 | Enterprise Architecture

First off, a definition: A helper class is a class filled with static methods.  It is usually used to isolate a “useful” algorithm.  I’ve seen them in nearly every bit of code I’ve reviewed.  For the record, I consider the use of helper classes to be an antipattern.  In other words, an extraordinarily bad idea that should be avoided most of the time.

What, you say?  Avoid Helper Classes!?!  But they are so useful!

I say: they are nearly always an example of laziness.  (At this point, someone will jump in and say “but in Really Odd Situation ABC, There Is No Other Way” and I will agree.  However, I’m talking about normal IT software development in an OO programming language like C#, Java or VB.Net.  If you have drawn a helper class in your UML diagram, you have probably erred).

Why laziness?  If I have to pick a deadly sin, why not gluttony? 🙂

Because most of us in the OO world came out of the procedural programming world, where the notion of functional decomposition is so easy that we drop back to it whenever we come across an algorithm that doesn’t seem to “fit” into our neat little object tree.  Rather than understand the needs, analyze where we can get the best use of the technology, and place the method accordingly, we just toss it into a helper class.  And that, my friends, is laziness.

So what is wrong with helper classes?  I answer by falling back on the very basic principles of Object Oriented Programming.  These have been recited many times, in many places, but one of the best places I’ve seen is Robert Martin’s article on the principles of OO.  Specifically, focus on the first five principles of class design. 

So let’s look at a helper class on the basis of these principles.  First, to knock off the easy ones: 

Single Responsibility Principle — A class should have one and only one reason to change — You can design helper classes where all of the methods relate to a single set of responsibilities.  That is entirely possible.  Therefore, I would note that this principle does not conflict with the notion of helper classes at all.  That said, I’ve often seen helper classes that violate this principle.  They become “catch all” classes that contain any method that the developer can’t find another place for.  (e.g. a class containing a helper method for URL encoding, a method for looking up a password, and a method for writing an update to the config file… This class would violate the Single Responsibility Principle).

Liskov Substitution Principle — Derived classes must be substitutable for their base classes — This is kind of a no-op in that a helper class cannot have a derived class. (Note my definition of a helper class is that all members are static.)  OK.  Does that mean that helper classes violate LSP?  I’d say not.  A helper class loses the advantages of OO completely, and in that sense, LSP doesn’t matter… but it doesn’t violate it.

Interface Segregation Principle — Class interfaces should be fine-grained and client specific — another no-op.  Since helper classes do not derive from an interface, it is difficult to apply this principle with any degree of separation from the Single Responsibility Principle. 

Now for the fun ones:

The Open Closed Principle — classes should be open for extension and closed for modification — You cannot extend a helper class.  Since all methods are static, you cannot derive anything that extends from it.  In addition, the code that uses it doesn’t create an object, so there is no way to create a child object that modifies any of the algorithms in a helper class.  They are all “unchangeable”.  As such, a helper class simply fails to provide one of the key aspects of object oriented design: the ability for the original developer to create a general answer, and for another developer to extend it, change it, make it more applicable.  If you assume that you do not know everything, and that you may not be creating the “perfect” class for every person, then helper classes will be anathema to you.

The Dependency Inversion Principle — Depend on abstractions, not concrete implementations — This is a simple and powerful principle that produces more testable code and better systems.  If you minimize the coupling between a class and the classes that it depends upon, you produce code that can be used more flexibly and reused more easily.  However, a helper class cannot participate in the Dependency Inversion Principle.  It cannot derive from an interface, nor implement a base class.  Since no object is ever created, there is nothing to substitute or extend.  This is the “partner” of the Liskov Substitution Principle, but while helper classes do not violate the LSP, they do violate the DIP. 

Based on this set of criteria, it is fairly clear that helper classes fail to work well with two out of the five fundamental principles that we are trying to achieve with Object Oriented Programming. 

But are they evil?  I was being intentionally inflammatory.  If you read this far, it worked.  I don’t believe that software practices qualify in the moral sphere, so there is no such thing as evil code.  However, I would say that any developer who creates a helper class is causing harm to the developers that follow. 

And that is no help at all.

Whose name is in the namespace?

August 22nd, 2005 | Enterprise Architecture

There’s more than one way to group your code.  Namespaces provide a mechanism for grouping code in a hierarchical tree, but there is precious little discussion about the taxonomy that designers and architects should use when creating namespaces.  This post is my attempt to describe a good starting place for namespace standards.

We have a tool: namespaces.  How do we make sure that we are using it well?

First off: who benefits from a good grouping in the namespace?  I would posit that a good namespace taxonomy benefits the developers, testers, architects, and support teams who need to work with the code.  We see this in the Microsoft .Net Framework, where components that share an underlying commonality of purpose or implementation will fall into the taxonomy in logical places. 

However, most IT developers aren’t creating reusable frameworks.  Most developers of custom business solutions are developing systems that are composed of various components, and which use the common shared code of the .Net Framework and any additional frameworks that may be adopted by the team.  So, the naming standard of the framework doesn’t really apply to the IT solutions developer. 

To start with, your namespace should start with the name of your company.  This allows you to easily differentiate between code that is clearly outside your control (like the .Net framework code or third-party controls) and code that you stand a chance of getting access to.  So, starting the namespace with “Fabrikam” makes sense for the employees within Fabrikam that are developing code.  OK… easy enough.  Now what?

I would say that the conundrum starts here.  Developers within a company do not often ask “what namespaces have already been used” in order to create a new one.  So, how does the developer decide what namespace to create for their project without knowing what other namespaces exist?  This is a problem within Microsoft IT just as it is in many organizations.  There are different ways to approach this.

One approach would be to put the name of the team that creates the code.  So, if Fabrikam’s finance group has a small programming team creating a project called ‘Motor’, then they may start their namespace with: Fabrikam.Finance.Motor.  On the plus side, the namespace is unique, because there is only one ‘Motor’ project within the Finance team.  On the down side, the name is meaningless.  It provides no useful information.

A related approach is simply to put the name of the project, no matter how creatively or obscurely that project was named.  Two examples: Fabrikam.Explan or even less instructive: Fabrikam.CKMS.  This is most often used by teams who have the (usually incorrect) belief that the code they are developing is appropriate for everyone in the enterprise, even though the requirements are coming from a specific business unit.  If this includes you, you may want to consider that the requirements you get will define the code you produce, and that despite your best efforts, the requirements are going to ALWAYS reflect the viewpoint of the person who gives them to you.  Unless you have a committee that reflects the entire company providing requirements, your code does not reflect the needs of the entire company.  Admit it.

I reject both of these approaches. 

Both of these approaches reflect the fact that the development team creates the namespace, when they are not the chief beneficiary.  First off, the namespace becomes part of the background quickly when developing an application.  Assuming the assembly was named correctly or the root namespace was specified, the namespace becomes automatic when a class is created using Visual Studio (and I would assume similar functionality for other professional IDE tools).  Since folders introduced to a project create child levels within the namespace, it is fairly simple for the original development team to ignore the root namespace and simply look at the children.  The root namespace is simply background noise, to be ignored.

I repeat: the root namespace is not useful or even important for the original developers.  Who, then, can benefit from a well named root namespace?

The enterprise.  Specifically, developers in other groups or other parts of the company that would like to leverage, modify, or reuse code.  The taxonomy of the namespace could be very helpful for them when they attempt to find and identify functional code that implements the rules for a specific business process.  Add to that the support team that needs to modify a function and has to find out where that function is implemented.

So, I suggest that it is wiser to adopt an enterprise naming standard for the namespaces in your code, in such a way that individual developers can easily figure out what namespace to use, and developers in other divisions would find it useful for locating code by functional area.

I come back to my original question: whose name is in the namespace?  In my opinion, the ‘functional’ decomposition of a business process starts with the specific people in the business that own the process.  Therefore, instead of putting the name of the developer (or her team or her project) into the namespace, it would make far more sense to put the name of the business group that owns the process.  Even better, if your company has an ERP system or a process engineering team that had named the fundamental business processes, use the names of the processes themselves, and not the name of the authoring team.

Let’s look again at our fictional finance group creating an application they call ‘Motor.’ Instead of the name of the team or the name of the project, let’s look to what the application does.  For our example, this application is used to create transactions in the accounts receivable system to represent orders booked and shipped from the online web site.  The fundamental business process is the recognition of revenue. 

In this case, it would make far more sense for the root namespace to be: Fabrikam.Finance.Recognition (or, if there may be more than one system for recognizing revenue, add another level to denote the source of the recognition transactions: Fabrikam.Finance.Recognition.Web)

So a template that you can use to create a common namespace standard would be:

CompanyName.ProcessArea.Process.Point

where:
  • CompanyName is the name of your company (or division if you are part of a very large company),
  • ProcessArea is the highest level group of processes within your company.  Think Manufacturing, Sales, Marketing, CustomerService, Management, etc.
  • Process is the name of the basic business process being performed.  Use a name that makes sense to the business.
  • Point could be the name of the step in the process, or the source of the data, or the customer of the interaction.  Avoid project names.  Definitely avoid the name of the group that is writing the code.
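Applied to the revenue-recognition example earlier in the post, the template might look like this (the class name is illustrative; the post names only the namespace):

```csharp
// Template applied: CompanyName.ProcessArea.Process.Point
namespace Fabrikam.Finance.Recognition.Web
{
    // Illustrative class; any code implementing the revenue-recognition
    // process for the web site would live under this root.
    public class RevenueTransaction
    {
        public decimal Amount { get; set; }
    }
}
```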

In IT, we create software for the business.  It is high time we take the stand that putting our own team name into the software is a lost opportunity at best, and narcissistic at worst.

A Case For and Against the Enterprise Library

August 1st, 2005 | Enterprise Architecture

I’ve been an architect for a while now, but, as far as being an architect within the walls of Microsoft, today was day one.

Already, I’ve run into an interesting issue: when it is better to forgo the code of the Enterprise Library and roll your own, vs. using existing code.

Roll your own what?  Well, the MS Enterprise Library is a set of source code (in both VB.Net and C#) that provides an infrastructure for business applications.  The “blocks” that are provided include: caching, configuration, data access and instrumentation, among others.

I know that many people have downloaded the application blocks.  I don’t know how many people are using them.  I suspect far fewer.

I took a look at the blocks myself, and my first impression: unnecessary complexity.  Big time.  This is what comes of creating a framework without the business requirements to actually use it.  To say that the code has a high “bus” factor is a bit deceptive, because along with the code comes ample documentation that should mitigate the difficulty that will inevitably come from attempting to use them.

On the other hand, the code is there and it works.  If you have a project that needs a data access layer, why write a new one when a perfectly workable, and debugged, application block exists for it? 

Why indeed.  I had a long discussion with a developer today about using these blocks.  I will try to recount each of the discussion points:

  1. The blocks are complex and we only need to do something simple.  True: the blocks are complex, but the amount of time needed to do something simple is FAR greater than the amount of time needed to understand how to configure the blocks.  If you look at simple project impact, using something complex is still less expensive than writing something simple.
  2. We don’t know these application blocks, so it will take time to learn.  True: but if you write new code, the only person who knows it, when you are done, is you.  Everyone else has to read the documentation.  You’d be hard pressed to come up with better documentation than the docs delivered with the application blocks.
  3. The code we write will meet our needs better because we are doing “special” stuff.  False: the stuff that is done in the application blocks is pure infrastructure.  As an architect, I carry the mantra: leverage existing systems first, then buy for competitive parity, and lastly build for competitive advantage.  You will not normally provide your employer with a competitive advantage by writing your own code in infrastructure.  You are more likely to get competitive advantage by using the blocks, since they will be less expensive, with capabilities right out of the box.
  4. We don’t need all that code.  True.  Don’t use the functionality you don’t need.  The cost is very low to ignore the functionality you don’t need.  More importantly, writing your own code means debugging your own code.  If you leverage the code that is there, you will not have to debug it.  That saves buckets of time.
  5. Our code can be tuned and is faster than the code in the Enterprise Library.  The code in the Enterprise Library is tuned for flexibility, not speed.  This is true.  However, when you first write your own code, it is slow.  It gets faster when you tune it.  Why not jump right to the tuning step?  Put in the EL for the component you are interested in, run a stress test against it, and fine-tune the code to speed it up.  You have unit tests already in place to prove that your tuning work won’t break the functionality (highly valuable when doing perf testing). 
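To make point 1 concrete, here is roughly what “configuring the blocks” buys you.  This is a minimal sketch against the Data Access Application Block; the “Northwind” instance name and the query are assumptions for illustration, and would come from your own configuration and schema.

```csharp
// Sketch only: assumes a reference to the Enterprise Library Data Access
// Application Block and a <dataConfiguration> section defining "Northwind".
using System.Data;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerReader
{
    public DataSet GetCustomers()
    {
        // The factory reads the provider and connection string from config,
        // so there is no connection-management code to write, or to debug.
        Database db = DatabaseFactory.CreateDatabase("Northwind");
        return db.ExecuteDataSet(CommandType.Text,
            "SELECT CustomerID, CompanyName FROM Customers");
    }
}
```

Everything that isn’t in this snippet (connection pooling, provider selection, cleanup) is exactly the “simple” code you would otherwise be writing, and debugging, yourself.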

Please… can someone else come up with any better arguments for NOT using the application blocks in the enterprise library?  I’m not seeing one.


Atlas = Ajax = asp.net 2.0 script callbacks and more

By |2005-07-08T09:53:00+00:00July 8th, 2005|Enterprise Architecture|

The marketplace of ideas is an amazing place.  When Microsoft came up with the notion of Remote Scripting (many years ago), the Netscape folks scoffed.  At the time, folks looked at MS and said, “This is a war, and I won’t use a feature from the big bad wolf!”  The notion of asynchronously updating part of a web page, while powerful, lay dormant for years.

Sure, IE has kept the feature alive, but few folks used it.  Then, as soon as the Mozilla/Firefox folks decided to embrace the notion, it became safe for the public to use.  Only then was it “cross platform.”  Alas, the key was not to add the feature to our browser, but to add it to every browser.  (Interesting.)

The success of Gmail, and a marketing campaign by a consulting company, have led to some visibility.  There’s a new marketing term for this long-existing technique: Ajax.  Nice name.  Marketing, they get.

The great thing for MS platform developers: just as the term gains steam, Microsoft will release ASP.Net 2.0, which looks to have built-in support for it.  The product groups have come up with a competing name: Atlas.
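For the curious, the shape of that built-in support looks roughly like the sketch below.  A page implements the script-callback interface, and the framework wires up the asynchronous round trip.  The names StockPage and LookUpPrice are invented for illustration, and the page markup and client-side wiring are omitted.

```csharp
// Sketch of an ASP.Net 2.0 script callback: the page class implements
// ICallbackEventHandler, so part of the page can be updated without a
// full postback.  Illustrative only; not a complete, runnable page.
using System.Web.UI;

public class StockPage : Page, ICallbackEventHandler
{
    private string result;

    // Runs on the server when the client script fires the callback;
    // eventArgument carries whatever the client sent (e.g. a ticker symbol).
    public void RaiseCallbackEvent(string eventArgument)
    {
        result = LookUpPrice(eventArgument);
    }

    // The string handed back to the client-side success handler,
    // with no full-page re-render.
    public string GetCallbackResult()
    {
        return result;
    }

    private string LookUpPrice(string symbol)
    {
        return "0.00"; // placeholder; a real page would query a data source
    }
}
```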

So, special thanks to Jesse James Garrett for publicizing a feature of our new platform.  If you want to know more about implementing Ajax, both in ASP.Net 2.0 and in .Net 1.1, see this paper by Dino Esposito on the MSDN site.


If you want to know more about Atlas, see this blog entry from scottgu.


It is nice to be ahead of the curve.

Having a High Bus Factor

By |2005-06-28T13:51:00+00:00June 28th, 2005|Enterprise Architecture|

A friend of mine pointed out an interesting post by Scott Hanselman that used a clever phrase: “having a High Bus Factor” which is to say: if the original developer of a bit of code is ever hit by a bus, you are toast.

The example that Scott gave was a particular regular expression that I just have to share.  To understand the context, read his blog.

private static Regex regex = new Regex(@"<[\w-_.: ]*><!\[CDATA\[\]\]></[\w-_.: ]*>|<[\w-_.: ]*></[\w-_.: ]*>|<[\w-_.: ]*/>|<[\w-_.: ]*[/]+>|<[\w-_.: ]*[\s]xmlns[:\w]*=""[\w-/_.: ]*""></[\w-_.: ]*>|<[\w-_.: ]*[\s]xmlns[:\w]*=""[\w-/_.: ]*""[\s]*/>|<[\w-_.: ]*[\s]xmlns[:\w]*=""[\w-/_.: ]*""><!\[CDATA\[\]\]></[\w-_.: ]*>", RegexOptions.Compiled);

I must admit to having developed code, in the (now distant) past, that had a similar high bus factor.  Nothing as terse as the above example, thank goodness, but something kinda close.  On two occasions, actually.  I look back and hope that I have learned, but I’m not certain that I have. 

The trick here is that I do not know the developer who follows me.  He or she will know some basic and common things.  The problem lies deeper: where my expertise exceeds the ability of a maintenance developer to understand my code, that is where the break occurs.

So how do we avoid this?  How does a good developer keep from creating code with a High Bus Factor?

It isn’t documentation.  I have been using regular expressions for decades (literally) and the above code is wildly complicated, even for me.  No amount of documentation would make that chunk of code simple for me to read or maintain.

Pithy advice, like “use your tools wisely,” won’t help either.  One could argue that regular expressions were not appropriately used in this case, and in fact, the blog entry describes replacing the expression because it wasn’t performing well when larger files were being scanned.  That isn’t the point. 

I would state that any sufficiently powerful technique (whether regex, an advanced design pattern, a clever use of SQL XML, etc.) presents the risk of exceeding the ability of another developer to understand, and therefore maintain, it.
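One partial mitigation is to spend the power budget on structure: compose the opaque expression from named, commented fragments, so the intent survives the author.  The sketch below applies that idea to a pattern in the same family as Scott’s; the fragments are illustrative only, not a reconstruction of his full expression.

```csharp
// One way to lower the bus factor of a dense regex: build it from named,
// commented pieces so a maintainer can read the intent, not just the symbols.
using System;
using System.Text.RegularExpressions;

class EmptyElementScan
{
    // Each fragment gets a name and a one-line meaning.
    const string Name = @"[\w-_.: ]*";                         // element-name characters
    const string SelfClosing = "<" + Name + "/>";              // e.g. <a/>
    const string EmptyPair = "<" + Name + "></" + Name + ">";  // e.g. <a></a>

    static readonly Regex EmptyElement =
        new Regex(SelfClosing + "|" + EmptyPair, RegexOptions.Compiled);

    static void Main()
    {
        Console.WriteLine(EmptyElement.IsMatch("<node/>"));     // True
        Console.WriteLine(EmptyElement.IsMatch("<a>text</a>")); // False
    }
}
```

The string that reaches the Regex constructor is just as dense as before, but the next developer reads the named pieces, not the final pattern.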

Where does the responsibility lie for ensuring that a dev team, brought in to maintain a bit of code, is able to understand it?  Is it the responsibility of the development manager?  The dev lead?  The original developers?  The architects or code quality gurus?  The unit tests? 

Is it incumbent upon the original dev team to make sure that their code does not have a High Bus Factor?  If so, how?

I’m not certain.  But it is an interesting issue.