Tag: Coding Tips and Tricks

Test yourself: 25 most dangerous security programming errors

By | May 31st, 2009 | Enterprise Architecture |

The SANS Institute has published a list of the top 25 most dangerous programming errors.  Not only is this a must-read, but it is critical for architects, developers, and testers of all stripes to be aware of these errors.  Unless and until we have platforms that simply prevent these errors, we can best combat these security gaps through education, careful testing, and responsible project delivery practices.

http://www.sans.org/top25errors/

How familiar are you with these mistakes? 

Would you be able to spot them in code you reviewed? 

Would you be able to prevent them in your own code? 
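If you want a quick self-check before you click through, here is one of the classic entries on the list, SQL injection, in C#.  The Users table and the method names are invented for illustration.  Would you flag the first version in a code review?

using System.Data.SqlClient;

class ReviewQuiz
{
    // Vulnerable: user input is concatenated straight into the SQL text,
    // so a crafted userName can rewrite the query.
    static SqlCommand Bad(SqlConnection conn, string userName)
    {
        return new SqlCommand(
            "SELECT * FROM Users WHERE Name = '" + userName + "'", conn);
    }

    // Safer: the input travels as a parameter, never as SQL.
    static SqlCommand Better(SqlConnection conn, string userName)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT * FROM Users WHERE Name = @name", conn);
        cmd.Parameters.AddWithValue("@name", userName);
        return cmd;
    }
}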

System Reliability requires Message Durability (immature WCF)

By | May 30th, 2007 | Enterprise Architecture |

WCF is a very cool technology.  Microsoft has raised the bar in the messaging space with this one, and I’m a huge fan.  However, there is a limitation that is painful to live with: the lack of a routable, intermediable, declared message durability option.

Sure.  There’s MSMQ.  Great.  If you (a) control both ends, and (b) are willing to lose intermediability, (c) are happy with one-way communication and (d) have no network firewall or NAT issues.  Zowie. 

What about the real promise of WCF: to put cross-cutting concerns into the infrastructure, to let developers focus on the message while the infrastructure focuses on the transport?

Some folks are asking that WCF integrate with SQL Server Service Broker (SSB).  I think that is a very short-sighted answer.  Messaging sits above storage on the stack, so the message infrastructure needs to control the storage.  SSB is written from the other angle: with storage first, and messaging below it.  It is possible to prop this up, but since SSB is doing a lot of the work that WCF is doing, the best way would be to take a different approach altogether.

In my opinion, we should be able to configure an adapter in WCF where we declare a durable endpoint and configure SQL Server (or Oracle, or MySQL, if we choose) as the storage mechanism.  We could then rely on WCF not only to send the message, but to send it in a way that it won’t be lost if the other side is down, or if I crash before getting an acknowledgement, and so on.  ACID transactions.  I know… I’m asking a lot.  Not more than others.  Consider me one more voice in the chorus.
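To make the wish concrete, here is the shape such a declaration might take in code.  To be clear, DurableWsHttpBinding, SqlMessageStore, and IOrderService are invented names; nothing like this exists in WCF today, and the sketch will not compile as-is.

using System.ServiceModel;

// Hypothetical sketch only: DurableWsHttpBinding, SqlMessageStore, and
// IOrderService are made up to show the desired shape, not a real WCF API.
class DurableEndpointWish
{
    static void Send()
    {
        DurableWsHttpBinding binding = new DurableWsHttpBinding();

        // The infrastructure, not the application, owns the storage.
        binding.Store = new SqlMessageStore(
            "Server=.;Database=MessageStore;Integrated Security=SSPI");

        ChannelFactory<IOrderService> factory = new ChannelFactory<IOrderService>(
            binding, new EndpointAddress("http://contoso.com/scribe/orders"));

        // The channel would persist the message before sending and replay it
        // after a crash, so delivery survives either side going down.
        IOrderService proxy = factory.CreateChannel();
        proxy.SubmitOrder(42);
    }
}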

BTW: WCF does have reliable messaging… in memory.  It has durable messaging, in MSMQ.  The limitations of this approach were nicely described by Matevz Gacnik in a blog from last February.  Matevz is an MVP and I like his writing style.

 

Hero or Rebel

By | March 13th, 2007 | Enterprise Architecture |

If you do what is needed, but not what you are told, are you a hero, or a rebel?

In software, as in life, there are situations where you have to choose. Literature is filled with stories where the daring young man is told to “stay put” and he rushes into danger to save the damsel, or the brave soul takes on injustice when all counsel her to “stay out of it.”

So when you are writing software, what room do you have to do the right thing?  If you are looking at a set of requirements, and it is clear that a use case is missing, do you write code for it?  Do you invent your own process because the customer clearly needs it?

I think, in a team environment, the answer is muddy.  Perhaps that’s why our heroes of literature and the movies all prefer to “work alone.”  In a team environment, you have to trust the skills of those around you.  If a use case is missing, point it out, but if you write code for it anyway, you may find that the customer chose not to implement that use case, and you have just risked the delivery of the project!  Perhaps the functionality belongs in another app, or perhaps the data is supposed to be stable, so no interface is needed. 

Personally, I like team heroes.  A “team hero” is a team member who offers up honest answers when asked, praises teammates in public, and offers constructive feedback in private.  They get their work done efficiently so that they can help other teammates in need.  If a team member is not doing well, for whatever reason, they help rally the strength of the team to get that team member over the hump.

A team hero doesn’t have to ask if it is OK to do what is right, because they would be asking the team, and the decision is already made.

So kudos to team heroes: those few key players who, through sheer passion, are singularly responsible for both the quality and the capability of the teams fortunate enough to have them. 

What about a Software Development Guild?

By | January 31st, 2007 | Enterprise Architecture |

I work for Microsoft.  However, I wonder if the question of whether a developer is ‘qualified’ wouldn’t be better decided outside these hallowed halls.  Specifically, should software development be self-regulating, the way doctors and attorneys are?

This discussion came up in a comp.object newsgroup thread that started by asking about the Big Ball of Mud (BBoM) antipattern.  The discussion quickly descended into ‘whose fault’ it was that BBoM code had come into existence.  Both Robert Martin and I took the stand that developers who write this kind of code deliver more slowly, even in the current sprint, than if they wrote well-structured code.  In other words, “quick and dirty” is an oxymoron.

From there, the conversation further evolved into: perhaps we should have a self-policing guild, like the craft halls of old, that would allow us to decide, for ourselves, who is qualified to carry the title “Software Engineer.”

Some folks immediately worried about politics and exclusivity, while others worried about creating a hurdle that the truly gifted among us would see no need to attempt.  Plenty of issues.

My take: I’d like to see a more specific proposal for what a guild would entail.  I’m cautious about, but not opposed to, the idea of Software Developers kicking out one of our own for delivering cr*p on a tight schedule.

What do you think?  Should we consider such a thing?

Build TDD adoption through Support-First efforts

By | January 16th, 2007 | Enterprise Architecture |

I am convinced that Test-Driven Development is the single greatest hope our industry has, as a whole, for improving the development and design of useful, practical, low-defect applications.  I find it frustrating that in some places it has taken off, while in other places it remains a ‘nice idea.’

I saw a post on the newsgroups recently offering some ‘good tips’ for developers.  These tips, largely a nice collection of tried-and-true practices, seemed like yet another attempt to fix the problem of bad code by making developers more aware of what to do well.  The problem is that most developers don’t intentionally write bad code.  They write good code, and then, in support, that code is expected to flex in ways that the original designer did not intend.

Over time, the code is modified through Quick-Fix efforts that may, or may not, recognize the original design.  Opportunities for refactoring are not recognized because support teams are not paid to notice the original design intent… they are paid to fix the code.  In doing so, they make mistakes that build up over time.

I think that test-first development, as a SUPPORT discipline, would be the best way to highlight the need to refactor code when it needs it, and not later, after it has become a tangled mess.

So, if you are in an organization that has not yet taken up Test-Driven Development, consider convincing your Support team to place these two rules into effect:

1) No code from development will be accepted into support without at least 80% unit-test code coverage, and

2) No fix may be checked in for production deployment unless all unit tests pass and all new code comes with unit tests.

Nothing will drive unit tests faster than making them a requirement of the support team, and nothing will lower the cost of ownership faster than recognizing the correct time to refactor.  I’m convinced that this small change can make a huge impact.
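As a concrete illustration of rule 2, here is the kind of test I would expect to see checked in alongside a support fix.  This is only a sketch assuming NUnit; InvoiceCalculator, its Total method, and the rounding defect are invented for the example.

using NUnit.Framework;

[TestFixture]
public class InvoiceCalculatorSupportTests
{
    [Test]
    public void Total_RoundsHalfCentsUp_Defect4711()
    {
        InvoiceCalculator calc = new InvoiceCalculator();

        // Written first to reproduce the reported defect, then checked in
        // alongside the fix so the bug can never silently return:
        // 2 x 5.0025 = 10.005, which must round up to 10.01.
        Assert.AreEqual(10.01m, calc.Total(2, 5.0025m));
    }
}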

Declaring architecture in the code

By | December 18th, 2006 | Enterprise Architecture |

Code sits below the architecture.  It is not an expression of architecture.  Code realizes architecture, but is constrained by it.  Therefore, it is entirely possible to declare the architecture in the code.

For example, let’s say that we are building a distributed system composed of a user interface and four services.  The user interface calls the services only.  One of the services recognizes a business event and transmits it to an EAI system for distribution to subscribers.

The services, architecturally, should remain isolated from one another.  We want to declare that… to constrain our code.  We also want to make sure that the user interface is not dependent upon any interface to the services except a declared public service interface.  This dependency needs to be declared and enforced.  In other words, no other dependency is allowed, especially the sharing of classes across the service boundary.

So how would we declare these constraints in our code?  I would prefer to do this visually, because I don’t think that architecture can or should be described in some distorted text typical of most 3GL programming languages.  I have no problem if the visual image has an XML representation, or some other computer-readable standardized mechanism, but the developer should access the architecture visually.

In addition, the architecture should be the starting place for the code.  I would envision that when a developer opens the code for his app, he sees the diagram and uses it to navigate through the project or projects.  Since this is a distributed system, compilation of each component must be fairly separate, since each component could potentially reside on a different system.

Unfortunately, it is 1am and I don’t have the patience for writing up the diagram at the moment.  I’ll slap together an example tomorrow.  I envision the code representation to look something like this (note: this is a revision of this post; my first attempt was in XML, but in hindsight, that wouldn’t be part of the code, it would be part of the IDE or config, so I rewrote it as something that looks like C#):

architecture SCRIBE // system name is SCRIBE in this example
{
      endpoint CustomerEnterpriseService;
      endpoint InvoiceEnterpriseService;

      // By declaring DataLayer to be InProcess, I am saying that I intend
      // the users of this component to call it directly in a DLL rather
      // than through service calls.  This is important to SOA.
      component DataLayer(InProcess)
      {
           layer DataLayer;
           DataLayer.namespace = "contoso.scribe.service.data";
      }

      component UserInterfaceModule
      {
           layer front, service, composition;
           front.namespace = "contoso.scribe.interface.front";
           service.namespace = "contoso.scribe.service";
           composition.namespace = "contoso.scribe.interface.proxy";
           front.maycall(service);
           front.maycall(composition);
           composition.maycall(service);
           composition.channels.add(CustomerEnterpriseService);
           composition.channels.add(InvoiceEnterpriseService);
      }

      component CustomerServiceModule
                       delivers CustomerEnterpriseService
      {
           layer facade, business;
           facade.namespace = "contoso.scribe.service.customer.facade";
           business.namespace = "contoso.scribe.service.customer.business";
           facade.maycall(business);
           business.maycall(DataLayer);
      }

      component InvoiceServiceModule
                       delivers InvoiceEnterpriseService
      {
           layer facade, business;
           facade.namespace = "contoso.scribe.service.invoice.facade";
           business.namespace = "contoso.scribe.service.invoice.business";
           facade.maycall(business);
           business.maycall(DataLayer);
      }
}

In this example, wildly oversimplified but illustrative, I have described two services and a user interface layer.  The u/i layer declares its composition layer and the fact that it is calling services (.channels.add).  The interactions can then be limited.  If a bit of code in the interface were to directly call code in 'contoso.scribe.service.data', then the system could warn the developer that the interface layer may only call the composition layer or the service layer.  The service layer can call nothing else.  The composition layer is allowed to interact using a service model.

Perhaps this ties in to Domain Specific Languages somewhat.  My problem is with the notion of considering this to be something separate from the C# or VB.Net languages themselves. 

We keep building things AROUND our programming languages, but I think we should not overlook the potential gains from building WITHIN them as well, so that our compilers can warn a developer when they exceed the architecture. 

It also means that the architect can do a code review on a single document, the declared architecture, and feel reasonably assured that the document he or she is looking at actually reflects the intent of the system, because the compiler is working for the architect, enforcing and reasserting the architectural intentions if the developers make a mistake.
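Until a compiler understands declarations like these, one can approximate the enforcement with a check that reflects over the compiled assemblies.  Here is a minimal sketch reusing the namespaces from the SCRIBE example; the assembly name is assumed, and since it inspects only field types, it is far weaker than true compiler support.

using System;
using System.Reflection;

static class ArchitectureCheck
{
    // Rule taken from the SCRIBE example: the front layer may not
    // reference the data layer directly.
    const string FrontLayer = "contoso.scribe.interface.front";
    const string DataLayer  = "contoso.scribe.service.data";

    static void Main()
    {
        Assembly app = Assembly.Load("contoso.scribe"); // assumed assembly name

        foreach (Type t in app.GetTypes())
        {
            if (t.Namespace != FrontLayer) continue;

            BindingFlags all = BindingFlags.Instance | BindingFlags.Static |
                               BindingFlags.Public | BindingFlags.NonPublic;
            foreach (FieldInfo f in t.GetFields(all))
            {
                if (f.FieldType.Namespace == DataLayer)
                    Console.WriteLine("Architecture violation: " + t.FullName +
                                      " holds a field of type " + f.FieldType.FullName);
            }
        }
    }
}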

Should our next generation of languages require us to declare the application's architecture?

By | November 14th, 2006 | Enterprise Architecture |

As languages ‘improve’ over time, we see a first principle emerge:

Move responsibility for many of the ‘good practices’ into the language itself, allowing the language (and therefore the people who use it) to make better and more consistent use of those practices.

With assembler, we realized that we needed a variable location to have a consistent data type, so in came variable declarations.  We also wanted specific control structures like WHILE and FUNCTION.  As we moved up into C and VB and other 3GLs, we started wanting the ability to encapsulate, and then to create objects.  OO languages emerged that took objects into account.

Now that application architecture is a requirement of good application design, why is it that languages don’t enforce basic structural patterns like ‘layers’ or standard call semantics that would allow for better tracing and instrumentation?  Why do we continue to have to ‘be careful’ when practicing these things?

I think it would be interesting if applications had to declare their architecture.  Classes would be required to pick a layer, and the layers would be declared to the system, so that if a developer accidentally broke his own rules, and had the U/I call the data access objects directly instead of calling the business objects, for example, then he or she could be warned.  (With constructs to allow folks to override these good practices, of course, just as today you can create a static class, which gives you, essentially, global variables in an OO language.)
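To make the idea concrete, here is a rough sketch in today's C# of what declaring a layer might look like.  LayerAttribute and the two classes are invented for illustration; nothing in the current compiler acts on the attribute, which is exactly the gap being described.

using System;

// Hypothetical marker: every class declares the layer it belongs to.
[AttributeUsage(AttributeTargets.Class)]
sealed class LayerAttribute : Attribute
{
    private readonly string _name;
    public LayerAttribute(string name) { _name = name; }
    public string Name { get { return _name; } }
}

[Layer("DataAccess")]
class OrderData { }

[Layer("UserInterface")]
class OrderPage
{
    // A layer-aware compiler could reject this field outright, because the
    // UserInterface layer was never declared to call DataAccess directly:
    // private OrderData _data;  // error: layer rule violated
}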

What if an application had to present its responsibilities when asked, in a structured and formal manner?  What if it had to tie to a known hierarchy of business capabilities, as owned by the organization, allowing for better maintenance and lifecycle control? 

In other words, what would happen if we built into a modern language the ability of the application to support, reflect, and defend the solution architecture?

Maybe, just maybe, it would be time to publish the next seminal paper: “Use of unconstrained objects considered harmful!”

Killing the Helper class, part two

By | September 7th, 2005 | Enterprise Architecture |

Earlier this week, I blogged on the evils of helper classes.  I got a few very thoughtful responses, and I wanted to try to address one of them.  It is far easier to do that with a new entry than by trying to respond in the messages.

If you didn’t read the original post, I evaluated the concept of the helper class from the standpoint of one set of good principles for Object Oriented development, as described by Robert Martin, a well-respected author and speaker.  While I don’t claim that his description of OO principles is “the only valid description as anointed by Prophet Bob,” I find it extremely useful and one of the more lucid descriptions of fundamental OO principles available on the web.  That’s my caveat.

The response I wanted to address is from William Sullivan, and reads as follows:

I can think of one case where helper classes are useful… Code re-use in a company. For instance, a company has a policy on how its programs will access and write to the registry. You wouldn’t want some products in the company saving its data in HKLM/Software/CompanyName/ProductName and some under …/Software/ProductName and some …/”Company Name”/”Product Name”. So you create a “helper class” that has static functions for accessing data in the registry. It could be designed to be instantiatable and extendable, but what would be the advantage? Another class could implement the companies’ policy on encryption, another for database access;[clip]

If you recall, my definition of a helper class is one in which all of the methods are static.  It is essentially a set of procedures that sit “outside” the object structure of the application or the framework.  My objections were that classes of this nature violate two of those principles: the Open Closed Principle and the Dependency Inversion Principle.

So, let’s look at what a company can do to create a set of routines like William describes. 

Let’s say that a company (Fabrikam) produces a dozen software systems.  One of them, for our example, is called “Enlighten”.  So the standard location for accessing data in the registry would be under HKLM/Software/Fabrikam/Enlighten.  Let’s look at two approaches: one using a helper class and one using an instantiated object:

class HSettings
{
   public static String GetKey(String ProductName, String Subkey)
   {   // ... interesting code ...
   }
}

class FSettings
{
     private string _ProductName;
     public FSettings (String ProductName)
     {   _ProductName = ProductName;
     }
     public String GetKey(String Subkey)
     {  // nearly identical code
     }
}

Calling the FSettings object looks like a little more effort:

public String MyMethod()
{   FSettings fs = new FSettings("Enlighten");
    string Settingvalue = fs.GetKey("Mysetting");
    //Interesting code using Settingvalue
}

as compared to:

public String MyMethod()
{   string Settingvalue = HSettings.GetKey("Enlighten", "Mysetting");
    //Interesting code using Settingvalue
}

The problem comes in unit testing.  How do you test the method MyMethod in such a way that you can find defects in the ‘Interesting Code’ without also relying on any frailties of the code buried in our settings object?  And how do you test this code without there being any setting at all in the registry?  Can we test on the build machine?  This is a common problem with unit testing: how to test the UNIT of functionality without also testing its underlying dependencies. 

When a function depends on another function, you cannot easily find out where a defect is causing you a problem.  A defect in the dependency can cause a defect in the relying code.  Even the most defensive programming won’t do much good if the dependent code returns garbage data.

If you use the Dependency Inversion Principle, you get code that is a lot less frail and is easily testable.  Let’s refactor our FSettings object to implement an interface.  (This is not something we can do for the HSettings class, because it is a helper class.)

 

interface ISettings
{
     String GetKey(String Subkey);
}
class FSettings : ISettings // and so on

Now, we refactor our calling code to use Dependency Injection:

public class MyStuff
{
    private ISettings _fs;

    public MyStuff()
    {
        _fs = new FSettings("Enlighten");
    }

    public void SetSettingsObject(ISettings ifs)
    {
        _fs = ifs;
    }

    public String MyMethod()
    {
        string Settingvalue = _fs.GetKey("Mysetting");
        //Interesting code using Settingvalue
    }
}

Take note: the code in MyMethod now looks almost identical to the code that we proposed for using the static methods. The difference is important, though. First, we separate the creation of the dependency from its use by moving the creation into the constructor. Second, we provide a mechanism to override the dependent object.

In practical terms, the code that calls MyMethod won’t care. It still has to create a MyStuff object and call the MyMethod method. No parameters changed. The interface is entirely consistent. However, if we want to unit test MyMethod, we now have a powerful tool: the mock object.

class MockSettings : ISettings
{
     public MockSettings (String ProductName)
     {   if (ProductName != "Enlighten")
            throw new ApplicationException("invalid product name");
     }
     public String GetKey(String Subkey)
     {  return "ValidConnectionString";
     }
}

So, our normal code remains the same, but when it comes time to TEST our MyMethod method, we write a test fixture (a method in a special class that does nothing but test the method). In the test fixture, we use the mock object:

class MyTestFixture
{
     public void Test1 ()
     {   MyStuff ms = new MyStuff();
         MockSettings mock = new MockSettings("Enlighten");
         ms.SetSettingsObject(mock);
            // now the code will use the mock, not the real one.
         ms.MyMethod();
         // call the method... any exceptions?
     }
}

What’s special about a test fixture? If you are using NUnit or Visual Studio’s unit testing framework, any exceptions thrown are caught by the framework and reported as test failures.

This powerful technique is only possible because I did not use a static helper class when I wanted to look up the settings in my registry.

Of course, not everyone writes code using unit testing. That doesn’t change the fact that it is good practice to separate the construction of an object from its use. (See Scott Bain’s article on Use vs. Creation in OO Design.)  It also doesn’t change the fact that this useful construction, simple to do if you started with a real object, requires far more code change if you had started with a helper class.  In fact, if you had started with a helper class, you might be tempted to avoid unit testing altogether. 

I don’t know about you, but I’ve come across far too much code that needed to be unit tested, but where adding the unit tests would involve restructuring the code.  Do yourself, and the next programmer behind you, a huge favor and simply use a real object from the start; you will earn “programmer’s karma” and may inherit some of that well-structured code as well.   If everyone would simply follow best practices (even when you can’t see why they’re useful in a particular case), we would be protected from our own folly most of the time.

So, coming back to William’s original question: “it could be designed to be instantiable and extendable, but what’s the advantage?”

The advantage is that when it comes time to prove that the calling code works, you have not prevented the use of good testing practices by forcing the developer to use a static helper class, whether he wanted to or not. 

Coding Dojo suggestion: the decorator kata

By | August 9th, 2005 | Enterprise Architecture |

I ran across a posting by Robert Martin on the Coding Dojo and I admit to being intrigued.  I’m running a low-priority thread, in the back of my mind, looking for good examples of kata to use in a coding dojo.

Here’s one that I ran across in a programming newsgroup.

You have an app that needs to be able to read a CSV file.  The first line of the file specifies the data types of the fields in the remaining lines.  The data type line is in the format

[fieldname:typename],[fieldname:typename],…,[fieldname:typename]

For example:
[name:string],[zipcode:int],[orderdate:date],[ordervalue:decimal]

You must use a decorator pattern.  The decorator must be constructed using a builder pattern that consumes the data type line.  The output is a file in XML format:

<file>
   <row><name>Joe Black</name><zipcode>90210</zipcode>… </row>
</file>

Any row that doesn’t match the specification will not produce an output line.  The output will pick up with the next line.  The file, when done, must be well-formed.

Of course, with a kata, the only thing produced at the start is the set of unit tests (and perhaps, in the interest of time, the frame of the classes from a model).  The rest is up to the participants.
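For flavor, here is one shape the decorator chain might take in C#.  This is just my sketch under my own assumptions (the class shapes and names are invented), not the kata's answer; the builder and the remaining field types are left to the participants.

// One link in the decorator chain: each formatter handles one field, then
// delegates the remaining fields to the formatter it wraps.  A builder would
// parse the "[name:type]" tokens from the header line and assemble the chain.
abstract class RowFormatter
{
    protected readonly RowFormatter Inner;
    protected RowFormatter(RowFormatter inner) { Inner = inner; }

    // Returns the XML for fields[index..], or null if any field is invalid
    // (a null result means the whole row is skipped).
    public abstract string Format(string[] fields, int index);
}

// End of the chain: succeeds only if every field has been consumed.
class TerminalFormatter : RowFormatter
{
    public TerminalFormatter() : base(null) { }
    public override string Format(string[] fields, int index)
    {
        return index == fields.Length ? "" : null;
    }
}

class IntFieldFormatter : RowFormatter
{
    private readonly string _name;
    public IntFieldFormatter(string name, RowFormatter inner)
        : base(inner) { _name = name; }

    public override string Format(string[] fields, int index)
    {
        int value;
        if (index >= fields.Length || !int.TryParse(fields[index], out value))
            return null;                      // bad field: reject the row
        string rest = Inner.Format(fields, index + 1);
        if (rest == null) return null;
        return "<" + _name + ">" + value + "</" + _name + ">" + rest;
    }
}
// string, date, and decimal formatters follow the same shape; the app wraps
// each non-null result in <row>...</row> and the whole output in <file>.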

Comments are welcome, of course.

Interesting problem in VS 2003 and how to fix it

By | June 21st, 2005 | Enterprise Architecture |

A team member and I found an interesting problem yesterday that I thought I’d share.  We found the problem by luck, and the fix was weird.  Perhaps there is an easier fix out there.

The problem manifested itself this way:

We needed to build our five different components into different MSI files (don’t ask).  Each of the five components refers to one or two “base class” assemblies that are included in each MSI.  Previously, we had a separate solution for each component that built the assembly and then built the MSI.  Most of the assemblies end up in the GAC.

We were running into problems where we would end up accidentally installing two copies of a base class component into the GAC.

Our solution was to create a single solution file that builds all of the assemblies and builds all of the MSI files.  This way, we could use project references and we’d only get one version of a dependent assembly in any MSI file.

The MSI for installing Assembly A is very similar to the MSI for installing Assembly B, because A and B are very similar.  They both inherit from the same base objects.  The problem was this: After creating the new solution file, and carefully checking every MSI, it appeared that we had it right: MSI-A would install Assembly A, while MSI-B would install Assembly B. 

We saved the project and checked it into version control.  Then we ran our build script.  MSI-A would have Assembly A, and MSI-B would have Assembly A as well.  Assembly B was not included in any MSI at all!

Opening the project back up showed that, sure enough, MSI-B was defined to use the project output from project A, even though we had specifically told it to use B.  Fixing the reference in Visual Studio didn’t help.  The moment we saved and reopened the solution, the MSI would once again refer to the wrong assembly.

The cause:

When project B was created, the programmer made a copy of all of the files of project A and put them into another directory.  He changed the names a little and ran with it.  It never occurred to him to open up the project file and change the Project GUID for the new project.

The project GUID is a unique ID for each project.  It is stored in the project file, but solution files and install projects use it as well.  Since we had two projects in the same solution that used the same GUID, VS would simply pick the first project with that GUID when building the MSIs.  As a result, we had two MSIs with Assembly A and none with Assembly B.

The fix was to open one of the two project files in Notepad and change the Project GUID.  Then we went through every solution file that referenced that project file and changed the referencing GUID value.  We had to be careful with the solution file that contained both projects, so that we left one project alone and updated the other.

This worked, though the effect was odd.  I thought I’d post the problem and our solution in case anyone else makes the mistake of creating an entire project by copying everything from another project and then putting both projects in the same solution file.

Happy coding!