Thursday, December 29, 2005

The Perils of JavaSchools - Joel on Software

Once more, I feel good that I didn't waste more time in college. Joel roasts the dumbing down of Computer Science via Java (Ones? Zeroes? I only had undecided quanta!)

Thursday, December 22, 2005

Error 28704. Unable to connect to the Analysis server when upgrading to Team Foundation Server Beta 3 Refresh on SQL Server Standard Edition.

If you are upgrading a Team Foundation Server to Beta 3 Refresh on a Data Tier that has SQL Server Standard Edition, then you will get this error:

Errors in the metadata manager. An error occurred when loading the Code Churn perspective, from the file, '\\?\C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data\TFSWarehouse.0.db\Team System.1.cub\Code Churn.1.persp.xml'.

This happens because nothing cleans up the old Analysis Server's TFSWarehouse database. You can't do the cleanup inside any of the GUI managers, because the Analysis Services service will not start up correctly (nice one!).

BUT THERE IS A WORK-AROUND.

When this error appears on-screen do this:
  1. Control Panel/Administrative Tools/Services.
  2. Stop the SQL Server Analysis Services service.
  3. Navigate in Explorer to the database directory where the Analysis Services databases are stored (usually C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data).
  4. Delete the file TFSWarehouse.0.db.xml and the directory TFSWarehouse.0.db.
  5. Start the SQL Server Analysis Services service.
  6. See if you can now connect to the Analysis Services in SQL Server Management Studio (Object Explorer/Connect/Analysis Services/your server name). Typically there will be no Databases listed at this point.
  7. Click the Retry button on the setup screen.
This essentially eliminates the database that Team Foundation Server is going to set up anyway... so it should work fine.
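For the script-minded, here's a rough C# equivalent of steps 1 through 5. The service name and data path are assumptions; verify both on your Data Tier (check Services.msc, since named instances use a different service name) before deleting anything:

using System;
using System.IO;
using System.ServiceProcess; // add a reference to System.ServiceProcess.dll

class CleanTfsWarehouse
{
    static void Main()
    {
        // Assumptions: default-instance service name and default data path.
        const string serviceName = "MSSQLServerOLAPService";
        const string dataDir = @"C:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Data";

        using (ServiceController olap = new ServiceController(serviceName))
        {
            olap.Stop();
            olap.WaitForStatus(ServiceControllerStatus.Stopped);

            File.Delete(Path.Combine(dataDir, "TFSWarehouse.0.db.xml"));
            Directory.Delete(Path.Combine(dataDir, "TFSWarehouse.0.db"), true);

            olap.Start();
            olap.WaitForStatus(ServiceControllerStatus.Running);
        }
    }
}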

Saturday, December 03, 2005

Google Sitemaps for ASP.NET 2.0

This is really cool! Google has long let you expose your site's "logical sitemap" structure so the crawler can better understand the site; details here. With ASP.Net 2.0, you can create an XML sitemap file that acts as a DataSource for the Menu and SiteMapPath controls. Bertrand Le Roy has done the cool work of mapping the ASP.Net sitemap file to an HTTP handler that will expose it in a format for Google to consume. Details here.
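If you haven't seen one, a Web.sitemap is just nested siteMapNode elements; here's a minimal sketch (the URLs and titles are invented):

<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
  <siteMapNode url="~/Default.aspx" title="Home" description="Home page">
    <siteMapNode url="~/Articles.aspx" title="Articles" description="All the articles" />
    <siteMapNode url="~/Contact.aspx" title="Contact" description="How to reach me" />
  </siteMapNode>
</siteMap>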

Tuesday, November 22, 2005

Failing Tests Meaningfully

Scott Bellware does a wonderful job of describing what it means to write unit tests that fail meaningfully. I recommend reading this at least as many times as it takes to shine the Scion.

Friday, November 11, 2005

Pattern for rethrowing exceptions caused during singleton initialization

If you are building providers or other similar constructs that have late-constructed singleton behavior, you need to be sure that anyone using the singleton will not proceed with an erroneously constructed object. But how do you let them know something went wrong originally? Simply rethrow the original exception! I found this pattern in the System.Web.Profile.ProviderBase class thanks to Reflector.

public class Default : Singleton
{
    public Default()
    {
        // anything else you really want to happen in the instance constructor...
    }
}

public class Singleton
{
    private static bool s_Initialized;
    private static object s_InitializeLock;
    private static Exception s_InitializeException;
    private static Singleton s_Instance;

    static Singleton()
    {
        s_InitializeLock = new object();
        s_InitializeException = null;
        s_Initialized = false;
        s_Instance = null;
    }

    protected Singleton()
    {
        if (!s_Initialized)
        {
            InitializeStatic();
        }

        // anything else you really want to happen in the base constructor...
    }

    public static Singleton Create()
    {
        InitializeStatic();

        if (s_Instance != null)
        {
            return s_Instance;
        }

        lock (s_InitializeLock)
        {
            if (s_Instance == null)
            {
                s_Instance = new Default();
            }
            
            return s_Instance;
        }
    }
    
    private static void InitializeStatic()
    {
        if (s_Initialized)
        {
            if (s_InitializeException != null)
            {
                throw s_InitializeException;
            }
        }
        else
        {
            lock (s_InitializeLock)
            {
                if (s_Initialized)
                {
                    if (s_InitializeException != null)
                    {
                        throw s_InitializeException;
                    }
                    
                    return;
                }
                try
                {
                    // do real singleton initialization
                }
                catch (Exception ex)
                {
                    if (s_InitializeException == null)
                    {
                        s_InitializeException = ex;
                    }
                }
                
                s_Initialized = true;
            }
            
            if (s_InitializeException != null)
            {
                throw s_InitializeException;
            }
        }
    }
    
    internal static Singleton Instance
    {
        get
        {
            InitializeStatic();
            return s_Instance;
        }
    }
}
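Here's a hypothetical caller's view of the pattern; the point is that the second access rethrows the original cached exception rather than handing out a half-constructed instance:

using System;

public static class SingletonDemo
{
    public static void Main()
    {
        try
        {
            Singleton first = Singleton.Instance; // runs InitializeStatic(), which may throw
        }
        catch (Exception ex)
        {
            Console.WriteLine("initialization failed: " + ex.Message);
        }

        // s_Initialized is already true, so this rethrows the *same* cached
        // exception -- no caller can proceed with a broken singleton.
        Singleton second = Singleton.Instance;
    }
}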

Tuesday, November 08, 2005

Profiled performance does not equal real-life performance.

Ian nails it* with this post to the DevelopMentor Dotnet-CX mailing list. Profiling does not give you a real view of the performance of a segment of code, nor does the performance of a segment of code reflect the performance of that code in real use. Don't optimize the performance of something unless you:

  • Have a reproducible test-jig for repeatable performance testing
  • Have an idea of the baseline performance
  • Know where the bottlenecks really are in the code
  • Can tell if the performance of the system gets better or worse with changes
  • Have an idea of what performance is good enough
* apologies to Don Box. Edit: Ian blogged it here.

How and when are generics realized as real code?

Questions often come up regarding .Net 2.0's generics: how much code is shared, when are the specialized versions created, and how much does it cost? While I want to repeat the refrain I often use—You don't really need to know this—it is useful information. The short version:

  • There is one copy of the generic IL
  • The JIT creates specializations as they are needed
  • All reference-type specializations share one JITted copy
  • Each value-type spawns a separate specialization
For more information, please refer to this excellent post from Ognjen Bajić.
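A tiny example to make those rules concrete (List<T> is just a convenient stand-in for any generic type):

using System;
using System.Collections.Generic;

public static class GenericsDemo
{
    public static void Main()
    {
        // Reference types: string and Uri share ONE JITted body for List<T>.
        List<string> strings = new List<string>();
        List<Uri> uris = new List<Uri>();

        // Value types: int and double each get their OWN specialized native
        // code, created lazily by the JIT when each instantiation is first used.
        List<int> ints = new List<int>();
        List<double> doubles = new List<double>();

        Console.WriteLine("{0} {1} {2} {3}", strings.Count, uris.Count, ints.Count, doubles.Count);
    }
}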

Monday, October 31, 2005

Speeding access to properties by caching accessors.

When writing generic frameworks, such as O/R Mappers or UI Mapping Frameworks, you inevitably run into the need to access the members (fields or properties) of another class to bind data to database call parameters or user interface controls. This process is essentially trivial in .Net and implementation examples are everywhere; all of them rely on some use of the Reflection classes. I've heard too many complaints about the speed of systems leveled against the use of Reflection, so I spent a little effort getting things zippy in the two frameworks I use on a daily basis. I'm a huge proponent of not doing premature optimization, but when coding framework-level classes, it is a good idea to practice good design principles and make the things that stand out during timing runs as quick as possible (while still being maintainable).

As a sidebar, the frameworks I am using for O/R Mapping and generic UI are Paul Wilson's excellent ORMapper and UIMapper. ORMapper is very mature and does most of what I need in an O/R Mapper. UIMapper is still a 1.00 release, and I've got tons of changes for Paul once he gets time to play with it again, but he's been very open to changes in the past. I can't stress enough how good a deal this software is: everything that Paul's got on the site, in good C# code, for $50!

Anywho, the short of this is that I've got a MemberAccessor cache class that makes sure the reflection is done once; all the MemberInfo and/or FieldInfo is cached so everything runs very quickly on each subsequent use. This stuff works in service classes, WinForms and ASP.Net, so don't worry about application type or dependencies on things like HttpContext.Cache. Still to do is adding Lightweight Code Generation to emit the calling stubs, which is a fascinating new feature of .Net 2.0 based on the DynamicMethod class. Much of this is based on the ideas given in Joel Pobar's blog entry and his article on MSDN. Here's the class; hit me up with any questions:

using System;
using System.Collections.Generic;
using System.Reflection;

namespace Phydeaux.Mapping.Utility
{
    public class MemberAccessor
    {
        internal RuntimeMethodHandle Get;
        internal RuntimeMethodHandle Set;
        internal RuntimeFieldHandle Field;

        internal bool HasGet
        {
            get { return (this.Get.Value != IntPtr.Zero); }
        }

        internal bool HasSet
        {
            get { return (this.Set.Value != IntPtr.Zero); }
        }

        internal bool HasField
        {
            get { return (this.Field.Value != IntPtr.Zero); }
        }

        internal bool Settable
        {
            get { return this.HasSet || this.HasField; }
        }

        internal bool Gettable
        {
            get { return this.HasGet || this.HasField; }
        }

        internal bool AnyDefined
        {
            get { return this.HasGet || this.HasSet || this.HasField; }
        }

        internal bool FullyDefined
        {
            get { return this.HasGet && this.HasSet && this.HasField; }
        }
    }

    public class MemberCacheKey : IEquatable<MemberCacheKey>
    {
        internal RuntimeTypeHandle TypeHandle;
        internal string Member;

        internal MemberCacheKey(Type type, string member)
        {
            this.TypeHandle = type.TypeHandle;
            this.Member = member;
        }

        public override bool Equals(object other)
        {
            // covers both null and same reference check...
            if (System.Object.ReferenceEquals(this, other))
                return true;

            return this.Equals(other as MemberCacheKey);
        }

        public override int GetHashCode()
        {
            return (TypeHandle.Value.GetHashCode() << 5) ^ Member.GetHashCode();
        }

        #region IEquatable Members
        public bool Equals(MemberCacheKey other)
        {
            // covers both null and same reference check...
            if (System.Object.ReferenceEquals(this, other))
                return true;

            if (other == null)
                return false;

            return TypeHandle.Equals(other.TypeHandle)
                && Member.Equals(other.Member);
        }
        #endregion

        public class Comparer : IEqualityComparer<MemberCacheKey>
        {
            #region IEqualityComparer Members
            public bool Equals(MemberCacheKey x, MemberCacheKey y)
            {
                // covers both null and same reference check...
                if (System.Object.ReferenceEquals(x, y))
                    return true;

                if (x == null)
                    return false;

                return x.Equals(y);
            }

            public int GetHashCode(MemberCacheKey obj)
            {
                if (obj == null)
                    return 0;

                return obj.GetHashCode();
            }
            #endregion
        }
    }

    public class AccessorCache : Dictionary<MemberCacheKey, MemberAccessor>
    {
        const MemberTypes WhatMembers = MemberTypes.Field | MemberTypes.Property;
        const BindingFlags WhatBindings = BindingFlags.SetProperty | BindingFlags.SetField 
                                            | BindingFlags.GetProperty | BindingFlags.GetField 
                                            | BindingFlags.Public | BindingFlags.NonPublic
                                            | BindingFlags.Instance | BindingFlags.DeclaredOnly;

        public AccessorCache()
            : base(new MemberCacheKey.Comparer())
        {
        }

        public MemberAccessor GetAccessor(Type entityType, string member)
        {
            MemberCacheKey key = new MemberCacheKey(entityType, member);
            MemberAccessor accessor;

            if (!this.TryGetValue(key, out accessor))
            {
                if (BuildAccessor(entityType, member, out accessor))
                {
                    this.Add(key, accessor);
                }
                else
                {
                    throw ArgumentValidation.Decorate(
                        new UIMapperException("cannot build accessor")
                        , MethodBase.GetCurrentMethod(), entityType, member);
                }
            }

            return accessor;
        }

        private bool BuildAccessor(Type entityType, string member, out MemberAccessor accessor)
        {
            accessor = new MemberAccessor();
            return BuildAccessorRecursive(entityType, member, accessor);
        }

        private bool BuildAccessorRecursive(Type entityType, string member, MemberAccessor accessor)
        {
            // TODO: build an LCG delegate like http://msdn.microsoft.com/msdnmag/issues/05/07/Reflection/default.aspx
            if (entityType == null || entityType == typeof(Object))
                return accessor.AnyDefined;

            MemberInfo[] members = entityType.GetMember(member, WhatMembers, WhatBindings);

            // look for a property
            foreach (MemberInfo someMember in members)
            {
                if (someMember.MemberType == MemberTypes.Property)
                {
                    PropertyInfo property = (PropertyInfo) someMember;

                    if (property.CanRead && ! accessor.HasGet)
                    {
                        accessor.Get = property.GetGetMethod(true).MethodHandle;
                    }

                    if (property.CanWrite && !accessor.HasSet)
                    {
                        accessor.Set = property.GetSetMethod(true).MethodHandle;
                    }
                }

                if (someMember.MemberType == MemberTypes.Field && !accessor.HasField)
                {
                    FieldInfo field = ((FieldInfo) someMember);
                    accessor.Field = field.FieldHandle;
                }
            }

            return accessor.FullyDefined
                || BuildAccessorRecursive(entityType.BaseType, member, accessor);
        }
    }        
}
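Here's a hypothetical usage sketch; the Customer class is invented, and MethodBase.GetMethodFromHandle is what turns the cached handle back into something invokable:

using System;
using System.Reflection;
using Phydeaux.Mapping.Utility;

public class Customer
{
    private string name = "Fred";
    public string Name
    {
        get { return this.name; }
        set { this.name = value; }
    }
}

public static class AccessorDemo
{
    public static void Main()
    {
        AccessorCache cache = new AccessorCache();

        // First call reflects and caches; subsequent calls are dictionary hits.
        MemberAccessor accessor = cache.GetAccessor(typeof(Customer), "Name");

        if (accessor.Gettable)
        {
            MethodBase getter = MethodBase.GetMethodFromHandle(accessor.Get);
            Console.WriteLine(getter.Invoke(new Customer(), null)); // prints "Fred"
        }
    }
}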

Wednesday, October 26, 2005

Update to the integration between Team Foundation Server Version Control and Beyond Compare

Looks like I was overly pessimistic about Microsoft. James Manning at Microsoft (a Team Foundation Version Control guy) dropped in with a few comments to help things along. Adding /title1=%6 /title2=%7 to the command-line arguments in Visual Studio helps the screen display. Thanks, James. Craig at Scooter Software has also addressed some of these issues: the just-released 2.4 version of the ImageViewer from Scooter Software understands the image file format without needing the extension adjustment (thanks, Craig). I would still love to see Scooter Software update the engine to ignore everything following the ";" so we don't need the overly aggressive patterns in the Rules setup in Beyond Compare, or have Microsoft use subdirectories or something instead of the odd file names, but I'm certainly happy for now. UPDATE: James Manning from Microsoft informs me that they've fixed this in the post-Beta 3 bits. Thanks, gang! UPDATE #2: The RC has this issue fixed, so you can change the filters back as needed. Also, James has a nice post on configuring for various tools here.

Tuesday, October 25, 2005

Microsoft Team Foundation Server and Beyond Compare

Simply put, I can't stand any other comparison tool than Beyond Compare. This is a product that definitely earns the name! I would pay more than Araxis Merge costs for Beyond Compare, but I don't have to, as it is also the most reasonably priced tool I've ever seen. I use Beyond Compare to do file differences and directory comparisons; heck, I use it to upload the latest version of XBox Media Center to my hacked XBox (I use the pimped edition). I'm also using Microsoft Team Foundation Server's source control. Here's how to set Beyond Compare up as the comparison tool:

  1. In Visual Studio, click menu Tools / Options...
  2. Expand Source Control in the treeview.
  3. Click Visual Studio Team Foundation Server in the treeview.
  4. Click the Configure User Tools... button.
  5. Click the Add... button.
  6. Enter .* in the Extension textbox.
  7. Choose Compare in the Operation combobox.
  8. Click the ... button next to the Command textbox.
  9. Browse to BC2.EXE in the file chooser and click the Open button.
  10. Enter %1 %2 /title1=%6 /title2=%7 into the Arguments textbox.
  11. OK on out.

Microsoft's version of these instructions is here. UPDATE: James Manning has comprehensive instructions for other tools here.

Now one issue remains, and hopefully either Microsoft or Scooter Software will do something about it. When using an external comparison tool, Visual Studio checks both files out to a temporary directory. It appends a tag to indicate the file version. The suffix looks something like ;C19, where the C indicates this is a changeset and the 19 is the changeset number. It's also possible to see ;Lxxx, which indicates a label xxx. For more information on the suffixes, see this at the Versionspecs section. UPDATE #2 and #3: Both Microsoft and Scooter have addressed this; the RC of VSTS now preserves the file extension. Using the suggestions below will improve the display in Beyond Compare until then.

This means the external diff tool command line is something like "BC2.EXE Foo.cs;C19 Foo.cs;C22". The funky changeset tag interferes with the file-extension sniffing in Beyond Compare. The quick "files are different" compare thinks these are "any other file", so no rules are run. Secondly, when the viewer for different files is run, it also thinks these are "any other file", so no language-specific or file-extension-specific plug-ins run. What is needed is to have both the comparison and the viewer in Beyond Compare strip off everything after the last ";" from each filename before determining which rules to run and which viewer to launch. The other option is for Microsoft to not create these oddly named files and instead create a subdirectory for each file that reflects the version/changeset/label, etc. Personally, while I think that the filenames are odd, I'm hoping for a fix from Scooter Software. Why?

  1. Having the suffix on the filenames clearly indicates which one is which in the view header.
  2. Scooter Software moves like an ice-cube on a hot griddle.
  3. Microsoft, especially this close to release, moves like a glacier.
Edit 2005 Oct 25th 11:44CDT: Changed the command line arguments above and posted this note. UPDATE: James Manning from Microsoft has informed me that they've fixed the file name issue in the post-beta 3 bits. Thanks, gang. UPDATE #3: Confirmed fix in the RC

Monday, October 17, 2005

Validation and exceptions.

I was reading a blog entry (that I can't seem to find) about exception strategies with regard to web service calls. I'll repeat the most important thing first: only throw exceptions under exceptional circumstances! Do NOT use exceptions for flow-control.

Okay, on to the real question: how to handle validation logic. What I usually do for business objects is have an ArrayList Validate(bool throwError) method that accepts a boolean flag to determine whether exceptions should be thrown (hold on there, Nelly! I'll explain) and returns a list of business rule violations. Client or service code calls Validate, passing false for the throwError parameter. In this case, the Validate method returns an ArrayList of enumerated values of business rules. Since they are enumerated values, you can easily switch on the enumeration returned and act accordingly. Additionally, the value can very easily be localized by using a resource lookup to derive the error message given to the user while still logging the exact error.

Now, if someone calls through to the Save method even in the presence of violations (or because they didn't call Validate), then I want to throw an exception, so I internally call Validate inside the Save method and pass true for the throwError parameter. When that error is thrown, the Exception.Data is first filled with all the enumerated business rule violations.

This strategy ensures that validation messages are available when desired via the Validate method, and that attempts to save bad data are stopped via an exception no matter what business object is being saved and no matter what the call depth. Lastly, the reuse of the Validate method and the use of an enumeration to encode the business rules allows code to handle problems and allows error messages to be easily localized. If anyone is interested in some code examples, tag up this post. Here's a quick sample:

enum BusinessRules
{
   UserNameRequired
   , UserNameMinimumLength
}

class BusinessObject
{
   public string UserName;

   public ArrayList Validate(bool throwError)
   {
      ArrayList errors = new ArrayList();

      if (this.UserName == null || this.UserName.Length == 0)
      {
         errors.Add(BusinessRules.UserNameRequired);
      }
      else if (this.UserName.Length < MinimumUserNameLength)
      {
         errors.Add(BusinessRules.UserNameMinimumLength);
      }

      if (throwError && errors.Count > 0)
      {
         // stash the enumerated violations in Exception.Data so handlers up
         // the stack can still inspect exactly which rules failed (the
         // exception type here is a stand-in; the original was lost to
         // HTML mangling in the old post)
         Exception ex = new InvalidOperationException("Business rule violations");
         ex.Data.Add("BusinessRules", errors);
         throw ex;
      }

      return errors;
   }

   // the original minimum length was also lost to HTML mangling; 8 is a stand-in
   private const int MinimumUserNameLength = 8;
}

// Client code, e.g. in a page or form that has view, GetResource and
// UpdateStatus members. Ask for the violations rather than forcing a throw:
void ValidateAndReport(BusinessObject businessObject)
{
   ArrayList errors = businessObject.Validate(false);

   if (errors.Count > 0)
   {
      StringBuilder statusMessage = new StringBuilder();

      foreach (BusinessRules error in errors)
      {
         if (error == BusinessRules.UserNameMinimumLength)
         {
            // add stuff to make it long enough as an example of handling the error
            businessObject.UserName += " terse little bugger";
         }
         else
         {
            statusMessage.AppendFormat(GetResource(error), businessObject.UserName);
         }
      }

      view.SaveButton.Enabled = false;
      UpdateStatus(statusMessage.ToString());
   }
   else
   {
      view.SaveButton.Enabled = true;
      UpdateStatus("Looking good!");
   }
}

Tuesday, October 11, 2005

Nullable<>, SQL NULL, and reference null (what do you mean?)

An interesting post on Wesner Moise's blog shows that C# 3.0 and VB.Net 9.0 don't take the same view of Nullable<>. C# takes the idea that Nullable<> is to bridge between value and reference types, while VB takes the idea that Nullable<> is to mirror SQL's NULL. This is a huge difference and it will be a source of REAL problems, mark my words. Null Comparison

Wednesday, October 05, 2005

Movie Meme - Ajax style

Marc's Musings: Movie Meme. I've added my hat to the rack over at twofifty.org, where a little Ajax gives you a very responsive (if a bit unexplained) UI that lets you check off which movies of the top 250 you've seen. The pop-up search of IMDB is sweet. Did I mention there's an API? My selections

Friday, September 30, 2005

Samples are not the -right way- to do things.

Today, once again, I've seen the corruption of code caused by reading code examples and mindlessly parroting the style and not the substance. Every sample you see these days has code that uses String.Format for simple concatenation. That's not what it's for, folks! String.Format is for formatting (not just conversion to String) and parameter rearrangement. Code like this is wasteful and silly:

private string CreateSelect()
{
   return String.Format("SELECT TOP {0} {1} FROM {2}", this.rowLimit, this.selectFields, this.fromTables);
}
This will needlessly scan the format string looking for the replacement parameters {x} and substitute the parameters positionally. This code should be:
private string CreateSelect()
{
   return "SELECT TOP " + this.rowLimit + " " + this.selectFields + " FROM " + this.fromTables;
}
Which C# will actually turn into this behind the scenes:
private string CreateSelect()
{
   String[] temp = new String[6] { "SELECT TOP ", this.rowLimit.ToString(), " ", this.selectFields, " FROM ", this.fromTables };
   return String.Concat(temp);
}
Which is tons faster. This sort of thing is important to get right in the lowest-level of your code, in things like a DAL or UI framework library.
Not that I'm whining...

Tuesday, September 27, 2005

Statistics aren't easy, good design is just as hard. Want to see both?

I've been a fan of Jeffrey Sax's work in the mathematical area for .Net for quite a while (since he and I helped out a developer on CodeProject with a Fractions library for .Net). Today his company, Extreme Optimization, released the Statistics Library for .Net. This is a major boon for those doing statistical work in .Net.
The Extreme Optimization Statistics Library for .NET contains classes for probability distributions, random number generation, hypothesis tests, and statistical models, including linear regression and analysis of variance. The library is easy to use without compromising performance or reliability.

  • Supports 25 probability distributions
  • Uses a General Linear Model (GLM) approach for unified treatment of regression and ANOVA models
  • Integrates inference and validation tests with statistical models
  • Provides 4 robust pseudo-random number generators
  • Is fully compliant with Microsoft's Design Guidelines for Class Library Developers.

That last point is something near and dear to my heart. All libraries should be written to comply with those guidelines. It could be argued that some of the guidelines are not perfect, but adhering to them leads to a library that feels like the FCL, making it easy to learn.
Brad Abrams, the lead program manager on the Microsoft .NET Framework team and co-author of Framework Design Guidelines : Conventions, Idioms, and Patterns for Reusable .NET Libraries has this to say about Extreme Optimization and its products:
I have made it my mission to institutionalize the value of good API design. I strongly believe that this is key to making developers more productive and happy on our platform. It is clear that Extreme Optimization values good API design in their work, and takes to heart developer productivity and synergy with the .NET framework.
In short, I strongly recommend this product. I also strongly recommend Brad Abrams' new book, Framework Design Guidelines.

Tuesday, September 20, 2005

Dean Edwards, you are my hero!

While looking for yet another CSS hack-fix for IE, I stumbled upon a really sweet site. Dean Edwards has built two really nice tools (and more). First there is his IE7 compatibility fixes library, which basically makes all recent versions of IE (before 7) not have the common CSS layout bugs. This is cool. The other package is cssQuery, which adds the ability to select DOM elements using CSS selectors in both CSS1 and CSS2 syntax. Who needs getElementsByTagName now?

Friday, September 16, 2005

Exception handling in .Net (some general guidelines)

Based on a generic query on the Advanced .Net mailing list I dumped this:

Regarding Exceptions themselves

  1. Do NOT catch exceptions you don't know how to handle - this means either correcting the problem or doing some unwind work. If you know what to do when an Exception happens, catch. If not, DON'T.

  2. Do NOT catch exceptions merely to translate them to another type of exception (even if you do set the InnerException property). If you don't have something to add, don't catch. This is wrong:
    catch (Exception ex)
    {
       // do something interesting (see 1)
       throw new MyCoolException("nothing of value here", ex);
    }
  3. Do NOT catch an exception then throw it again, rather do a "rethrow". This is wrong:
    catch (Exception ex)
    {
       // do something interesting (see 1)
       throw ex; // StackTrace now loses everything below this level
    }
    Rather do this:
    catch (Exception ex)
    {
       // do something interesting (see 1)
       throw; // notice there's no ex! The original StackTrace is left intact.
    }
  4. Do NOT derive all of your exceptions from System.ApplicationException or System.SystemException (or some other base level exception of your creation), as you are not adding value by doing so. If you want to standardize your exception creation, write builder methods to create specific exceptions in regular "forms".

  5. If you do have contextual information you can add to an exception, DO SO. Use the Exception.Data collection; that's what it is there for! You can add the values of interesting parameters to your methods. This is especially useful in the context of a database layer or other low-level library. You can squirrel away the SQL and all the parameters. Only do this if you think that these details will be useful for post-mortem diagnosis. If the information you log is transitory, it will NOT help tracking down errors from logs. This is (mostly) good:
    catch (Exception ex)
    {
       ex.Data.Add("SQL", command.Text);
       ex.Data.Add("key", myKey); // or enumerate command Parameters collection
       throw; // (see #3)
    }
  6. If you add things to the Exception.Data collection, make sure that you don't conflict with what is already there, as this is a Hashtable. I use the catching class's name to scope the values. This is much better than #5:
    catch (Exception ex)
    {
       ex.Data.Add(String.Format("{0}.{1}.SQL", System.Reflection.MethodBase.GetCurrentMethod().DeclaringType.FullName, System.Reflection.MethodBase.GetCurrentMethod().Name), command.Text);
       throw; // (see #3)
    }
  7. If you are catching exceptions to do some undo-work (like rolling back transactions, closing handles, etc.), strongly consider creating a very small wrapper for that resource and implementing IDisposable on it. Then you can have a using clause to hide the try { } finally { } block (see the sketch after this list).

  8. Never catch System.Exception, System.SystemException or System.ApplicationException without doing a rethrow, except at the top-level logging handler. This also applies to the catching of non-CLS exceptions (don't throw them in the first place). In CLR 2.0, those will be wrapped in RuntimeWrappedException anyway.
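Here's a minimal sketch of the wrapper idea from #7 (the names are mine, not from any library):

using System;
using System.Data;

// Wrap the undo-work so "using" replaces the try/finally block entirely.
public sealed class TransactionGuard : IDisposable
{
    private readonly IDbTransaction transaction;
    private bool committed;

    public TransactionGuard(IDbTransaction transaction)
    {
        this.transaction = transaction;
    }

    public void Commit()
    {
        transaction.Commit();
        committed = true;
    }

    public void Dispose()
    {
        if (!committed)
        {
            transaction.Rollback(); // the undo work runs on ANY exit path
        }
    }
}

// Usage:
//   using (TransactionGuard guard = new TransactionGuard(connection.BeginTransaction()))
//   {
//       // ... do work ...
//       guard.Commit(); // skipping this (say, via an exception) rolls back
//   }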

On logging exceptions

  1. Always put a top-level exception catch that logs the exception, and don't make (or allow) anyone else to do it.

  2. If you are going to handle the exception (see #1 above), then you can log the one you caught with an "informational" level, then handle it and DO NOT throw/rethrow unless the handling is only undo/release (see #3, #7 above).

  3. Always log at the highest level possible; anything not handled as in #2 above is an "error" level.

  4. When logging a message, you should assign a correlation ID to the log entry and make sure that any user-displayed message contains that correlation ID, so users can report an error and you can look it up in the logs (see the sketch after this list).

  5. Don't log ThreadAbortExceptions unless you really care! In ASP.Net applications, those happen on Response.Redirect("xxx", true) calls. You can do:
    if (!(exception is ThreadAbortException))
    {
       // log it.
    }
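And a minimal sketch of the correlation ID idea from #4 (the console sink is a stand-in for your real log):

using System;

public static class ErrorReporter
{
    // Log the exception with a correlation ID and hand the ID back so the
    // user-facing message can display it for support lookups.
    public static string LogError(Exception ex)
    {
        string correlationId = Guid.NewGuid().ToString("N");
        Console.Error.WriteLine("[{0}] {1:u} {2}", correlationId, DateTime.UtcNow, ex);
        return correlationId;
    }
}

// At the top-level handler:
//   string id = ErrorReporter.LogError(ex);
//   ShowUserMessage("Something went wrong. Reference code: " + id);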

Where to log exceptions

  1. Event logs are easily monitored by WMI et al.

  2. Event logs can "fill up" and truncating them or setting the attributes so that they automatically truncate requires administrative rights, so you could get errors that you cannot log.

  3. You should really create your OWN event log and not use the generic Application event log, so you can set it up correctly from the get-go.

  4. Text files need to be placed somewhere the user has rights (don't you DARE put it on C:\)!

  5. If you use text files, disks can fill up so you can get errors that you cannot log.

  6. To monitor a text file, you need access to the file (net share, FTP, etc..) and you have to parse them to filter.

  7. If you care to do backups of the logs, you have that option and responsibility

  8. XML files == text files for the purposes of this discussion. They might be easier to parse for monitoring, but all other comments are the same.

  9. For database logging, the rules are similar to XML files, except that you can leverage the backup and space management policies in place for the database itself.

  10. Databases can be easily monitored remotely by doing simple SELECTs which makes the parsing and processing easier.

  11. Databases can fail... what do you do if the logging database is down? You'll need a backup plan to log THAT error (typically to the event log), and possibly want to redirect the original log entry elsewhere (either file or event log).

  12. If you want centralized reporting of log messages, you should also consider using a store-and-forward repository like SMTP or MSMQ to send the messages somewhere else. Of course, you need to be able to log transport errors somewhere.

  13. If you do store-and-forward, make sure you build in heartbeat log entries so you can tell if messages are getting through (from the recipient's view).

Final thoughts

  1. There is no one-best solution... your requirements and restrictions govern too much to make blanket statements. If you can't require administrative rights, event logs are bad. If you can't do file shares, text files are bad, etc.
  2. Defense-in-depth is important. If you want to catch all errors, you need to make an attempt to log failures of your error logging somewhere.

  3. It doesn't matter where you log messages if you are not monitoring the log.

  4. If it is possible to keep the AppDomain alive, then go ahead. For ASP.Net applications, that's usually done by forwarding the user to an error page.
[Edit: 16 Sept 2005, based on comments from Peter Ritchie and Paul Mehner: added links to relevant sources and clarifications to System.ApplicationException and System.SystemException]

Friday, September 02, 2005

Mini-Microsoft: Microsoft Financials: "And then?"

I think I figured out where the Microsoft Vista name came from. In a bit of delicious irony, I think it came from Mini-Microsoft. Check the quote:

No real break-out product announcements on the horizon, other than the ever-so-cool immediate money losing XBox 360. And just how many times can you talk about releasing Yukon and Whidbey? Maybe FY07 we'll have a new vista to stand from and show how the next Windows and next Office will save our stock.

Tuesday, August 30, 2005

I know what I'm doing this weekend...

My advice, stay out of Soulard for a couple weeks. Fun, fun, fun

Movie Meme

From Brad
Italicize the ones you've seen and Bold the ones you actually liked. I saw TOO many.

1. Titanic (1997) - $600,779,824 2. Star Wars (1977) - $460,935,665 3. E.T. the Extra-Terrestrial (1982) - $434,949,459 4. Star Wars: Episode I - The Phantom Menace (1999) - $431,065,444 5. Spider-Man (2002) - $403,706,375 6. Lord of the Rings: The Return of the King, The (2003) - $377,019,252 7. Passion of the Christ, The (2004) - $370,025,697 8. Jurassic Park (1993) - $356,784,000 9. Shrek 2 (2004) - $356,211,000 10. Lord of the Rings: The Two Towers (2002) - $340,478,898 11. Finding Nemo (2003) - $339,714,367 12. Forrest Gump (1994) - $329,691,196 13. Lion King, The (1994) - $328,423,001 14. Harry Potter and the Sorcerer's Stone (2001) - $317,557,891 15. Lord of the Rings: The Fellowship of the Ring, The (2001) - $313,837,577 16. Star Wars: Episode II - Attack of the Clones (2002) - $310,675,583 17. Star Wars: Episode VI - Return of the Jedi (1983) - $309,125,409 18. Independence Day (1996) - $306,124,059 19. Pirates of the Caribbean (2003) - $305,411,224 20. Sixth Sense, The (1999) - $293,501,675 21. Star Wars: Episode V - The Empire Strikes Back (1980) - $290,158,751 22. Home Alone (1990) - $285,761,243 23. Matrix Reloaded, The (2003) - $281,492,479 24. Shrek (2001) - $267,652,016 25. Harry Potter and the Chamber of Secrets (2002) - $261,970,615 26. How the Grinch Stole Christmas (2000) - $260,031,035 27. Jaws (1975) - $260,000,000 28. Monsters, Inc. (2001) - $255,870,172 29. Batman (1989) - $251,188,924 30. Men in Black (1997) - $250,147,615 31. Toy Story 2 (1999) - $245,823,397 32. Bruce Almighty (2003) - $242,589,580 33. Raiders of the Lost Ark (1981) - $242,374,454 34. Twister (1996) - $241,700,000 35. My Big Fat Greek Wedding (2002) - $241,437,427 36. Ghost Busters (1984) - $238,600,000 37. Beverly Hills Cop (1984) - $234,760,500 38. Cast Away (2000) - $233,630,478 39. Lost World: Jurassic Park, The (1997) - $229,074,524 40. Signs (2002) - $227,965,690 41. Rush Hour 2 (2001) - $226,138,454 42. Mrs. Doubtfire (1993) - $219,200,000 43. Ghost (1990) - $217,631,306 44. Aladdin (1992) - $217,350,219 45. Saving Private Ryan (1998) - $216,119,491 46. Mission: Impossible II (2000) - $215,397,30 47. X2 (2003) - $214,948,780 48. Austin Powers in Goldmember (2002) - $213,079,163 49. Back to the Future (1985) - $210,609,762 50. Austin Powers: The Spy Who Shagged Me (1999) - $205,399,422 51. Terminator 2: Judgment Day (1991) - $204,843,350 52. Exorcist, The (1973) - $204,565,000 53. Mummy Returns, The (2001) - $202,007,640 54. Armageddon (1998) - $201,573,391 55. Gone with the Wind (1939) - $198,655,278 56. Pearl Harbor (2001) - $198,539,855 57. Indiana Jones and the Last Crusade (1989) - $197,171,806 58. Toy Story (1995) - $191,800,000 59. Men in Black II (2002) - $190,418,803 60. Gladiator (2000) - $187,670,866 61. Snow White and the Seven Dwarfs (1937) - $184,925,485 62. Dances with Wolves (1990) - $184,208,848 63. Batman Forever (1995) - $184,031,112 64. Fugitive, The (1993) - $183,875,760 65. Ocean's Eleven (2001) - $183,405,771 66. What Women Want (2000) - $182,805,123 67. Perfect Storm, The (2000) - $182,618,434 68. Liar Liar (1997) - $181,395,380 69. Grease (1978) - $181,360,000 70. Jurassic Park III (2001) - $181,166,115 71. Mission: Impossible (1996) - $180,965,237 72. Planet of the Apes (2001) - $180,011,740 73. Indiana Jones and the Temple of Doom (1984) - $179,870,271 74. Pretty Woman (1990) - $178,406,268 75. Tootsie (1982) - $177,200,000 76. Top Gun (1986) - $176,781,728 77. There's Something About Mary (1998) - $176,483,808 78. Ice Age (2002) - $176,387,405 79. Crocodile Dundee (1986) - $174,635,000 80. Home Alone 2: Lost in New York (1992) - $173,585,516 81. Elf (2003) - $173,381,405 82. Air Force One (1997) - $172,888,056 83. Rain Man (1988) - $172,825,435 84. Apollo 13 (1995) - $172,071,312 85. Matrix, The (1999) - $171,383,253 86. Beauty and the Beast (1991) - $171,301,428 87. Tarzan (1999) - $171,085,177 88. Beautiful Mind, A (2001) - $170,708,996 89. Chicago (2002) - $170,684,505 90. Three Men and a Baby (1987) - $167,780,960 91. Meet the Parents (2000) - $166,225,040 92. Robin Hood: Prince of Thieves (1991) - $165,500,000 93. Hannibal (2001) - $165,091,464 94. Catch Me If You Can (2002) - $164,435,221 95. Big Daddy (1999) - $163,479,795 96. Sound of Music, The (1965) - $163,214,286 97. Batman Returns (1992) - $162,831,698 98. Bug's Life, A (1998) - $162,792,677 99. Harry Potter and the Prisoner of Azkaban (2004) - $161,963,000 100. Waterboy, The (1998) - $161,487,252

Friday, August 26, 2005

I'm so bummed right now...

Looks like Rio is dead. Now I don't know what to do... my current Karma needs a minor power-switch repair (which I'm doing this weekend), and the Circuit City extended warranty expires in March. I was hoping to see the new Karma replacement hit the stores before then, but since D&M sold off all the IP and engineers to Sigmatel, that wasn't likely. Now with Rio shuttering, it's certainly not going to happen. So, sometime before March, I need to find a good HDD DAP that Circuit City carries (or sell the gift card). I'm on the lookout for a good 30GB+ music player (video and stuff is not a feature) with USB host mode or an SD slot (to get stuff off my camera), ID3-tag-based navigation, and gapless playback. Any ideas?

Wednesday, August 24, 2005

A much better way to handle CSS hacks

When you're doing CSS in the real world, you have to handle the CSS bugs in various browsers. But don't embed the work-arounds in your real style-sheets and clutter everything up. Rather, have a single CSS file that holds all the work-arounds, and decorate your <html> tag with multiple classes that pull in the work-around rules. Next, use JavaScript to inject those classes automatically at page load! When bugs become patterns - A look at CSS Hacks

Tuesday, August 23, 2005

StringBuilder.Length is not read-only!

When you wander the blogs, sometimes the comments are more interesting or useful than the posting. Today's case-in-point is James Curran's observation that StringBuilder.Length is not a read-only value... how many times have you built a comma- (or other delimiter-) separated string by optionally prepending the delimiter in the loop, or trimming off the leading (or trailing) delimiter after the loop? James says to just truncate the final delimiter by adjusting Length! góða nótt.
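Here's the trick in miniature:

using System;
using System.Text;

public static class DelimiterDemo
{
    public static void Main()
    {
        string[] items = { "alpha", "beta", "gamma" };
        StringBuilder list = new StringBuilder();

        foreach (string item in items)
        {
            list.Append(item).Append(", ");
        }

        if (list.Length > 0)
        {
            list.Length -= 2; // Length is writable: chop the trailing ", "
        }

        Console.WriteLine(list); // alpha, beta, gamma
    }
}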

Blasts from the past... the old is new again.

Catching up on a newly subscribed blog from Derrick Coetzee, I came across a great post about Bloom Filters. I immediately recognized that use from the Borland Turbo Lightning product that I used in the mid-to-late 80s. It was an awesome TSR that loaded and then spell-checked whatever text preceded the cursor at every space (or other word separator). A simple "beep" told you something was suspicious, and a hot-key would pop up the suggestions. It worked by scraping the screen RAM (in text mode, of course) and thus worked in EVERYTHING. I wonder if such a tool is available for Windows? Of course, I had to let Derrick know, and that meant looking up the information, and that's when the fun really started. Dig this quote from the April 1986 article from Jerry Pournelle:

In the word-processing category, there's a three-way tie and an honorable mention. Tied for best of 1985 are Symantec's Q&A, Borland's Turbo Lightning, and Living Videotext's Ready! idea processor. It's impossible to choose among these; they're all useful. Two are memory-resident. I suppose that one day the trend to memory-resident software will be halted by a really excellent multitasking operating system. Maybe this year?
Ah, the good old days...
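For the curious, here's a toy C# sketch of the Bloom filter trick behind that dictionary. The hash mixing is deliberately crude and mine, not Borland's or Derrick's:

using System;
using System.Collections;

// k hash functions set k bits per word; lookups may give false positives
// (a rare wrong "it's spelled fine") but never false negatives.
public class BloomFilter
{
    private readonly BitArray bits;
    private readonly int hashCount;

    public BloomFilter(int capacityBits, int hashCount)
    {
        this.bits = new BitArray(capacityBits);
        this.hashCount = hashCount;
    }

    public void Add(string word)
    {
        foreach (int index in Indexes(word))
            bits[index] = true;
    }

    public bool MightContain(string word)
    {
        foreach (int index in Indexes(word))
        {
            if (!bits[index])
                return false; // definitely NOT in the dictionary -- beep!
        }
        return true; // probably in the dictionary
    }

    private int[] Indexes(string word)
    {
        int[] indexes = new int[hashCount];
        int h1 = word.GetHashCode();
        int h2 = unchecked(h1 * 31 + word.Length); // crude second hash
        for (int i = 0; i < hashCount; i++)
        {
            uint combined = unchecked((uint)(h1 + i * h2));
            indexes[i] = (int)(combined % (uint)bits.Length);
        }
        return indexes;
    }
}

public static class BloomDemo
{
    public static void Main()
    {
        BloomFilter dictionary = new BloomFilter(1 << 20, 4);
        dictionary.Add("hello");
        Console.WriteLine(dictionary.MightContain("hello")); // True
        Console.WriteLine(dictionary.MightContain("helo"));  // almost certainly False
    }
}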

Monday, August 15, 2005

RE: Transparency, Video, and Windows Vista

In Windows 2000, Microsoft introduced a feature called Layered Windows. This introduced desktop composition features that had not been available in previous versions of windows, the most interesting of which was arguably the support for per-pixel transparency. With layered windows, any pixel in a window could be given its own transparency level. This is used for things like the transparent drop shadows you see on some windows, and Outlook uses it to fade new email notifications up and down.
[Via IanG on Tap] So, why do I WANT to do this, again? In what possible way is a fuzzy semi-visible version of the underlying video stream useful to me?

Friday, August 12, 2005

Maintaining thread safety is hard

Even Microsoft's best have issues. First they mess it up in .Net 1.1, then they rewrite it wrong in .Net 2.0. I wonder if they need another code reviewer. The 2.0 version has three bugs now! First, it will reset the _inTrim flag on any exception, even if it didn't set the flag. Second, it will reset the _inTrim flag right before the second NeedsTrim() check, even if someone else has already set it (and presumably is using that flag). Lastly, it will now silently eat exceptions when trimming. How many times have you seen a catch when a finally was the right thing? Go vote on Ladybug to get this fixed.

Thursday, August 11, 2005

Tell me where you are...

If anyone is really reading this, how about showing me where you are:

Monday, August 01, 2005

All things UI point to Ajax and Rico...

In the past couple of months, I've been learning more and more about the business of writing very customizable user interfaces for browser-based applications. To me the best system architecture is a strong model-view-controller architecture.

  • The model is your business logic
  • The controller is your user-interface navigation logic (screen to screen)
  • The view is the user-interface presentation (including intrascreen navigation logic)
I'm well versed in the way you assemble and partition business logic into reasonable layers (utilities, data access, business objects, business services). These days, I'm expressing this using the .Net runtime and my common classes, nHibernate, simple non-mutable objects for the business object entities, and stateless service objects for business rules. I've developed these techniques in 20+ years of PL/I, C, C++, VB6 COM and now C# programming.

What's been new to me recently is learning about advanced user-interface development. My old days were in console-mode (text) applications and then Windows™ forms-based stuff. I've grokked the separation of the controller from the presentation and benefited from The Humble Dialog Box and the ease of unit-testing it enables. HTML web interfaces, however, were really a dark corner of my mind until the last couple of years. I knew HTML markup, but doing anything interactive was new to me. Thankfully, I didn't have to learn the "old way". I got to play with ASP.Net from the get-go, and it really did save me a huge learning curve. I learned about post-backs without having to unlearn posting to a different page when that was the strategy of the day.

But nothing in the web grabbed my attention until I started playing with applications that put some of the UI-specific controller logic on the pages. Those applications (gmail, Google Maps, Outlook Web Access, and earlier things like the old Sears Photos site) let me feel immediately in control of the application, without all the distaste of the typical usability-be-damned Flash application. That methodology has a moniker now... it's AJAX, and I like it. I'm converted. How do I build a real application without inventing all that plumbing? As usual, it is time to hit the blogs and start playing. There's a gold mine of great parts that I want to mention.

First, as I'm a server-guy at heart, I've got to find the .Net support for the controller. That's easy... Microsoft has announced the up-and-coming Atlas, but why wait, when Ajax.Net is already here and does almost anything you could possibly want; once again the goodness of open-source makes me happy. Make sure you check out Michael Schwarz's blog.

OpenRico is the best collection of client-side user-interface tools for AJAX that I've found. It's an amazing piece of engineering, in that it does everything that is hard to get right, it is free, and it is open source. Go play for a minute, then come back and tell me why I should write a WinForms application for the average data-entry application. You'll find a drag-and-drop handler with constrained targets, cinematic stuff for sizing, positioning, corner rounding, fading and some animation effects (some are even tolerable). The really sweet bit is the client-side AJAX for a lazy-loaded, sortable, scrollable data-grid! OpenRico is built upon the very clean object-oriented JavaScript framework Prototype. This set of methods and coding style solves the "how do I do this cleanly" problem on the client side. Don't structure your code without it.

Now the next problem I've noticed is that the pages are getting to be real tag-soup. This is bad; I can't read the content of the page because it is lost in all of the JavaScript eventing and identifiers and style-sheets. Come on! There are only a few "standard" types of display and entry fields on the average data-entry screen. Can't we find some way of centralizing the behavior of things based on the class of display/input element? Wait, this is sounding familiar... if only there was a way to associate those behaviors with page elements through a simple decoration... sort of like CSS styles... hmmm... YES! That's it exactly. Welcome to the future: associate your behaviors with elements via CSS selectors using this simple framework. Another interesting bit is the Scriptaculous scripts for autocomplete, drag-and-drop of individual elements, and tons of animations (which I hate).

Of course, dealing with all of these cool client-side toys means that we need a much better testing experience. I've been looking into the latest coolness, a FIT-driven testing tool called Selenium. The coolest part is that the test scripts are simply HTML tables of actions and values; anyone can read (and write) them.

Some other AJAX sites: Ajaxian has some interesting insights into JavaScript's implications for screen readers and automated testing. Completely off-topic: Pseudo HTML element that combines a radio button group with a select box for quick picks of common options. Internet Explorer vs. Mozilla from a JavaScript coder's perspective. Things to watch out for when crossing browsers. [Edit: 9:37 AM 4-Aug-2005 - thanks John] The JavaScript language specification is an interesting read. The errata is there to prove that everyone makes mistakes.

Wednesday, July 20, 2005

Thursday, July 14, 2005

Convert VB.Net to C# with C-Sharpener For VB

So you have a ton of VB.Net applications and you want to move to C#; what do you do? Convert VB.Net to C# with C-Sharpener For VB.

Tuesday, July 12, 2005

The Game is Afoot

Eric Sink (a Software Craftsman, thank you very much) has an interesting comparison between various sports/games and software development. The Game is Afoot

Friday, July 08, 2005

DateTime is NOT UTC.

It is funny the things you learn from searches. Today I learned that System.DateTime is not based on UTC. In fact, it is based on TAI, which is the real "atomic" time without leap-seconds. Now, I realize most people don't know or care about the difference, but I have an obsession with dates, times and all things chronological. Here are a few of my favorite tidbits:

  • Developing Time-Oriented Database Applications in SQL by Richard T. Snodgrass is a wonderful book about doing proper SQL database designs and queries when the chronology of data points, the posting of them, or the subsequent view "in time" of data is important. This area is extremely difficult to get right consistently and this book walks you through the evolution of several real world systems to get it right as the needs are evolved. It is out of print, but you can download the book in PDF form here.
  • Why Daylight Savings Time is Non-intuitive Raymond Chen blog
  • Coding Best Practices Using DateTime in the .NET Framework Dan Rogers MSDN article. This is a great place to start, as it addresses most of the issues you will likely run into.
  • New DateTime Best Practices Article BCL Team Article. This is a very good summary of the issues you can run into and how to deal with them.
  • Representing Null DateTime values in code Scott Munro's blog talks about how nulls in dates can be handled, with lots of excellent links. I personally use the Null Object pattern, with DateTime.MinValue representing start dates and DateTime.MaxValue representing end dates (see the sketch after this list).
  • What are the New DateTime Features in Whidbey BCL Team blog gives the changes in .Net 2.0 for DateTime. I recommend all of Anthony Moore's postings, as they are very informative and he owns System.DateTime.
  • More links to follow once CodeProject is back online.
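As an aside, here's what that Null Object convention looks like in a made-up example:

using System;

public class Membership
{
    // Open-ended ranges use the sentinels instead of a nullable date:
    // MinValue reads as "since forever", MaxValue as "until further notice".
    public DateTime EffectiveFrom = DateTime.MinValue;
    public DateTime EffectiveTo = DateTime.MaxValue;

    public bool IsActiveOn(DateTime when)
    {
        return EffectiveFrom <= when && when < EffectiveTo;
    }
}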

Tuesday, June 28, 2005

It was fun, it was real, but it wasn't me...

I'm out of Anheuser-Busch; the contractor phase of my career is now officially dead.
I loved working with the people I was working with, but the company environment was quite honestly quite stifling. I got to do a lot of ASP.Net stuff, but every time I needed something that didn't ship from Microsoft it was a tooth-and-nail fight to get things authorized. Even grabbing code samples from (or posting them to) places like CodeProject was an exercise that involved lawyers every time. Life is too short to reinvent things over and over.
So anyway, I'll miss the guys and gals and I enjoyed the work, but I had to move on.
p.s. If anyone ever gets one of the following people at an interview, consider this a good reference (in alphabetical order, no preferences implied; if someone isn't mentioned here it is because I didn't work enough with them to provide a realistic assessment):

  • Shahbaz Chaudhry - Excellent C#, VB.Net, VB6, Oracle, ASP and ASP.Net developer with a great work ethic; a quick learner and good communicator.
  • Robert Q. Johnson - Senior development skills in C#, VB6 and ASP.Net. Robert always wants to learn, is a blast to work with, and gets things done.
  • Kerry Kinkade - The most effective team lead and product manager I've ever worked with and for. Kerry understands how to find the nuggets of good in every project, and how to effectively schedule, manage, and control a project.
  • Thomas McMahon - Very skilled architect with intimate knowledge of all things .Net and Oracle. I loved working with Tom, as he's got a great outlook on everything and the ability to translate business needs and processes to real system requirements.
  • Ray Meibaum - Excellent C#, VB6, ASP and ASP.Net developer and analyst. Ray knows how to decompose systems into manageable chunks to make applications work. Ray is heads-down and quick.
Monday, January 31, 2005

Happy International Zebra Day!

It's that time of year again! The time when we attempt to break up the mid-winter doldrums with yet another pointless holiday. But this time, it's fun and cheap to celebrate. Rejoice in the wondrous nature, the yin and yang, the dichotomy of color. Render your respect to God's finest creation, the zebra. How might you celebrate International Zebra Day? Let me count (some of) the ways:

  • Wear stripes - You can get arrested, but watch out: some prisons think garish orange is suitable attire.
  • Wear stripes #2 - Avoid the mess; find an out-of-work NHL referee and borrow his shirt.
  • Get a zebra ice cream - You know the kind I mean, a soft-serve twisted vanilla & chocolate.
  • Go on an interracial date - Hey, it's the 'aughts, everything is cool!
  • Turn off the color on your printer - Go old school; it's easier to read anyway.
  • Go mono - Plug in that old black & white monitor, feel the tensions ease as you revel in the soothing MDA past (no green-screens!)
  • And of course GO VISIT THE ZOO

Enjoy, be safe, peace...

Sunday, January 30, 2005

Enterprise Library Released!

Microsoft has made another huge donation to the best-practices code base. I've just finished giving the code a once-over and I'm very pleased indeed. They've tightened things up on all the old application blocks they previously shipped. The application blocks that comprise the Enterprise Library are the following:

  • Caching Application Block. This application block allows developers to incorporate a local cache in their applications.
  • Configuration Application Block. This application block allows applications to read and write configuration information.
  • Data Access Application Block. This application block allows developers to incorporate standard database functionality in their applications.
  • Cryptography Application Block. This application block allows developers to include encryption and hashing functionality in their applications.
  • Exception Handling Application Block. This application block allows developers and policy makers to create a consistent strategy for processing exceptions that occur throughout the architectural layers of enterprise applications.
  • Logging and Instrumentation Application Block. This application block allows developers to incorporate standard logging and instrumentation functionality in their applications.
  • Security Application Block. This application block allows developers to incorporate security functionality in their applications. Applications can use the application block in a variety of situations, such as authenticating and authorizing users against a database, retrieving role and profile information, and caching user profile information.

My recommendation is to download this code and learn to use it; the code is solid and the documentation is much improved.

    Friday, January 21, 2005

    My Development and Deployment Strategies

    So I tried to come up with a document that summarizes my Development and Deployment Strategies. Enjoy... 1. Tools in use 1.1. Version Control Critical to the development of modern applications is strict control of source files and versions. The source code control system is responsible for maintaining a history of file changes, and to act as a central repository for the archival and distribution of the application source code. There are many modern version control systems, some free, some included with other tools and some very costly (and powerful). For most development shop's needs, the most logical tools to use are one of the following: 1.1.1. CVS – Concurrent Versions System The CVS system is a very mature free open-source system in use in most modern development environments. It can most easily be hosted on a Unix/Linux system, but can also be deployed on Windows servers. It is under active development and is supported by almost all clients (Windows, Unix, Linux, etc.). Notably, there are a couple very good Windows clients to ease check-in and check-out. The current release of CVS is 1.12.11 (released on Dec 13, 2004) which can be found at the home page[i]. There is a fairly complete tutorial on CVS server use[ii]. I recommend use of TortoiseCVS for the client as it has a very easy-to-use integration to the Windows Explorer context menus (right-click). It is a free open-source tool under active development, and is very stable and mature. The current release of TortoiseCVS is 1.8.11 (released on Dec 30, 2004) which can be found at the home page[iii] 1.1.2. Subversion The Subversion system is a mature free open-source system designed to replace CVS and augment it’s abilities with some often requested features and improvements. Its major advantages over CVS are in its much better support for atomic operations (all check-ins happen, or none are committed). Additionally it has much more efficient support for branches and adding and removing directories and files from a project while still maintaining an easy interface to the older versions. Additionally, it is much more efficient in network use as it always sends “difference-only” messages, whereas CVS sends entire files from the client to the server. The current release of Subversion is 1.1.3 (release on Jan 14, 2005), and can be found at the home page[iv]. There is an excellent online book about Subversion use and setup[v]. I recommend use of TortoiseSVN for the client as it has a very easy-to-use integration to the Windows Explorer context menus (right-click). It is a free open-source tool under active development, and is very stable and mature. The current release of TortoiseSVN is kept in sync with the Subversion release and is 1.1.3 (released on Jan 20, 2005) which can be found at the home page[vi] 1.1.3. Visual Source Safe VSS is a source code control package included with Microsoft Visual Studio. It performs fairly well for smaller teams, and is well integrated in to the Microsoft development suite. Does not support atomic check-in and has much more limited branch support than Subversion or CVS. Additionally, it is a file-based system, so it requires Microsoft Network sharing. The best reason to use VSS is that it’s included in most setups and developers will usually be quite familiar with it. 1.2. NAnt NAnt is a free open-source project build tool that automates the process of compiling, linking and deploying builds of projects. It is similar to the Make or NMake tools, and grew out of the java community’s Ant tool. 
In addition to the documentation on the home site, a very good tutorial on the use of NAnt is available[viii]. A very nice example of a NAnt script is also part of the flexwiki project[ix].

1.3. NUnit

NUnit is a unit-testing framework for .Net. It was derived from the JUnit framework and the ideas developed by the Agile software development methodologists. NUnit allows unit tests to be written in source code, which the framework automatically executes and reports on. This allows automatic testing to occur at every point of the development process. The current version is 2.2 (released on Aug 9, 2004), available at the home page[x]. Very good documentation is available on the home site as well as on many sites dedicated to unit testing. The value of unit testing cannot be overstated. It lets the developer "work with a net," ensuring that the changes they make do not break other parts of the system and that requirements captured as unit tests are actually completed. The rate at which unit tests are completed and made to pass is a good indicator of the progress and status of the project.

1.4. Cruise Control.Net

Cruise Control.Net is a build automation tool for .Net projects that scripts the regular flow of watching the source code control system for changes, triggering a build, and reporting the results. It is a very mature, free, open-source product. The current version is 0.8 (released Jan 20, 2005), which is available at the home page[xi]. It is installed as a Windows service to ensure that it is always running when the Build Server is rebooted. The service is known as CCService, and can be stopped, started and restarted using the standard Windows Control Panel / Administrative Tools / Services task. It can also be controlled from the command prompt on the Build Server (e.g. NET STOP CCService or NET START CCService).

The actions of Cruise Control.Net are driven by an XML configuration file called ccnet.config, which is located in the Cruise Control.Net program's directory. Cruise Control can do build automation for any number of projects. Each project is described, along with a schedule for builds, the tasks to be run, the source code control system to be monitored, and the people to report build status to (via e-mail). In most cases it is set up to monitor continuously and build every time a change is made. Since builds take a while, it would be wasteful to start a build and then, seconds later, need another one, so you can configure a quiescent period; a wait time of 60 seconds is typical. That way, after a change is committed, a build will start, but only once at least 60 seconds have passed with NO other commits. Once the need for a build is detected, the appropriate NAnt task is triggered, which then does the meat of the build.
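Here is a sketch of what a single-project ccnet.config might look like. The server names, paths and addresses are hypothetical, and the element names should be verified against the schema of the Cruise Control.Net version in use, as they have shifted between releases:

<cruisecontrol>
    <project name="MyApp">

        <!-- Watch the Subversion trunk for commits -->
        <sourcecontrol type="svn">
            <trunkUrl>http://svnserver/repos/myapp/trunk</trunkUrl>
            <workingDirectory>C:\builds\myapp</workingDirectory>
        </sourcecontrol>

        <!-- The quiescent period: wait for 60 commit-free seconds -->
        <modificationDelaySeconds>60</modificationDelaySeconds>

        <!-- Hand the real work to the NAnt build file -->
        <tasks>
            <nant>
                <baseDirectory>C:\builds\myapp</baseDirectory>
                <buildFile>MyApp.build</buildFile>
            </nant>
        </tasks>

        <!-- Who hears about builds, and when -->
        <publishers>
            <email from="build@example.com" mailhost="smtp.example.com">
                <users>
                    <user name="jane" group="leads" address="jane@example.com" />
                    <user name="joe" group="devs" address="joe@example.com" />
                </users>
                <groups>
                    <group name="leads" notification="always" />
                    <group name="devs" notification="change" />
                </groups>
            </email>
        </publishers>

    </project>
</cruisecontrol>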
When the build is complete, the status of the build is noted from the return status of the NAnt execution, and the output of the build and unit tests is emitted to the Cruise Control.Net project dashboard page. Cruise Control.Net acts as a web server to allow browsing of the current status and history of builds. Finally, the build notification e-mails are sent using the list in the ccnet.config file, so if you want someone else to get those notifications, that's the place to edit the addresses. One thing to be clear about regarding those addresses: recipients tagged with the "always" notification get a notification of every build, while those tagged with "change" get a notification only when the build goes from failed to succeeded or from succeeded to failed. The latter are typically the people you want to raise a red flag to when a build first goes bad or is just-now fixed.

1.5. Microsoft Application Blocks

The Microsoft Application Blocks are free, open-source code libraries that encapsulate and codify common .Net development patterns and practices. They offer the ability to incorporate advanced functionality without having to reinvent the wheel. All are well documented on the Microsoft Patterns & Practices website[xii], with source available. In the near future, a new generation of these application blocks will be released as the Enterprise Library[xiii].

1.5.1. Data Access

The Data Access Application Block is a .NET component that contains optimized data access code to help you call stored procedures and issue SQL text commands against a SQL Server database. The documentation provides guidelines for implementing an ADO.NET-based data access layer in a multi-tiered .NET application. It focuses on a range of common data access tasks and scenarios and presents guidance to help you choose the most appropriate approaches and techniques. The block encapsulates performance and resource management best practices and can easily be used as a building block in your own .NET application; if you use it, you will reduce the amount of custom code you need to create, test, and maintain. Drivers and adapters for Microsoft SQL Server, Oracle, OLEDB and ODBC data sources are included. The most current version is actually hosted on the GotDotNet web site[xiv] and should be downloaded from there.

1.5.2. Exception Management

The Exception Management Application Block for .NET consists of an architecture guide and an application block. The documentation discusses design and implementation guidelines for exception management systems that use .NET technologies, focusing on handling exceptions within .NET applications in a highly maintainable and supportable manner. The block provides a simple yet extensible framework for handling exceptions: with a single line of application code, you can easily log exception information to the Event Log, or extend it by creating your own components that log exception details to other data sources or notify operators, all without affecting your application code. It can easily be used as a building block in your own .NET application, and can be downloaded from Microsoft[xv].
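That "single line" looks roughly like this (the surrounding service class and method are hypothetical; the Publish call is the block's documented entry point):

using System;
using Microsoft.ApplicationBlocks.ExceptionManagement;

public class OrderService
{
    public void PlaceOrder(int orderId)
    {
        try
        {
            // ... business logic that may throw ...
        }
        catch (Exception ex)
        {
            // One line: routed to the Event Log by default, or to
            // custom publishers declared in the application config.
            ExceptionManager.Publish(ex);
            throw;
        }
    }
}

Note the rethrow: publishing an exception is not the same as handling it, so the caller still gets to decide what to do.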
1.5.3. Logging

Building useful logging capabilities into your applications can be a significant challenge. At the very least, you need to determine what information is appropriate to log, design the events themselves, and make them available for analysis in an appropriate format. Effective logging is useful for troubleshooting problems with an application, and it also provides useful data for analysis, helping to ensure that the application continues to run efficiently and securely. To help provide effective logging for enterprise applications, Microsoft has designed the latest Patterns & Practices application block: the Logging Application Block. This block is a reusable code component that uses the Microsoft Enterprise Instrumentation Framework (EIF) and the Microsoft .NET Framework to help you design instrumented applications. It can be downloaded from Microsoft[xvi].

1.5.4. Other Blocks

There are several other application blocks that are more difficult to integrate into projects initially but address other common issues; these may be very useful for some applications. They can be downloaded from the main Microsoft Patterns & Practices website.

· User Interface Process
· Cache Management
· Authorization and Profile

1.5.5. Other Tools

1.5.5.1. Object Relational Mappers

Depending on the complexity of the databases in use, it may be appropriate to use an object-relational mapper. These tools ease the process of persisting the domain objects into the database. In particular, NHibernate[xvii] is a very complete solution. It is a free, open-source package with a good parallel in the Java community.

1.5.5.2. Page Template / Master Page Frameworks

In most web-based applications, it is important to deliver a consistent look and feel. This is best done in the .Net environment through a framework that exposes the "inner" variant page content as user controls. A good free, open-source package is available on CodeProject[xviii].

2. Deployment Processes

2.1. Environment Infrastructure

2.1.1. Server Machines

The server machines in the recommended development environment are not to be used by any developer directly. They should be used only to provide the functionality of the intended role and should never be used as a user's workstation. No development or modifications should ever be performed directly on a server machine. This is in stark contrast to typical past-generation ASP and CGI development techniques.

2.1.1.1. Web Servers

The web servers run the application presentation logic and any data access and business logic that makes up an application. The web servers are often configured in a pooled environment to allow for load sharing, though for some projects the workload may not warrant that level of complexity. At minimum, the web servers should have the desired .Net runtime and Framework SDKs installed; typically Windows Server 2003 will be the operating system. In the recommended deployment strategy, there will be at least three web servers, one each for the roles of Authoring, Testing and Production. Please refer to section 2.2.2 for details on how these machines are configured and used.

2.1.1.2. Database Servers

The database server is used to house the databases and should not have any other functionality. You may use a single machine or (given the appropriate database software support) a pool of fail-over machines. It is anticipated that for many shops, both Oracle and Microsoft SQL Server will be used. Each of the development roles of Authoring, Testing and Production (see section 2.2.2) should have its own database server or database server instance. The configuration component will automatically determine the connection string to be used for the development role.
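The configuration component itself is environment-specific, but the idea can be sketched minimally: a role marker is placed in the machine.config <appSettings> of every machine in an environment, and a small helper resolves the connection string from it. The class and all key names here are hypothetical:

using System.Configuration;

public static class RoleConfiguration
{
    // Reads the role marker (e.g. "Authoring", "Testing", "Production")
    // placed in machine.config, then looks up the connection string
    // registered for that role.
    public static string GetConnectionString()
    {
        string role = ConfigurationSettings.AppSettings["DeploymentRole"];
        string conn = ConfigurationSettings.AppSettings[role + ".ConnectionString"];

        if (conn == null)
            throw new ConfigurationException(
                "No connection string configured for role: " + role);

        return conn;
    }
}

Because the marker lives on the machine rather than in the project, the same build outputs can be copied unchanged from role to role, which is exactly what the promotion process in section 2.2.2 requires.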
2.1.1.3. Application Servers

The application servers host any business logic (typically exposed as Web Services or through .Net Remoting) that should be centralized and isolated from the normal Web Server farm. Additionally, application servers are used to run any "batch processing" tasks that do not need user input or take a long time to execute. As with all the other servers, there is a role-specific instance for the Authoring, Testing and Production uses. It is expected that most business logic will run on the Web Server, with possible component sharing at the DLL layer with the application server's batch programs. If a project needs better isolation between the Web Servers and internal resources, the business logic can be coded as Web Services.

2.1.1.4. Source Code Control Server

The source code repository server hosts the chosen version control software. If Subversion or CVS is chosen, the server can be Windows or Unix/Linux. If Visual Source Safe is used, it will have to be a Windows machine with Microsoft Networking shares available. There is no need for a role-specific setup for this server.

2.1.2. Build Machine

2.1.2.1. Daily Builds

Daily builds are performed on a developer-class machine that has all the normal development tools installed. It is not to be used directly by developers; rather, it is set up with the Cruise Control.Net system to perform automatic builds. The build server acts as a buffer against a developer's natural tendency to customize his or her development environment. Rather than incurring the loss of productivity that denying a developer's favorite tools would cause, the build environment is standardized by moving the build process to a dedicated machine. Additionally, the fact that the builds are automatic ensures that the process doesn't come to a halt just because the "buildmeister" developer is not available. The build server does not need to be mirrored for the Testing and Production roles, as its only use is to create Authoring builds. This machine should have Visual Studio, the .Net Framework SDK and any custom controls needed (such as third-party UI widgets, database drivers, etc.) to mirror the baseline for a developer machine. It is important that any build starts with a clean slate, ensuring that a build can be reproduced on a (suitably configured) brand-new build server at any time.

2.1.2.2. Build Repository

Builds are performed regularly and a build number assigned. They are then labeled in the source code control system, and the actual builds are archived to the repository of builds. When a build is thought to be a candidate for promotion to the Testing role, the actual build (and the entire source) can be retrieved from the build repository. This need not be a separate server, just some designated storage. Policies about how long builds stay in the repository are determined by how stable the project seems. In the early stages, it is quite appropriate to keep more non-promoted builds in the repository, to make it easier to choose a "best known" build to promote for interim testing. Any build that is promoted to Production should be permanently archived.
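Archiving can be just another target in the same NAnt build file; a sketch, where the share path and the build.number property are hypothetical (the build label would be supplied by the build automation):

<target name="archive" depends="test">
    <!-- Capture the outputs of this numbered build in the repository share -->
    <zip zipfile="\\buildserver\repository\MyApp-${build.number}.zip">
        <fileset basedir="build">
            <include name="**/*" />
        </fileset>
    </zip>
</target>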
2.1.3. Development Machine

The development machine is what the individual developers use on a daily basis to develop and support the applications. It should have all the same tools and third-party controls installed as the Build Server. It does not have Cruise Control.Net installed, as the developers are not responsible for the daily build process. Additionally, no developer should ever copy binaries, images or pages to a server of any role. Doing so bypasses the build process, which sabotages the ability to always rebuild from the sources in the version control system. When a developer checks files into the source code control system, the Cruise Control.Net process running on the Build Server will automatically begin a build.

2.2. Deployment Path

2.2.1. Continuous Integration

The general strategy to follow is known as Continuous Integration, formalized by Martin Fowler; a great introduction to its principles can be found here[xix]. Continuous Integration ensures that developer changes are quickly assimilated into the overall build. This has several benefits:

· Changes are guaranteed to be in the source code control system
· Changes made by one developer are quickly integrated with other developers' changes, and (through unit tests) conflicts are detected
· Any build is a candidate for release, meaning that progress is steady and obvious
· Developers can see and benefit quickly from changes made by other team members

Critical to the success of Continuous Integration are regular check-ins by developers, including database administrators. Unit tests are just as critical, in that they quickly indicate breaking changes where one developer's work impacts existing code. Unit tests are automatable, making it possible for them to be executed automatically on the Build Server by NAnt scripts.

2.2.2. Promotion Roles

Essential to managing the large number of potential releases generated by the Continuous Integration process is a well-defined promotion strategy. A current industry best practice is to have the output builds of the Build Server (as archived in the Build Repository) posted to the Authoring Environment. This is easily accomplished by creating additional tasks in the NAnt script for the project. These tasks can either be triggered automatically at the end of the build process (perhaps only if the unit tests pass), or triggered by manual invocation. In either case, the promotion technique is usually little more than copying the project build outputs (EXEs, DLLs, pages, images, etc.) to the Authoring Server, as sketched below. Because the NAnt task that accomplishes this is contained in the standard build file, it is subject to, and benefits from, all the same source code controls. This means that even the strategy for copying the build outputs is version controlled. The Authoring Environment comprises the "set" of servers used in a project, typically at least a Web Server and a Database Server, and potentially an Application Server. The environment can be quickly set up for each project as needed and then controlled by a configuration management system to ensure file paths and connection strings are properly managed at an environment level. This is done by placing simple markers in the .Net Framework's machine.config file. Testing on the Authoring server is intended for daily development work. An individual developer machine can be used as a "proxy" Authoring environment, meaning that programs on the developer's machine are executed against the Authoring environment's Database Server. This allows the developer to do daily work and verify functionality before committing changes to the source code control system, which triggers the Build Server to do an official build.
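A promotion task really is little more than a copy; a sketch, with the server and path names hypothetical:

<target name="promote.authoring" depends="test">
    <!-- Push the build outputs to the Authoring web server -->
    <copy todir="\\authoring-web\wwwroot\MyApp">
        <fileset basedir="build">
            <include name="**/*" />
        </fileset>
    </copy>
</target>

Promotion to Testing and Production would be analogous targets, except that (per this section) they copy from the archived build in the Build Repository rather than from a fresh build.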
Once the project code on the Authoring Environment is deemed worthy of promotion, it is copied (using a NAnt task, as always) to the Testing Environment. As with the Authoring Environment, the build outputs are copied to the appropriate server machines. The Testing Environment is intended to be a stable environment where formal system-level testing and user acceptance testing can take place without affecting the Production version (for existing projects undergoing new development) or exposing incomplete projects to the Production environment's users. The Testing Environment is only updated on demand, as determined by the testing team, and it is promoted not from the Authoring Environment, which may have already been changed by further development, but from the build outputs that were archived in the Build Repository. This way, the Testing Environment always corresponds to a specific build (and thus to a version control system label). The Testing Environment is the ideal place to record regression test scripts and perform stress tests, as it represents the best image of the final deployment environment while still being under the explicit control of the testing team. Once a build is on the Testing Environment, only the testing team can decide to update it, and only the testing team (with appropriate approvals) can release changes to the Production Environment.

When a build is deemed ready for release to the users, it is promoted to the Production Environment. This process is, once again, driven by a NAnt task to ensure repeatability and auditability, which is extremely important under the recent Sarbanes-Oxley regulations[xx] that apply to financial and accounting information processing. The Production Environment is set up to exactly mirror the Testing Environment, so that the testing reflects how the release will behave in Production. The Production Environment itself is not suitable for use in testing, as other users may be modifying the data that the test scripts are using.

To allow for reproducibility of user bug reports, at regular intervals determined by the testing staff, the databases used by the Testing Environment can be replaced with a backup of the Production Environment's databases. This allows for realistic testing of the application once live data has built up. The same strategy is used to ensure that the Authoring Environment's databases are mirrored from (possibly a subset of) the Production Environment's databases. This also gives usable data for developing enhancements and defect corrections that require data conversion, data validation or schema updating. Once the Authoring database repopulation is done, you can test and retest the SQL scripts to be used when rolling out the next version.
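On SQL Server, the refresh is a straightforward backup/restore; a sketch, with all database, file and path names hypothetical:

-- Refresh the Testing database from the latest Production backup.
-- WITH REPLACE overwrites the existing Testing copy; MOVE relocates
-- the data and log files to the Testing server's own paths.
RESTORE DATABASE MyAppDB
    FROM DISK = '\\backupserver\backups\MyAppDB_Production.bak'
    WITH REPLACE,
         MOVE 'MyAppDB_Data' TO 'D:\Data\MyAppDB.mdf',
         MOVE 'MyAppDB_Log'  TO 'D:\Data\MyAppDB.ldf';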
2.3. Hotfixes and Next Generation Development

Hotfixes are a reality of software development. All programs have flaws, either in the design or in the execution of the design. At times the flaws will be significant enough to warrant immediate correction. Typically this happens when a critical path of the application no longer works (due to data issues, new uses of the functionality, or simply functionality that was never adequately tested). The important thing to realize is that when a hotfix is needed, it is usually needed immediately. It's also likely that the pressure to release quickly is very high. These crises usually arise after the project is no longer under active development, or after an enhancement phase has begun. All of these factors conspire to make it very difficult to make a "surgical strike" that fixes just the newly discovered issue. With all these pressures working against successful hotfixes, it makes sense to practice the process and formalize how the situation should be handled; after all, we are good at what we practice.

With a proper Build Repository, it is very easy to get the exact set of source files (and indeed the build outputs) that makes up the current production release. All that needs to be done is to suspend the regular automated builds while the hotfix is under development. The source files are restored on the Build Server, and the changes are made against that version. This process is known as branching, and it is a common practice that is well supported by Subversion and CVS, and adequately (though less well) by Visual Source Safe. Whichever source code control system is chosen, the branching procedure should be documented. What is important to realize is that once a production release is live, it is imperative to optimize the path for hotfixes, as they are typically time-critical and not often practiced. Thus it is important to perform the branch operation as soon as next-phase development begins, as shown below; this ensures that the system is already in place for hotfixes when the need arises.

Obviously, this preemptive branching can impede new development, especially when making schema or data breaking-changes. When this happens, the best thing to do is create an Authoring Next Environment and a Testing Next Environment for the new development. These can simply be new virtual directories and database instances for the deployment, plus a separate Cruise Control.Net build project. By creating a new project, the existing framework for the old (live) version is left in place and can be quickly triggered into action merely by checking files in on the branch version. This means that the source code control checkouts used by Cruise Control.Net for the old version are driven by the branch label. When the next-generation version is released to the Production Environment, the old version's Authoring and Testing Environments are retired and replaced by the Authoring Next and Testing Next Environments. If a new breaking-changes version is needed for further development, the process of branching and creating new Authoring Next and Testing Next Environments is repeated.
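With Subversion, for example, both the release label and the hotfix branch are cheap server-side copies (the repository URL and names here are hypothetical):

rem Label the build that went to Production
svn copy http://svnserver/repos/myapp/trunk http://svnserver/repos/myapp/tags/release-1.0 -m "Release 1.0 as promoted to Production"

rem Branch from that label so hotfixes never mix with next-phase work
svn copy http://svnserver/repos/myapp/tags/release-1.0 http://svnserver/repos/myapp/branches/release-1.0-hotfix -m "Hotfix branch for release 1.0"

Hotfixes are then committed on the branch, and merged back to the trunk once released (see section 3.1.4).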
3. Quality Control

3.1. Version Control Best Practices

3.1.1. Check-in Daily

As development progresses, nothing will give a better guarantee of success than regularly checking source into the version control system. Obviously only code that works should be checked in, but that doesn't mean it should take more than a day to make a single change. If the tasks are subdivided into day-sized pieces, every team member can benefit from the ever-increasing functionality. It also guarantees that when a developer takes an absence (even unplanned), nothing is left hanging. Since the automated builds are triggered by the check-in process, it also ensures that all existing unit tests are executed and the source archived.

3.1.2. Use Version Labels

When setting up the source control system, plan to use version labels for every good build and additional release labels for each build that is released to the Production Environment. This makes it very easy to produce delta reports between releases and to restore to a known build in case of loss of the Build Repository. Lastly, it aids in generating any Sarbanes-Oxley reports if needed.

3.1.3. Branch At Breaking Changes

Whenever a major change in the data schema or external interfaces is needed, branch and pin the current release's source to ensure that it is trivial to do hotfixes. The trunk (main version) of the file should always be the next-generation path.

3.1.4. Merge Branches As Soon As Possible

When a branch has been made and hotfixes applied on the branch (old) version, then as soon as the fix is released to Production and known good, merge the change into the trunk (main version) of the file. This ensures that the hotfix will not be lost in the next enhancement release of the software.

3.2. Daily Peer Code Review

Every day, have all developers get in the habit of updating their source from the latest version in the source code control system. Before switching to the new versions of the source, however, make it a daily process to do a complete source comparison. By keeping abreast of the changes committed in previous days, the entire team gains an understanding of the coding techniques in use, the areas currently in flux from other developers, and the other parts of the project in general. Peer code reviews need not be formal; rather, they should be oriented toward understanding what code was changed and what the intent of the change was. The best sources for change reasons are the requirements and the comments in the source control system. Peer reviews will catch a lot of programming errors, and daily reviews ensure that the feedback is quick and valuable. A great tool for this is Beyond Compare, an (unfortunately) not-free tool which can be purchased here[xxi].

3.3. Pair Programming

The next step beyond daily code reviews is pair programming: the very beneficial technique of having all code developed by two developers working together. While it might seem that this would halve productivity, studies show that the productivity of a pair is actually higher than the cumulative efforts of the same two developers working independently. This is because the pair can quickly bounce ideas around, talk through design and implementation alternatives, and choose the best course from the outset. It also tends to keep developers focused and prevents the easy distractions of daily development tasks from overwhelming the task at hand. Studies have typically shown pair-programmed software to have about 40% fewer defects per line of code; two sets of eyes are always better than one.

3.4. Status Reporting

Status reporting is essential to the success of the project. In the past, developers have had to rely on memos and notes to keep track of, and report, where they are within the list of tasks. If business requirements are captured in unit tests first, then the progress of a project can easily be tracked by the number of failing unit tests. As more functionality is added to a project, unit tests capturing the business requirements are added. The tests initially fail, as there is no implementation of the required logic. As the code is developed, the unit tests one by one begin to pass, and, barring breaking changes, will continue to pass with each build. Thus status reporting becomes a matter of measuring how many of the business requirements are captured in unit tests, and how many of those tests are passing. When the rate at which unit tests are added slows down, the project manager can verify that it is because the requirements have all been captured; then, when the rate at which unit tests newly pass (the "burn rate") begins to slow, the project is nearing stability. It is possible to release to production at any point where the remaining failing unit tests are not considered critical, and releases often ship with some outstanding failures due to lack of priority. When a new defect is reported, the developer should first write a unit test to isolate and reproduce the error; then they can safely fix it and know that it will stay fixed, because the unit test is never removed.
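In NUnit 2.2 terms, such a defect-reproducing test might look like this (the Order class and its members are hypothetical; the attributes and assertion are standard NUnit):

using NUnit.Framework;

[TestFixture]
public class OrderDiscountFixture
{
    // Written first, straight from the bug report: the test fails until
    // the defect is fixed, then stays in the suite as a regression guard.
    [Test]
    public void DiscountIsNotAppliedTwice()
    {
        Order order = new Order();
        order.AddItem("WIDGET", 2, 10.00m);   // 2 x $10.00
        order.ApplyDiscount(0.10m);           // 10% off, applied once

        Assert.AreEqual(18.00m, order.Total);
    }
}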
References

[i] https://www.cvshome.org/ - current download at https://ccvs.cvshome.org/servlets/ProjectDocumentList?folderID=83&expandFolder=83&folderID=80
[ii] https://www.cvshome.org/docs/blandy.html
[iii] http://www.tortoisecvs.org/ - current download at http://prdownloads.sourceforge.net/tortoisecvs/TortoiseCVS-1.8.11.exe
[iv] http://subversion.tigris.org/ - Windows Subversion server binaries downloadable at http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91
[v] http://svnbook.red-bean.com/
[vi] http://tortoisesvn.tigris.org/ - current download at http://tortoisesvn.tigris.org/download.html
[vii] http://nant.sourceforge.net/ - current download at http://sourceforge.net/project/showfiles.php?group_id=31650
[viii] http://theserverside.net/articles/showarticle.tss?id=NAnt
[ix] http://cvs.sourceforge.net/viewcvs.py/flexwiki/FlexWikiCore/flexwikicore.build?view=markup
[x] http://nunit.org/
[xi] http://ccnet.thoughtworks.com/
[xii] http://www.microsoft.com/resources/practices/default.mspx
[xiii] http://www.microsoft.com/resources/practices/comingsoon.mspx
[xiv] http://www.gotdotnet.com/workspaces/releases/viewuploads.aspx?id=c20d12b0-af52-402b-9b7c-aaeb21d1f431
[xv] http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=8CA8EB6E-6F4A-43DF-ADEB-8F22CA173E02
[xvi] http://www.microsoft.com/downloads/details.aspx?FamilyId=24F61845-E56C-42D6-BBD5-29F0D5CD7F65&displaylang=en
[xvii] http://nhibernate.sourceforge.net/
[xviii] http://www.codeproject.com/aspnet/PageFramework.asp
[xix] http://www.martinfowler.com/articles/continuousIntegration.html
[xx] http://www.sarbanes-oxley.com/
[xxi] http://www.scootersoftware.com/