Friday, March 31, 2017

Codeplex shutting down, moved the repos to GitHub

The four simple projects I used to have on CodePlex have been migrated to GitHub.

IFilter Extractor

A text extracting COM component that uses any installed IFilter (Microsoft Indexing) to extract text from files. Information in old blog posts.

Dynamic Reflection Library

A library for doing lightweight code generation in the era before expression trees were a thing. Lots of old blog posts around here about it.

URI Template and URI Patterns

The UriTemplate library is a simple set of code to encapsulate the building and parsing of URIs from replacement tokens. Using this pair of classes makes it easy to build RESTful URIs. Information in old blog posts.

ASP.Net RSS Toolkit

An ASP.Net component set for publishing and consuming RSS/Atom feeds in .Net 2.0. Includes a datasource component. This was one of the first things Microsoft open-sourced (sadly in an era where that really meant mostly abandoned). I took the sources and floated a CodePlex repository and then promptly abandoned it myself. Some information in old blog posts.

Tuesday, December 03, 2013

EntityFramework 6 breaks Backup/Restore of LocalDB.

EntityFramework 6 has lots of Resiliency enhancements, but one of the side effects is that doing a backup or restore of a LocalDB database will need a tweak to keep EF from spinning up a transaction around the SQL statement.

Essentially, when you call ExecuteSqlCommand, you have to request that it does NOT ensure a transaction exists by passing TransactionalBehavior.DoNotEnsureTransaction as the first parameter.

I have updated the appropriate posts:

How to backup a LocalDB database under an MVC 4 system
Restoring a LocalDB within an MVC web application


Friday, May 24, 2013

Getting the requestor's IP address.

Getting the client's IP

While it seems it should be straightforward to get the IP address of the client making a web request in the ASP.Net pipeline, there are a few surprises lurking.  In my link-click tracking application, I ended up with this snippet based on Grant Burton's great post.
namespace Phydeaux.Helpers
{
    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Web;

    public static class ClientIP
    {
        public static string ClientIPFromRequest(this HttpRequestBase request, bool skipPrivate)
        {
            foreach (var item in s_HeaderItems)
            {
                // Keys like HTTP_CLIENT_IP and REMOTE_ADDR are server variables,
                // not header names, so read them from ServerVariables.
                var ipString = request.ServerVariables[item.Key];

                if (String.IsNullOrEmpty(ipString))
                    continue;

                if (item.Split)
                {
                    foreach (var ip in ipString.Split(','))
                        if (ValidIP(ip, skipPrivate))
                            return ip;
                }
                else
                {
                    if (ValidIP(ipString, skipPrivate))
                        return ipString;
                }
            }

            return request.UserHostAddress;
        }

        private static bool ValidIP(string ip, bool skipPrivate)
        {
            IPAddress ipAddr;

            ip = ip == null ? String.Empty : ip.Trim();

            if (0 == ip.Length
                || false == IPAddress.TryParse(ip, out ipAddr)
                || (ipAddr.AddressFamily != AddressFamily.InterNetwork
                    && ipAddr.AddressFamily != AddressFamily.InterNetworkV6))
                return false;

            if (skipPrivate && ipAddr.AddressFamily == AddressFamily.InterNetwork)
            {
                var addr = IpRange.AddrToUInt64(ipAddr);
                foreach (var range in s_PrivateRanges)
                {
                    if (range.Encompasses(addr))
                        return false;
                }
            }

            return true;
        }

        /// <summary>
        /// Provides a simple class that understands how to parse and
        /// compare IP address ranges. (IPv6 addresses are folded into
        /// their low 64 bits, but the ranges here are only used for IPv4.)
        /// </summary>
        private sealed class IpRange
        {
            private readonly UInt64 _start;
            private readonly UInt64 _end;

            public IpRange(string startStr, string endStr)
            {
                _start = ParseToUInt64(startStr);
                _end = ParseToUInt64(endStr);
            }

            public static UInt64 AddrToUInt64(IPAddress ip)
            {
                var ipBytes = ip.GetAddressBytes();
                UInt64 value = 0;

                foreach (var abyte in ipBytes)
                {
                    value <<= 8;    // shift
                    value += abyte;
                }

                return value;
            }

            public static UInt64 ParseToUInt64(string ipStr)
            {
                var ip = IPAddress.Parse(ipStr);
                return AddrToUInt64(ip);
            }

            public bool Encompasses(UInt64 addrValue)
            {
                return _start <= addrValue && addrValue <= _end;
            }

            public bool Encompasses(IPAddress addr)
            {
                var value = AddrToUInt64(addr);
                return Encompasses(value);
            }
        };

        private static readonly IpRange[] s_PrivateRanges =
            new IpRange[] { 
                    new IpRange("0.0.0.0","2.255.255.255"),
                    new IpRange("10.0.0.0","10.255.255.255"),
                    new IpRange("127.0.0.0","127.255.255.255"),
                    new IpRange("169.254.0.0","169.254.255.255"),
                    new IpRange("172.16.0.0","172.31.255.255"),
                    new IpRange("192.0.2.0","192.0.2.255"),
                    new IpRange("192.168.0.0","192.168.255.255"),
                    new IpRange("255.255.255.0","255.255.255.255")
            };


        /// <summary>
        /// Describes a header item (key) and if it is expected to be
        /// a comma-delimited string
        /// </summary>
        private sealed class HeaderItem
        {
            public readonly string Key;
            public readonly bool Split;

            public HeaderItem(string key, bool split)
            {
                Key = key;
                Split = split;
            }
        }

        // order is in trust/use order top to bottom
        private static readonly HeaderItem[] s_HeaderItems =
            new HeaderItem[] { 
                    new HeaderItem("HTTP_CLIENT_IP",false),
                    new HeaderItem("HTTP_X_FORWARDED_FOR",true),
                    new HeaderItem("HTTP_X_FORWARDED",false),
                    new HeaderItem("HTTP_X_CLUSTER_CLIENT_IP",false),
                    new HeaderItem("HTTP_FORWARDED_FOR",false),
                    new HeaderItem("HTTP_FORWARDED",false),
                    new HeaderItem("HTTP_VIA",false),
                    new HeaderItem("REMOTE_ADDR",false)
            };
    }
}

Monday, December 10, 2012

Restoring a LocalDB within an MVC web application

UPDATE: EntityFramework 6 has lots of Resiliency enhancements, but one of the side effects is that this needs a tweak to keep EF from spinning up a transaction around the SQL statement. Essentially, when you call ExecuteSqlCommand you have to request that it does NOT ensure a transaction exists by passing TransactionalBehavior.DoNotEnsureTransaction as the first parameter. If still running EF 5 or below, omit that argument.

So, as a follow-up to backing up a LocalDB database, I guess I should show the simplest path to restoring one.

So, without further ado, I give you:

public class RestoreDatabaseModel
    {
        public HttpPostedFileBase File { get; set; }
    }

        //
        // GET: /Admin/RestoreDatabase
        [Authorize(Roles = "Admin")]
        public ActionResult RestoreDatabase()
        {
            return View(new RestoreDatabaseModel());
        }

        //
        // POST: /Admin/RestoreDatabase
        [Authorize(Roles = "Admin")]
        [HttpPost]
        public ActionResult RestoreDatabase(RestoreDatabaseModel model)
        {
            const string YOURAPPNAME = "YourAppName";
            var dbPath = Server.MapPath(String.Format("~/App_Data/Restore_{0}_DB_{1:yyyy-MM-dd-HH-mm-ss}.bak", YOURAPPNAME, DateTime.UtcNow));

            try
            {
                model.File.SaveAs(dbPath);

                using (var db = new DBContext())
                {
                    var cmd = String.Format(@"
USE [Master]; 
ALTER DATABASE {0} SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE {0} FROM DISK='{1}' WITH REPLACE;
ALTER DATABASE {0} SET MULTI_USER;"
                        , YOURAPPNAME, dbPath);
                    db.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction, cmd);
                }

                 ModelState.AddModelError("", "Restored!");
            }
            catch (Exception ex)
            {
                ModelState.AddModelError("", ex);
            }

            return View(model);
        }
This iteration saves the posted file based on the current date-time and then supplies the correct commands to restore the database. I leave the flushing of HttpCache to you...

Also, if you haven't extended the upload limits of a POST, you'll need this (or similar) in your web.config
<system.web>
    <httpRuntime maxRequestLength="40960" targetFramework="4.5" />
</system.web>
This enables larger files to be uploaded (in this case, 40MB)... if your database backup is bigger than that, you have no business hacking around with a LocalDB; get a real SQL Server instance to point at.

Tuesday, December 04, 2012

How to create random readable strings for a .Net application

Why would I want to be random?

If you need a random string, I assume you know why you're here. However, there are some common uses for random strings I want to list out for the Google juice factor:
  1. CAPTCHA codes when not using something cool like reCAPTCHA
  2. Email verification codes.
  3. Nonce values for challenge/response.
  4. Salt values to increase entropy on password hashes.
  5. Registration codes.

How should I generate them in .Net?

There is a simple answer, really. In .Net you can just make a call to RNGCryptoServiceProvider's GetNonZeroBytes method and convert those bytes to characters.
var random = new byte[16];           // whatever size you want
using (var rng = new RNGCryptoServiceProvider())  // dispose the provider when done
    rng.GetNonZeroBytes(random);                  // fill with non-zero random bytes

return Convert.ToBase64String(random);  // convert to a string.
If you have the MVC 4 package available, you can use the convenient Crypto.GenerateSalt method as a shorthand as it essentially does the above code.

This, of course, limits the returned string to the Base-64 characters.

When should I care about the contents?

In general, you don't care about the contents of the random string. The one generated by the logic above is pretty useful as it is a wide set of all-ASCII characters that will not get you in trouble when crossing code-pages.

The biggest downside of this approach is that the string only uses a 64-character set, so you're excluding a lot of other possible characters, but in most applications that isn't a problem. In fact, quite the opposite is true. In many cases you might want to avoid specific characters, like the + character, because the string might be used in a URL. In other cases, you might want to generate a fuller character set (or a specific set, like an all-emoji string).
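As a sketch of my own (not from the original post), one common way to handle the URL problem is to swap the Base-64 characters that are special in URLs: '+' and '/' become '-' and '_', and the '=' padding is dropped entirely.

```csharp
using System;
using System.Security.Cryptography;

static class UrlSafeToken
{
    // Generate a random token safe to embed in a URL without escaping.
    public static string Create(int byteCount)
    {
        var random = new byte[byteCount];
        using (var rng = new RNGCryptoServiceProvider())
            rng.GetBytes(random);

        return Convert.ToBase64String(random)
            .Replace('+', '-')   // '+' means space in query strings
            .Replace('/', '_')   // '/' is a path separator
            .TrimEnd('=');       // padding adds nothing and needs escaping
    }
}
```

For 16 random bytes this yields a 22-character string containing only letters, digits, '-' and '_'.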

A more common need, though, is putting something on screen for a user to type (such as a registration code) whose characters should not be easy to mistake for one another. In some fonts, the characters 1, l and I or 0, O and o are very easily confused. For such cases, you can use a function like the following to generate a reasonably readable string:
namespace Silly
{
    using System.Security.Cryptography;

    public static partial class Helpers
    {
        public static string RandomReadableString(int length)
        {
            return "23456789ABCDEFGHJKMNPQRSTUVWXYZabcdefghijkmnpqrstuvwxyz".RandomString(length);
        }

        public static string RandomString(this string characterSet, int length)
        {
            var random = new byte[length];
            using (var rng = new RNGCryptoServiceProvider())
                rng.GetNonZeroBytes(random);

            var buffer = new char[length];
            var usableChars = characterSet.ToCharArray();
            var usableLength = usableChars.Length;

            for (int index = 0; index < length; index++)
            {
                // NOTE: the modulo introduces a slight bias toward the start of
                // the character set; fine for readable codes, not for key material.
                buffer[index] = usableChars[random[index] % usableLength];
            }

            return new string(buffer);
        }
    }
}
You can call the second function against any string of characters. For example, I'm using the RandomReadableString method to generate email confirmation codes that can easily be typed if needed.

Boring! Spice it up...

For even more fun, here are some emoji sequences that can be used for eye charts or stupid pet code tricks.
// Emoji fun
// random weather "☀☁☂☃"
// random finger pointers "☜☝☞☟"
// random zodiac "♈♉♊♋♌♍♎♏♐♑♒♓"
// random chess pieces "♔♕♖♗♘♙♚♛♜♝♞♟"
// random music notation "♩♪♫♬♭♯"
// random trigrams "☰☱☲☳☴☵☶☷"
// random planets "♃♄♅♆♇"

Tuesday, November 27, 2012

How to backup a LocalDB database under an MVC 4 system

UPDATE: EntityFramework 6 has lots of Resiliency enhancements, but one of the side effects is that this needs a tweak to keep EF from spinning up a transaction around the SQL statement. Essentially, when you call ExecuteSqlCommand you have to request that it does NOT ensure a transaction exists by passing TransactionalBehavior.DoNotEnsureTransaction as the first parameter. If still running EF 5 or below, omit that argument.

I have a smallish MVC 4 site with a database. In this case it isn't worthy of a dedicated SQL Server, so I decided to try out the new LocalDb database feature.

While the database isn't particularly mission critical, I would like to be able to easily back it up on demand to allow some level of disaster recovery.

So, without further ado, I give you:

namespace SomeSimpleProject.Controllers
{
    [Authorize(Roles="Admin")]
    public class BackupController : Controller
    {
        public ActionResult BackupDatabase()
        {
            var dbPath = Server.MapPath("~/App_Data/DBBackup.bak");
            using (var db = new DbContext())
            {
                var cmd = String.Format("BACKUP DATABASE {0} TO DISK='{1}' WITH FORMAT, MEDIANAME='DbBackups', MEDIADESCRIPTION='Media set for {0} database';"
                    , "YourDB", dbPath);
                db.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction, cmd);
            }
       
            return new FilePathResult(dbPath, "application/octet-stream");
        }
    }
}


Thursday, December 15, 2011

Obligatory link

Found this link

Monday, January 31, 2011

It's that time of year again... International Zebra Day is upon us.

So, as in many years past, I once again am celebrating International Zebra Day. If you want ideas about how to celebrate, check here and here for past posts. This year, I have to add that two more countries have signed on... perhaps you can spread the striped love.

Tuesday, September 28, 2010

MS10-070 Post Mortem analysis of the patch

Now that Microsoft has patched the POET vulnerability, I thought it would be instructive to see what they changed. I'm not masochistic enough to disassemble aspnet_wp.exe or webengine.dll, so there are things changed there that I don't know... but I did do a Reflector Export of the System.Web and System.Web.Extensions assemblies and then used BeyondCompare to do a diff against the files.

The analysis is simple:

  1. Don't leak exception information - This prevents exploits from seeing what is broken.
  2. Don't short-circuit on padding checks (take the same amount of time for padding correct versus padding broken) - This prevents exploits from seeing timing difference for incorrect padding.
  3. Don't be too picky about exception catching in IHttpHandler.ProcessRequest - This prevents exploits from seeing that you caught one kind of exception (CryptographicException) instead of all exceptions.
  4. Switch from Hash-based initialization vectors to Random IVs - This prevents exploits from using the relationship between the data and the hash to decrypt faster.
  5. Allow for backward compatibility - In case this breaks something, allow the new behavior to be reverted in-part.
  6. When doing a code-review pass, make changes that show the new options have been considered.
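Point 2 above is the heart of defeating a padding oracle. As an illustration of my own (not Microsoft's actual code), a fixed-time comparison looks like this:

```csharp
using System;

static class FixedTimeCompare
{
    // Compare every byte regardless of mismatches, accumulating any
    // difference with OR instead of returning at the first bad byte,
    // so a match and a mismatch take (nearly) the same time.
    public static bool AreEqual(byte[] a, byte[] b)
    {
        if (a == null || b == null || a.Length != b.Length)
            return false;

        int diff = 0;
        for (int i = 0; i < a.Length; i++)
            diff |= a[i] ^ b[i];   // stays 0 only if every byte matches

        return diff == 0;
    }
}
```

A naive early-exit loop leaks how many leading bytes were correct, which is exactly the timing signal the patch removes from VerifyHashedData.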

Here's the blow-by-blow review

Changes in System.Web

  • AssemblyInfo.cs
    (v2.0/v3.5) Bump AssemblyFileVersion from 2.0.50727.5014 to 2.0.50727.5053
    (v4.0) Bump AssemblyFileVersion from 4.0.30319.1 to 4.0.30319.206
  • ThisAssembly.cs
    (v2.0/v3.5) Bump InformationalVersion from 2.0.50727.5014 to 2.0.50727.5053
    (v4.0) Bump the BuildRevisionStr from 1 to 206
    (v4.0) Bump InformationalVersion from 4.0.30319.1 to 4.0.30319.206
  • HttpCapabilitiesEvaluator.cs
    When caching the browser capabilities, sets the cache expiration to sliding time based on the setting specified (instead of no sliding, no absolute)
  • MachineKeySection.cs
    Calls to EncryptOrDecryptData without specifying an initialization vector type now defaults to IVType.Random instead of IVType.Hash.
    Now uses AppSettings.UseLegacyEncryption flag to determine if the data should be signed.
    If decrypting and signing, now checks to ensure that the data has unhashed content (GetUnHashedData) and throws if there is no data other than the hash block. If there is no data after the signature, throws new Exception()!
    VerifyHashedData no longer fast-aborts when the hashed data mismatches. This makes the check take the same amount of time whether a match is found or not.
  • Handlers\AssemblyResourceLoader.cs
    ProcessRequest now catches any exception and morphs to an undistinguished InvalidRequest instead of leaking it out.
    (v4.0) Also no longer passes the assembly name to the InvalidRequest exception formatting.
  • Security\MachineKey.cs (v4.0)
    Decode respects the new AppSettings.UseLegacyMachineKeyEncryption setting. [note, not AppSettings.UseLegacyEncryption]
  • Security\MembershipAdapter.cs (v4.0)
    EncryptOrDecryptData now explicitly passes false for signData to MachineKeySection.EncryptOrDecryptData to remain compatible with prior calls, whilst still showing this code has been checked.
  • Security\MembershipProvider.cs
    DecryptPassword and EncryptPassword now explicitly passes false for useValidationSymAlgo and signData to MachineKeySection.EncryptOrDecryptData to remain compatible with prior calls, whilst still showing this code has been checked.
  • UI\WebControls CheckBoxField.cs CheckBoxList.cs MailDefinition.cs ObjectDataSource.cs and WebControl.cs
    (v4.0) Various property attributes have been reordered [unrelated, artifact of a new compiler version?]
  • UI\ObjectStateFormatter.cs
    Deserialize now ignores the caught exception and now throws the MacValidationError without leaking out the exception details.
  • UI\Page.cs
    Now flags the page response as in-error for any exception, not just a CryptographicException, thus not leaking out the exception kind.
  • Util\AppSettings.cs
    Now exposes a UseLegacyEncryption setting to allow reverting to old behavior for encryption. [defaults to false]
    (v4.0) Now exposes a UseLegacyMachineKeyEncryption setting to allow reverting to old behavior for MachineKey encryption. [defaults to false]
    (v4.0) Now exposes a ScriptResourceAllowNonJsFiles setting to allow reverting to old behavior for ScriptResource encryption. [defaults to false]
  • Util\VersionInfo.cs (v4.0)
    Bumped the serialized SystemWebVersion property from 4.0.30319.1 to 4.0.30319.206

System.Web.Extensions (v4.0)

  • ScriptResourceHandler.cs
    ProcessRequest now catches any exception and throws an undistinguished 404 error.
    ProcessRequest now uses the new AppSettings.ScriptResourceAllowNonJsFiles setting to control behavior when a resource is requested that doesn't end in ".js" If enabled (NOT DEFAULT), then resource is allowed, otherwise now will throw an undistinguished 404 error.
  • VersionInfo.cs
    Bumped the serialized SystemWebVersion property from 4.0.30319.1 to 4.0.30319.206
  • ThisAssembly.cs
    Bumped the build revision from 1 to 206

Thursday, September 17, 2009

And as for me? I've done nothing.

My last post was 18 months ago... I deleted the two that followed it because they were time-sensitive. I've been sucked into twitter, StackOverflow, parenthood and real life. Coming soon, I hope, will be a bunch of posts on some C# hacks and T-SQL things I've been up to... mostly ASP.Net and twitter related (e.g. huge datasets)

Wednesday, April 30, 2008

And you, sir? What have you done lately?

In Joel Spolsky's latest tirade, we learn that:

  1. Microsoft is an illegal monopoly
  2. Microsoft and Google are hiring people to play foosball
  3. FolderShare is a tool for uploading and downloading files to the internet that nobody has ever heard of.

Simply put, to Joel everyone else does everything wrong. He, meanwhile, is milking a not-even-ASP bug-track application (he probably couldn't figure out how to write a ColdFusion version). Or wrapping a proprietary DynamicDNS-like wrapper around invented-elsewhere VNC client (and somehow avoiding the far more evolved UltraVNC variant). Or writing the worst little CMS monstrosity ever created. Joel is bitter about the cost of CS graduates and blames Google and Microsoft (curiously leaving Yahoo out of the mix) for driving it up.

The thing is, he's not entirely wrong... when a company can't figure out or invest in a stronger product, like @Task or Team Foundation, FogBugz isn't a bad compromise. When your dad needs IT support behind his random cable-modem IP, CoPilot isn't a bad tool and far cheaper to use than WebEx or GoToMyPc. And well, the best I can say about CityDesk is that my technophobe wife can figure it out and Joel isn't pushing it.

Seriously, though... what have you done lately that really is worth bragging about, Joel? It's been how many years since FogBugz came out and it still can't track my billable time or hold my requirements docs? It's been years since CityDesk saw a new release, and CoPilot was written by a few interns... over a summer. What has your (self) vaunted managerial skill produced lately? You can only rest on your Excel project manager laurels for so long.

Tuesday, April 15, 2008

SQL Server Data Services via cURL

I just noticed a really cool article on using Microsoft's new SQL Server Data Services which explains how to use cURL at the command line to talk to the SSDS RESTful interface.

What's cURL?

If you've never heard of cURL, it is similar to wget in that it allows you to make HTTP requests of any web service. It can handle all the standard verbs (GET, PUT, POST, DELETE) and also supports all those lovely redirections, security and all that other nonsense. It's great for crafting batch/command files that do all sorts of things, as well as for ping-testing REST services.

What's SSDS?

If you've never heard of SSDS, it is similar to Amazon's SimpleDB. It offers the ability to push a database out to the public cloud and allow access from web applications, thick clients or whatever. What differentiates it from Google's APE-based access to BigTable or Amazon's S3/SimpleDB setup is that both of those systems are tuple-based (or name-value-based) non-relational databases. SSDS, on the other hand, is Linq-based. This makes querying MUCH simpler to do.

The killer feature to me is that SSDS doesn't make you (the developer) worry about consistency. With SimpleDB or BigTable, the provider only guarantees "eventual consistency". This means that the changes you make will eventually be propagated through the Amazon/Google cloud. During the time the change was posted, but not yet propagated, your clients may see stale data, which makes these services usable mostly for rarely-changed data.

SSDS doesn't have this restriction. Once your call is complete, any access will result in the committed data being returned. This is a much simpler model to program against, and it puts the replication issues squarely on the database server/service where it belongs. What remains to be seen is if Microsoft will really be able to scale this out reasonably.

Thursday, April 10, 2008

You can't hold onto nothing

A while back, we were looking for an easy way to count "hits" against content in a CMS-like system. For the sake of discussion, pretend we have a table called ContentEntry that represents the content. We decided we wanted to track the hits by-hour against a particular content entry, so that's the ContentEntryPlusPlus table on the right. The foreign-key is from ContentEntryPlusPlus.ContentEntryID to ContentEntry.ID.


Now the trick is to insert the row if needed for a particular entry and time-slot, then increment the Hits column. The simplest thing to do is to check to see if the row exists, insert it if not, then do the update. Something like this to find the row's ID:

SELECT TOP 1 ID FROM dbo.ContentEntryPlusPlus WHERE ContentEntryID = @ContentEntryID AND TimeSlot = DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0)

Then we have to insert the row if missing:

INSERT INTO dbo.ContentEntryPlusPlus(ContentEntryID, TimeSlot)
VALUES (@ContentEntryID, DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0))
SELECT Scope_Identity() AS ID

Then we do the update like this:

UPDATE dbo.ContentEntryPlusPlus SET Hits = Hits + 1 WHERE ID = @ID -- from above SELECT or INSERT's Scope_Identity()

Obviously we have to do this inside a transaction or we could have issues and I hate multiple round-trips, so we crafted this cute statement pair to insert the row if needed and then update. Note the use of INSERT FROM coupled with a fake table whose row count is controlled by an EXISTS clause checking for the desired row. This gets executed as a single SQL command.

INSERT INTO dbo.ContentEntryPlusPlus(ContentEntryID, TimeSlot)
SELECT TOP 1
    @ContentEntryID AS ContentEntryID,
    DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0) AS TimeSlot
FROM (SELECT 1 AS FakeColumn) AS FakeTable
WHERE NOT EXISTS
    (SELECT * FROM dbo.ContentEntryPlusPlus
     WHERE ContentEntryID = @ContentEntryID
       AND TimeSlot = DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0))

UPDATE dbo.ContentEntryPlusPlus
SET Hits = Hits + 1
WHERE ContentEntryID = @ContentEntryID
  AND TimeSlot = DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0)

This got tested and deployed, working as expected. The only problem is that every once in a while, for some particularly popular content, we would get a violation of the clustered key's uniqueness check on the ContentEntryPlusPlus table. This was quite surprising, honestly, as the code obviously worked when we tested it.

The only thing that could cause this is two calls executing the inner existence check simultaneously, with both deciding an INSERT was warranted. I had assumed that locks would be acquired, and they are, for the inner SELECT; but since there are no rows yet when this is executed, there are no rows to lock, so both statements plow on through. So, I just had to add a quick WITH (HOLDLOCK) hint to the inner SELECT and poof, it works.

So, the moral of the story? You can't hold onto nothing...

The final version is:

INSERT INTO dbo.ContentEntryPlusPlus(ContentEntryID, TimeSlot)
SELECT TOP 1
    @ContentEntryID AS ContentEntryID,
    DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0) AS TimeSlot
FROM (SELECT 1 AS FakeColumn) AS FakeTable
WHERE NOT EXISTS
    (SELECT * FROM dbo.ContentEntryPlusPlus WITH (HOLDLOCK)
     WHERE ContentEntryID = @ContentEntryID
       AND TimeSlot = DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0))

UPDATE dbo.ContentEntryPlusPlus
SET Hits = Hits + 1
WHERE ContentEntryID = @ContentEntryID
  AND TimeSlot = DateAdd(hh, DateDiff(hh, 0, GetUtcDate()), 0)

Sunday, March 09, 2008

Sometimes you make a hash of things.

So, hash has been on my mind lately.  No, not that kind of hash, or that kind either.  First, there was last week, when I installed Internet Explorer 8 beta 1.  I was reading the release notes and was amazed to find that # (you know, octothorpe, pound sign) was not considered part of the URL by this version.  Thus you can't link directly to a named element on a page. Eeew!

Then today, Hugh Brown dropped a comment on my diatribe post about value-types, reference-types, Equals and GetHashCode. The post has been live for many months now, and has quite a bit of Google juice. Until now, nobody has ever quibbled with the stuff I wrote, but Hugh had some interesting observations.

First the little stuff

In a minor part of his comment, he was surprised by the many overloads of GetHashCode that I suggest, wondering why I didn't just always expect callers to use the params int[] version. Quite simply, this is because by providing several overloads for a small number of arguments (5 in my example), I avoid paying the cost of allocating the array of integers and copying the values for each call to the CombineHashCodes. While this may seem like a trivial savings, remember that GetHashCode is called many times when dealing with HashTable collections and thus it is worth it to provide expedited code paths for the more common usages. Additional savings inside the CombineHashCodes method are garnered by avoiding the loop setup/iteration overhead. Finally, in optimized builds, these simpler method calls will be inlined by the compiler and/or JIT, where methods having loops in the body are never inlined (in CLR releases thus far). It is worth noting that the .Net runtime implementation does the same thing for System.Web.Util.HashCodeCombiner and System.String.Format.

To the meat of the comment

The main body of his comment was that my code actually didn't return useful values. That concerned me a lot. Given his use of Python and inlined implementation, I had to write my own test jig. Unfortunately it confirmed his complaint. On the one hand, the values he was using to test were not normal values you would expect from GetHashCode. Normally GetHashCode values are well-distributed across the entire 32-bit range of an Int32. He was using sequential, smallish, numbers which was skewing the result oddly. That said, the values SHOULD have been different for very-similar inputs. I delved a little into the code I originally wrote and found that what's on the web page does NOT match what is now in use in the BCL's internal code to combine hash codes (which is where I got the idea of left-shifting by 5 bits before XORing). I think that my code was originally based on the 1.1 BCL but I'm not really sure.

In the .Net 2.0 version, there's a class called System.Web.Util.HashCodeCombiner that actually reflects essentially the same technique as my code, with one huge and very significant difference. Where I simply left-shift the running hash code by 5 bits and then XOR in the next value, they are doing the left-shift and also adding in the running hash, then doing the XOR.

Why so shifty, anyway?

You might be wondering why do the left shift in the first place. The simple answer is that by doing a left-shift by some number of bits, we preserve the low-order bits of the running hash somewhat. This prevents the incoming value from XORing away all the significance of the bits thus far, and also ensures that low-byte-only intermediate hash codes don't simply cancel each other out. By shifting left 5 bits, we're simply multiplying by 32. Then the original running hash value is added in one more time, making the effective multiplier 33. This isn't far off from Hugh's suggestion of multiplying by 37, while being significantly faster in the binary world of computers. Once the shift and add (e.g. multiplication by 33) is completed, the XOR of the new value results in much better distribution of the final value.
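The shift-and-add identity is easy to verify for yourself (a sanity check of my own, not from the original post): (h &lt;&lt; 5) + h is exactly h * 33, modulo integer overflow.

```csharp
static class Hash33
{
    // The shift+add is a multiply-by-33 (32 from the shift, plus the
    // original value once more), then XOR folds in the next hash code.
    public static int Combine(int h1, int h2)
    {
        return unchecked(((h1 << 5) + h1) ^ h2);
    }
}
```

Combining 7 with 0 gives 231, which is indeed 7 * 33; the unchecked block just makes the wrap-on-overflow explicit.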

I've updated my code in the Utilities library, and I'm going back to the original post to point to this post and the new code. So, I owe you one, Hugh...and maybe Microsoft does too because while I was reviewing their code in the newly released BCL source code, I found a very unexpected implementation. This is the snippet in question:

    internal static int CombineHashCodes(int h1, int h2) {
        return ((h1 << 5) + h1) ^ h2; 
    }
 
    internal static int CombineHashCodes(int h1, int h2, int h3) { 
        return CombineHashCodes(CombineHashCodes(h1, h2), h3);
    } 

    internal static int CombineHashCodes(int h1, int h2, int h3, int h4) {
        return CombineHashCodes(CombineHashCodes(h1, h2), CombineHashCodes(h3, h4));
    } 

    internal static int CombineHashCodes(int h1, int h2, int h3, int h4, int h5) { 
        return CombineHashCodes(CombineHashCodes(h1, h2, h3, h4), h5); 
    }
Did you see the oddity? The implementation taking 4 values does its work by calling the two-value one three times: once to combine the first pair (h1 and h2), once to combine the second pair (h3 and h4), and finally once to combine the two intermediate values. That's a different grouping than the one the 3-value and 5-value overloads use. I personally think it should have called the 2-value overload against the output of the 3-value overload to combine the 4th value (h4). That would match what the 3-value and 5-value overloads do. In other words, the method should be:
    internal static int CombineHashCodes(int h1, int h2, int h3, int h4) {
        return CombineHashCodes(CombineHashCodes(h1, h2, h3), h4);
    }

Perhaps they don't care that the values are inconsistent, especially since they don't provide an overload that takes a params int[], but imagine if I had blindly copied that code and you got two different values from this:

   Console.WriteLine("Testing gotcha:");
   Console.WriteLine("1,2: {0:x}", Utilities.CombineHashCodes(1, 2));
   Console.WriteLine("1,2,3: {0:x}", Utilities.CombineHashCodes(1, 2, 3));
   Console.WriteLine("1,2,3,4: {0:x}", Utilities.CombineHashCodes(1, 2, 3, 4));
   Console.WriteLine("1,2,3,4,5: {0:x}", Utilities.CombineHashCodes(1, 2, 3, 4, 5));
   Console.WriteLine("[1,2]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2 }));
   Console.WriteLine("[1,2,3]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3 }));
   Console.WriteLine("[1,2,3,4]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3, 4 }));
   Console.WriteLine("[1,2,3,4,5]: {0:x}", Utilities.CombineHashCodes(new int[] { 1, 2, 3, 4, 5 }));
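To make the gotcha concrete, here's the arithmetic for four values under both groupings (a Java sketch; combine mirrors the two-value method from the BCL snippet above):

```java
public class GroupingGotcha {
    // Mirrors the two-value combiner: ((h1 << 5) + h1) ^ h2.
    static int combine(int h1, int h2) {
        return ((h1 << 5) + h1) ^ h2;
    }

    public static void main(String[] args) {
        // The BCL 4-value overload combines pair-wise...
        int pairwise = combine(combine(1, 2), combine(3, 4));
        // ...while the 3- and 5-value overloads imply left-to-right.
        int sequential = combine(combine(combine(1, 2), 3), 4);

        System.out.println(pairwise);   // 1252 (0x4e4)
        System.out.println(sequential); // 38020 (0x9484)
    }
}
```

Same four inputs, two different hashes, purely because of how the calls were grouped.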

Where we are at now

Here is the revised version of the CombineHashCodes methods from my Utilities library

    public static partial class Utilities
    {
        public static int CombineHashCodes(params int[] hashes)
        {
            int hash = 0;

            for (int index = 0; index < hashes.Length; index++)
            {
                hash = (hash << 5) + hash;
                hash ^= hashes[index];
            }

            return hash;
        }

        private static int GetEntryHash(object entry)
        {
            int entryHash = 0x61E04917; // slurped from .Net runtime internals...

            if (entry != null)
            {
                object[] subObjects = entry as object[];

                if (subObjects != null)
                {
                    entryHash = Utilities.CombineHashCodes(subObjects);
                }
                else
                {
                    entryHash = entry.GetHashCode();
                }
            }

            return entryHash;
        }

        public static int CombineHashCodes(params object[] objects)
        {
            int hash = 0;

            for (int index = 0; index < objects.Length; index++)
            {
                hash = (hash << 5) + hash;
                hash ^= GetEntryHash(objects[index]);
            }

            return hash;
        }

        public static int CombineHashCodes(int hash1, int hash2)
        {
            return ((hash1 << 5) + hash1)
                   ^ hash2;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3)
        {
            int hash = CombineHashCodes(hash1, hash2);
            return ((hash << 5) + hash)
                   ^ hash3;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4)
        {
            int hash = CombineHashCodes(hash1, hash2, hash3);
            return ((hash << 5) + hash)
                   ^ hash4;
        }

        public static int CombineHashCodes(int hash1, int hash2, int hash3, int hash4, int hash5)
        {
            int hash = CombineHashCodes(hash1, hash2, hash3, hash4);
            return ((hash << 5) + hash)
                   ^ hash5;
        }

        public static int CombineHashCodes(object obj1, object obj2)
        {
            return CombineHashCodes(obj1.GetHashCode()
                , obj2.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3)
        {
            return CombineHashCodes(obj1.GetHashCode()
                , obj2.GetHashCode()
                , obj3.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3, object obj4)
        {
            return CombineHashCodes(obj1.GetHashCode()
                , obj2.GetHashCode()
                , obj3.GetHashCode()
                , obj4.GetHashCode());
        }

        public static int CombineHashCodes(object obj1, object obj2, object obj3, object obj4, object obj5)
        {
            return CombineHashCodes(obj1.GetHashCode()
                , obj2.GetHashCode()
                , obj3.GetHashCode()
                , obj4.GetHashCode()
                , obj5.GetHashCode());
        }
     }
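For what it's worth, the typical consumer of a combiner like this is a GetHashCode override that folds an object's fields together. A minimal sketch (in Java, with a hypothetical Point type; the loop is the same shift-add-XOR as the params overload above):

```java
public class Point {
    final int x, y, z;

    Point(int x, int y, int z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    // Same running loop as the params int[] overload: multiply the
    // running hash by 33, then XOR in the next field's hash.
    static int combineHashCodes(int... hashes) {
        int hash = 0;
        for (int h : hashes) {
            hash = (hash << 5) + hash;
            hash ^= h;
        }
        return hash;
    }

    @Override
    public int hashCode() {
        return combineHashCodes(x, y, z);
    }
}
```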

Wednesday, January 02, 2008

Random thoughts to start the year.

Little bunny FoFo, hopping through the forest, scooping up the field mice and bopping them on the heads.

Down came the good fairy, "Little bunny FoFo, I don't want to see you scooping up the field mice and bopping them on the heads. I'll give you three chances and then I'll turn you into a goon".

  1. The Blues deserve a good bounce. I just finished watching the Blues v. Stars game and there's no question that they outplayed the Stars all but the first couple minutes. That crappy bounce off Tkachuk's skate beat us. That's all.
  2. Xen's bout with RSV teaches a couple lessons. First, no matter how hard the nights are, you will miss your boy's loud cry when it is replaced by a weak wail of despair. My soul can't handle that kind of trial very often, and I thank God that he knows what I can bear. It's simple: when you can't do anything to help, you feel useless... when everything you do (sucking his nose clear, pounding the phlegm out of his lungs) actually makes your child cry more, it's HARD. Second, there are things that make you remember where you are in life... I'm a senior developer, the old-wise-guy at church, and a BABY as a dad. I don't know squat.
  3. When you need something that isn't .Net all the way, expect it to be hard. In this case, the Subversion-to-Team-Foundation tools SUCK. And the Subversion paradigm expects you to Alt.Net it and NOT use TFS, even though TFS is a several-dozen-times better tool. Expect me to be announcing something based on the TFS Migration Toolkit soon, 'cause damn sure Microsoft isn't going to bother.
  4. Be VERY careful what skills you teach your children. No matter if they are good, or bad, they will be used against you.
    The other day, at a Blues home game, my 4-year-old daughter, Arianna, was squirming a bit in her seat. I told her that if she didn't stop I was going to turn her into a goon. At the next whistle (she's a GOOD hockey fan) she asks, "Dad, are fairies real?".
    My spidey sense being dull, I answered, "No, they're just like Santa Claus, just a character."
    She replies with no delay, "Then you can't turn me into a goon."

Sunday, November 18, 2007

Individual label RSS subscriptions now available

It seems there are several not-very-overlapping audiences for this blog. There are people reading for the SQL stuff, especially the datetime-related stuff. There are people reading for the Lightweight Code Generation stuff, especially the DynamicMethod/DynamicSorter library. Then there are the people hunting down information about the RSSToolkit library. Finally, there are the people following the recent URITemplate library.

Since many of you visitors seem to have specific interests, I've added the ability to subscribe to individual labels applied to the posts, via the excellent tip given by Daniel Cazzulino in his instructional posting.

Just check out the labels listing on the right-side navigation. Oh, if you only read via a feed, this might be worth a read of the actual page.