Monday, 17 December 2012

Diagnosing problems with Process Explorer


I recently upgraded an instance of Team Foundation Server 2010 to the 2012 version. The core upgrade process went very smoothly. The problem came with my customised build templates which used a custom activity written in C#. I was expecting one error, which I tried to fix before even attempting a build. This first error was that my custom activity was written against the TFS 2010 client assemblies:

Microsoft.TeamFoundation.Build.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Microsoft.TeamFoundation.Build.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
Microsoft.TeamFoundation.WorkItemTracking.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a

These had to be changed to the TFS 2012 (Version 11.0.0.0) assemblies. So I upgraded and rebuilt the assembly holding my custom activity.
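For reference, that meant updating each project reference from the 10.0.0.0 strong name to the 11.0.0.0 one, along the lines of the following (the exact reference entries in your project file may differ):

```xml
<Reference Include="Microsoft.TeamFoundation.Build.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
<Reference Include="Microsoft.TeamFoundation.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
```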

However, running a build after upgrading my custom activity still gave me an error:

TF215097: An error occurred while initializing a build for build definition \[Team Project Name]\[My Build Name]: Cannot create unknown type '{clr-namespace:StealFocus.TfsExtensions.Workflow.Activities;assembly=StealFocus.TfsExtensions}UpdateBuildNumber'.

For some reason my "UpdateBuildNumber" class could not be created. That type could not even be found.

Time to poke around and see what is happening.

After checking some obvious things (were all dependencies available on the Build Agent, checking Fusion Log etc), I used Process Explorer (http://technet.microsoft.com/en-gb/sysinternals/bb896653.aspx) to look at the "TFSBuildServiceHost.exe" process and see if my assembly was even loading.

Starting up Process Explorer, I got a list of all processes on the local machine. I know I'm looking for "TFSBuildServiceHost.exe" as that is the executable for the Team Build service.


I can look at the detail of "TFSBuildServiceHost.exe", including all loaded .NET assemblies.


Looking down the list of loaded assemblies, my "StealFocus.TfsExtensions.dll" (which holds my custom activity) is not loaded at all, so it looks like there's quite a fundamental problem.

It then occurred to me that I had compiled my upgraded custom assembly against .NET 4.5; perhaps the TFS Build Agent was a .NET 4.0 application. That would mean the TFS Build Agent would be unable to load my assembly, since, as a rule of thumb, a .NET application cannot load assemblies built against a later version of the .NET Framework. As an aside, the CLR version shown as "v4.0.30319.17929" in the information above is misleading: .NET Framework 4.5 still runs on CLR v4.0, so that CLR version did not mean the process was a .NET 4.0 application.
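Incidentally, you can confirm which framework version an assembly was actually built against by reading its TargetFrameworkAttribute, rather than guessing from the CLR version. A minimal sketch (the "AssemblyInspector" class name is illustrative):

```csharp
using System;
using System.Reflection;
using System.Runtime.Versioning;

public static class AssemblyInspector
{
    // Reads the [TargetFramework] attribute stamped on an assembly at compile
    // time, e.g. ".NETFramework,Version=v4.0" or ".NETFramework,Version=v4.5".
    public static string GetTargetFramework(Assembly assembly)
    {
        TargetFrameworkAttribute attribute = (TargetFrameworkAttribute)Attribute.GetCustomAttribute(
            assembly, typeof(TargetFrameworkAttribute));
        return attribute == null ? "(no TargetFrameworkAttribute)" : attribute.FrameworkName;
    }
}
```

For example, `AssemblyInspector.GetTargetFramework(Assembly.LoadFrom(@"StealFocus.TfsExtensions.dll"))` would have told me what my custom activity assembly targeted.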

So I rebuilt my upgraded assembly against .NET 4.0 and the build completed successfully. Looking into the detail with Process Explorer once more, I saw the following:


My assembly ("StealFocus.TfsExtensions.dll") had indeed loaded.

Problem solved.

So Process Explorer was a very useful tool to see what was happening under the hood. It showed me that my assembly was not loaded, which triggered my thought process about .NET Framework versions.

Monday, 10 December 2012

Cloud "Forecast"?


It's worth re-capping a couple of points from my previous post about operational cost reduction using Windows Azure. A key benefit of using the cloud is the ability to quickly and easily tear down environments, including test environments. If you are not using a test environment, simply delete it and recreate it later when you need it. This can give you very large cost reductions.

With this in mind, I have created an application that will automatically tear down (or create) cloud resources on a schedule. The application can be run as a console application or installed as a Windows service.

At the moment you can:
  • Delete "Hosted Services" on a schedule.
  • Create "Hosted Services" on a schedule.
  • Delete Storage Tables on a schedule.
  • Delete Storage Blob Containers on a schedule.
Additional features will be added in time.

You can see the GitHub project here: https://github.com/StealFocus/Forecast

An example configuration is as follows:


  <stealFocusForecastConfiguration 
    xmlns="urn:StealFocus.Forecast.Configuration">
    <scheduleDefinitions>
      <scheduleDefinition name="MorningOutOfHours">
        <days>
          <day name="Monday" startTime="00:00:00" endTime="07:59:59" />
          <day name="Tuesday" startTime="00:00:00" endTime="07:59:59" />
          <day name="Wednesday" startTime="00:00:00" endTime="07:59:59" />
          <day name="Thursday" startTime="00:00:00" endTime="07:59:59" />
          <day name="Friday" startTime="00:00:00" endTime="07:59:59" />
        </days>
      </scheduleDefinition>
      <scheduleDefinition name="BusinessHours">
        <days>
          <day name="Monday" startTime="08:00:00" endTime="18:00:00" />
          <day name="Tuesday" startTime="08:00:00" endTime="18:00:00" />
          <day name="Wednesday" startTime="08:00:00" endTime="18:00:00" />
          <day name="Thursday" startTime="08:00:00" endTime="18:00:00" />
          <day name="Friday" startTime="08:00:00" endTime="18:00:00" />
        </days>
      </scheduleDefinition>
      <scheduleDefinition name="EveningOutOfHours">
        <days>
          <day name="Monday" startTime="18:00:01" endTime="23:59:59" />
          <day name="Tuesday" startTime="18:00:01" endTime="23:59:59" />
          <day name="Wednesday" startTime="18:00:01" endTime="23:59:59" />
          <day name="Thursday" startTime="18:00:01" endTime="23:59:59" />
          <day name="Friday" startTime="18:00:01" endTime="23:59:59" />
        </days>
      </scheduleDefinition>
      <scheduleDefinition name="Weekend">
        <days>
          <day name="Saturday" startTime="00:00:00" endTime="23:59:59" />
          <day name="Sunday" startTime="00:00:00" endTime="23:59:59" />
        </days>
      </scheduleDefinition>
    </scheduleDefinitions>
    <windowsAzure>
      <subscriptions>
        <subscription
          id="myArbitraryAzureSubscriptionName"
          subscriptionId="GUID"
          certificateThumbprint="0000000000000000000000000000000000000000" />
      </subscriptions>
      <hostedService>
        <packages>
          <package
            id="myArbitraryPackageName"
            storageAccountName="myAzureStorageAccountName"
            containerName="MyContainer"
            blobName="MyPackage.cspkg" />
        </packages>
        <deploymentDeletes>
          <deploymentDelete
            serviceName="myAzureServiceName"
            pollingIntervalInMinutes="10"
            subscriptionConfigurationId="myArbitraryAzureSubscriptionName">
            <deploymentSlots>
              <deploymentSlot name="Staging" />
              <deploymentSlot name="Production" />
            </deploymentSlots>
            <schedules>
              <schedule scheduleDefinitionName="MorningOutOfHours" />
              <schedule scheduleDefinitionName="EveningOutOfHours" />
              <schedule scheduleDefinitionName="Weekend" />
            </schedules>
          </deploymentDelete>
        </deploymentDeletes>
        <deploymentCreates>
          <deploymentCreate
            subscriptionConfigurationId="myArbitraryAzureSubscriptionName"
            windowsAzurePackageId="myArbitraryPackageName"
            serviceName="myAzureServiceName"
            deploymentName="MyName"
            deploymentLabel="MyLabel"
            deploymentSlot="Production"
            packageConfigurationFilePath="C:\PathTo\MyPackageConfiguration.cscfg"
            treatWarningsAsError="true"
            startDeployment="true"
            pollingIntervalInMinutes="10">
            <schedules>
              <schedule scheduleDefinitionName="BusinessHours" />
            </schedules>
          </deploymentCreate>
        </deploymentCreates>
      </hostedService>
      <storageService>
        <storageAccounts>
          <storageAccount
            storageAccountName="myStorageAccountName"
            storageAccountKey="myStorageAccountKey" />
        </storageAccounts>
        <tableDeletes>
          <tableDelete
            id="myArbitraryTableDeleteId"
            storageAccountName="myStorageAccountName"
            pollingIntervalInMinutes="60">
            <storageTables>
              <storageTable tableName="myAzureTableName1" />
              <storageTable tableName="myAzureTableName2" />
            </storageTables>
            <schedules>
              <schedule scheduleDefinitionName="MorningOutOfHours" />
              <schedule scheduleDefinitionName="EveningOutOfHours" />
              <schedule scheduleDefinitionName="Weekend" />
            </schedules>
          </tableDelete>
        </tableDeletes>
        <blobContainerDeletes>
          <blobContainerDelete
            id="myArbitraryBlobContainerDeleteId"
            storageAccountName="myStorageAccountName"
            pollingIntervalInMinutes="60">
            <blobContainers>
              <blobContainer blobContainerName="myAzureBlobContainerName1" />
              <blobContainer blobContainerName="myAzureBlobContainerName2" />
            </blobContainers>
            <schedules>
              <schedule scheduleDefinitionName="MorningOutOfHours" />
              <schedule scheduleDefinitionName="EveningOutOfHours" />
              <schedule scheduleDefinitionName="Weekend" />
            </schedules>
          </blobContainerDelete>
        </blobContainerDeletes>
      </storageService>
    </windowsAzure>
  </stealFocusForecastConfiguration>
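The schedule windows in the configuration above could be evaluated with logic along these lines (a hypothetical sketch for illustration, not the actual Forecast code):

```csharp
using System;

public static class ScheduleWindow
{
    // True if "now" falls on the given day, between startTime and endTime
    // inclusive, mirroring the day/startTime/endTime attributes in the
    // configuration above.
    public static bool IsInWindow(DateTime now, DayOfWeek day, TimeSpan startTime, TimeSpan endTime)
    {
        return now.DayOfWeek == day
            && now.TimeOfDay >= startTime
            && now.TimeOfDay <= endTime;
    }
}
```

A "deploymentDelete" scheduled for "Weekend", say, would then fire whenever the current time falls inside the Saturday or Sunday window.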


Monday, 3 December 2012

Amazon Web Services


“Every day this year, on average, AWS added the same server capacity to its public cloud as it took to run the Amazon.com retail business back in 2003, when it had $5.2bn in revenues. Amazon had "a whole lot of servers" back then in 2003…

This time last year, AWS execs were bragging that the public cloud, on average through 2011, was adding enough server capacity each day to run Amazon.com when it was a $2.76bn business in 2000.” - http://www.theregister.co.uk/2012/11/29/amazon_aws_update_jassy/

Wow.

Monday, 26 November 2012

IDisposable and object initialisers

Background...

The "using" statement is just syntax trick. The resulting compiled code from a "using" statement is actually a try-catch statement.

This...


    public class MyClass
    {
        public void MyMethod()
        {
            using (MyDisposableClass myDisposableClass = new MyDisposableClass())
            {
                myDisposableClass.DoSomething();
            }
        }
    }


...is converted by the compiler to this...


    public class MyClass
    {
        public void MyMethod()
        {
            MyDisposableClass myDisposableClass = new MyDisposableClass();
            try
            {
                myDisposableClass.DoSomething();
            }
            finally
            {
                if (myDisposableClass != null)
                {
                    myDisposableClass.Dispose();
                }
            }
        }
    }


Problem...

Consider the following code:

    public class MyClass
    {
        public void MyMethod()
        {
            using (MyDisposableClass myDisposableClass = new MyDisposableClass
                {
                    MyProperty = string.Empty
                })
            {
                myDisposableClass.DoSomething();
            }
        }
    }


This can cause a problem.

The problem is caused by the object initialiser. Remember, the object initialiser is just syntactic sugar. "Under the covers" (in the actual compiled code), the object is created as normal with its constructor and then the properties are assigned afterwards.

So the use of the object initialiser means the object is created, and its properties assigned, before the try-finally statement generated by the compiler begins. If there is an error during creation and initialisation, for example a property setter throwing an exception, then "Dispose" will never be called. If Dispose is never called, the object will not be disposed in a timely manner, which means you may leak resources.
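You can demonstrate the leak with a small test double whose property setter throws; Dispose is never reached (a minimal sketch, with hypothetical class names):

```csharp
using System;

public class Tracker : IDisposable
{
    public static bool Disposed;

    public string MyProperty
    {
        set { throw new InvalidOperationException("setter failed"); }
    }

    public void Dispose()
    {
        Disposed = true;
    }
}

public static class Demo
{
    // The initialiser runs on a compiler-generated temporary *before* the
    // using statement's try-finally begins, so the throwing setter means
    // Dispose is never called.
    public static bool DisposeRanWhenInitialiserThrew()
    {
        Tracker.Disposed = false;
        try
        {
            using (Tracker tracker = new Tracker { MyProperty = "anything" })
            {
            }
        }
        catch (InvalidOperationException)
        {
        }

        return Tracker.Disposed;
    }
}
```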

FxCop will advise you of the problem with an error like the following:

CA2000 : Microsoft.Reliability : In method 'MyClass.MyMethod()', object '<>g__initLocal0' is not disposed along all exception paths. Call System.IDisposable.Dispose on object '<>g__initLocal0' before all references to it are out of scope.

The correct code is as follows:


    public class MyClass
    {
        public void MyMethod()
        {
            using (MyDisposableClass myDisposableClass = new MyDisposableClass())
            {
                myDisposableClass.MyProperty = string.Empty;
                myDisposableClass.DoSomething();
            }
        }
    }


The lesson is not to use object initialisers in combination with "using" statements.

Monday, 19 November 2012

Enterprise debt

Following on from the earlier post on Technical Debt, there exists a similar concept for the enterprise - "Enterprise Debt".

Enterprise debt has the same parallels with financial debt. You may have inefficient business processes, duplicated business processes, manual processes or a myriad of other problems with your Enterprise Architecture. These problems constitute a tax on your business, but these costs may be justified. Duplicated business processes might allow different business units to start or move in parallel. Inefficient or manual business processes might be justified because you have not had time to improve or automate them, yet they perform a critical function. You might be happy, in the short or medium term, to pay the interest on these debts to keep your enterprise functioning.

Of course, as before, you must take care not to go bankrupt. Should your enterprise debt exceed your capacity to pay down the interest, your business is finished.

One of the goals of an Enterprise Architecture initiative should be to identify your enterprise debt and seek to reduce it. An ongoing Enterprise Architecture program should also identify new enterprise debt and ensure that it is justified i.e. the "tax" that it incurs brings a larger long term benefit.

Monday, 12 November 2012

Technical debt

Technical debt is a way of describing some of the costs you can incur when building software. For example, when implementing a feature in a system you generally have two options.

  1. Implement the feature quickly, knowing that it will make future changes to the system harder.
  2. Implement the feature with more care (using time/effort) resulting in a better design and more robust implementation. This makes subsequent changes to the system easier.
There is a clear trade-off here. You can liken technical debt to financial debt.

When incurring technical debt, as with financial debt, you will have to pay "interest". In the example above, the interest you pay on quickly implementing the feature is the additional cost required to implement subsequent features. You can continue to pay the interest each time you add a feature or you can repay the principal by going back and implementing the original feature properly.

Note that technical debt, like financial debt, is sometimes useful. In the same way that you likely couldn't buy a house without going into debt, sometimes you might struggle to deliver an application or system without taking on some technical debt first. Examples of useful technical debt are as follows...
  • Quickly building features so that users or business sponsors can see functionality.
  • Quickly building features to attract more customers or satisfy existing ones.
  • Quickly building features (or a whole product!) so that you get first mover advantage over competitors.
In most circumstances the driver for taking on technical debt is time: for example, you have to deliver the software by a certain date.

However, like with financial debt, you must take care not to go bankrupt. Should the interest payments exceed your capacity to repay them, you are finished.

Should you be spending all your effort on simply paying down the interest on your debts, you will be unable to move your project or business forward. At this point your project or business will fail.

There are many examples of technical debt. Not having an automated deployment mechanism is a common one. Not having one saves you the time it takes to implement it; the trade-off is that you pay in time and effort on each occasion you deploy your application, because you have to do the deployment manually. You can either invest the time to implement an automated deployment, and then experience a very low cost for subsequent deployments, or continue with a fixed cost for each release. Note that if the application has a short lifespan, and you will only deploy it a few times, then it's simply not worth investing in the automated deployment mechanism. Continuing the financial parallel, this is similar to renting a car versus buying one. The automated deployment is like buying the car: a large up-front cost giving a low cost for subsequent uses. Manual deployments are like renting a car: zero up-front cost but a fixed cost for each use.

Like most things in software, it's a matter of weighing up the costs and benefits. Taking on technical debt can be justified, but it should be clearly acknowledged when it is. In the vast majority of cases there should be a plan to pay off the debt.

Tuesday, 6 November 2012

SignalR included in ASP.NET Fall 2012 Update

Just what it says on the tin, the SignalR framework is included in ASP.NET Fall 2012 Update.


It's really great to see Microsoft embracing open source components, giving them first-class support (SignalR Hub classes have their own Visual Studio templates) and delivering them via NuGet.

And if you've not yet heard of SignalR, you really need to check it out.

Monday, 5 November 2012

Base 64 encoding strings

Encoding strings to base 64 is super trivial...

The code.


    using System;
    using System.Text;
 
    public static class StringExtensions
    {
        public static string Base64Encode(this string text)
        {
            byte[] bytes = new UTF8Encoding().GetBytes(text);
            string base64EncodedText = Convert.ToBase64String(bytes);
            return base64EncodedText;
        }
 
        public static string Base64Decode(this string base64EncodedText)
        {
            byte[] bytes = Convert.FromBase64String(base64EncodedText);
            string text = new UTF8Encoding().GetString(bytes);
            return text;
        }
    }


The test.


    using Microsoft.VisualStudio.TestTools.UnitTesting;
 
    [TestClass]
    public class StringExtensionsTests
    {
        [TestMethod]
        public void TestEncodeAndDecode()
        {
            const string OrginalText = "originalText";
            string encodedText = OrginalText.Base64Encode();
            Assert.AreNotEqual(OrginalText, encodedText, "The encoded text was the same as the original plain text.");
            string decodedText = encodedText.Base64Decode();
            Assert.AreEqual(OrginalText, decodedText, "The decoded text was not the same as the original plain text.");
        }
    }


And that is it.

Not sure why there aren't such methods in the Base Class Libraries (BCL).

Friday, 2 November 2012

Interrogating NServiceBus Saga data stored in RavenDB

These days, NServiceBus stores its data in RavenDB, including any saga data. Sometimes it is useful to interrogate the saga data to report on running processes.

Querying the saga data is not as simple as using a plain RavenDB DocumentStore client: you need to supply conventions so that the client can pick up the saga data.

An example is as follows:

    using System;
    using System.Linq;
 
    using NServiceBus.Persistence.Raven;
 
    using Raven.Client;
    using Raven.Client.Document;
 
    public class Example
    {
        public void GetSagaDataFromRavenDB()
        {
            using (IDocumentStore documentStore = new DocumentStore
                {
                    ConnectionStringName = "MyConnectionString",
                    ResourceManagerId = Guid.Parse("C6687DB2-764C-46A4-A3C5-15A3BA22A01A"),
                    Conventions = new DocumentConvention
                        {
                            FindTypeTagName = new RavenConventions().FindTypeTagName
                        }
                })
            {
                documentStore.Initialize();
                using (IDocumentSession documentSession = documentStore.OpenSession())
                {
                    MySagaData[] mySagaDatas =
                        (from mySagaData in documentSession.Query<MySagaData>()
                         select mySagaData).ToArray();
                    foreach (MySagaData mySagaData in mySagaDatas)
                    {
                        Console.WriteLine(mySagaData.SomeProperty);
                    }
                }
            }
        }
    }

The GUID is specific (you need to supply that one) and the "RavenConventions" come from the NServiceBus persistence API.

And that is it.

Tuesday, 30 October 2012

Windows Azure Storage URLs

Following on from the previous post about accessing the Azure Storage Emulator, you would also need to know what the actual endpoint URLs are.


The following URL formats are used for addressing resources running on Azure and on the storage emulator:

Blob Service

Emulator - http://127.0.0.1:10000/storageaccount/
Azure - http://storageaccount.blob.core.windows.net/

Queue Service

Emulator - http://127.0.0.1:10001/storageaccount/
Azure - http://storageaccount.queue.core.windows.net/

Table Service

Emulator - http://127.0.0.1:10002/storageaccount/
Azure - http://storageaccount.table.core.windows.net/

For your emulator, the "storageaccount" is always "devstoreaccount1".

The "storageaccount" value for Azure is whatever you created. The name is always lowercase and cannot contain spaces or special characters.

Monday, 29 October 2012

Access to the Windows Azure Storage Emulator

Should you wish to do any work against the Windows Azure Storage Emulator, say testing the REST API, you might be wondering what the account name and account key are.

Those details for the Storage Emulator are the same for every installation of the developer tools; they do not vary between machines.

They are published here: http://msdn.microsoft.com/en-us/library/windowsazure/gg432983.aspx

Account name:

devstoreaccount1

Account key:

Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Giving you a connection string of:

DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Though the following is also a valid connection string for the Storage Emulator:

usedevelopmentstorage=true

The individual account name and key would be of use when you don't have a connection string, like the aforementioned REST API.

Friday, 26 October 2012

A "processing" dialogue box with jQuery

Suppose you want to create something like the following:



A common scenario is when a user submits something and you want to wait for a response without refreshing the page.

This is easy!

First add two NuGet packages to your web application:

    install-package jQuery-UI
    install-package jQuery.UI.Combined

Then use the following mark-up:


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title></title>
    <link href="Content/themes/base/jquery-ui.css" rel="stylesheet" type="text/css" />
    <link href="Content/themes/base/jquery.ui.core.css" rel="stylesheet" type="text/css" />
    <link href="Content/themes/base/jquery.ui.theme.css" rel="stylesheet" type="text/css" />
    <link href="Content/themes/base/jquery.ui.dialog.css" rel="stylesheet" type="text/css" />
    <script src="Scripts/jquery-1.8.2.js" type="text/javascript"></script>
    <script src="Scripts/jquery-ui-1.9.0.js" type="text/javascript"></script>
</head>
    <body>
        <script type="text/javascript">
            $(document).ready(function () {
                $("#progressDialog").dialog({
                    autoOpen: false,
                    draggable: false,
                    modal: true,
                    resizable: false,
                    closeOnEscape: false,
                    open: progressDialogOpen
                });
            });
 
            function progressDialogOpen() {
                $(".ui-dialog-titlebar-close"this.parentNode).hide();
            }
 
            function progressDialogClose() {
                $("#progressDialog").dialog('close');
            }
 
            function doSomethingOpen() {
                $("#progressDialog").dialog('open');
                setTimeout(progressDialogClose, 3000);
                return false;
            }
 
            function doSomethingClose() {
                progressDialogClose();
            }
        </script>
        <button onclick="doSomethingOpen()">Open</button>
        <button onclick="doSomethingClose()">Close</button>
        <div id="progressDialog" title="Processing">
            <img alt="Processing..." src="Content/images/YourAnimatedProcessingImage.gif" />
        </div>
    </body>
</html>


Note that this example just puts in a time out to simulate something happening. You can replace the call to "setTimeout" with some server call.

And that is it.

Thursday, 25 October 2012

Building Windows Services with TopShelf

TopShelf is a framework you can use to build Windows services that can be installed easily. It is used by such famous products as NServiceBus. It allows you to create Windows services that you don't have to install with "installutil.exe"; the Windows service you create is self-installing.

When using TopShelf, you install your Windows service by running the following command:

    Acme.TopShelfExample.Windows.Service.exe install

Which would result in the following console output:


    Configuration Result:
    [Success] Name ExampleWorker
    [Success] DisplayName Example Worker
    [Success] Description An Example Worker.
    [Success] ServiceName ExampleWorker
    Topshelf v3.0.105.0, .NET Framework v4.0.30319.269

    Running a transacted installation.

    Beginning the Install phase of the installation.
    Installing Example Worker service
    Installing service ExampleWorker...
    Service ExampleWorker has been successfully installed.
    Creating EventLog source ExampleWorker in log Application...

    The Install phase completed successfully, and the Commit phase is beginning.

    The Commit phase completed successfully.

    The transacted install has completed.


Which is pretty much the same as you get using "installutil.exe".

To uninstall, run the following:

    Acme.TopShelfExample.Windows.Service.exe uninstall

Which would result in the following console output:


    Configuration Result:
    [Success] Name ExampleWorker
    [Success] DisplayName Example Worker
    [Success] Description An Example Worker.
    [Success] ServiceName ExampleWorker
    Topshelf v3.0.105.0, .NET Framework v4.0.30319.269


    The uninstall is beginning.
    Uninstalling ExampleWorker service
    Removing EventLog source ExampleWorker.
    Service ExampleWorker is being removed from the system...
    Service ExampleWorker was successfully removed from the system.

    The uninstall has completed.


Again, the same as you'd get using "installutil.exe".

To use TopShelf for your Windows service, create a console application and install TopShelf using NuGet:

    install-package TopShelf

The following code should get you started:


namespace Acme.TopShelfExample.Windows.Service
{
    using Topshelf;
 
    internal class Program
    {
        internal static void Main(string[] args)
        {
            HostFactory.Run(hostConfigurator =>
            {
                hostConfigurator.Service<ExampleWorker>(serviceConfigurator =>
                {
                    serviceConfigurator.ConstructUsing(name => new ExampleWorker());
                    serviceConfigurator.WhenStarted(ew => ew.Start());
                    serviceConfigurator.WhenStopped(ew =>
                        {
                            ew.Stop();
 
                            // And dispose or release any component containers (e.g. Castle) 
                            // or items resolved from the container.
                        });
                });
                hostConfigurator.RunAsLocalSystem();
                hostConfigurator.SetDescription("An Example Worker.");
                hostConfigurator.SetDisplayName("Example Worker");
                hostConfigurator.SetServiceName("ExampleWorker"); // No spaces allowed
                hostConfigurator.StartAutomatically();
            });
        }
    }
}



namespace Acme.TopShelfExample.Windows.Service
{
    using System;
    using System.Globalization;
    using System.Threading;
 
    public abstract class Worker
    {
        private Thread thread;
 
        private bool stop = true;
 
        protected Worker()
        {
            this.SleepPeriod = new TimeSpan(0, 0, 0, 10);
            this.Id = Guid.NewGuid();
        }
 
        public TimeSpan SleepPeriod { get; set; }

        public bool IsStopped { get; private set; }

        protected Guid Id { get; private set; }
 
        public void Start()
        {
            string logMessage = string.Format(CultureInfo.CurrentCulture, "Starting worker of type '{0}'."this.GetType().FullName);
            System.Diagnostics.Debug.WriteLine(logMessage);
            this.stop = false;
 
            // Multiple thread instances cannot be created
            if (this.thread == null || this.thread.ThreadState == ThreadState.Stopped)
            {
                this.thread = new Thread(this.Run);
            }
 
            // Start thread if it's not running yet
            if (this.thread.ThreadState != ThreadState.Running)
            {
                this.thread.Start();
            }
        }
 
        public void Stop()
        {
            string logMessage = string.Format(CultureInfo.CurrentCulture, "Stopping worker of type '{0}'."this.GetType().FullName);
            System.Diagnostics.Debug.WriteLine(logMessage);
            this.stop = true;
        }
 
        protected abstract void DoWork();
 
        private void Run()
        {
            try
            {
                try
                {
                    while (!this.stop)
                    {
                        this.IsStopped = false;
                        this.DoWork();
                        Thread.Sleep(this.SleepPeriod);
                    }
                }
                catch (ThreadAbortException)
                {
                    Thread.ResetAbort();
                }
                finally
                {
                    this.thread = null;
                    this.IsStopped = true;
                    string logMessage = string.Format(CultureInfo.CurrentCulture, "Stopped worker of type '{0}'."this.GetType().FullName);
                    System.Diagnostics.Debug.WriteLine(logMessage);
                }
            }
            catch (Exception e)
            {
                string exceptionMessage = string.Format(CultureInfo.CurrentCulture, "Error running the '{0}' worker. {1}", this.GetType().FullName, e);
                System.Diagnostics.Debug.WriteLine(exceptionMessage);
                throw;
            }
        }
    }
}



namespace Acme.TopShelfExample.Windows.Service
{
    using System.Diagnostics;
 
    public class ExampleWorker : Worker
    {
        protected override void DoWork()
        {
            Debug.WriteLine("Example Worker is doing something.");
        }
    }
}


And that is it.
