.net Archives - Tales of a Code Monkey
https://cymbeline.ch/tag/net/
... the adventures of a guy making software.

Lucene.Net.ObjectMapping for .Net Standard 2.0
https://cymbeline.ch/2017/12/19/lucene-net-objectmapping-net-standard-2-0/
Tue, 19 Dec 2017 21:11:58 +0000

It’s been a long time since I’ve done some work on my Lucene.Net.ObjectMapping library. Recently I accepted a pull request that added support for the 4.8 beta releases of Lucene.Net itself, but when I involuntarily needed to update one of my services to bring it up to speed with running in a Docker container, I decided that it was about time to update Lucene.Net.ObjectMapping for .Net Standard 2.0. The last time I used the library in a Docker container, ASP.NET vNext RC1 was just about to become final, so that’s a long time ago. Accordingly, there was quite a bit of work to understand the changes needed: both in .Net (and ASP.NET) between the 1.0 RC1 and the .Net Standard 2.0 releases, and also between the Lucene.Net 3.x and 4.8 releases. Luckily, the latter was largely taken care of by the pull request for the library itself. The former, however, proved a bit challenging. After all, the toolset has changed significantly.

Updated Sources

To cut a long story short, the updated sources are now available on GitHub. I decided to track it in a separate branch for better isolation. This new branch is aptly called netstandard. I’ll try to stay up-to-date with the more recent releases of Lucene.Net, and also with .Net Standard 2.0. That is, provided that I find the time for it. You may notice that the project files have become quite a bit simpler. That’s certainly one change in .Net Standard and Core that I welcome. The other is the better integration of Nuget for package referencing and package creation/pushing.

Updated Unit Tests

As a side effect, I also figured that it was going to be easier to update NUnit to the latest version, since its toolset is also well integrated with the new dotnet toolset. Since I’m doing all changes through VSCode and with building/testing/packaging in Docker containers based on the microsoft/aspnetcore-build:2 images, I wanted to keep it simple. The good thing here is that the dotnet toolset seems to offer really everything I need for this, and is surprisingly easy to handle, especially when compared to the RC1 version.
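
For reference, the entire cycle boils down to a handful of dotnet commands (shown here without any project-specific arguments; the exact invocations in my Docker-based build may differ):

dotnet restore
dotnet build
dotnet test
dotnet pack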

Updated Nuget Package

As I’ve mentioned in the beginning, I primarily made this effort because I needed a newer version of Lucene.Net with compatibility for .Net Standard 2.0. As a result, I published a new RC build as a Nuget package too. It is built on the latest Lucene.Net 4.8 beta release and currently supports only .Net Standard 2.0. If there’s a great demand for it, I’ll see if I can add support for other targets – or accept pull requests accordingly.
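
To give you an idea of what the library is about, here is a hypothetical usage sketch. The exact extension-method names and overloads are documented on the project page mentioned at the end of this post, so treat the calls here as illustrative rather than as the definitive API:

public class Book
{
    public string Title { get; set; }
    public int Year { get; set; }
}

// Hypothetical sketch: map a plain object to a Lucene.Net Document and back.
Book book = new Book { Title = "Moby Dick", Year = 1851 };

Document doc = book.ToDocument();     // object -> Lucene.Net Document
Book copy = doc.ToObject<Book>();     // Document -> typed object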

Conclusions

Nothing much besides the obvious: .Net Standard seems to be in good shape with respect to libraries and toolset, as well as support on Linux. There are a few gotchas, but overall nothing much of a problem. Lucene.Net itself is still somewhat badly documented, and the tracking of breaking changes between major/minor versions (and in fact also between revisions/beta releases of the same major/minor version) could be greatly improved. Online documentation would be very useful – maybe it exists, and I just haven’t found it? In any case, skimming through the Lucene.Net sources on GitHub works too, though it is much slower.

You can find more information about object mapping for Lucene.Net on the Lucene.Net.ObjectMapping page.

Writing to Event Log — the right way
https://cymbeline.ch/2014/04/27/writing-to-the-event-log-the-right-way/
Sun, 27 Apr 2014 18:04:07 +0000

This one’s been on my mind for a long time. I know it’s very tempting to just use System.Diagnostics.EventLog.WriteEntry to write some string to the event log. But personally I never liked the fact that this writes all the static text along with the variable parts, like the actual error messages. Why make your life harder when analyzing events later on if there’s an easy way to fix that?
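
For clarity, this is the kind of call I mean; everything, static text and variable parts alike, ends up in one opaque string (the source name and event ID here are made up for the example):

// The quick-and-dirty way: one big string per event.
// "ex" stands for the caught exception.
EventLog.WriteEntry(
    "MyService",
    "MyService encountered an unhandled exception: " + ex.Message,
    EventLogEntryType.Error,
    1003);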

Instrumentation Manifests to the Rescue!

For a while now this has actually been quite easy, using instrumentation manifests. You can read more about it here: http://msdn.microsoft.com/en-us/library/windows/desktop/dd996930(v=vs.85).aspx. These manifests allow you to define events, templates for events, messages for events, even your own event channels (so you no longer need to log to that crowded “Application” channel) and a lot more. But let’s look at a little example.

<?xml version="1.0" encoding="utf-8"?>
<instrumentationManifest xsi:schemaLocation="http://schemas.microsoft.com/win/2004/08/events eventman.xsd" xmlns="http://schemas.microsoft.com/win/2004/08/events" xmlns:win="http://manifests.microsoft.com/win/2004/08/windows/events" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:trace="http://schemas.microsoft.com/win/2004/08/events/trace">
    <instrumentation>
        <events>
            <provider name="MyService" guid="{DDB3FC6E-6CC4-4871-9F27-88C1B1F19BBA}" symbol="TheEventLog"
                      message="$(string.MyService.ProviderMessage)"
                      resourceFileName="MyService.Events.dll"
                      messageFileName="MyService.Events.dll"
                      parameterFileName="MyService.Events.dll">
                <events>
                    <event symbol="ServiceStarted" version="0" channel="Application"
                           value="1000" level="win:Informational"
                           message="$(string.MyService.event.1000.message)" />
                    <event symbol="ServiceStopped" version="0" channel="Application"
                           value="1001" level="win:Informational"
                           message="$(string.MyService.event.1001.message)"/>
                    <event symbol="ServiceConfigurationError" version="0" channel="Application"
                           value="1002" level="win:Error" template="ServiceException"
                           message="$(string.MyService.event.1002.message)"/>
                    <event symbol="ServiceUnhandledException" version="0" channel="Application"
                           value="1003" level="win:Error" template="ServiceException"
                           message="$(string.MyService.event.1003.message)"/>
                </events>
                <levels/>
                <channels>
                    <importChannel name="Application" chid="Application"/>
                </channels>
                <templates>
                    <template tid="ServiceException">
                        <data name="Exception" inType="win:UnicodeString" outType="xs:string"/>
                    </template>
                </templates>
            </provider>
        </events>
    </instrumentation>
    <localization>
        <resources culture="en-US">
            <stringTable>
                <string id="level.Informational" value="Information"/>
                <string id="level.Error" value="Error"/>
                <string id="channel.Application" value="Application"/>

                <string id="MyService.ProviderMessage"
                        value="My Windows Service"/>

                <string id="MyService.event.1000.message"
                        value="My Windows Service has started."/>
                <string id="MyService.event.1001.message"
                        value="My Windows Service has stopped."/>
                <string id="MyService.event.1002.message"
                        value="My Windows Service encountered a problem with its configuration. Please fix these issues and start the service again.:%n%n%1"/>
                <string id="MyService.event.1003.message"
                        value="My Windows Service encountered an unhandled exception:%n%n%1"/>
            </stringTable>
        </resources>
    </localization>
</instrumentationManifest>

Let’s start at the top. The provider element defines some basic information about this instrumentation provider, like a name, a unique ID and a symbol (which will come in handy later). We can also define a friendly name for events logged this way (i.e. the event source). Let’s ignore the three xyzFileName attributes for now. In the nested events element we’re defining four events, some of them informational (like “the service started” or “the service stopped”), some errors (e.g. configuration errors, or unhandled exceptions). If we wanted to define our own channel, we’d do so in the channels element; for now we’re just re-using (i.e. importing) the pre-defined “Application” channel.

Event Templates

Event templates are particularly handy if you want to write parameters with your events. The templates element defines a template which has exactly one parameter, which happens to be a Unicode string; we’ll use it to store exceptions. We can define more than one parameter and there are a lot of types to choose from, but I’ll let you explore those on your own. This template, as you can see, is referred to by the two events with IDs 1002 and 1003.

Resources

The localization gods are with us too. Our event and template definitions so far were abstract; they contained no actual UI strings. We can define those per language in the localization section. In the resources element and its sub-elements, we define the actual strings we want to show, including any parameters. Parameters are numbered (1-based) and referred to with %1, %2, %3 and so on. As you can see in the strings for the two error events (1002 and 1003), each uses one parameter (“%1”) to hold the exception message. If you want line breaks, you achieve those with “%n”.

Compile, with some Sugar added

So now we have a fancy manifest, but what can we do with it? Well, eventually we want to log events using the definitions from this manifest, so let’s get to it. The Windows SDK comes with two very handy tools, MC.exe (the message compiler) and RC.exe (the resource compiler). We’ll use the first to compile the manifest — and generate some C# code as a side effect — then use the second to compile the output of the first into a resource which can be linked into an executable. The commands are as follows.

mc.exe -css MyService.Events manifest.man -r obj\Debug
rc.exe obj\Debug\manifest.rc

MC.exe was nice enough to generate a file called manifest.cs for us. That file contains some code that you can use to log every event you defined in the manifest. This is why it was so handy to define the events (and templates): depending on how many parameters an event’s template has, the generated methods will ask you to provide just as many (typed) values for those parameters. Isn’t that great?! You’ll also find the compiled manifest.res file in obj\Debug. You can link that into its own executable (or your main assembly too, if you wanted), as follows:

csc.exe /out:MyService.Events.dll /target:library /win32res:obj\Debug\manifest.res

And you have a satellite assembly which holds the manifest you’ve built! CSC will log a warning about missing source files (because you didn’t add any .cs files to be compiled) but so far that doesn’t hurt anyone. We could probably also use link.exe but so far the C# compiler does a nice enough job.

Use that generated Code

Remember the code that was generated for us by MC.exe? Let’s go ahead and use it.

// ...
TheEventLog.EventWriteServiceStarted();
// ...
TheEventLog.EventWriteServiceConfigurationError(exception.Message); // ... or log the entire exception, including stack traces.
// ...

Wasn’t that very easy?

Install the Event Provider

There’s still something missing though: we’ll need to install our instrumentation/event provider with the system. It’s similar to creating the event source (which in fact will happen automatically when installing the manifest). This will typically happen in your application’s/service’s installer, using a command line as follows. But before that, remember the xyzFileName attributes we talked about? These need to be updated to point to the full path of the MyService.Events.dll assembly we generated. Otherwise the following command is going to fail.

wevtutil.exe im path\to\my\manifest.man

From now on, when your app or service starts and logs those events, they’ll show up in the event viewer. For the two events we defined with parameters, the parameter values are essentially the only thing stored along with the ID of the event. Likewise, they’ll be the only thing exported with the event — so the exported event files you ask your customers to send you will be a lot smaller and won’t contain the static parts of the events you already know anyway!

To uninstall the manifest, just run this command:

wevtutil.exe um path\to\my\manifest.man

Both commands need to run elevated (particularly important to remember when writing your installer).

Next Steps

As a next step, you’ll probably want to add the manual steps of compiling the manifest and linking it into the satellite assembly to the project file as automated targets. I’ll likely write another post about that in the future too.
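
Until that post materializes, here is a rough sketch of what such a target could look like. The target name and paths are placeholders, and mc.exe/rc.exe must be reachable on the PATH (or via the Windows SDK tool directories):

<!-- Sketch only: compile the manifest as part of the build. -->
<Target Name="CompileEventManifest" BeforeTargets="BeforeBuild"
        Inputs="manifest.man"
        Outputs="$(IntermediateOutputPath)manifest.res">
  <Exec Command="mc.exe -css MyService.Events manifest.man -r $(IntermediateOutputPath)" />
  <Exec Command="rc.exe $(IntermediateOutputPath)manifest.rc" />
</Target>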

Summary

As you can see, writing a manifest, compiling it and using the generated code to write to the event log is quite easy. So no more excuses for writing each event as one big string (which can be a lot harder to analyze when the events come back from your customers, because you first need to parse the strings).

Gzip Encoding an HTTP POST Request Body
https://cymbeline.ch/2014/03/16/gzip-encoding-an-http-post-request-body/
Sun, 16 Mar 2014 17:30:35 +0000

I was wondering how difficult it was to Gzip-compress the body of an HTTP POST request (or any HTTP request with a body, that is), for large request bodies. While the .Net HttpClient has supported compression of response bodies for a while, it appears that to this day there is no out-of-the-box support for encoding the body of a request. Setting aside for now that the server may not natively support Gzip-compressed request bodies, let’s look at what we need to do to support this on the client side.

Enter HttpMessageHandler

The HttpMessageHandler abstract base class and its derived classes are used by the HttpClient class to asynchronously send HTTP requests and receive the response from the server. But since we don’t actually want to send the message ourselves – just massage the body and headers a little bit before sending – we’ll derive a new class GzipCompressingHandler from DelegatingHandler so we can delegate sending (and receiving) to another handler and just focus on the transformation of the content. So here’s what that looks like.

public sealed class GzipCompressingHandler : DelegatingHandler
{
    public GzipCompressingHandler(HttpMessageHandler innerHandler)
    {
        if (null == innerHandler)
        {
            throw new ArgumentNullException("innerHandler");
        }

        InnerHandler = innerHandler;
    }

    protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        HttpContent content = request.Content;

        if (request.Method == HttpMethod.Post)
        {
            // Wrap the original HttpContent in our custom GzipContent class.
            // If you want to compress only certain content, make the decision here!
            request.Content = new GzipContent(request.Content);
        }

        return base.SendAsync(request, cancellationToken);
    }
}

As you can see, all we’re doing is just wrapping the original HttpContent in our GzipContent class. So let’s get right to that.

Gzip-compressed HttpContent: GzipContent

We’re almost there, all we need to do is actually compressing the content and modify the request headers to indicate the new content encoding.

internal sealed class GzipContent : HttpContent
{
    private readonly HttpContent content;

    public GzipContent(HttpContent content)
    {
        this.content = content;

        // Keep the original content's headers ...
        foreach (KeyValuePair<string, IEnumerable<string>> header in content.Headers)
        {
            Headers.TryAddWithoutValidation(header.Key, header.Value);
        }

        // ... and let the server know we've Gzip-compressed the body of this request.
        Headers.ContentEncoding.Add("gzip");
    }

    protected override async Task SerializeToStreamAsync(Stream stream, TransportContext context)
    {
        // Open a GZipStream that writes to the specified output stream.
        using (GZipStream gzip = new GZipStream(stream, CompressionMode.Compress, true))
        {
            // Copy all the input content to the GZip stream.
            await content.CopyToAsync(gzip);
        }
    }

    protected override bool TryComputeLength(out long length)
    {
        length = -1;
        return false;
    }
}

Easy, right? Of course you could add other supported compression algorithms, using more or less the same code (or even adding some abstraction for different compression algorithms), but this is basically all that’s required.
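
For completeness, since the post doesn't show the wiring: the handler is plugged into an HttpClient as its pipeline, with the standard HttpClientHandler as the innermost handler. A minimal sketch (assuming this code runs in an async method, and using an example URL):

// Chain the compressing handler in front of the default handler.
HttpClient client = new HttpClient(
    new GzipCompressingHandler(new HttpClientHandler()));

// The body of this POST request is now Gzip-compressed on the way out.
HttpResponseMessage response = await client.PostAsync(
    "http://example.com/api/data",
    new StringContent("{ \"hello\": \"world\" }", Encoding.UTF8, "application/json"));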

Summary

Using the HttpMessageHandler and its associated classes makes it extremely easy to apply transformations to all (or a well-defined subset of) the HTTP requests you’re sending. In this case, we’re applying Gzip-compression to the bodies of all outgoing POST requests, but the logic to decide when to compress can be as customized as you want; you could even apply Gzip-compression only if the requested URI ends with “.gzip” or only for certain content types.

Dynamic AES Key Exchange Through RSA Encryption
https://cymbeline.ch/2014/02/28/dynamic-aes-key-exchange-through-rsa-encryption/
Fri, 28 Feb 2014 17:30:09 +0000

I wanted to prototype an encrypted communication channel between a client and a server. Now of course there are HTTPS and other TLS channels that work quite well, but what I have in mind is supposed to be used to transfer rather sensitive data. So how can I establish a secure channel through an HTTP/HTTPS channel?

  1. Have the server generate an RSA key pair and send the public key to the client.
  2. Have the client generate an AES key, encrypt it with the received public key, and send the encrypted key to the server.
  3. Let the server decrypt the AES key.
  4. Both the client and the server are now in possession of the same AES key and can therefore communicate securely.

Of course, the generated AES key should only be used for the communication with the one client which sent it, so some sort of secure key management on the server (also regarding the RSA key pair) is vital. Also, the AES key could periodically be updated (i.e. a new key generated). At the very least, every message sent back and forth encrypted with AES will have to use a separate IV — but naturally that IV could be part of the transmitted message. So let’s get a very basic REST API-based implementation going.

Generate RSA key-pair on the Server

[...]

public sealed class SessionKey
{
    public Guid Id;
    public byte[] SymmetricKey;
    public RSAParameters PublicKey;
    public RSAParameters PrivateKey;
}

[...]

private Dictionary<Guid, SessionKey> sessionKeys;

[...]

public RSAParameters Generate(Guid sessionId)
{
    // NOTE: Make the key size configurable.
    using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(2048))
    {
        SessionKey s = new SessionKey()
        {
            Id = sessionId,
            PublicKey = rsa.ExportParameters(false /* no private key info */),
            PrivateKey = rsa.ExportParameters(true /* with private key info */),
            SymmetricKey = null, // To be generated by the client.
        };

        sessionKeys.Add(sessionId, s);

        return s.PublicKey;
    }
}

[...]

This key generation can then be used to generate a new RSA key pair whenever a new client connects and requests secure communication. Of course, make sure you send the public key back to the client, and not the private key — else there’s no point in encrypting in the first place.

Generate an AES key on the Client

[...]

// Get the Public Key from the Server
RSAParameters publicKey = GetFromServer(...);

// Holds the current session's key.
byte[] MySessionKey;

// Send encrypted session key to Server.
SendToServer(GenerateAndEncryptSessionKey(publicKey));

[...]

private byte[] GenerateAndEncryptSessionKey(RSAParameters publicKey)
{
    using (Aes aes = Aes.Create())
    {
        aes.KeySize = aes.LegalKeySizes[0].MaxSize;
        // Setting the KeySize generates a new key, but if you're paranoid, you can call aes.GenerateKey() again.

        MySessionKey = aes.Key;
    }

    using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
    {
        rsa.ImportParameters(publicKey);

        return rsa.Encrypt(MySessionKey, true /* use OAEP padding */);
    }
}

[...]

As you can see, we just take the public key we got from the server to set up the RSA provider and then encrypt the generated AES key using that public key. Once the client sends the encrypted key to the server, they both share the same secret and can securely communicate with each other.

Decrypt AES Key on the Server

[...]

public void SetSymmetricKey(Guid id, byte[] encryptedKey)
{
    SessionKey session = sessionKeys[id];

    using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
    {
        rsa.ImportParameters(session.PrivateKey);

        session.SymmetricKey = rsa.Decrypt(encryptedKey, true /* use OAEP padding */);
    }
}

[...]

Since we already have the private key for this session, we can just use it to decrypt the AES key we got from the client. Again, making sure that the stored symmetric key is safe is key to security.

Encrypt / Decrypt

Encrypting and decrypting can now be done the same way on both sides (since we’re using a symmetric-key algorithm). So here’s what that looks like.

[...]

public byte[] EncryptData(byte[] key, string data)
{
    using (Aes aes = Aes.Create())
    {
        byte[] result;

        aes.Key = key;
        aes.GenerateIV();

        using (ICryptoTransform encryptor = aes.CreateEncryptor())
        {
            using (MemoryStream ms = new MemoryStream())
            {
                using (CryptoStream cs = new CryptoStream(ms, encryptor, CryptoStreamMode.Write))
                {
                    using (StreamWriter writer = new StreamWriter(cs))
                    {
                        writer.Write(data);
                    }
                }

                byte[] encrypted = ms.ToArray();
                result = new byte[aes.BlockSize / 8 + encrypted.Length];

                // Result is built as: IV (plain text) + Encrypted(data)
                Array.Copy(aes.IV, result, aes.BlockSize / 8);
                Array.Copy(encrypted, 0, result, aes.BlockSize / 8, encrypted.Length);

                return result;
            }
        }
    }
}

public string Decrypt(byte[] key, byte[] data)
{
    using (Aes aes = Aes.Create())
    {
        aes.Key = key;

        // Extract the IV from the data first.
        byte[] iv = new byte[aes.BlockSize / 8];
        Array.Copy(data, iv, iv.Length);
        aes.IV = iv;

        // The remainder of the data is the encrypted data we care about.
        byte[] encryptedData = new byte[data.Length - iv.Length];
        Array.Copy(data, iv.Length, encryptedData, 0, encryptedData.Length);

        using (ICryptoTransform decryptor = aes.CreateDecryptor())
        {
            using (MemoryStream ms = new MemoryStream(encryptedData))
            {
                using (CryptoStream cs = new CryptoStream(ms, decryptor, CryptoStreamMode.Read))
                {
                    using (StreamReader reader = new StreamReader(cs))
                    {
                        return reader.ReadToEnd();
                    }
                }
            }
        }
    }
}

[...]

As you can see, each time we encrypt something we generate a new IV, which we send at the beginning of the data to the other side. The other side then extracts the IV first and uses it to initialize AES.

REST APIs?

Using all this through REST APIs is trivial: All you really need to make sure is that the client sends the session GUID (or whatever you use to identify a session) with every encrypted message, either through the URL, parameters or headers. Of course it is vital to guarantee that a client cannot get access to another client’s session (e.g. to provide a new session key), but through ordinary (secure) authentication that should easily be doable.
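
On the client side, for example, the session identifier could simply travel in a custom header on every call. A small sketch; the header name, URL and variable names are purely illustrative:

// client is an HttpClient; sessionId and encryptedPayload come from the code above.
// "X-Session-Id" is an arbitrary header name chosen for this example.
client.DefaultRequestHeaders.Add("X-Session-Id", sessionId.ToString());

// The payload is the output of EncryptData(...), sent Base64-encoded.
HttpResponseMessage response = await client.PostAsync(
    "https://example.com/api/messages",
    new StringContent(Convert.ToBase64String(encryptedPayload)));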

Next Steps

As far as encryption is concerned, this should already do the trick. You may want to add signatures to the encrypted messages too, to make sure that the encrypted blocks have not been tampered with. In addition, the AES key exchange could be repeated periodically (maybe even after every exchanged message).
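
A simple way to add such a signature is an HMAC over the IV + ciphertext, computed with a separate MAC key that is derived from (or exchanged alongside) the AES key. A minimal sketch using HMACSHA256:

public byte[] Sign(byte[] macKey, byte[] ivAndCiphertext)
{
    // Append the HMAC to the message; the receiver recomputes it and compares
    // (ideally in constant time) before decrypting anything.
    using (HMACSHA256 hmac = new HMACSHA256(macKey))
    {
        byte[] mac = hmac.ComputeHash(ivAndCiphertext);
        byte[] result = new byte[ivAndCiphertext.Length + mac.Length];

        Array.Copy(ivAndCiphertext, result, ivAndCiphertext.Length);
        Array.Copy(mac, 0, result, ivAndCiphertext.Length, mac.Length);

        return result;
    }
}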

A Basic Framework for Using Performance Counters with .Net Applications
https://cymbeline.ch/2010/07/03/a-basic-framework-for-using-performance-counters-with-net-applications/
Sat, 03 Jul 2010 12:45:00 +0000

In my last post I promised to tell you more about the performance counters. So here we are: today I’m going to tell you about how to use the performance counter infrastructure offered by Windows in a .net application. On top of this I’m providing you with a basic framework that you can use to author performance counters through XML and then get the code to read/write the counters generated for you at build time. So let’s get started.

Some Basics

Before I go off to the code, here’s a little overview of performance counters in Windows. I’m sure if you’re reading this, you already know about Perfmon.exe, a nice little tool to look at various performance aspects of Windows machines. When plotting performance counters, you typically add the counters you’re interested in. These counters are grouped in categories, for instance Processor or PhysicalDisk. Each category can contain multiple counters, for instance Disk Read Bytes/sec or Disk Write Bytes/sec. And finally, each counter can have multiple instances; for instance on multi-processor machines, you’ll find one instance of the % Idle Time counter per processor.

What tools like Perfmon.exe do is to grab the values of the counters you selected every second (by default, but that can typically be changed) and record/plot the values. Your job here is to build performance counters that measure certain aspects of your application so these tools can help you in analyzing your performance. And that’s why you’re here, right?

Step 1: The XML Schema

So that we know what we’re talking about in the XML I announced, let me start with the XML schema. I use it mainly to make sure that the XML declaring the perf counters makes sense and is legal input, and of course because Visual Studio tells you that something’s wrong when the schema is not adhered to.

<?xml version="1.0" encoding="utf-8"?>
<xs:schema targetNamespace="urn:Cymbeline.Diagnostics.PerfCounters"
           xmlns="urn:Cymbeline.Diagnostics.PerfCounters"
           xmlns:tns="urn:Cymbeline.Diagnostics.PerfCounters"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           attributeFormDefault="unqualified"
           elementFormDefault="qualified">
  <xs:element name="PerfCounters">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Category" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="Counter" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:attribute name="Name" type="xs:string" use="required" />
                  <xs:attribute name="Symbol" type="tns:Symbol" use="required" />
                  <xs:attribute name="Type" use="required">
                    <xs:simpleType>
                      <xs:restriction base="xs:string">
                        <xs:enumeration value="NumberOfItems32" />
                        <xs:enumeration value="RateOfCountsPerSecond32" />
                        <xs:enumeration value="RawFraction" />
                      </xs:restriction>
                    </xs:simpleType>
                  </xs:attribute>
                  <xs:attribute name="Help" type="xs:string" use="required" />
                </xs:complexType>
              </xs:element>
            </xs:sequence>
            <xs:attribute name="Name" type="xs:string" use="required" />
            <xs:attribute name="Symbol" type="tns:Symbol" use="required" />
            <xs:attribute name="Help" type="xs:string" use="required" />
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:simpleType name="Symbol">
    <xs:restriction base="xs:token">
      <xs:pattern value="[a-zA-Z_][\w_]*" />
    </xs:restriction>
  </xs:simpleType>
</xs:schema>

In the restriction for the Type attribute you find a few enumeration values. These map to the values defined in the PerformanceCounterType Enumeration. This is also where you can add support for more performance counter types when you need it. The other elements and attributes are used to describe the category for the performance counters and the performance counters themselves, including the help text that will show up in Perfmon.exe and also including the symbol that you can use in the code to access the counter.

Step 2: Declaring the Performance Counters

Now let’s use the schema we built above. To give you an idea of the context, I’m providing here some of the XML I use to build the perf counters for my SMTP server.

<?xml version="1.0" encoding="utf-8" ?>
<PerfCounters xmlns="urn:Cymbeline.Diagnostics.PerfCounters">
  <Category Name="CymbeMail" Symbol="CymbeMail" Help="CymbeMail SMTP Server v1">
    <Counter Name="# Total Connections"
         Symbol="TotalConnections"
         Type="NumberOfItems32"
         Help="The total number of connections since the server was started." />
    <Counter Name="# Total Connections Refused"
         Symbol="TotalRefusedConnections"
         Type="NumberOfItems32"
         Help="The total number of connections refused since the server was started." />
    <Counter Name="# Total Connections with Errors"
         Symbol="TotalErroneousConnections"
         Type="NumberOfItems32"
         Help="The total number of connections which reported errors since the server was started." />
    <Counter Name="# Active Connections"
         Symbol="ActiveConnections"
         Type="NumberOfItems32"
         Help="The number of currently active connections." />
    <Counter Name="# Authorization Records"
         Symbol="AuthorizationRecords"
         Type="NumberOfItems32"
         Help="The number of authorization records currently kept in the server." />
    <Counter Name="# Connections/sec"
         Symbol="ConnectionsPerSec"
         Type="RateOfCountsPerSecond32"
         Help="The number of connections per second." />
    <Counter Name="# Refused Connections/sec"
         Symbol="RefusedConnectionsPerSec"
         Type="RateOfCountsPerSecond32"
         Help="The number of connections refused per second." />
  </Category>
</PerfCounters>

As you can see, I create seven perf counters, most of them ordinary counters that count the number of occurrences of a certain event (like when a client makes a connection to the server). I actually also use the NumberOfItems32 type to count the number of active connections — I basically increment the counter when a connection is established and decrement it when the connection is terminated. And I have a couple of counters which count the number of occurrences per second (the RateOfCountsPerSecond32 counter type). The good thing about these counters is that you don’t need to provide your own counter base (check out MSDN for more info on this, starting with the above-mentioned PerformanceCounterType Enumeration).

Step 3: Building the XSLT to Generate Code

We already have our perf counters declared, so let’s generate some code from that XML. Using XSL stylesheets makes this very easy: we’ll use the XML as input and get C# code as output that we can simply include in our project. I won’t show the full XSLT here (it’s about 240 lines).

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:msxsl="urn:schemas-microsoft-com:xslt"
        xmlns:tns="urn:Cymbeline.Diagnostics.PerfCounters"
        exclude-result-prefixes="msxsl">
  <xsl:output method="text" indent="no"/>

  <xsl:param name="TargetNamespace" />
  <xsl:param name="TargetClassName" select="'PerfCounters'" />
  <xsl:param name="AccessModifier" select="'public'" />

  <xsl:template match="tns:PerfCounters">
    <xsl:if test="$TargetNamespace=''">
      <xsl:message terminate="yes">
        The Parameter 'TargetNamespace' is undefined.
      </xsl:message>
    </xsl:if>
    <!-- ... -->
  </xsl:template>

  <xsl:template match="tns:Counter" mode="Counter">
    <xsl:text>
      private static PerformanceCounter _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>;

      </xsl:text><xsl:value-of select="$AccessModifier"/><xsl:text> static PerformanceCounter </xsl:text><xsl:value-of select="@Symbol"/><xsl:text>
      {
        get
        {
          MakeSureCountersAreInitialized();
          return _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>;
        }
      }
</xsl:text>
  </xsl:template>

  <xsl:template match="tns:Counter[@Type='RawFraction']" mode="Counter">
    <xsl:text>
      private static PerformanceCounter _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>;

      </xsl:text><xsl:value-of select="$AccessModifier"/><xsl:text> static PerformanceCounter </xsl:text><xsl:value-of select="@Symbol"/><xsl:text>
      {
        get
        {
          MakeSureCountersAreInitialized();
          return _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>;
        }
      }

      private static PerformanceCounter _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>Base;

      </xsl:text><xsl:value-of select="$AccessModifier"/><xsl:text> static PerformanceCounter </xsl:text><xsl:value-of select="@Symbol"/><xsl:text>Base
      {
        get
        {
          MakeSureCountersAreInitialized();
          return _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>Base;
        }
      }
</xsl:text>
  </xsl:template>

  <xsl:template match="tns:Counter" mode="InitCounter">
    <xsl:text>
        _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text> = new PerformanceCounter(
          CategoryName,
          "</xsl:text><xsl:value-of select="@Name"/><xsl:text>",
          false);
</xsl:text>
  </xsl:template>

  <xsl:template match="tns:Counter[@Type='RawFraction']" mode="InitCounter">
    <xsl:text>
        _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text> = new PerformanceCounter(
          CategoryName,
          "</xsl:text><xsl:value-of select="@Name"/><xsl:text>",
          false);

        _</xsl:text><xsl:value-of select="@Symbol"/><xsl:text>Base = new PerformanceCounter(
          CategoryName,
          "</xsl:text><xsl:value-of select="@Name"/><xsl:text> Base",
          false);
</xsl:text>
  </xsl:template>

  <xsl:template match="tns:Counter" mode="CreateCounter">
      <xsl:text>
      {
        CounterCreationData counter = new CounterCreationData();

        counter.CounterName = "</xsl:text><xsl:value-of select="@Name" /><xsl:text>";
        counter.CounterHelp = "</xsl:text><xsl:value-of select="@Help" /><xsl:text>";
        counter.CounterType = PerformanceCounterType.</xsl:text><xsl:value-of select="@Type" /><xsl:text>;

        counters.Add(counter);
      }
</xsl:text>
  </xsl:template>

  <xsl:template match="tns:Counter[@Type='RawFraction']" mode="CreateCounter">
      <xsl:text>
      {
        CounterCreationData counter = new CounterCreationData();

        counter.CounterName = "</xsl:text><xsl:value-of select="@Name" /><xsl:text>";
        counter.CounterHelp = "</xsl:text><xsl:value-of select="@Help" /><xsl:text>";
        counter.CounterType = PerformanceCounterType.</xsl:text><xsl:value-of select="@Type" /><xsl:text>;

        CounterCreationData counterBase = new CounterCreationData();

        counterBase.CounterName = "</xsl:text><xsl:value-of select="@Name" /><xsl:text> Base";
        counterBase.CounterHelp = "</xsl:text><xsl:value-of select="@Help" /><xsl:text>";
        counterBase.CounterType = PerformanceCounterType.RawBase;

        counters.AddRange(new CounterCreationData[]{counter, counterBase});
      }
</xsl:text>
  </xsl:template>

  <!-- ... -->
</xsl:stylesheet>

At the top of the stylesheet you’ll find the declaration of the parameters which you can use to change the namespace, the class name and the access modifiers (public vs. internal, really) for the generated classes.

Next you find two different templates in mode “Counter”: one for general counters and one for the RawFraction type counters. I use this to automatically generate the base counter so you won’t have to worry about it. You can do similar things for other counters which need a base counter. The templates in mode “InitCounter” then initialize the counters, again with a specialization for the RawFraction counter types. And finally, the templates in mode “CreateCounter” contain the code to create and set up the CounterCreationData which is used to register the perf counters.

You can find the full code in the ZIP file linked to at the end of this post. With this we basically have all the tools we need to actually generate the code as part of the build. We just need to bring the pieces together.

Step 4: Bringing it Together aka Updating the Project

Well yes, I have silently assumed that you have a C# project (.csproj, or any other project that is supported by MSBuild.exe) that you want to update. All we need to do in the .csproj is add the files we authored, then run the XSLT transformation and add the output .cs file to the list of source code files as well. And here’s how you can do that.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
  <!-- ... -->
  <ItemGroup>
    <PerfCounterXml Include="Diag\PerfCounters.xml">
      <OutputCs>Diag\PerfCounters.Generated.cs</OutputCs>
      <Parameters>
        <Parameter Name="TargetNamespace" Value="$(RootNamespace).Diag" />
      </Parameters>
      <SubType>Designer</SubType>
    </PerfCounterXml>
    <PerfCounterXslt Include="Diag\PerfCounters.xslt" />
    <Compile Include="Diag\PerfCounters.Generated.cs">
      <PerfCounters>true</PerfCounters>
      <AutoGen>true</AutoGen>
      <DependentUpon>PerfCounters.xml</DependentUpon>
    </Compile>
  </ItemGroup>
  <ItemGroup>
    <None Include="Diag\PerfCounters.xsd">
      <SubType>Designer</SubType>
    </None>
  </ItemGroup>
  <!-- XSL Transform for Perf Counters -->
  <Target Name="GeneratePerfCounters" BeforeTargets="BeforeBuild"
      Inputs="@(PerfCounterXml);@(PerfCounterXslt)"
      Outputs="@(PerfCounterXml->'%(OutputCs)')">
    <XslTransformation XmlInputPaths="@(PerfCounterXml)"
      XslInputPath="@(PerfCounterXslt)"
      OutputPaths="@(PerfCounterXml->'%(OutputCs)')"
      Parameters="@(PerfCounterXml->'%(Parameters)')" />
  </Target>
</Project>

Add the two ItemGroup elements and the GeneratePerfCounters target shown above to the end of the project file and make sure the paths you’re using point to the places where you actually stored the files. In the Parameters metadata of the PerfCounterXml item you can see that I’m passing the target namespace parameter (called out in step 3) by concatenating the assembly’s default root namespace and “.Diag”. This way you can actually put this stuff into a .targets file that you can include in all of the projects which take advantage of this framework.

The GeneratePerfCounters target runs the XslTransformation task that comes with .Net 4.0. I also declared the inputs as being both the XML file and the XSLT file — this way, while experimenting with the XSLT, the code gets regenerated even if the XML file hasn’t changed but the XSLT has.

Using the Generated Classes

So now you have pretty much all the code you need to use these performance counters of yours. All that’s left is actually writing the code which

  1. Installs the Performance Counters
  2. Updates the Performance Counters
  3. Uninstalls the Performance Counters

#1 and #3 would probably be used by your installer and #2 by your application at runtime. Incrementing and decrementing the counters is as easy as

PerfCounters.CymbeMail.ActiveConnections.Increment();
//...
PerfCounters.CymbeMail.ActiveConnections.Decrement();

Where PerfCounters is the name of the generated class which contains all the generated perf counters, CymbeMail is the name of the class for the category (the Symbol attribute, remember?) and ActiveConnections is of course the symbol name for the counter we’re modifying. Isn’t this simple?

As for setting things up (or removing them), the relevant piece of code gets generated for you, too. So all you need to do is actually call it, potentially from a small application that you run — as I mentioned before — from your installer.

// Set up the Performance Counters
PerfCounters.Setup();

// And for the Uninstaller:
// Remove the Performance Counters
PerfCounters.Remove();
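
The generated Setup and Remove methods are not reproduced in this post. Conceptually they boil down to something like the following sketch (CategoryName and CategoryHelp stand for the generated constants; the real generated code takes care of a few more details):

// Sketch of what the generated methods roughly do.
public static void Setup()
{
    CounterCreationDataCollection counters = new CounterCreationDataCollection();
    // ... add one CounterCreationData per counter, as generated from the XML ...

    if (!PerformanceCounterCategory.Exists(CategoryName))
    {
        PerformanceCounterCategory.Create(
            CategoryName,
            CategoryHelp,
            PerformanceCounterCategoryType.SingleInstance,
            counters);
    }
}

public static void Remove()
{
    if (PerformanceCounterCategory.Exists(CategoryName))
    {
        PerformanceCounterCategory.Delete(CategoryName);
    }
}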

Of course you’ll have to make sure that the version of the generated code that’s run by the uninstaller is the same version the installer ran — else you may end up with stale categories/counters on the machine. But then again, if you’re installing through an MSI package, you get that for free … when done properly.

Summary

In just a few steps I have shown how you can build a framework to use performance counters in .Net applications. On top of its simplicity, it’s also quite easy to modify the framework and adapt it to your own needs or extend it to support more performance counter types.

And finally, as promised, here’s the ZIP file which contains the relevant pieces. Please forgive me for not adding the .csproj file — it actually contains other relevant data that I didn’t want to share. Instead I added the generated C# code file for reference.

PerfCounters.zip (4.08kb)

Managed debugging with WinDbg
https://cymbeline.ch/2010/03/29/managed-debugging-with-windbg/
Mon, 29 Mar 2010 18:43:00 +0000

This is surely no surprise for many people: you can actually debug managed code with WinDbg (or CDB, if you prefer that one) by using the SOS extension (that ships with the .Net framework) in the debugger. What’s new is that today, Microsoft released Psscor2 which is pretty much an enhanced version of SOS. Now I’m sure that you’ll soon find much information on using Psscor2 on Tom’s blog. I’m much in favor of Psscor2 myself and therefore I’ll try to post useful scripts here as well (if I can come up with some ;-)).
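
To give you an idea, a minimal session against a .Net 2.0/3.5 process looks something like this (Psscor2 offers the same commands plus quite a few more):

.loadby sos mscorwks
!threads
!clrstack
!dumpheap -stat

The first command loads SOS from the directory of the CLR loaded in the target process (mscorwks for .Net 2.0/3.5); the others list the managed threads, dump the current managed call stack and summarize the managed heap.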

Windows Authentication through Forms
https://cymbeline.ch/2009/10/11/windows-authentication-through-forms/
Sun, 11 Oct 2009 11:36:00 +0000

Let’s assume you have a web site which is exposed to the internet (i.e. to the public) but the site itself works with Windows accounts internally. If you have problems visualizing this scenario, think about a web mail interface for a mail server system which offers mail services for user accounts in Active Directory. Users inside your corporate network can use integrated Windows authentication to access this site, so they don’t really have a problem. But when they want to access the interface from outside the corporate network, or from a machine/device which doesn’t understand integrated Windows authentication, you’ll realize that it doesn’t just work like that.

So you’ll want to do authentication based on the credentials the user enters in a simple form on a web page. The challenge now is that the rest of the site (i.e. apart from the one form which does the initial authentication) will likely still work with the HttpContext.Current.User property to determine which user is currently authenticated and provide actions and data based on that identity, because you don’t want to re-implement the entire logic. The good news is, you can do that! The bad news is, you don’t just get it for free. But that’s why you came here, and that’s why I’ll try to help you with this.

Let’s first arrange a few things on the web site facing the public — you don’t necessarily need to change this on your internal site. First, the login page will need to be accessible for anyone, i.e. anonymous access must be turned on for the site. Second, from ASP.net’s point of view, all the pages which require the user to be authenticated should be in/under the same directory. If you don’t want to do this, you’ll simply have to add a location element for every page requiring authentication in your web.config, or do the opposite and add all pages which don’t require authentication in the same way. With the directories however, the basic structure can be as simple as

WebSiteRoot/
+--Default.aspx        Could redirect to /ActualSite/Default.aspx
+--ActualSite/
   +--Default.aspx     The actual home page with all its logic
+--Auth/
   +--Login.aspx       The login page

So all the logic you previously had directly in the site’s root would now live under ActualSite, or whatever name you chose.

Updating Web.config

Now for the above scenario with different directories, the web.config in the site’s root could look as follows. Please note that for the pages under ActualSite we are simply disabling anonymous access, while for everything else, we allow it. Also, in the authentication element we’re setting the mode to None because we’re not going to use any predefined authentication mechanism as is.

<configuration>
    <location path="ActualSite">
        <system.web>
            <authorization>
                <deny users="?"/>
            </authorization>
        </system.web>
    </location>

    <system.web>
        <authentication mode="None">
            <forms loginUrl="~/Auth/Login.aspx" defaultUrl="~/ActualSite/Default.aspx" />
        </authentication>

        <authorization>
            <allow users="*" />
        </authorization>
    </system.web>
</configuration>

But of course that’s not all yet. You’ll see that if you now try to look at the actual site, you get a 401 because you’re not authenticated. So let’s take a look at that.

Validating credentials

Next, let’s create the login page which the user will use to enter his credentials. Fortunately, ASP.net offers the Login control which does almost everything we need. So add that one to your login page, plus an event handler for the OnAuthenticate event of that control. Alternatively, you can also derive your own control from the Login control and override the OnAuthenticate method. This handler is where we’ll put our custom logic to check that the credentials really map to an existing Windows user. Below is the code which does that.

protected override void OnAuthenticate(AuthenticateEventArgs args)
{
    string[] parts = UserName.Split('\\');
    string password = Password;
    string username;
    string domain = null;

    args.Authenticated = false;

    if (parts.Length == 1)
    {
        username = parts[0];
    }
    else if (parts.Length == 2)
    {
        domain = parts[0];
        username = parts[1];
    }
    else
    {
        return;
    }

    if (WebAuthenticationModule.AuthenticateUser(username, domain, password))
    {
        string userData = String.Format("{0}\n{1}\n{2}",
            username, domain, password);

        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            2                           /* version */,
            username,
            DateTime.Now                /* issueDate */,
            DateTime.Now.AddMinutes(30) /* expiration */,
            true                        /* isPersistent */,
            userData,
            FormsAuthentication.FormsCookiePath);

        HttpCookie ticketCookie = new HttpCookie(
            FormsAuthentication.FormsCookieName,
            FormsAuthentication.Encrypt(ticket));

        Context.Response.Cookies.Add(ticketCookie);

        Context.Response.Redirect(
            FormsAuthentication.GetRedirectUrl(username, false), false);
    }
}

The AuthenticateUser method from WebAuthenticationModule is a wrapper around the LogonUser function from the Win32 API which will return true if the user could be logged on. So if the credentials are valid, we pass them into the FormsAuthenticationTicket via the UserData property so that later on we’ll be able to use them again. After all, we don’t want the consumers of the site to have to enter credentials for every request they’re making, right? Also, we’re encrypting the entire ticket because we’re going to send it over the wire. The Encrypt method from the FormsAuthentication class does this. However, you’ll have to make sure that the protection attribute of the forms element in the web.config is set to All, which actually is the default (but it can be inherited, so watch out!).
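
The post doesn't show WebAuthenticationModule.AuthenticateUser itself; at its core it's a thin P/Invoke wrapper around LogonUser. Here's a sketch under the assumption of an interactive logon (your wrapper may pick a different logon type):

// Sketch: the Win32 interop behind AuthenticateUser.
internal static class NativeAuth
{
    internal const int LOGON32_LOGON_INTERACTIVE = 2;
    internal const int LOGON32_PROVIDER_DEFAULT = 0;

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    internal static extern bool LogonUser(
        string userName, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool CloseHandle(IntPtr handle);
}

// Inside WebAuthenticationModule (sketch):
public static bool AuthenticateUser(string username, string domain, string password)
{
    IntPtr token;

    if (NativeAuth.LogonUser(username, domain, password,
                             NativeAuth.LOGON32_LOGON_INTERACTIVE,
                             NativeAuth.LOGON32_PROVIDER_DEFAULT,
                             out token))
    {
        // We only wanted to validate the credentials here, so close the token right away.
        NativeAuth.CloseHandle(token);
        return true;
    }

    return false;
}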

What you see is that we’re heavily using the functionality offered by the FormsAuthentication class and related classes to handle the tickets, encryption, settings, etc. This is not mandatory, but it helps a lot. Plus it’s better anyway than coming up with your own ticketing and encryption mechanisms; unless you have a degree in maths and/or cryptography, chances are it’s not as secure as you think it is.

Authenticating users

Then, we need to authenticate the user for all the requests he makes after providing the credentials. Thus, we need to add some custom logic to the AuthenticateRequest event of the HttpApplication. There are multiple ways to do that:

  • Add a file called global.asax to your site’s root
  • Create and register a new HttpModule by implementing the System.Web.IHttpModule interface and adding an entry in the httpModules section in your root’s web.config (a minimal skeleton is sketched below)
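
If you go the module route, the skeleton is short: it merely wires the two handlers shown further down into the application's pipeline (the class and module names are up to you):

public sealed class WindowsFormsAuthModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Hook up the two pipeline events handled below.
        application.AuthenticateRequest += OnAuthenticateRequest;
        application.EndRequest += OnEndRequest;
    }

    public void Dispose()
    {
        // Nothing to clean up.
    }

    // OnAuthenticateRequest and OnEndRequest as shown in the rest of this post.
}

Register the module under system.web/httpModules (or system.webServer/modules for the IIS 7 integrated pipeline) with its fully qualified type name.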

Personally, I like the approach with the custom module more, but adding this stuff to the global.asax can be done a little bit faster. In either case, make sure you can handle the AuthenticateRequest event. My code proposal for the handler is given below. I’m intentionally omitting most error handling code here.

private static void OnAuthenticateRequest(object sender, EventArgs args)
{
    HttpApplication application = sender as HttpApplication;

    HttpContext context = application.Context;
    HttpRequest request = context.Request;
    HttpResponse response = context.Response;

    if (!request.IsAuthenticated &&
        !context.SkipAuthorization)
    {
        if (request.CurrentExecutionFilePath.Equals(FormsAuthentication.LoginUrl,
                                                    StringComparison.OrdinalIgnoreCase) ||
            request.CurrentExecutionFilePath.EndsWith(".axd"))
        {
            context.SkipAuthorization = true;
        }
        else
        {
            HttpCookie cookie = request.Cookies.Get(FormsAuthentication.FormsCookieName);
            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);

            if (!ticket.Expired)
            {
                IntPtr hToken = LogonUserFromTicket(ticket);

                WindowsIdentity identity = new WindowsIdentity(
                    hToken, "Win/Forms", WindowsAccountType.Normal, true);

                context.User = new WindowsPrincipal(identity);

                if (FormsAuthentication.SlidingExpiration)
                {
                    ticket = FormsAuthentication.RenewTicketIfOld(ticket);
                    cookie.Value = FormsAuthentication.Encrypt(ticket);
                    response.Cookies.Set(cookie);
                }

                return;
            }

            FormsAuthentication.RedirectToLoginPage();
        }
    }
}

So let's go through the important things here. First we check whether the request is already authenticated or authorization is to be skipped completely; if either of those is true, we're out of the picture already. Otherwise it's our job to do the authentication. The check against FormsAuthentication.LoginUrl and the *.axd extension prevents authentication on the login page itself as well as on the *.axd handlers, which are typically used to return resources like scripts for ASP.net components. The else branch does the actual authentication: we first retrieve the cookie from the previous credential validation and decrypt it to get the ticket. If the ticket has not expired, we log the user on (the call to LogonUserFromTicket is again only a wrapper for the LogonUser function from the Win32 API; it uses the data from the UserData property of the ticket) to get the logon token, which we then pass into the WindowsIdentity constructor. That WindowsIdentity is what all the code in ActualSite will use to determine which user is making the request, so of course we need to update the context with the new identity. If sliding expiration is turned on in the web.config, we also renew the ticket. And if the ticket has expired, we don't authenticate the user but instead redirect him to the login page.
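The native pieces referenced here are not shown in the snippet above, so here's a rough sketch of what NativeAuth and LogonUserFromTicket could look like. The P/Invoke declarations for LogonUser and CloseHandle are the standard ones (they need using System.Runtime.InteropServices;); the logon type and the way the credentials are unpacked from UserData are assumptions and have to match how the ticket was created.

internal static class NativeAuth
{
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    internal static extern bool LogonUser(
        string userName, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    internal static extern bool CloseHandle(IntPtr handle);
}

private const int Logon32LogonNetwork = 3;    // LOGON32_LOGON_NETWORK
private const int Logon32ProviderDefault = 0; // LOGON32_PROVIDER_DEFAULT

private static IntPtr LogonUserFromTicket(FormsAuthenticationTicket ticket)
{
    // Assumption: UserData holds domain, user name and password separated by newlines,
    // in the order they were written when the ticket was issued.
    string[] parts = ticket.UserData.Split('\n');

    IntPtr token;
    if (!NativeAuth.LogonUser(parts[1], parts[0], parts[2],
                              Logon32LogonNetwork, Logon32ProviderDefault, out token))
    {
        throw new InvalidOperationException("Logon failed for the user from the ticket.");
    }

    return token;
}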

Cleanup

Finally, we still have a tiny problem here. We have used the LogonUser function, and according to its documentation we should call the CloseHandle function from the Win32 API on the token it returns. For the AuthenticateUser method I have already done that, since we don't need the token/handle anymore once the credentials have been verified. But what about the authentication we're doing in AuthenticateRequest? There we set the newly created WindowsIdentity (which wraps the user token) on the current HttpContext because we need it to ultimately handle the request. Once the request handler is done, however, we don't need it anymore. Luckily, there's also an event for this purpose: it's the last event in the pipeline and it's called EndRequest. So let's add the following code in the handler for that event.

private static void OnEndRequest(object sender, EventArgs args)
{
    HttpApplication application = sender as HttpApplication;

    HttpContext context = application.Context;

    if (null != context.User)
    {
        WindowsIdentity identity = context.User.Identity as WindowsIdentity;

        if (null != identity)
        {
            NativeAuth.CloseHandle(identity.Token);
        }
    }
}

Basically, all it does is check whether the request was given a WindowsIdentity and, if so, call the CloseHandle function from the Win32 API on that identity's token handle. That should do the trick, and we shouldn't leak handles anymore.
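If you go down the custom HttpModule route mentioned earlier, wiring the two handlers up is straightforward. The sketch below assumes the handlers live in the same WebAuthenticationModule class that already hosts AuthenticateUser; if you keep them elsewhere, adjust accordingly.

public class WebAuthenticationModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        // Hook the two pipeline events used in this post.
        application.AuthenticateRequest += OnAuthenticateRequest;
        application.EndRequest += OnEndRequest;
    }

    public void Dispose()
    {
        // Nothing to clean up here.
    }

    // AuthenticateUser, OnAuthenticateRequest and OnEndRequest go here, as shown above.
}

The module is then registered in the httpModules section of the root web.config with an entry along the lines of <add name="WebAuthenticationModule" type="MyNamespace.WebAuthenticationModule, MyAssembly" /> — namespace and assembly name are placeholders for your own.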

Summary

I have shown here how you can make use of the forms authentication mechanisms that come with ASP.net to do Windows authentication behind the scenes. This can be very useful when not all of your users are on systems that can do integrated Windows authentication themselves.

Please note that I don't claim this solution will work for every challenge you may be facing; it is neither complete nor suitable for every web application.

The post Windows Authentication through Forms appeared first on Tales of a Code Monkey.

]]>
Migrating ASP.NET Web Services to WCF https://cymbeline.ch/2009/08/06/migrating-asp-net-web-services-to-wcf/?utm_source=rss&utm_medium=rss&utm_campaign=migrating-asp-net-web-services-to-wcf https://cymbeline.ch/2009/08/06/migrating-asp-net-web-services-to-wcf/#comments Thu, 06 Aug 2009 14:59:00 +0000 /post/2009/08/06/Migrating-ASPNET-Web-Services-to-WCF.aspx I recently had to migrate a common ASP.NET web service over to WCF, making sure that clients of the former would still be able to use the latter. There were a couple of things I stumbled across, so I am blogging about the minimal steps I had to perform to get clients of the old … Continue reading "Migrating ASP.NET Web Services to WCF"

The post Migrating ASP.NET Web Services to WCF appeared first on Tales of a Code Monkey.

]]>
I recently had to migrate a common ASP.NET web service over to WCF, making sure that clients of the former would still be able to use the latter. There were a couple of things I stumbled across, so I am blogging about the minimal steps I had to perform to get clients of the old ASP.NET web service running with the new WCF one. Let’s use the following simple ASP.NET web service for this tiny tutorial.

[WebService(Namespace = "http://foo.bar.com/Service/Math")]
public class MathAddService : WebService
{
    [WebMethod]
    public int Add(int x, int y)
    {
        // Let's ignore overflows here ;-)
        return x + y;
    }
}

The first thing we need to do is create a new interface that offers the same methods as the web service did, and mark it as a service contract. This is required because WCF endpoints are contract-based, i.e. they need such an interface. So we extract the public web service interface of the MathAddService class and decorate it with the WCF attributes:

[ServiceContract(Namespace = "http://foo.bar.com/Service/Math")]
[XmlSerializerFormat]
public interface IMathAddService
{
    [OperationContract(Action = "http://foo.bar.com/Service/Math/Add")]
    int Add(int x, int y);
}

The ServiceContract attribute tells WCF to use the same namespace for the web service as ASP.NET did. If you don't do this, your clients will not be able to use the migrated service because the namespaces don't match. The XmlSerializerFormat attribute is used to make sure that WCF uses the standard SOAP format for messages; if you don't specify it, your clients will likely see strange error messages about mismatching operations/messages. Then, for each method you exposed in the former web service, you need to add the exact same signature here, plus make sure that the OperationContract attribute for each method has the Action property set to the old SOAP action, i.e. the service namespace followed by a slash and the method name (here http://foo.bar.com/Service/Math/Add, as in the code above). Without this, you'll get another set of exceptions along the lines of 'operation not defined'.

The next step is to implement this interface in a class, but we basically already have that in the former MathAddService class. So we just adapt the class's definition as follows.

[WebService(Namespace = "http://foo.bar.com/Service/Math")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
[ServiceBehavior(Namespace = "http://foo.bar.com/Service/Math")]
public class MathAddService : WebService, IMathAddService
{
    [WebMethod]
    public int Add(int x, int y)
    {
        // Let's ignore overflows here ;-)
        return x + y;
    }
}

As you can see, we're also adding two new attributes. The AspNetCompatibilityRequirements attribute is used to make sure that the new WCF service is really capable of serving old clients. The ServiceBehavior attribute makes sure that the WCF-hosted service really uses the correct namespace, i.e. the same one the old ASP.NET service used. By the way, you'll find all these additional attributes in the System.ServiceModel and System.ServiceModel.Activation namespaces (from the System.ServiceModel assembly).

Now let's get to the configuration of endpoints and bindings for the web service. The following block shows the new sections in the web.config file for the virtual directory that hosts the WCF service.

<configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">
    <system.web>
        <!-- ... -->
    </system.web>
    <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        <services>
            <service name="MathAddService" behaviorConfiguration="MathAddServiceBehavior">
                <endpoint address=""
                          binding="basicHttpBinding"
                          bindingConfiguration="httpsIwa"
                          bindingNamespace="http://foo.bar.com/Service/Math"
                          contract="IMathAddService"/>
            </service>
        </services>
        <bindings>
            <basicHttpBinding>
                <binding name="httpsIwa">
                    <security mode="Transport">
                        <transport clientCredentialType="Windows" />
                    </security>
                </binding>
            </basicHttpBinding>
        </bindings>
        <behaviors>
            <serviceBehaviors>
                <behavior name="MathAddServiceBehavior">
                    <serviceMetadata httpsGetEnabled="true" />
                    <serviceDebug httpsHelpPageEnabled="true" includeExceptionDetailInFaults="true" />
                </behavior>
            </serviceBehaviors>
        </behaviors>
    </system.serviceModel>
</configuration>

As you can see from the security element of the httpsIwa binding (mode="Transport" with clientCredentialType="Windows"), we are using HTTPS and IWA (Integrated Windows Authentication) for this particular binding, but you should of course configure it the same way you had it for your ASP.NET service. If you served all requests without HTTP-based authentication and without SSL/TLS, then you should stick to that so you don't break your clients :). You have to make sure that you offer at least one basicHttpBinding, because that's what most closely matches the ASP.NET SOAP interface.
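For reference, the binding for a plain HTTP, anonymous setup might look roughly like the sketch below; the binding name is a placeholder, and mode="None" also happens to be the default for basicHttpBinding.

<bindings>
    <basicHttpBinding>
        <!-- No transport security and no client authentication. -->
        <binding name="httpAnonymous">
            <security mode="None" />
        </binding>
    </basicHttpBinding>
</bindings>

You'd then point the endpoint's bindingConfiguration at httpAnonymous instead of httpsIwa.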

Finally, we add a new file called ‘MathAddService.svc’ in the virtual directory on IIS with the following contents.

<%@ ServiceHost Service="MathAddService" %>

This tells IIS to use the MathAddService class to serve requests for the IMathAddService contract. Of course, your clients will have to be updated to use the new URL now (or you can try a 302 redirect, but depending on the client's policies this may fail). In case your requests to the new SVC file produce strange results (or send you back the contents of the file shown above), make sure in the IIS administrative tools that the .svc extension is mapped properly. If it isn't, you can run the aspnet_regiis.exe tool from the .NET framework to get that done.
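To illustrate the client side, here's what a call through an old 'Add Web Reference' ASMX proxy might look like once it is pointed at the new address. The MathService namespace and the proxy class name are assumptions based on how Visual Studio typically generates such proxies.

// Old ASMX-generated proxy; only the URL changes, namespace and SOAP action stay the same.
MathService.MathAddService proxy = new MathService.MathAddService();
proxy.Url = "https://foo.bar.com/Service/MathAddService.svc";
proxy.UseDefaultCredentials = true; // needed for the Windows-authenticated binding shown above

int sum = proxy.Add(1, 2);
Console.WriteLine(sum); // prints 3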

The post Migrating ASP.NET Web Services to WCF appeared first on Tales of a Code Monkey.

]]>
https://cymbeline.ch/2009/08/06/migrating-asp-net-web-services-to-wcf/feed/ 2
User Search in AD https://cymbeline.ch/2009/02/10/user-search-in-ad/?utm_source=rss&utm_medium=rss&utm_campaign=user-search-in-ad Tue, 10 Feb 2009 22:16:00 +0000 /post/2009/02/10/User-Search-in-AD.aspx I stumbled upon the System.DirectoryServices.AccountManagement namespace this week. It was introduced with .Net 3.5 and offers functionality to perform queries on AD objects like users, groups and computers in a more comfortable way than through the DirectorySearcher class from the System.DirectoryServices namespace. To illustrate the ease of using these classes, I came up with a … Continue reading "User Search in AD"

The post User Search in AD appeared first on Tales of a Code Monkey.

]]>
I stumbled upon the System.DirectoryServices.AccountManagement namespace this week. It was introduced with .Net 3.5 and offers functionality to perform queries on AD objects like users, groups and computers in a more comfortable way than through the DirectorySearcher class from the System.DirectoryServices namespace. To illustrate the ease of using these classes, I came up with a tiny example which lists all users whose account name (the samAccountName attribute in AD) starts with an 'a'. On top of this, using LINQ it is quite simple to convert the resulting PrincipalSearchResult<Principal> collection into an IEnumerable<UserPrincipal>.

using System;
using System.Collections.Generic;
using System.DirectoryServices.AccountManagement;
using System.Linq;

namespace UserSearch
{
    class Program
    {
        static void Main(string[] args)
        {
            PrincipalContext context = new PrincipalContext(ContextType.Domain, "contoso.com");

            UserPrincipal searchFilter = new UserPrincipal(context);
            searchFilter.SamAccountName = "a*";

            PrincipalSearcher ps = new PrincipalSearcher(searchFilter);

            IEnumerable<UserPrincipal> results = from principal in ps.FindAll()
                                                 where principal is UserPrincipal
                                                 select principal as UserPrincipal;

            foreach (UserPrincipal user in results)
            {
                Console.WriteLine("User '{0}' ({1}) Info:", user.SamAccountName, user.Name);
                Console.WriteLine("    Password Set On  {0}", user.LastPasswordSet);
                Console.WriteLine("    Last Log On      {0}", user.LastLogon);
                Console.WriteLine();
            }
        }
    }
}
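As a small aside, the LINQ query above can also be written with the OfType extension method, which filters and casts in one step and should be equivalent to the query syntax used in the example:

// Equivalent to the query expression above: keep only the UserPrincipal results.
IEnumerable<UserPrincipal> results = ps.FindAll().OfType<UserPrincipal>();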

The post User Search in AD appeared first on Tales of a Code Monkey.

]]>