Convert CRT to PFX

Every time I have to rotate an SSL certificate on my web site I end up spending a few hours going through the proper steps to generate the necessary files and convert them between formats. Since I only do so once a year I never bothered documenting the process. Now, for my own sake, I will publish this post to help folks who have similar needs.

My web service is deployed to IIS, which consumes certificates in PFX format. However, my certification authority issues certificates in CRT format. The natural question is how to convert one into the other. We will be using OpenSSL for that. I prefer the Windows OpenSSL port just because most of my ecosystem is Windows.

A quick segue into nomenclature:

  • CA means Certification Authority. This is the organization that will issue the certificate.
  • CSR stands for Certificate Signing Request; this is the payload you submit to the CA to obtain a certificate.
  • KEY is the private key that comes paired with your CSR and will be included in the final PFX file.
  • PFX is the final certificate that gets uploaded to IIS.

It is absolutely critical that you do not lose or discard your CSR or KEY files after generation, because you will need them to create the final certificate.

The first thing we need to do to create an SSL certificate is generate the CSR and KEY files. For that we need to configure the OpenSSL distinguished name properties; these are the only certificate properties we need to define. Create an empty openssl.cnf file next to openssl.exe and type the following content, changing things as you go to fit your needs:

[ req ]
distinguished_name = req_distinguished_name

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = US
countryName_min = 2
countryName_max = 2

stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = My State

localityName = Locality Name (eg, city)
localityName_default = My City

organizationName = Organization Name (eg, company)
organizationName_default = My Organization

organizationalUnitName = Organizational Unit Name (eg, section)

commonName = Common Name (eg, your website’s domain name)
commonName_max = 64

emailAddress = Email Address
emailAddress_max = 40

Now we are ready to generate the necessary files for the CA (the file names below are placeholders, pick your own):

openssl req -newkey rsa:2048 -keyout mysite.key -config openssl.cnf -out mysite.csr

Make sure you end up with two files: the KEY and the CSR. If you don't have them then something went wrong and requires investigation. Next, we need to make sure that the CSR file contains exactly what we expect. There is a command for that:

openssl req -text -noout -verify -in mysite.csr

In the output of the tool look at the Subject property and pay special attention to the CN attribute. It must match your website's domain name. You will submit your CSR to the CA and wait for the certificate to be issued.

Typically CA returns certificate in 2 parts:

  • *.CRT contains the certificate itself
  • *.P7B has certification chain that establishes trust for your CA.

We do not need the P7B file, as in most cases IIS can verify chain trust with publicly installed CA certificates; this file can be discarded. We will only use the CRT file:

openssl pkcs12 -export -out mysite.pfx -inkey mysite.key -in certificate.crt

The resulting PFX file can be uploaded to the IIS server.
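Since I only do this once a year and forget, the whole sequence can be scripted. Here's a minimal Python sketch that shells out to openssl (the file names are placeholders of my choosing, not mandated by any tool):

```python
import subprocess

def openssl_commands(base: str, crt_file: str) -> list[list[str]]:
    # The three commands from this post, in order:
    # 1. generate KEY + CSR, 2. verify the CSR, 3. combine KEY + CRT into PFX
    return [
        ["openssl", "req", "-newkey", "rsa:2048",
         "-keyout", f"{base}.key", "-config", "openssl.cnf", "-out", f"{base}.csr"],
        ["openssl", "req", "-text", "-noout", "-verify", "-in", f"{base}.csr"],
        ["openssl", "pkcs12", "-export",
         "-out", f"{base}.pfx", "-inkey", f"{base}.key", "-in", crt_file],
    ]

def run_all() -> None:
    # Steps 1-2 run before submitting to the CA; step 3 runs after the CRT arrives
    for command in openssl_commands("mysite", "certificate.crt"):
        subprocess.run(command, check=True)
```

Split the execution into two runs in practice, since the CA turnaround happens between steps 2 and 3.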

author: Oleg Ignat | posted @ Tuesday, March 22, 2016 1:12 PM

How to debug HTTP 404 from WebResource.axd

I have just refactored a large portion of the web site code, which included moving resources around and in some cases renaming them. Of course such a change can't go without a glitch the first time you fire up the portal after refactoring. As expected, I immediately started hitting a few HTTP 404 responses on WebResource.axd calls. The natural question is: which resources are incorrectly referenced? Unfortunately WebResource.axd URLs are encrypted, so looking at the URL itself does no good. There's also nothing on the web about debugging specific WebResource.axd failures, so I decided to share my trick; hopefully it will save someone some time.

Embedded resource URLs look as follows:

/WebResource.axd?d=<encrypted resource identifier>&t=<assembly timestamp>
This URL is generated by System.Web.Handlers.AssemblyResourceLoader class, source code for which is available here. Let's take a look at it.

/// <devdoc>
///     Performs the actual putting together of the resource reference URL.
/// </devdoc>
private static string FormatWebResourceUrl(string assemblyName, string resourceName, long assemblyDate, bool htmlEncoded)
{
    string encryptedData = Page.EncryptString(assemblyName + "|" + resourceName);
    if (htmlEncoded)
    {
        return String.Format(CultureInfo.InvariantCulture, _webResourceUrl + "?d={0}&amp;t={1}",
            encryptedData, assemblyDate.ToString(CultureInfo.InvariantCulture));
    }
    else
    {
        return String.Format(CultureInfo.InvariantCulture, _webResourceUrl + "?d={0}&t={1}",
            encryptedData, assemblyDate.ToString(CultureInfo.InvariantCulture));
    }
}

The highlighted line indicates what's being encrypted: d carries the encrypted assembly and resource names, while t is just the assembly modification timestamp. If I could reverse engineer the value of the d parameter then I would know what is broken. Since I didn't care about the encryption process itself, rather than figuring out how to reverse engineer this URL value I looked at the Page.DecryptString implementation which is available here

/// <devdoc>
///    <para>Decrypt the string using symmetric algorithm defined in config.</para>
/// </devdoc>
internal static string DecryptString(string s)
{
    // DevDiv Bugs 137864: Hash is the default for higher compatibility because it retains
    // the behavior that a given input always results in the same output.
    return DecryptStringWithIV(s, IVType.Hash);
}

// DevDiv Bugs 137864: Add IVType overload for improved encryption semantics
// Note: Not an actual overload because this breaks the reflection that SWE 1.0 uses to call this method.
internal static string DecryptStringWithIV(string s, IVType ivType)
{
    if (s == null)
    {
        return null;
    }

    byte[] buf = HttpServerUtility.UrlTokenDecode(s);
    if (buf != null)
    {
        buf = MachineKeySection.EncryptOrDecryptData(false, buf, null, 0, buf.Length, false, false, ivType);
    }

    if (buf == null)
    {
        throw new HttpException(SR.GetString(SR.ViewState_InvalidViewState));
    }

    return Encoding.UTF8.GetString(buf);
}

If only MachineKeySection.EncryptOrDecryptData and IVType were public it would have been a bit easier to accomplish my task, but since they aren't I had to write a bit of throw-away reflection code to get to them. Here's the code that will reverse-engineer the token and spit out the assembly name and resource name in the referencedResources string. Their format will be the same as mentioned in FormatWebResourceUrl above, namely [Assembly Name]|[Resource Name].

byte[] inputArray = System.Web.HttpServerUtility.UrlTokenDecode("5r59bSNcPgH41sn0n7yZmk2vV3gfQl_ywYeTKnRzb77SVSt3KyaY6wxCRYCg5aDC39z2o-utTlN62bHX1pTCxoXR8dulDCDP_8btJcrcqErJeXkRxpQCnOhVFV-f9Gfp0");

byte[] outputArray = null;
if (inputArray != null)
{
    // Get IVType for GetMethod and Hash value for Invoke
    Type ivType = typeof(System.Web.Configuration.MachineKeySection).Assembly.GetType("System.Web.Configuration.IVType");
    object hashValue = ivType.GetEnumValues().GetValue(2);  // 2 = Hash

    // Get MachineKeySection type to call private static method on it
    Type machineKeySectionType = typeof(System.Web.Configuration.MachineKeySection);
    var encryptOrDecryptData = machineKeySectionType.GetMethod("EncryptOrDecryptData", System.Reflection.BindingFlags.Static | System.Reflection.BindingFlags.NonPublic, null, new Type[] { typeof(bool), typeof(byte[]), typeof(byte[]), typeof(int), typeof(int), typeof(bool), typeof(bool), ivType }, null);

    // Decrypt the token
    outputArray = (byte[])encryptOrDecryptData.Invoke(null, new object[] { false, inputArray, null, 0, inputArray.Length, false, false, hashValue });
}

string referencedResources = Encoding.UTF8.GetString(outputArray);

Now the job is easy. I stick this code in my Default.aspx page's Page_Load method and reverse-engineer every resource that failed on the previous try. Of course, this can be wrapped up nicely in a method and put in a better place (an error handler, for example) but that wasn't needed for my purpose. Feel free to do so if it works for you.

author: Oleg Ignat | posted @ Sunday, November 02, 2014 5:05 PM | Feedback (0)

My ID HUB - A New Kind of Online Password Manager

A few days ago a very lightweight, intuitive and yet powerful online password manager was released to the world. It is called My ID HUB. This project has been in development and testing for quite some time so you can be sure it is solid.

My ID HUB Home Page

There are a lot of password managers online, so why would anyone care to switch to another one? There are a lot of good reasons.

My ID HUB has a very high level of security

Security is no joke, especially considering how often we hear about retailers being hacked left and right. In each case customers lose their data to thieves and may have to face the consequences. If My ID HUB is hacked and the entire database stolen, it is as good as garbage without each individual user's master password. It is economically impractical to try to guess the decryption keys: if the user selected a strong password then finding a key to decrypt a record will take significant time and money. My ID HUB injects random "salt" into identities before encryption to further increase the computational cost of cracking passwords. It may well be cheaper to steal user information elsewhere.
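The post doesn't publish My ID HUB's exact key-derivation scheme, but the general idea of "strong password + random salt = expensive cracking" looks roughly like this in Python (a sketch with parameters I picked, not the service's actual code):

```python
import hashlib, os

def derive_record_key(master_password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2-HMAC-SHA256 is deliberately slow; the iteration count multiplies
    # the attacker's cost for every single password guess
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"), salt, iterations)

# A fresh random salt per record means each identity must be attacked separately
salt = os.urandom(16)
key = derive_record_key("correct horse battery staple", salt)
```

The salt is stored alongside the encrypted record; only the password stays in the user's head, so only the user can re-derive the key.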

My ID HUB uses a 2048-bit EV SSL certificate and DNSSEC to validate connection authenticity and prevent interception of already encrypted content. My ID HUB doesn't accept non-secure connections at all: a request over the HTTP protocol is redirected to HTTPS before it is serviced. DNSSEC ensures that the DNS entry of the server isn't spoofed either. As such, if the user's machine is not compromised then it is virtually impossible to spoof the web site and show another one looking just like My ID HUB to steal the password. On top of that, at the transport layer information is encrypted twice with different keys at different layers.

User data is encrypted on the client side, inside the browser, using a key derived from the password. The encryption key is never sent to the server. It stays inside the browser, in memory, and when the browser is closed the key is disposed.

Need more security? Here you go! My ID HUB offers two-factor authentication via an authenticator app for FREE. It is the same authenticator app people use for Google, Facebook and Microsoft Live ID. This means that if someone does obtain a valid user password and tries to access the My ID HUB account, he'll be denied without the dynamic PIN code that changes every 30 seconds.

My ID HUB Two Factor Authentication

Certain competitor services offer two-factor authentication for a monthly fee, but not My ID HUB. My ID HUB is committed to providing the highest level of security possible free of charge.

How about a few more security features? My ID HUB has built-in account audit support. It means that the user can see all requests issued by anyone in the world for his account, both authorized and denied. Each request is mapped to a geographical locality (so-called GeoIP) to let the user reason about who it might be.

My ID HUB User Audit

PS. I apologize for red bars, just trying to keep my privacy private :)

Any more security? My ID HUB got it:

  • CAPTCHA prompt during login prevents brute-force attack on user account. Service denies requests that fail CAPTCHA.
  • Automatic block-listing of IP addresses that send too many requests per minute. This keeps people who want to cause DDOS out, while serving requests to everyone else.
  • A few more interesting features are coming up, like personal firewall, but more about that will come later.

Convenience at the heart of the service

Everyone knows that security comes at the price of convenience. It is hard to strike a balance between the two. In my mind, My ID HUB hit the spot. In order to use the service from the browser there is absolutely no need to download anything. Service is available as is - just navigate to the home page and sign-in.

My ID HUB tries to assist users by guessing what they want automatically. For example, if a user creates an entry named "Login", My ID HUB will recognize it as an identity name and assign it a corresponding icon. If a user creates a "password" property then My ID HUB will offer to generate the value automatically; it will hide the value to ensure no one can read it visually and offer a button to reveal it. In Internet Explorer it also provides the ability to copy the password value into the clipboard without revealing it, simply by clicking on it.

My ID HUB Password Experience in Internet Explorer

Availability and fault tolerance

My ID HUB service is backing up the database a few times a day to a geographically redundant storage. In case of a catastrophic failure and complete loss of a datacenter, My ID HUB will be up and running in a few hours with at most 6 hours of data loss.

Overall this is a great tool to keep all your secrets away from prying eyes yet conveniently close, just a click away. I would strongly encourage you to try it for yourself and leave comments and suggestions on UserVoice.

author: Oleg Ignat | posted @ Tuesday, October 28, 2014 12:35 AM | Feedback (0)

Base32 Encoder and Decoder in C#

Recently I faced a need to convert a regular byte array to Base32 and back. Having searched the internet I didn't find a reliable converter: some had bugs and some were good only one way but not the other. Hence I ended up writing my own. It turned out it wasn't that hard.

This code is available under Creative Commons Attribution license

using System;
using System.Text;

namespace Utilities
{
    /// <summary>
    /// Class used for conversion between byte array and Base32 notation
    /// </summary>
    internal sealed class Base32
    {
        /// <summary>
        /// Size of the regular byte in bits
        /// </summary>
        private const int InByteSize = 8;

        /// <summary>
        /// Size of converted byte in bits
        /// </summary>
        private const int OutByteSize = 5;

        /// <summary>
        /// Alphabet
        /// </summary>
        private const string Base32Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

        /// <summary>
        /// Convert byte array to Base32 format
        /// </summary>
        /// <param name="bytes">An array of bytes to convert to Base32 format</param>
        /// <returns>Returns a string representing byte array</returns>
        internal static string ToBase32String(byte[] bytes)
        {
            // Check if byte array is null
            if (bytes == null)
            {
                return null;
            }
            // Check if empty
            else if (bytes.Length == 0)
            {
                return string.Empty;
            }

            // Prepare container for the final value
            StringBuilder builder = new StringBuilder(bytes.Length * InByteSize / OutByteSize);

            // Position in the input buffer
            int bytesPosition = 0;

            // Offset inside a single byte that <bytesPosition> points to (from left to right)
            // 0 - highest bit, 7 - lowest bit
            int bytesSubPosition = 0;

            // Byte to look up in the dictionary
            byte outputBase32Byte = 0;

            // The number of bits filled in the current output byte
            int outputBase32BytePosition = 0;

            // Iterate through input buffer until we reach past the end of it
            while (bytesPosition < bytes.Length)
            {
                // Calculate the number of bits we can extract out of current input byte to fill missing bits in the output byte
                int bitsAvailableInByte = Math.Min(InByteSize - bytesSubPosition, OutByteSize - outputBase32BytePosition);

                // Make space in the output byte
                outputBase32Byte <<= bitsAvailableInByte;

                // Extract the part of the input byte and move it to the output byte
                outputBase32Byte |= (byte)(bytes[bytesPosition] >> (InByteSize - (bytesSubPosition + bitsAvailableInByte)));

                // Update current sub-byte position
                bytesSubPosition += bitsAvailableInByte;

                // Check overflow
                if (bytesSubPosition >= InByteSize)
                {
                    // Move to the next byte
                    bytesPosition++;
                    bytesSubPosition = 0;
                }

                // Update current base32 byte completion
                outputBase32BytePosition += bitsAvailableInByte;

                // Check overflow or end of input array
                if (outputBase32BytePosition >= OutByteSize)
                {
                    // Drop the overflow bits
                    outputBase32Byte &= 0x1F;  // 0x1F = 00011111 in binary

                    // Add current Base32 byte and convert it to character
                    builder.Append(Base32Alphabet[outputBase32Byte]);

                    // Move to the next byte
                    outputBase32BytePosition = 0;
                }
            }

            // Check if we have a remainder
            if (outputBase32BytePosition > 0)
            {
                // Move to the right bits
                outputBase32Byte <<= (OutByteSize - outputBase32BytePosition);

                // Drop the overflow bits
                outputBase32Byte &= 0x1F;  // 0x1F = 00011111 in binary

                // Add current Base32 byte and convert it to character
                builder.Append(Base32Alphabet[outputBase32Byte]);
            }

            return builder.ToString();
        }

        /// <summary>
        /// Convert base32 string to array of bytes
        /// </summary>
        /// <param name="base32String">Base32 string to convert</param>
        /// <returns>Returns a byte array converted from the string</returns>
        internal static byte[] FromBase32String(string base32String)
        {
            // Check if string is null
            if (base32String == null)
            {
                return null;
            }
            // Check if empty
            else if (base32String == string.Empty)
            {
                return new byte[0];
            }

            // Convert to upper-case
            string base32StringUpperCase = base32String.ToUpperInvariant();

            // Prepare output byte array
            byte[] outputBytes = new byte[base32StringUpperCase.Length * OutByteSize / InByteSize];

            // Check the size
            if (outputBytes.Length == 0)
            {
                throw new ArgumentException("Specified string is not valid Base32 format because it doesn't have enough data to construct a complete byte array");
            }

            // Position in the string
            int base32Position = 0;

            // Offset inside the character in the string
            int base32SubPosition = 0;

            // Position within outputBytes array
            int outputBytePosition = 0;

            // The number of bits filled in the current output byte
            int outputByteSubPosition = 0;

            // Normally we would iterate on the input array but in this case we actually iterate on the output array
            // We do it because output array doesn't have overflow bits, while input does and it will cause output array overflow if we don't stop in time
            while (outputBytePosition < outputBytes.Length)
            {
                // Look up current character in the dictionary to convert it to byte
                int currentBase32Byte = Base32Alphabet.IndexOf(base32StringUpperCase[base32Position]);

                // Check if found
                if (currentBase32Byte < 0)
                {
                    throw new ArgumentException(string.Format("Specified string is not valid Base32 format because character \"{0}\" does not exist in Base32 alphabet", base32String[base32Position]));
                }

                // Calculate the number of bits we can extract out of current input character to fill missing bits in the output byte
                int bitsAvailableInByte = Math.Min(OutByteSize - base32SubPosition, InByteSize - outputByteSubPosition);

                // Make space in the output byte
                outputBytes[outputBytePosition] <<= bitsAvailableInByte;

                // Extract the part of the input character and move it to the output byte
                outputBytes[outputBytePosition] |= (byte)(currentBase32Byte >> (OutByteSize - (base32SubPosition + bitsAvailableInByte)));

                // Update current sub-byte position
                outputByteSubPosition += bitsAvailableInByte;

                // Check overflow
                if (outputByteSubPosition >= InByteSize)
                {
                    // Move to the next byte
                    outputBytePosition++;
                    outputByteSubPosition = 0;
                }

                // Update current base32 byte completion
                base32SubPosition += bitsAvailableInByte;

                // Check overflow or end of input array
                if (base32SubPosition >= OutByteSize)
                {
                    // Move to the next character
                    base32Position++;
                    base32SubPosition = 0;
                }
            }

            return outputBytes;
        }
    }
}
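As a sanity check of the bit-packing scheme, here is the same no-padding encoder condensed into a few lines of Python; it should agree with the class above and with the standard library's Base32 (minus the "=" padding):

```python
import base64

BASE32_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def to_base32(data: bytes) -> str:
    # Accumulate input bits MSB-first and emit one character per 5 bits
    bits, bit_count, out = 0, 0, []
    for byte in data:
        bits = (bits << 8) | byte
        bit_count += 8
        while bit_count >= 5:
            out.append(BASE32_ALPHABET[(bits >> (bit_count - 5)) & 0x1F])
            bit_count -= 5
    if bit_count:
        # Left-align the remainder, same as the C# remainder block
        out.append(BASE32_ALPHABET[(bits << (5 - bit_count)) & 0x1F])
    return "".join(out)
```

Cross-checking against base64.b32encode with the padding stripped is a quick way to validate any Base32 implementation.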

author: Oleg Ignat | posted @ Saturday, February 15, 2014 9:10 AM | Feedback (1)

SQL Server 2012: Configuring AlwaysOn Read-Only Routing

SQL Server 2012 contains an amazing HA/DR improvement over the Database Mirroring technology - AlwaysOn. It provides the ability to split application workflows into writing and reading and send them to different SQL Server instances, thus improving hardware utilization and increasing throughput.

An application can declare the type of queries it executes by adding the connection string keyword "ApplicationIntent". This keyword is supported by all SQL Server 2012 clients: the SQL Server Native Client 11.0 ODBC driver, the SQLNCLI11 OLE DB provider, JDBC and SqlClient (ADO.NET). It can have one of two values: ReadOnly or ReadWrite. By default an application is considered writing, and if nothing is specified in the connection string its "ApplicationIntent" is set to "ReadWrite" automatically by the SQL Server client. To be able to utilize readable Secondary replicas of AlwaysOn, the application must specify ReadOnly application intent in the connection string. Of course, the DBA needs to configure the readable secondary to accept Read-Only or All connections by executing one of the following DDL statements:

ALTER AVAILABILITY GROUP [<your Availability Group>]
MODIFY REPLICA ON N'<your availability group replica>' WITH (SECONDARY_ROLE(ALLOW_CONNECTIONS = READ_ONLY))

ALTER AVAILABILITY GROUP [<your Availability Group>]
MODIFY REPLICA ON N'<your availability group replica>' WITH (SECONDARY_ROLE(ALLOW_CONNECTIONS = ALL))

Once SQL Server is configured to accept read-only connections, the application can connect using a connection string like this:

Server=<your secondary replica>;Database=<Availability Group database>;Trusted_Connection=Yes;ApplicationIntent=ReadOnly;
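From Python, for example, the same connection string can be assembled and handed to an ODBC client such as pyodbc (the driver name and server/database names below are placeholders I picked, not requirements):

```python
def readonly_connection_string(listener: str, database: str) -> str:
    # ApplicationIntent=ReadOnly is what triggers read-only routing on the server
    parts = {
        "Driver": "{ODBC Driver 17 for SQL Server}",
        "Server": listener,
        "Database": database,
        "Trusted_Connection": "Yes",
        "ApplicationIntent": "ReadOnly",
    }
    return ";".join(f"{key}={value}" for key, value in parts.items()) + ";"

# e.g. pyodbc.connect(readonly_connection_string("agl-listener.contoso.com", "SalesDB"))
```

The key point is that the intent travels in the connection string, so no application query code has to change.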

To access the Primary replica of the Availability Group regardless of which physical machine it is hosted on, we added the concept of an Availability Group Listener. It maps to a Virtual Network (Generic Client Access Point) in the Windows Failover Cluster but is fully under the control of SQL Server. SQL Server can dynamically bind to it when a particular physical machine becomes Primary and unbind from it when the machine transitions into the Secondary role. This feature solves the problem of locating the Primary replica through an immutable identifier.

Without a similar mechanism, in order to access a Secondary replica developers would have to hard-code AlwaysOn replica names in the application connection strings. Obviously that doesn't work well: after an Availability Group failover replicas change roles, and Primary becomes Secondary while one of the Secondaries becomes Primary. The Primary replica remains addressable by the Availability Group Listener, but there's no immutable identifier for any of the Secondary replicas. To resolve this problem and make the connectivity experience seamless, my team implemented the Read-Only Routing feature.

In a nutshell, Read-Only Routing allows redirecting application workflows that only read data to Secondary replicas using Availability Group Listener and Application Intent. From client application perspective only three requirements need to be fulfilled:

  1. Application must use a SQL Server 2012 or later client (one that supports the TDS 7.4 or above protocol)
  2. Application's connection string must point to the Availability Group listener in the "Server" keyword. Warning: pointing the application directly at the primary replica will bypass read-only routing.
  3. Application must declare its workflow as read-only by adding the connection string keyword "ApplicationIntent" with value "ReadOnly".

If the server is configured correctly, an application that meets these requirements will be redirected to the secondary replica. A much more interesting and complicated topic is the server-side configuration. I talked to a few customers who embraced the AlwaysOn technology and realized that this post is due.

In order for SQL Server to successfully service read-only routing requests the following requirements must be met:

  1. An Availability Group must have a Listener and all read-only connections should be pointed to it.
  2. Each Secondary replica must have a valid and correct URL assigned to it.
  3. The current Primary replica must have a routing list configured. This list must contain the replicas in routing preference order.
  4. A Secondary replica that is supposed to service read-only requests must be actively synchronizing data with the Primary replica. Warning: if data synchronization is suspended, the replica is deemed unavailable from a Read-Only Routing perspective.

I'd like to stop at each of these points and talk about them.

Availability Group Listener

Listener can be created as part of the new availability group wizard. It is hidden on the last tab of the "Specify Replicas" page.

Availability Group Listener

By default it is disabled, but I recommend creating a listener together with the Availability Group; the Listener is the recommended way to access the Primary replica. However, if you already have an Availability Group and need to add a Listener, it can easily be done with the following DDL:

ALTER AVAILABILITY GROUP [<your availability group>]
ADD LISTENER N'<your listener name>' (
WITH IP ((N'<IP address>', N'<IP subnet>'))
, PORT = <port on which all SQL Server replicas are listening>);

To verify that the listener exists and operates correctly you can query the DMVs we've added to SQL Server 2012:

select * from sys.availability_group_listeners
select * from sys.availability_group_listener_ip_addresses

Read-Only Routing URL

Before Read-Only Routing can be configured each replica must be assigned a valid and correct URL that will be sent to the SQL Server client during routing. People quite often confuse this URL with the Database Mirroring end-point URL. In fact these are two distinct end-points. Database Mirroring end-point is generally configured on port 5022 and is used solely by SQL Server instances to communicate with each other and synchronize data. Read-Only Routing URL on the other hand is a URL that can be used by SQL Server client to connect to SQL Server instance. The port in Read-Only Routing URL is a TCP port at which SQL Server has a TDS end-point. By default it is 1433.

Syntax for configuring Read-Only Routing URL:

ALTER AVAILABILITY GROUP [<your availability group>]
MODIFY REPLICA ON N'<your availability group replica>' WITH (SECONDARY_ROLE(READ_ONLY_ROUTING_URL=N'tcp://<host>:<port>'))

Please note that the URL is under the SECONDARY_ROLE option. Even if a replica is currently Primary, you still need to configure its READ_ONLY_ROUTING_URL under SECONDARY_ROLE.

The format of the URL must follow the regular URL specification: <protocol>://<host>:<port>. There might be some confusion since SQL Server clients support a different format in the connection string - <protocol>:<host>[,<port>]. Please use the regular URL format when configuring Read-Only Routing on the server and disregard the other format that SQL Server clients understand.

SQL Server currently supports 4 ways to specify the target replica in the URL:

  • Fully qualified domain name, for example tcp://myreplica.mydomain.com:1433. This is the recommended way.
  • Relative domain name - tcp://myreplica:1433
  • IP v4 address - tcp://192.168.1.10:1433
  • IP v6 address - tcp://[2001:4898:f0:f020:79ee:9e2f:4319:60b6]:1433

To check which URL is assigned to a replica, please query the read_only_routing_url column of the sys.availability_replicas catalog view:

select read_only_routing_url from sys.availability_replicas

NOTE: SQL Server will fail Read-Only Routing configuration if at least one of the replicas in the Read-Only Routing List doesn't have a URL.

Read-Only Routing List

It is a list of replicas, in order of preference, that the Primary replica evaluates whenever it receives a readable connection request; it redirects the client to the first replica from the list that can serve the request. It is not a random distribution list across all secondary replicas. We consciously decided not to implement random distribution because many applications expect a consistent "view of the world" across connections. This isn't possible with random distribution across multiple asynchronous replicas that are at different points of transactional commit.

ALTER AVAILABILITY GROUP [<your availability group>]
MODIFY REPLICA ON N'<your availability group replica>' WITH (PRIMARY_ROLE(READ_ONLY_ROUTING_LIST = (N'<first preference replica>', N'<second preference replica>')))

Please note that READ_ONLY_ROUTING_LIST is under the PRIMARY_ROLE configuration section of the replica. If a replica is currently in the Secondary role, it is strongly advised to configure a Read-Only Routing List for it as well, so that readable workload is preserved in case failover happens. Without a Read-Only Routing List configured for the would-be Primary, readable workload would be cut off after failover.

To see all read-only routing lists that are visible to the replica on which you run the query, you can execute this:

select * from sys.availability_read_only_routing_lists

Obviously, to get a view per Availability Group this catalog view would have to be joined with other catalog views. For example, the following query returns a human-readable view of the routing configuration for all Availability Groups:

select g.name, r1.replica_server_name, l.routing_priority, r2.replica_server_name, r2.read_only_routing_url
from sys.availability_read_only_routing_lists as l
join sys.availability_replicas as r1 on l.replica_id = r1.replica_id
join sys.availability_replicas as r2 on l.read_only_replica_id = r2.replica_id
join sys.availability_groups as g on r1.group_id = g.group_id

If everything is done correctly Read-Only Routing should be up and running by now. Let's pretend that it isn't and try to troubleshoot it.

Database Synchronization State

If a Secondary replica isn't synchronizing data with the Primary, it will not be considered by the Primary as a routing candidate, even if it is in the Read-Only Routing List.

If a database on the Secondary replica has a little "pause" sign next to it in the SQL Server Management Studio object explorer, it means it is not synchronizing data. Data synchronization can be resumed by right-clicking on the database and choosing "Resume Data Movement..."

Resume Data Movement


From a T-SQL script it can be done by executing the following DDL on the replica that isn't synchronizing:

ALTER DATABASE [<your availability group database>] SET HADR RESUME
Once data movement is resumed Primary replica will activate routing to the Secondary replica of interest.

Extended Events

To help troubleshoot Read-Only Routing beyond obvious problems we added 3 events to the SQL Server. They are hidden under "Debug" channel of the Extended Events Session dialog.

Read-Only Routing Extended Events

To access them please click a little drop-down box on the "Channel" column and check "Debug".

Please add following events:

  • read_only_route_complete is registered on the Primary replica when it successfully routes the client to the URL of the Secondary replica that was selected by the Primary replica.
  • read_only_route_failed is fired when the Primary replica couldn't find a single viable candidate for routing.
  • hadr_evaluate_readonly_routing is fired on the Primary replica whenever Availability Group state changes and it needs to re-evaluate the routing target from the list. This can happen when the Availability Group fails over, one of the replicas changes its connectivity policy (e.g. Read-Only to No), the Read-Only Routing List changes, etc. No client connections are necessary for this event to occur; pure server-side changes trigger it.

Registered Extended Events

Using these three events one can easily troubleshoot routing problems.

One very common mistake is generating well-formed but incorrect Read-Only Routing URLs. Triple-check that your URLs contain the correct host on which SQL Server is running and, most importantly, the correct port number. To see which ports SQL Server is listening on, use SQL Server Configuration Manager or the SQL Server logs; both contain port numbers. Make sure these port numbers are reflected in the routing URLs. One clear indication of an incorrect URL is when the Primary replica fires read_only_route_complete but clients nonetheless get connection timeout errors.
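To double-check the URLs from T-SQL, the configured routing URLs can be read back from the catalog:

```sql
-- read_only_routing_url should match the host and port SQL Server listens on.
SELECT replica_server_name, read_only_routing_url
FROM sys.availability_replicas;
```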

Command-Line Tools

We added Read-Only Routing support to the SQLCMD.exe tool through the -K command-line option. To use it, simply specify the application intent after -K. For example:

SQLCMD.exe -S <your availability group listener> -D <availability group database> -E -K ReadOnly

The SQLCMD tool will connect to the availability group listener and pass its ReadOnly intent, which will trigger Read-Only Routing, and the tool will be redirected to the Secondary replica that accepts read-only workloads.

Lastly, if you're enjoying Read-Only Routing in your enterprise and it works perfectly, I'd love to hear your feedback. On the other hand, if you have any troubles with Read-Only Routing, I would like to hear about them as well and help you get a quick and reasonable resolution. Stay tuned for updates and tricks.

Edit on 02/15/2012

Q: When porting applications to SQL Server AlwaysOn is it application's responsibility to figure out which replica to send the query to or will SQL Server route the requests automatically depending on the query?

A: Short answer: SQL Server will figure it out. Let me also give a long answer to help explain the whys and hows. Read-Only Routing is a connection-level feature, not a query-level one. If a read-only query is sent via a connection with ApplicationIntent = ReadWrite, it will not be routed to the secondary replica, because by the time the query is sent the connection will already have been routed. Routing occurs during the login phase, before any query is sent to the server.

If an application combines two types of workflows (read-only and read-write), then it is the application's responsibility to open two connections: the first with ApplicationIntent = ReadWrite and the second with ApplicationIntent = ReadOnly. Read queries should be sent via the second connection, writes via the first.
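As a sketch of the two connections, using .NET SqlClient connection-string syntax (the listener and database names here are placeholders):

```
ReadWrite: Server=tcp:MyAgListener,1433;Database=MyDb;Integrated Security=SSPI;ApplicationIntent=ReadWrite
ReadOnly:  Server=tcp:MyAgListener,1433;Database=MyDb;Integrated Security=SSPI;ApplicationIntent=ReadOnly
```

Only the second connection is eligible for Read-Only Routing; the first always lands on the Primary.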

Developers and DBAs should be mindful of a subtle yet very important aspect of AlwaysOn. There are two modes of data synchronization between Primary and Secondary replicas: Synchronous and Asynchronous:

  • In Synchronous mode, the Primary and Secondary replicas see the same data at the same time: an operation is committed only when both the Primary and Secondary replicas have committed it. In this mode of replication it is safe to send read-only queries via a separate connection because they will immediately pick up the results of the read-write queries.
  • In Asynchronous mode, the Secondary replica applies the transaction log with a delay and can theoretically fall behind the Primary by more than a few seconds. If an application that has just inserted data into the Primary replica (via a ReadWrite connection) queries for it on the Secondary, it may not find it, since that part of the transaction log hasn't been applied yet on the Secondary replica. In this case it isn't generally safe to split a single application into two workflows. However, in certain scenarios this is acceptable, if the application doesn't make any assumptions about data availability between reads and writes.

I hope this outlines both the pros and cons of splitting the workflow of a single application in two. Read-Only Routing is mostly beneficial to applications that have a single type of workflow (e.g. reporting applications, periodic aggregation, etc.).

Edit on 4/27/2012

Q: What algorithm(s) does the ReadOnly connection router use when ApplicationIntent=ReadOnly?

A: We've built a foundation for connection routing. This was a first iteration of the functionality and doesn't include all the features we'd like. For example, in the future we may add resource governance to connection routing. It might even become possible to configure an entirely custom routing algorithm, including random or even normal distribution. We received numerous comments about the types of algorithms customers would like to use in their solutions.

Going back to your question: the only algorithm currently supported is sequential traversal of the routing list to the end. It is not round-robin, in the sense that it doesn't wrap around when the end of the list is reached. The Primary replica strictly traverses the list and looks for the first replica that can serve the connection request. Once one is found, all subsequent read-only connections are routed to it. Only availability group state changes (e.g. failover, replica state change, etc.) can cause the read-only target to be recalculated.
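The configured traversal order can be inspected from T-SQL. A sketch against the SQL Server 2012 catalog views:

```sql
-- Lower routing_priority values are tried first by the primary.
SELECT pr.replica_server_name AS primary_replica,
       rl.routing_priority,
       sr.replica_server_name AS routes_to
FROM sys.availability_read_only_routing_lists AS rl
JOIN sys.availability_replicas AS pr ON rl.replica_id = pr.replica_id
JOIN sys.availability_replicas AS sr ON rl.read_only_replica_id = sr.replica_id
ORDER BY pr.replica_server_name, rl.routing_priority;
```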

We recognize that this algorithm is limited. Please keep in mind that it serves its purpose: keeping read-only applications running. If we had introduced random distribution or even round-robin, it could have broken numerous applications that establish multiple independent connections and expect data to be consistent across all of them. If the Primary replica chose a different replica upon each request, not only would it increase load on the Primary replica, it would also result in different connections going to different replicas and thus having different views of the world, due to asynchronous data replication between instances.

Please let me know if you have any other questions.

author: Oleg Ignat | posted @ Wednesday, January 25, 2012 4:21 PM | Feedback (18)

How to connect Windows Phone 7 to your car stereo

Windows Phone 7, which was built on top of the Microsoft Zune platform, is a new trend and hasn't reached wide adoption yet. I'm a strong believer that it will grow its market share, which will stimulate Zune player interface implementation in various devices, including car stereos. However, at this point iPod is the de-facto standard that everyone loves and deploys.

If you have a Windows Phone 7 device, you still have a few options to connect it to your car stereo. The most widespread approach is a Zune-to-FM converter. It plugs into your Windows Phone 7 device, receives the audio stream from it, converts it to FM, and transmits at the configured frequency. This may be the cheapest solution for you if what I'll be discussing below is not applicable to your car. I tried this one and wasn't particularly happy with it. Overall it gets the job done, but if you live in the US it is difficult to find a frequency not occupied by a radio station. Even if you do find one, you'll still experience a fair amount of interference while listening to your music. If you can avoid it, don't buy it.

The other option, which I discovered fairly recently, is far better than the previous one: Bluetooth pairing with the audio player. I'm not talking about regular phone pairing between a car and a Windows Phone 7 device. The Bluetooth profiles supported by Windows Phone are documented by Microsoft, and it turns out Windows Phone 7 supports the A2DP profile. If your car stereo supports this profile as well, you can pair your car with the Windows Phone and enjoy a digital-quality audio stream from your phone through your car stereo. The exact instructions vary from car to car, so you'd have to refer to the car manual. If you drive a Lexus, here's what you need to do:

  1. Click the hardware "Audio" button on your car stereo
  2. Choose the "BT" tab, which stands for "Bluetooth"
  3. Tap the "Connect" button on the screen
  4. Enable Bluetooth on your Windows Phone 7 device
  5. From the list of Bluetooth devices discovered by Windows Phone 7, pick your car stereo and pair them

Once the devices are paired, you can stream music from your phone. The drawback of this method is that you still can't control playback from your car stereo; you need to choose what to play on the Windows Phone 7. However, this approach is better than an FM transmitter because audio is delivered in lossless digital format as opposed to analog.

I hope this post helps you get more out of your Windows Phone 7 device and enjoy it to the extent it deserves.

author: Oleg Ignat | posted @ Saturday, September 10, 2011 4:24 PM | Feedback (0)

Microsoft OLE DB deprecation

Recently Microsoft announced in a blog post that it is deprecating the OLE DB stack in SQL Server Native Client and shifting focus to ODBC. Given that I own the SQL Server Native Client product at Microsoft, I'd like to take a few minutes to clarify what is actually happening and why the industry shouldn't panic.

After reading the original blog post and various interpretations by different people on the internet, one might think that OLE DB as a technology is being deprecated. This isn't true. Only a concrete implementation is being deprecated, namely SQL Server Native Client's OLE DB stack. If you are familiar with SQL Server Native Client, it comes with two database access stacks packaged in a single DLL: OLE DB and ODBC. Only the OLE DB stack is being deprecated; that's it. OLE DB as a database access technology lives on. The myriad of OLE DB providers written since OLE DB's inception will continue to work on Windows after SQL Server codename 'Denali' ships.

I'd also like to point out the distinction between a deprecation announcement and removal. Even though the OLE DB stack in SQL Server Native Client is being deprecated, it is not being removed any time soon. You can still use it when you upgrade to SQL Server Native Client Denali. Moreover, I would strongly encourage everyone using previous versions of SQL Server Native Client to upgrade, as there will be a lot of improvements spanning functionality, performance, scalability and robustness.

Linked SQL Servers

I'd like to assure everyone that backwards compatibility of the linked servers functionality will be preserved in SQL Server Denali. Both the Microsoft SQL Server Native Client OLE DB provider and all 3rd-party providers that used to work before will continue to function.


SQL Server Integration Services, Reporting Services and Analysis Services will continue to operate and serve your business needs. Even in many years when OLE DB code from SQL Server Native Client will be removed, Microsoft will provide a solution to ensure minimum impact on your business.


There are quite a few questions about ADO.NET's OLE DB wrapper, namely System.Data.OleDb. As far as I know, this bridge is not going anywhere with SQL Server Native Client's OLE DB stack deprecation. You can still use it if you need to interop from managed code into a native OLE DB provider. Some people get confused (and rightfully so) between the stacks and don't know where the safe haven is. The general rule of thumb is actually quite simple: if you're developing a native C++ application, please use ODBC to access data in SQL Server. However, if you are writing a .NET application in C# or VB.NET, whether console or web, please use System.Data.SqlClient. There's no need for you to install SQL Server Native Client and interop into the native stack through either System.Data.OleDb or System.Data.Odbc; SqlClient is the way to go. It also means that you shouldn't jump to System.Data.Odbc if you were using System.Data.OleDb. Please carefully evaluate your requirements and make a call. For example, if you are using a 3rd-party OLE DB stack that doesn't have an ADO.NET implementation, then no action is required on your side.

I'd like to close this article on a positive note. The SQL Server Native Client OLE DB deprecation is in fact a good thing. Instead of confusing customers even more, Microsoft decided to invest heavier in one stack and make it awesome, as opposed to two stacks which would only be great. Yes, there will be certain discomfort along the way, but sometimes tough choices are necessary.

Disclaimer: Don't take my words as an official Microsoft commitment. This article describes the current state of affairs, which may change in the future.

author: Oleg Ignat | posted @ Friday, September 09, 2011 7:51 PM | Feedback (1)

SQL Server - WMI Error 0x80041010 - Invalid Class

Recently I started experiencing WMI repository corruption that results in an inability to open SQL Server Configuration Manager, with the following error message:

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager.
Invalid class [0x80041010]

At this point, if I browse the WMI repository under root\microsoft\sqlserver\computermanagement11, I see no SQL Server-specific classes; only system classes exist. To resolve this issue, the SQL Server WMI provider needs to be reinstalled.

Step 1 - Install classes

Navigate to C:\Program Files (x86)\Microsoft SQL Server\<SQL Server version, for example "110" for SQL Server 11>\Shared if you have an x64 installation of SQL Server. On an x86 machine, the path is C:\Program Files\Microsoft SQL Server\<SQL Server version, for example "110" for SQL Server 11>\Shared. Make sure that the file sqlmgmproviderxpsp2up.mof exists. If it doesn't, then either you're in the wrong location or your installation is corrupted and only SQL Server setup can fix the problem. However, if the file does exist, then the resolution is as simple as running the command:

mofcomp sqlmgmproviderxpsp2up.mof

The WMI compiler will notify you that the repository was successfully updated.

Step 2 - Install localization information

After the classes are deployed, the localization information needs to be installed as well. From the path in the previous step, navigate to the sub-folder that reflects the locale of the SQL Server installation, for example 1033 for English (US). The full path is now C:\Program Files (x86)\Microsoft SQL Server\<SQL Server version, for example "110">\Shared\<locale ID, for example "1033" for English>. From this path, run the following command:

mofcomp sqlmgmprovider.mfl

The WMI repository should again be updated successfully.

Now you're all set. The problem should be resolved and SQL Server Configuration Manager should be functional again. If it still isn't, I would encourage you to describe your problem on the SQL Server Manageability forum.


author: Oleg Ignat | posted @ Tuesday, December 07, 2010 9:47 AM | Feedback (1)

SQL Server 2005 and Service Broker

I got a couple of spare minutes while my tests are running. I'd like to use this time to share with you a simple but painful situation I have every time I need to verify Service Broker regressions in the internal SQL Server builds.

According to the official Microsoft web site, we certainly ship Service Broker in the Enterprise SKU. However, our internal test build has this feature disabled by default. I spent quite a bit of time exploring the options to get it enabled.

Basically, there are 2 problems:

1. The Service Broker endpoint is not configured. To resolve this problem you need to connect to the server and run a query:
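A minimal sketch of such a query (the endpoint name and port are placeholders; Windows authentication is assumed):

```sql
IF NOT EXISTS (SELECT * FROM sys.service_broker_endpoints)
    CREATE ENDPOINT BrokerEndpoint
        STATE = STARTED
        AS TCP (LISTENER_PORT = 4022)
        FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS);
```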


This query will create a Service Broker endpoint if one doesn't exist. If you have one but need to start or stop it, please refer to the documentation.

2. Service Broker is configured and running but the database is not configured for Service Broker. To solve this issue you have to run one of the queries listed below:

-- Enable Service Broker (MyDatabase is a placeholder):
ALTER DATABASE [MyDatabase] SET ENABLE_BROKER;

-- Disable Service Broker:
ALTER DATABASE [MyDatabase] SET DISABLE_BROKER;

If you still can't solve your problem please write on the Microsoft Service Broker blog.

Once I figured out these two queries, my life became much easier. I hope this helps you as well.

author: Oleg Ignat | posted @ Friday, December 21, 2007 3:49 PM | Feedback (6)

Gabriel Garcia Marquez about Life

I've been reading about Gabriel Garcia Marquez and collected his most famous phrases about life. I'm presenting my own translation, so please don't judge harshly. Here they are:

  • A true friend is the one holding your hand and feeling your heart beating
  • The worst way to grieve about somebody is to be with him and understand he'll never be yours.
  • Don't cry about something that passed. Smile because it has happened.
  • Don't waste your time on a man who isn't willing to spend it with you.
  • No one deserves your tears. The one who deserves will never make you cry.
  • Never stop smiling even when you're sad because someone might fall in love with your smile.
  • Don't try so hard - all the best happens spontaneously.
  • I love you not for what you are, but for what I am when I'm with you.
  • In this world you're only a human but for someone you might be the whole world.
  • God probably wants us to meet the wrong people before we meet the right one, to make us thankful when it happens.
  • When someone doesn't love you the way you want it doesn't mean that he doesn't love you with all his heart.
  • Make sure you understand yourself before you meet a new man hoping he'd understand you.
  • There will always be people who cause you pain. You should keep trusting people, but be a bit more careful.

The ideas #3, #4, #5, #10 and #12 are my favorites.

author: Oleg Ignat | posted @ Sunday, July 01, 2007 12:34 PM | Feedback (2)