RsaProtectedConfigurationProvider: Not recommended for children under 5

So, just an intro before getting into the nitty gritty: these are my personal experiences dealing with the RsaProtectedConfigurationProvider and its surrounding tooling, used to encrypt sections of the web.config for an enterprise web application, along with some of the hassles I hit along the way. As with anything cryptosecurity-related, there is some cursing involved.

Create an RSA Key

The first step to getting RSA encryption into your web application's web.config is to create a common key that can be shared across servers and/or development environments. To do this, open up a Visual Studio command prompt (or navigate a regular command prompt over to your %WINDIR%\Microsoft.NET\Framework(64)\{insert .net version}\ directory) as an administrator and type in:

aspnet_regiis -pc "ProjectKeyNameHere" -exp

This will create a key for you in the %PROGRAMDATA%\Microsoft\Crypto\RSA\MachineKeys directory:

(Screenshot: the MachineKeys folder)

Each key is given its own unique, seemingly nonsensical filename. The naming convention is actually a hash calculated from the key container name, followed by the machine GUID, separated by an underscore. The hash is computed when the key gets imported into the key store, and the machine GUID can simply be found in the registry under HKLM:\SOFTWARE\Microsoft\Cryptography.
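If you're curious, you can read that machine GUID straight from the registry with reg query (or just browse to it in regedit):

reg query "HKLM\SOFTWARE\Microsoft\Cryptography" /v MachineGuid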

Quick Trivia Fact: The key “c2319c42033a5ca7f44e731bfd3fa2b5_…” is the key that is used by IIS to encrypt the MetaBase. Delete or mess with this key and the IIS server on the machine will cease to function.

So even though you can modify permissions on keys and even delete them directly from this directory: DO NOT MANUALLY EDIT PERMISSIONS OR DELETE KEYS FROM THIS DIRECTORY OR YOU WILL SCREW THINGS UP. Use the aspnet_regiis tool to manage your keys instead. Trust me on this (or take the risk and have fun troubleshooting later).
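For what it's worth, aspnet_regiis can also clean up after itself: -pz deletes a key container, and -pr revokes access previously granted with -pa (covered below):

aspnet_regiis -pz "ProjectKeyNameHere"
aspnet_regiis -pr "ProjectKeyNameHere" "UsersOrGroups"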

Exporting the key

Now, in order to share the key with other developers and/or machines, you will need to export it to an XML file. This can be done by running the following command:

aspnet_regiis -px "ProjectKeyNameHere" "C:\location\for\key\KeyName.xml" -pri

This will give you a plain-text XML file describing the private key. As with anything that has "private" in its name, treat this file with respect and proper security procedures (i.e., don't throw it up on a public share, keep track of where it goes, etc.).

Installing existing (XML) key to machine(s)

Once you send the XML file, in a secure fashion, to another developer or machine, run the following command on that machine to import the key into its RSA key store:

aspnet_regiis -pi "ProjectKeyNameHere" "C:\location\of\key\KeyName.xml"

Keep in mind, the "ProjectKeyNameHere" name has to stay the same throughout the entire process, as this is the key name that will be referenced in the web.config later on.

At this point, any administrative user on the box can use the key to encrypt/decrypt information. No other (regular, non-admin) user will have read access to the key, and that includes the account your IIS app pools run under. That account varies by environment, but is usually "NT AUTHORITY\NETWORK SERVICE". In order to grant read access on the key to other users, run the following command:

aspnet_regiis -pa "ProjectKeyNameHere" "UsersOrGroups"
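For example, to grant read access to the app pool account mentioned above:

aspnet_regiis -pa "ProjectKeyNameHere" "NT AUTHORITY\NETWORK SERVICE"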

If you’re using the impersonation setting in IIS for your web application, read the “Impersonation permission issues” section below.

Adding key to web.config

In order for IIS to know which key to use for encryption/decryption, we first remove the default RsaProtectedConfigurationProvider registration in the web.config and then re-add it pointing at the key we created. To do this, simply add the following to your web.config (directly under the root configuration node):

<configProtectedData>
    <providers>
      <remove name="RsaProtectedConfigurationProvider" />
      <add name="RsaProtectedConfigurationProvider"
           type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
           keyContainerName="ProjectKeyNameHere"
           useMachineContainer="true"
           description="Uses RsaCryptoServiceProvider to encrypt and decrypt" />
    </providers>
</configProtectedData>

Change the Version number in the type attribute (the System.Configuration assembly version) to match whichever version of .NET your application targets (e.g. 2.0.0.0 for .NET 2.0/3.5).

After saving the file, you will now be able to use aspnet_regiis to encrypt and decrypt sections of your web.config file as you see fit.

To encrypt (in this example, the connectionStrings node):

aspnet_regiis -pef "connectionStrings" "C:\path\to\app"
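For context, an encrypted section ends up looking roughly like this (I've elided the KeyInfo and CipherValue contents, which will be long base64 blobs in a real file):

<connectionStrings configProtectionProvider="RsaProtectedConfigurationProvider">
  <EncryptedData Type="http://www.w3.org/2001/04/xmlenc#Element"
                 xmlns="http://www.w3.org/2001/04/xmlenc#">
    <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#tripledes-cbc" />
    <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">...</KeyInfo>
    <CipherData>
      <CipherValue>...</CipherValue>
    </CipherData>
  </EncryptedData>
</connectionStrings>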

To decrypt:

aspnet_regiis -pdf "connectionStrings" "C:\path\to\app"

Note that I supply the directory that contains the config file, not the path to the file itself (e.g. don't use C:\path\to\app\web.config). Also note that you cannot encrypt the entire config file, only sections under the root configuration node (encrypting the whole file would be a bad idea anyway, since the configProtectedData section that tells .NET how to decrypt it would itself be encrypted).
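The nice part is that your application code does not change at all; the runtime decrypts protected sections transparently when you read them (provided the account can read the RSA key). A minimal C# sketch, where the connection string name "Default" is just a placeholder:

using System.Configuration; //needs a reference to System.Configuration.dll

//reading the section decrypts it on the fly; no crypto code required
string connStr = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;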

You can also use tools such as the Enterprise Library Config Tool to do this for you automatically, should you so desire.

Reference: ASP.NET IIS Registration Tool http://msdn.microsoft.com/en-us/library/k6h9cz8h(v=vs.80).aspx

Running publish commands to encrypt config

Personally, I find it easiest not to encrypt any data in the web.config until I'm ready to publish (that way, if I have to change a connection string, I don't have to manually decrypt, edit, and re-encrypt each time). This process will vary based on which method you use to publish your web application. I will only be covering the file system publish method, but check out the reference links below if you need to put together your own publishing flow.

For the MSBuild side of this, you can either store these commands in the main .csproj file if you want the action to occur globally across all of your publish profiles, or set them individually in each .pubxml file (recommended).

Now, what I ended up doing for the file system publish was capturing an event after the build and web.config transform, but before the files were moved to the file system location that was specified. Under the root Project node in the .pubxml file, I added the following:

<Target Name="BeforePublish" AfterTargets="CopyAllFilesToSingleFolderForPackage">
    <GetFrameworkPath>
      <Output TaskParameter="Path" PropertyName="FrameworkPath" />
    </GetFrameworkPath>
    <Exec Command="$(FrameworkPath)\aspnet_regiis.exe -pef &quot;connectionStrings&quot; &quot;$(MSBuildProjectDirectory)\$(IntermediateOutputPath)Package\PackageTmp&quot;" />
</Target>

A few things to note here. The AfterTargets attribute was a pain in the butt to find (and apparently only works well with MSBuild 4.0). It is not documented officially, or at least nowhere I can find on the internet at the moment, but it captures the moment just before the files are moved to the destination file system location. The properties used in the Exec Command (such as $(MSBuildProjectDirectory)) are all listed in the MSBuild documentation, which is what I ended up using. If you need quotes inside the command, make sure to use &quot; since this is an XML file, after all. If you have multiple sections that you want to encrypt, you will have to create an Exec node for each one.
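For example, with a second (hypothetical) appSettings section to protect, the inside of the Target would look something like this:

  <Exec Command="$(FrameworkPath)\aspnet_regiis.exe -pef &quot;connectionStrings&quot; &quot;$(MSBuildProjectDirectory)\$(IntermediateOutputPath)Package\PackageTmp&quot;" />
  <Exec Command="$(FrameworkPath)\aspnet_regiis.exe -pef &quot;appSettings&quot; &quot;$(MSBuildProjectDirectory)\$(IntermediateOutputPath)Package\PackageTmp&quot;" />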

Reference: How to: Extend the Visual Studio Build Process http://msdn.microsoft.com/en-us/library/ms366724.aspx
Reference: MSBuild Concepts http://msdn.microsoft.com/en-us/library/vstudio/dd637714.aspx
Reference: How to: Use Environment Variables in a Build http://msdn.microsoft.com/en-us/library/ms171459.aspx
Reference: GetFrameworkPath Task http://msdn.microsoft.com/en-us/library/ms164297.aspx

Impersonation permission issues

One of the first problems I ran across was that when IIS has the impersonation flag set to true for the web application, it will use the impersonated account to access the RSA key when decrypting the configuration file. Unfortunately, there is no way around this if you MUST have impersonation: you will have to grant access to the key for every user who uses the application. An easy way to do this is to grant permission to the group that needs access to the application. Unfortunately, if you have something like Windows Authentication turned on, that group may end up being "Domain Users" or, goodness forbid, "Everyone":

aspnet_regiis -pa "ProjectKeyNameHere" "Everyone"

This sort of sucks in terms of security, I know. Right now, there really isn’t a better way of dealing with it without rewriting your application and turning off impersonation.

Cannot read key error

Check to make sure that permissions on the key are set up correctly. If it helps to debug, grant permissions to everyone and see if the error goes away. If it does, narrow down which user/group actually needs read access to the key. If it doesn't, well, here's Google :).
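On IIS 7.5 and later, the account you usually want is the application pool identity itself; the pool name below is a placeholder for your own:

aspnet_regiis -pa "ProjectKeyNameHere" "IIS APPPOOL\YourAppPoolName"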

Safe Handle error

“Failed to decrypt using provider ‘RsaProtectedConfigurationProvider’. Error message from the provider: Safe handle has been closed”

So this one is a pain, but again, it has everything to do with permissions. If you did everything right, and read the note above about not manually changing the permissions on the MachineKeys folder and using the aspnet_regiis tool instead, you shouldn’t have to worry about this.

On the other hand, if one of your co-workers decided that it was a good idea to change permissions on that folder without realizing the consequences it may produce, well, you have this cryptic error message to deal with.

First and foremost, check the permissions on the MachineKeys folder (%ProgramData%\Microsoft\Crypto\RSA\MachineKeys). You should see this little lock icon near the folder:

(Screenshot: the MachineKeys folder with the lock overlay icon)

That means that general users can’t read the contents of the folder. This is a good thing. Now check your permissions for the “Everyone” group:

(Screenshot: the Security tab for MachineKeys, showing the "Everyone" group's permissions)

The !!!ONLY!!! permissions that the “Everyone” group should have checked are the “Special Permissions”. Digging a little deeper, we find out what those special permissions are:

(Screenshot: Advanced Security Settings, showing the special permissions granted to "Everyone")

Specifically, we have the following from the list provided by Microsoft:

  • List Folder/Read Data
  • Read Attributes
  • Read Extended Attributes
  • Create Files/Write Data
  • Create Folders/Append Data
  • Write Attributes
  • Write Extended Attributes
  • Read Permissions

and the following quote: “The default permissions on the folder may be misleading when you attempt to determine the minimum permissions that are necessary for proper installation and the accessing of certificates.”

If you grant any further permissions to the "Everyone" group, such as general read access, write access, or full control, you will not only break the functionality of the existing keys in the folder; any additional key you install will also be improperly installed, with aspnet_regiis complaining about the safe handle. The existing keys will (for the most part) start working again once you fix the permissions on the folder. The improperly installed keys will not; they are stuck in a state of limbo, and the aspnet_regiis tool will not even be able to delete them (it will complain about... the safe handle being closed).

The only way to get rid of them is to go into the MachineKeys folder manually and figure out which key file to delete, either by last modified time or by the key hash (which you can get by installing the key on another machine and comparing the first part of the filename on both). Then, once the file is manually deleted, you can use the aspnet_regiis tool to reinstall the key.
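Once the stuck key file is manually deleted, the reinstall is just the normal import from earlier:

aspnet_regiis -pi "ProjectKeyNameHere" "C:\location\of\key\KeyName.xml"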

Again, I don’t have to remind you to be very meticulous and careful when rooting around and modifying things in this folder as it can break much more than just your web application.

Reference: Default permissions for the MachineKeys folders http://support.microsoft.com/kb/278381

Adding MVC4 to a project: The operation could not be completed. Not implemented

While experimenting with adding the MVC framework to an existing C# WebForms project, I came across an interesting error when adding the MVC3/MVC4 project type GUID to the project's existing .csproj file (I tried both GUIDs, in case there was something wrong with one of them). I got the following error when first opening the solution containing the project (this was using Visual Studio 2012):

(Screenshot: Visual Studio 2012 MVC4 error dialog)

“The operation could not be completed. Not implemented”

That was very…descriptive. I went ahead and repaired the MVC4 package thinking that might fix it, but no good. Made sure Visual Studio was up to date; it was. I was pretty stumped. So I took another look at the .csproj file. Before adding MVC4 to the project, I had the following:


<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

So it’s your basic web application project {349c5851-65df-11da-9384-00065b846f21} combined with a C# project {fae04ec0-301f-11d3-bf4b-00c04f79efbc}. So, I went ahead and added in the MVC4 type {E53F8FEA-EAE0-44A6-8774-FFD645390401} to get the following code:


<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc};{E53F8FEA-EAE0-44A6-8774-FFD645390401}</ProjectTypeGuids>

Seems like it should work. I compared the above to a newly created MVC project, and apparently the problem is the order in which the project type GUIDs are listed. I had added the MVC4 project type GUID after the others, whereas it needs to go before them, like so:


<ProjectTypeGuids>{E53F8FEA-EAE0-44A6-8774-FFD645390401};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

Problem is, I can't seem to find any sort of documentation for this behavior. It really shouldn't matter what order the project type GUIDs are in, right? I can understand from a developer standpoint that MVC depends on the web application framework, which depends on the C# project (and in that way the list correctly identifies the project flavor), but I would have thought Visual Studio could figure that out without requiring the project type GUIDs to be in a specific order. Anyways, after figuring that out, MVC worked with the existing WebForms application like a champ.

VMware Workstation Automatic Suspend/Start and Backup Scripts for Ubuntu

I had previously listed some scripts that I use for VMware Workstation management on the Intrinium blog, but have found a few annoyances with running them. So, I’m re-releasing the scripts in their modified form.

The objectives of these scripts are as follows:

  • They provide a simple way to suspend all virtual machines on host shutdown and resume only a select few on host startup.
  • They provide a way to check if certain virtual machines are running. If they are not running, the script will send an alert (and attempt to start the machine).
  • Backups will occur on a per-machine basis (that can be set to be different than the running list) and will only suspend one machine at a time for backups, starting them back up once the backup is finished. The backup will be sent to a samba share in my example, but this can be configured to be anything you can cp/scp to.

I’ve also modified the scripts to be a little more flexible in terms of modularity (user-set directories) and included some error checks. Other than that, you know the routine: test it before you run it, and don’t blame me if you delete everything.

First, the startup script, which I’ve dropped in as /etc/init.d/vmwaresuspend (and, of course, registered with “update-rc.d vmwaresuspend defaults”). This script runs on startup and shutdown of the host, suspending all running virtual machines on shutdown and starting only the machines on the machine list when the host comes back up:

#!/bin/bash

### BEGIN INIT INFO
# Provides: vmwaresuspend
# Required-Start:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Required-Stop:
### END INIT INFO
USER="media" #the user to run VMware as
MLIST="/media/raid/vmware/machine.list" #the machine list for machines to START on startup (see example below)

case "$1" in

start)
if [ $# -gt 1 ]
then
        su ${USER} -c "vmrun start '$2' nogui"
else
        while read VM;
        do
                echo "Starting $VM"
                su ${USER} -c "vmrun start '$VM' nogui"
        done < $MLIST
fi
exit 0
;;

stop)
if [ $# -gt 1 ]
then
        su ${USER} -c "vmrun suspend '$2'"
else
        vmrun list|grep vmx$|while read VM
        do
                echo "Suspending $VM"
                su ${USER} -c "vmrun suspend '$VM'"
        done
fi
exit 0
;;

restart)
# Nothing to be done for restart
exit 0
;;
esac
exit 0

The machine list is simply the full path to the vmx file for the virtual machine, one per line. Example:

/home/vmware/vm1/vm1.vmx
/home/vmware/vm2/vm2.vmx

Next, the script that checks to see if the VMs are running. I set its list to the same one as the startup script above, but you can point it at a different one if you feel like it. I saved this in /root/cron/checkVMs.sh and set cron to run it every 5 minutes (the crontab entry is shown after the script):

#!/bin/bash
MLIST="/media/raid/vmware/machine.list"
LOCKFILE="/media/raid/vmware/backup.lock"

if [ -f ${LOCKFILE} ]; then #checks for the lock file made by backup script
        exit 0
fi
# vmrun list | tail -n+2 | awk -F/ '{print $NF}' | rev | cut -d\. -f2- | rev ...or we can just ps grep :)
while read VM;
do
        if ! ps ax | grep -v grep | grep "$VM" > /dev/null
        then
                echo "$VM is down!"
                #include mail -s here if you don't receive output for cron jobs.
                #include "vmrun start $VM" if you want to start the VM automatically if down
                # - it might not work due to other factors (rebuilding vmware modules, etc)
        fi
done < $MLIST
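For reference, the five-minute schedule is just a standard crontab entry in root's crontab (root so the script can poke at the VMs):

*/5 * * * * /root/cron/checkVMs.sh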

Lastly, the backup script. As with the check script, you can give it a different machine list if you want to back up a different set of machines. Do note, this script suspends each machine before backing it up and starts it again once the backup completes; if a machine was stopped beforehand and you would prefer it to stay stopped, you will have to change the script's logic, because otherwise it will be started. I saved this as /root/cron/backupVMs.sh and set cron to run it every Sunday night (again, crontab entry after the script):

#!/bin/bash
LOCKFILE="/media/raid/vmware/backup.lock"
BACKUP_DIR="/home/media/vmwarebackup"
SMB_DIR="//yourShareServer/vmwarebackup"
MLIST="/media/raid/vmware/machine.list"
SUSPEND="/etc/init.d/vmwaresuspend"
CREDFILE="/root/cron/smbcred"

if [ -f $LOCKFILE ]; then
        echo "A backup is currently in progress"
        exit 1
fi
touch $LOCKFILE

mount -t smbfs -o credentials=$CREDFILE $SMB_DIR $BACKUP_DIR #use -t cifs on systems where smbfs is no longer available

if mountpoint -q $BACKUP_DIR #did it mount? If not, bad things could happen to your tiny 60GB SSD drive
then
        find $BACKUP_DIR -mindepth 1 -maxdepth 1 -mtime +30 -print0 | xargs -0 rm -rf #remove backups over 30 days old (but never the backup dir itself)
        datetime=$(date '+%d_%m_%y_%H_%M')
        mkdir $BACKUP_DIR/${datetime}
        cd $BACKUP_DIR/${datetime}
        while read VM;
        do
                mkdir "$(basename "${VM}")"
                ${SUSPEND} stop "$VM" #suspend just this one machine for the copy
                cp -R "$(dirname "${VM}")"/* "./$(basename "${VM}")/"
                ${SUSPEND} start "$VM" #and bring it back up when done
        done < $MLIST
        umount -l $BACKUP_DIR
else
        echo "Samba share failed to mount: $BACKUP_DIR"
fi
rm $LOCKFILE
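For completeness, here is a sketch of the weekly cron entry plus the smbcred file referenced by CREDFILE. The account name, password, domain, and time are all placeholders; the credentials file format is the standard one understood by mount's credentials= option:

#root's crontab: run the backup every Sunday night at 23:00
0 23 * * 0 /root/cron/backupVMs.sh

#/root/cron/smbcred (chmod 600 this file)
username=backupuser
password=yourpassword
domain=YOURDOMAIN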

Modify to your heart’s content.

Uploading a file by .ajax() (jQuery) with MVC 3 using FormData

In one of my projects, I ran across a scenario in which I needed to upload a file and store it in a database. Sounds simple enough, right? Well, I like to be a bit masochistic as it turns out, and I had developed a heavily AJAX-driven application for performance and caching reasons. By default, AJAX calls simply will not include file input types as post data. Everything else, sure, just not files, which kind of sucks when my AJAX-heavy application requires it.

Luckily, there are a couple of ways around this. You can either create an iframe which gets passed the information for the upload, or you can use the HTML5 FormData object. The former has limitations with callback functionality but is more widely supported; the latter won’t work if you are using (older versions of) Internet Explorer. Fortunately, since my application will only be used by browsers supporting HTML5 functionality, including XMLHttpRequest Level 2, it’s a non-issue, at least for now. You can check and see if your browser supports that functionality as well.
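A quick way to check from code (rather than memorizing a browser support list) is to feature-detect FormData itself; a small sketch:

//if the browser exposes FormData, the XHR2-style upload below will work
if (typeof window.FormData !== "undefined") {
	//safe to build a FormData object for the AJAX upload
} else {
	//older browser (e.g. IE 9 and below): fall back to the iframe technique
}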

So AJAX file uploads using FormData aren’t much different from regular file uploads: you have to set up your view with a form that uses multipart/form-data encoding, and you have to have an input of type file with a name attribute. Pretty simple.

In the view (create):

@model HIDDEN.Models.doc_screenshot

<h2>Create</h2>

@using (Html.BeginForm("Create", "docScreenshot", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    @Html.ValidationSummary(true)
    <legend>Add a screenshot</legend>
    @Html.LabelFor(model => model.name)
    @Html.EditorFor(model => model.name) @Html.ValidationMessageFor(model => model.name)
    @Html.HiddenFor(model => model.lineItemID, "doc_lineItem")
    @* the name attribute here ("upimage") is what the controller reads from Request.Files *@
    <input type="file" name="upimage" />
    <input type="submit" value="Upload" />
}

The controller remains the same as well. In my project, I receive the action parameters as a model type via JSON (which I enable in my main application by adding the JsonValueProviderFactory), so I have to find the uploaded file on the request and store it in a temporary variable. I accomplish this using the Request.Files collection, as shown below. Just to alleviate some confusion: I am using a custom JSON response class to return debug or error information; replace it with your own implementation.

In the controller:

[HttpPost]
public ActionResult Create(doc_screenshot doc_screenshot)
{
	JsonResponse res = new JsonResponse(); //I use a custom JsonResponse Class to send back error/debugging info
	HttpPostedFileBase file = Request.Files["upimage"] as HttpPostedFileBase; //the "name" attribute of the file input, in this case "upimage"
	if (file == null)
	{
		res.Status = Status.Error;
		res.Message = "You need to add an actual image.";
		return Json(res);
	}
	Int32 length = file.ContentLength; //get the file length
	byte[] tempImage = new byte[length]; //apply the length to the new variable so we can copy contents
	file.InputStream.Read(tempImage, 0, length); //grab the file contents into the temp variable (for very large files you would loop until Read returns 0)
	// You will probably want to add some checks here for filetype to make sure that you aren't getting something you don't want
	doc_screenshot.image = tempImage; //copy the image over to the model
	if (ModelState.IsValid)
	{
		res.Message = "Successfully created new screenshot";
		res.Status = Status.Ok;
		db.doc_screenshot.Add(doc_screenshot);
		try
		{
			db.SaveChanges();
			res.Id = doc_screenshot.id;
		}
		catch (Exception e)
		{
			res.Message = e.Message;
			res.Status = Status.Error;
		}
	}
	else
	{
		res.Message = "Model State error.";
		res.Status = Status.Error;
	}
	return Json(res);
}

Alright, basics out of the way, now we get to the FormData part. In the JavaScript section of my code, I prevent the form from being submitted when the user hits the “upload” button and inject my own AJAX functionality instead. I check whether the submitted form has an enctype of “multipart/form-data” and create a new FormData object if it does; if not, I treat it as a standard AJAX form. Oh, and since I’m using the jQuery UI dialog component, I reference the form as “#dialog form”; replace this with your actual form selector in your own implementation.

In the JavaScript:

$("#dialog form").submit(function (event) {
	event.preventDefault();
	var action = $("#dialog form").attr("action");
	var dataString, contentType, processData; //declare these up front so they aren't implicit globals
	if ($("#dialog form").attr("enctype") == "multipart/form-data") {
		//this only works in some browsers.
		//purpose? to submit files over ajax. because screw iframes.
		//also, we need to call .get(0) on the jQuery element to turn it into a regular DOM element so that FormData can use it.
		dataString = new FormData($("#dialog form").get(0));
		contentType = false;
		processData = false;
	}
	else {
		// regular form, do your own thing if you need it
	}
	$.ajax({
		type: "POST",
		url: action,
		data: dataString,
		dataType: "json", //change to your own, else read my note above on enabling the JsonValueProviderFactory in MVC
		contentType: contentType,
		processData: processData,
		success: function (data) {
		   //BTW, data is one of the worst names you can make for a variable
		},
		error: function(jqXHR, textStatus, errorThrown)
		{
			//do your own thing
		}
	});
	$("#dialog").dialog("close");
}); //end .submit()

And that should get you rolling with FormData for uploading files in your MVC application. If you need an alternative, the last resource I’d recommend is the uploadify jQuery plugin.