RsaProtectedConfigurationProvider: Not recommended for children under 5

So, just an intro before getting into the nitty-gritty: these are my personal experiences with the RsaProtectedConfigurationProvider and its associated tools, used to encrypt sections of the web.config for an enterprise web application, along with some of the hassles along the way. As with anything cryptography-related, there is some cursing involved.

Create an RSA Key

The first step to getting RSA encryption into your web application’s web.config is to create a common key that can be shared across servers and/or development environments. To do this, open up a Visual Studio command prompt (or navigate a regular command prompt to your %WINDIR%\Microsoft.NET\Framework(64)\{insert .net version}\ directory) as an administrator and type in:

aspnet_regiis -pc "ProjectKeyNameHere" -exp

This will create a key for you in the %PROGRAMDATA%\Microsoft\Crypto\RSA\MachineKeys directory

MachineKeys folder

Each key is given its own unique, seemingly nonsensical filename. However, the naming convention is actually the calculated key hash based off of the key name itself, followed by the machine GUID, separated by an underscore. The key hash is calculated by an interesting algorithm when the key gets imported into the keystore, and the machine GUID can simply be found in the registry under HKLM:\SOFTWARE\Microsoft\Cryptography.
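To make the convention concrete, here is a small sketch (in JavaScript, purely for illustration) that splits a MachineKeys filename into its two parts. The GUID below is made up; the hash is the IIS MetaBase one mentioned in the trivia note:

```javascript
// A MachineKeys filename is "<key hash>_<machine GUID>".
// Splitting on the first underscore recovers both parts.
function parseKeyFilename(filename) {
  var idx = filename.indexOf("_");
  return {
    keyHash: filename.substring(0, idx),       // hash derived from the key container name
    machineGuid: filename.substring(idx + 1)   // matches MachineGuid under HKLM:\SOFTWARE\Microsoft\Cryptography
  };
}

// Example with a made-up machine GUID:
var parts = parseKeyFilename("c2319c42033a5ca7f44e731bfd3fa2b5_6a5e7f9c-0000-4000-8000-000000000000");
console.log(parts.keyHash);     // "c2319c42033a5ca7f44e731bfd3fa2b5"
console.log(parts.machineGuid); // "6a5e7f9c-0000-4000-8000-000000000000"
```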

Quick Trivia Fact: The key “c2319c42033a5ca7f44e731bfd3fa2b5_…” is the key that is used by IIS to encrypt the MetaBase. Delete or mess with this key and the IIS server on the machine will cease to function.

So even though you can modify permissions to keys and even delete them directly from this directory, DO NOT MANUALLY EDIT PERMISSIONS OR DELETE KEYS FROM THIS DIRECTORY OR YOU WILL SCREW THINGS UP. Use the aspnet_regiis tool to manage your keys instead. Trust me on this (or take the risk and have fun troubleshooting later).

Exporting the key

Now, in order to share the key to other developers and/or machines, you will need to export the key into an XML file. This can be done by running the following command:

aspnet_regiis -px "ProjectKeyNameHere" "C:\location\for\key\KeyName.xml" -pri

This will give you a plain text XML file which describes the private key. As with anything that has “private” in its name, treat this file with respect and proper security procedures (aka, don’t throw this up on a public share, make sure you keep track of where it goes, etc).

Installing existing (XML) key to machine(s)

Once you send off the XML file in a secure fashion to another developer or machine, you can then run the following command on that machine in order to import the key into that machine’s RSA key store:

aspnet_regiis -pi "ProjectKeyNameHere" "C:\location\of\key\KeyName.xml"

Keep in mind, the “ProjectKeyNameHere” name has to be the same throughout the entire process, as this is the name of the key that will be identified in the web.config later on.

At this point, any administrative user on the box can use the key to encrypt/decrypt information. No other (regular/non-admin) user will have access to read the key, and that includes the user that IIS app pools run under. The exact user or group that needs this permission varies by environment, but it is usually “NT AUTHORITY\NETWORK SERVICE”. In order to grant other users permission to read the key, run the following command:

aspnet_regiis -pa "ProjectKeyNameHere" "UsersOrGroups"

If you’re using the impersonation setting in IIS for your web application, read the “Impersonation permission issues” section below.

Adding key to web.config

In order for IIS to know which key to use for encryption/decryption, we first have to remove the default RsaProtectedConfigurationProvider registration in the web.config and then add our own provider that uses the key we created. To do this, simply add the following to your web.config (directly under the root configuration node):

<configProtectedData>
  <providers>
    <remove name="RsaProtectedConfigurationProvider" />
    <add name="RsaProtectedConfigurationProvider"
         keyContainerName="ProjectKeyNameHere"
         useMachineContainer="true"
         description="Uses RsaCryptoServiceProvider to encrypt and decrypt"
         type="System.Configuration.RsaProtectedConfigurationProvider, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</configProtectedData>

Change the assembly version number for System.Configuration to match the .NET version your application targets (e.g. Version=2.0.0.0 for .NET 2.0/3.5, or Version=4.0.0.0 for .NET 4.x).

After saving the file, you will now be able to use aspnet_regiis to encrypt and decrypt sections of your web.config file as you see fit.

To encrypt (in this example, the connectionStrings node):

aspnet_regiis -pef "connectionStrings" "c:\path\to\web.config\"

To decrypt

aspnet_regiis -pdf "connectionStrings" "c:\path\to\web.config\"

Note that I supply the path to the directory containing the config file, not the filename itself (e.g. don’t use c:\path\to\web.config\web.config). Also note that you cannot encrypt the entire config file, only nodes under the root configuration node (it would be a bad idea to encrypt the entire file anyway, as the part that knows how to decrypt the file would also be encrypted).

You will also be able to use tools such as the Enterprise Library Config Tool in order to do this for you automatically should you so desire.

Reference: ASP.NET IIS Registration Tool

Running publish commands to encrypt config

Personally, I find it easiest when I don’t encrypt any data in the web.config file until I’m ready to publish (so if I have to change a connectionString, I won’t have to manually decrypt, change, and then re-encrypt each time). This process will vary based on which method you use to publish your web application. I will only be covering the file system publish method, but make sure to check out the reference links below for compiling your own methods for publishing.

For MSBuild functionality, you can either store these commands in the main .csproj file if you want the action to occur globally across all the publish profiles that you have, or you can set it individually in each .pubxml file (recommended).

Now, what I ended up doing for the file system publish was capturing an event after the build and web.config transform but before the files were moved to the filesystem location that was specified. Under the root Project node in the .pubxml file, I added the following:

<Target Name="BeforePublish" AfterTargets="CopyAllFilesToSingleFolderForPackage">
  <GetFrameworkPath>
    <Output TaskParameter="Path" PropertyName="FrameworkPath" />
  </GetFrameworkPath>
  <Exec Command="$(FrameworkPath)\aspnet_regiis.exe -pef &quot;connectionStrings&quot; &quot;$(MSBuildProjectDirectory)\$(IntermediateOutputPath)Package\PackageTmp&quot;" />
</Target>

A few things to note here. The AfterTargets value was a pain in the butt to find (and apparently only works well with MSBuild 4.0). It is literally not documented anywhere officially (or nowhere I can find on the internet at the moment), but it captures the moment before the files are moved to the destination file system location. The properties used in the Exec command (such as $(MSBuildProjectDirectory)) are all listed in the MSBuild documentation, and $(FrameworkPath) comes from the GetFrameworkPath task. If you use quotes inside of the command, make sure to escape them as &quot; since it is an XML file, after all. If you have multiple sections that you want to encrypt, you will have to create an Exec node for each one.

Reference: How to: Extend the Visual Studio Build Process
Reference: MSBuild Concepts
Reference: How to: Use Environment Variables in a Build
Reference: GetFrameworkPath Task

Impersonation permission issues

One of the first problems I ran across was the fact that when IIS has the impersonation flag set to true for the web application, it will use the impersonated account to access the RSA key in order to decrypt the configuration file. Unfortunately, there is no way around this if you MUST have impersonation: you will have to grant access to the key for each user who is using the application. An easy way to do this is to grant permission to the group needing access to the application. Unfortunately, if you have something like Windows Authentication turned on, that group may end up being “Domain Users” or goodness forbid “Everyone”:

aspnet_regiis -pa "ProjectKeyNameHere" "Everyone"

This sort of sucks in terms of security, I know. Right now, there really isn’t a better way of dealing with it without rewriting your application and turning off impersonation.

Cannot read key error

Check to make sure that permissions on the key are set up correctly. If it helps to debug, give permissions to everyone to see if the error goes away. If it does, figure out which user/group actually needs read access to the key. If it doesn’t, here’s Google :).

Safe Handle error

“Failed to decrypt using provider ‘RsaProtectedConfigurationProvider’. Error message from the provider: Safe handle has been closed”

So this one is a pain, but again, it has everything to do with permissions. If you did everything right, and read the note above about not manually changing the permissions on the MachineKeys folder and using the aspnet_regiis tool instead, you shouldn’t have to worry about this.

On the other hand, if one of your co-workers decided that it was a good idea to change permissions on that folder without realizing the consequences it may produce, well, you have this cryptic error message to deal with.

First and foremost, check the permissions on the MachineKeys folder (%ProgramData%\Microsoft\Crypto\RSA\MachineKeys). You should see this little lock icon near the folder:

(screenshot: MachineKeys folder showing the lock icon)

That means that general users can’t read the contents of the folder. This is a good thing. Now check your permissions for the “Everyone” group:

(screenshot: MachineKeys folder permissions for the “Everyone” group)

The !!!ONLY!!! permissions that the “Everyone” group should have checked are the “Special Permissions”. Digging a little deeper, we find out what those special permissions are:

(screenshot: special permissions detail for the “Everyone” group)

Specifically, we have the following from the list provided by Microsoft:

  • List Folder/Read Data
  • Read Attributes
  • Read Extended Attributes
  • Create Files/Write Data
  • Create Folders/Append Data
  • Write Attributes
  • Write Extended Attributes
  • Read Permissions

and the following quote: “The default permissions on the folder may be misleading when you attempt to determine the minimum permissions that are necessary for proper installation and the accessing of certificates.”

If you grant any further permissions to the “Everyone” group, such as general read access, write access, or full control, you will not only break the functionality of the existing keys in the folder; any additional key you install will be improperly installed, and aspnet_regiis will complain about the safe handle. The existing keys will (for the most part) start working again once you fix the permissions on the folder. The improperly installed keys will not be fixed; they are stuck in a state of limbo, and the aspnet_regiis tool will not even be able to delete them (it will complain about…the safe handle being closed). The only way to delete them is to go into the MachineKeys folder manually and find which key you have to delete (either by last modified time or by the key hash, which you can get by installing the key on another machine and comparing the first part of the filename on both). Once it is manually deleted, you can use the aspnet_regiis tool to reinstall the key.

Again, I don’t have to remind you to be very meticulous and careful when rooting around and modifying things in this folder as it can break much more than just your web application.

Reference: Default permissions for the MachineKeys folders

Adding MVC4 to a project: The operation could not be completed. Not implemented

While experimenting with adding the MVC framework onto an existing C# Webforms project, I came across an interesting error when trying to add the MVC3/MVC4 project type GUID into the existing csproj file for the project (I tried both, to see if one or the other was the problem). I was presented with the following error when first opening the solution with the project (this was using Visual Studio 2012):

Visual Studio 2012 MVC4 Error

“The operation could not be completed. Not implemented”

That was very…descriptive. I went ahead and repaired the MVC4 package thinking that might fix it, but it was no good. Made sure Visual Studio was up to date – it was. I was pretty stumped. I took another look at the csproj file. Before adding MVC4 to the project, I had the following:

<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
So it’s your basic web application project {349c5851-65df-11da-9384-00065b846f21} combined with a C# project {fae04ec0-301f-11d3-bf4b-00c04f79efbc}. I went ahead and added in the MVC4 type {E53F8FEA-EAE0-44A6-8774-FFD645390401} to get the following code:

<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc};{E53F8FEA-EAE0-44A6-8774-FFD645390401}</ProjectTypeGuids>
Seems like it should work. I compared the above to a newly created MVC project and apparently, the problem exists with the order that the project type GUIDs are listed in. I had added the MVC4 project type GUID after everything, whereas it needs to go before the others, like so:

<ProjectTypeGuids>{E53F8FEA-EAE0-44A6-8774-FFD645390401};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>
Problem is, I can’t seem to find any sort of documentation for this behavior. It really shouldn’t matter what order the project type GUIDs are in, right? I can understand from a developer standpoint that MVC depends on the Web application framework which depends on the C# project – and in that way, the project can correctly identify the build order – but I thought that this was something that Visual Studio would have been able to figure out more easily (without having to list the project type GUIDs in the proper order). Anyways, after figuring that out, MVC worked with the existing Webforms application like a champ.
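For what it’s worth, the fix boils down to string manipulation on the semicolon-separated ProjectTypeGuids value. A sketch (in JavaScript, purely for illustration, using the GUIDs from the text above) of moving the MVC4 GUID to the front:

```javascript
// The csproj stores project types as a semicolon-separated GUID list.
// Visual Studio apparently wants the MVC4 GUID first, so move it there.
var MVC4_GUID = "{E53F8FEA-EAE0-44A6-8774-FFD645390401}";

function moveMvcGuidFirst(projectTypeGuids) {
  var rest = projectTypeGuids.split(";").filter(function (g) {
    return g.toUpperCase() !== MVC4_GUID.toUpperCase();
  });
  return [MVC4_GUID].concat(rest).join(";");
}

console.log(moveMvcGuidFirst(
  "{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc};{E53F8FEA-EAE0-44A6-8774-FFD645390401}"
));
```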

Effective Social Engineering: Another DEFCON 20 skytalk

The way a lot of security pen-testers do social engineering calls by phone goes something like: “Hey, I’m <so and so> from IT. You have a virus. I need your password.” Rinse, repeat, and occasionally you will get one or two people who actually give you their password. The other 99% of the time, employees have been trained not to give out their password, and rightfully so. “Well, we got a username and password, so it still counts…” But you also wasted time calling 50 other people before you got the credentials, which most likely put the company on high alert.

At the DEFCON 20 Skytalk, the presenter with the alias “Tanks4U” talked about the inefficiencies and the problems with using the commonly employed methods of social engineering, and the companies who hire security guys to call <x> amount of people for statistical purposes – specifically if it’s an all or nothing goal like getting credentials. You didn’t get them to give you the password? Hang up. They challenged or questioned you? Hang up. Try again with some other person. And it totally compromises the whole point of social engineering.

“You’re not supposed to tell them to jump in a river. You are supposed to take them to the river and convince them that they are on fire.” Social engineering isn’t just about a cold call to a person; it involves an interaction with the person on the other side and observation of that person to get the reaction you need. In effective social engineering, you need to know the following:

  • Know your goals – what details do you want to obtain? Username and password could be one, but what if you want the building layout? When does the cleaning staff come in? Who are the managers? Any details on the business that you can obtain can help you further in obtaining access to their business. Write them out on a piece of paper as a checklist. Make a scoring system. Ask them for things like the last four digits of their phone number, what their badges look like, the last four digits of their SSN, etc.
  • Know your target – it helps to observe how people react when faced with certain situations for this one. Go to a coffee shop beforehand and sit there for a few hours. Watch how people interact with one another. You don’t have to be good at knowing how to social engineer people, just good enough to figure out how to small talk with others. Ask them questions about the weather or talk about hardships (ever had a flat tire or kids?), or engage in other commonalities that people share. Make a connection.
  • Know yourself – sure you might know how to talk to others, but can you actually do it when it comes time? How will you react? Most people might get nervous and that is simply something you get over after you have some practice with social engineering. Public speaking also helps (try performing in a play and get your acting skills in shape). Or just sit in front of a mirror and practice your technique. Ultimately, though, it will come down to analyzing how you react in a live situation with other people. Remember the coffee shop? Go make small talk with those people that you were just observing.

In terms of the actual call or contact, performing the following steps will be key to establishing a good baseline for social engineering (or not! depending on the situation. Adjust accordingly with what you know above):

  • Initiation – “Hi, my name is <so and so>. Is this <target>?”
  • Pleasantries – “How are you doing? The weather sucks right now. How’s work? Busy today?” Establish a connection to your target through commonalities.
  • Establish a pretext – who are you pretending to be? What is your purpose? Establish authority.
  • Emotional injection – this is the start of the RIP phase (reactionary identity preservation). Ask them universal questions like, “Did you use Google today?” “Are you aware that we have an employee handbook? Did you read it?” Note that most of these are yes/no questions and that all of them should have the same response, no matter who you call (thus universal or almost rhetorical). This gets your target into an almost defensive stage and gives them a warm fuzzy feeling when they answer correctly – reassure them that they won the argument and play with their emotions. Get their responses (make them angry, worried, or comfortable).
  • Team up – once you get them to emotionally connect with you on the problem that you have (oh you have a virus infection), work towards getting your goal by teaming up and fixing the problem. “Hey, so let’s just take care of this virus infection quickly” or “let’s figure something out”. Prepare to get your goal.
  • Request an action – have them go to a website, give you their credentials, or open up a command shell and start typing things – whatever your goal is, ask them to do it now. If they refuse, don’t give up and try to obtain your goal some other way (Plan B, C, D…).
  • Closing – Once you accomplish your goal, make them feel like they accomplished something as well. Give them reinforcement saying something like, “You did everything right, good job” or, depending on other emotional responses, give them other feedback like warnings, “Don’t open bad websites” or “let’s just keep this between us; I’d rather not have to tell your boss about this so that you don’t get in trouble.” This is a good phase to make sure that they don’t go and tell others about the incident so that you have more opportunities to contact others at the business.

VMware Workstation Automatic Suspend/Start and Backup Scripts for Ubuntu

I had previously listed some scripts that I use for VMware Workstation management on the Intrinium blog, but have found a few annoyances with running them. So, I’m re-releasing the scripts in their modified form.

The objectives of these scripts are as follows:

  • They provide a simple way to suspend all virtual machines on host shutdown and resume only a select few on host startup.
  • They provide a way to check if certain virtual machines are running. If they are not running, the script will send an alert (and attempt to start the machine).
  • Backups will occur on a per-machine basis (that can be set to be different than the running list) and will only suspend one machine at a time for backups, starting them back up once the backup is finished. The backup will be sent to a samba share in my example, but this can be configured to be anything you can cp/scp to.

I’ve also modified the scripts to be a little more flexible in terms of modularity (user set directories) and included some error checks. Other than that, you know the routine – test it before you run it and don’t blame me if you delete everything.

First, the startup script, which I’ve dropped in as /etc/init.d/vmwaresuspend (and of course, run “update-rc.d vmwaresuspend defaults”). This script runs on startup and shutdown of the host, suspending all running virtual machines on shutdown and starting only the machines on the machine list when the host starts back up:


#!/bin/sh
### BEGIN INIT INFO
# Provides: vmwaresuspend
# Required-Start:
# Required-Stop: 0 1 6
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
### END INIT INFO

USER="media" #the user to run VMware as
MLIST="/media/raid/vmware/machine.list" #the machine list for machines to START on startup (see example below)

case "$1" in
  start)
        if [ $# -gt 1 ]; then
                #start only the machine given as the second argument
                su ${USER} -c "vmrun start '$2' nogui"
        else
                while read VM; do
                        echo "Starting $VM"
                        su ${USER} -c "vmrun start '$VM' nogui"
                done < $MLIST
        fi
        exit 0
        ;;
  stop)
        if [ $# -gt 1 ]; then
                #suspend only the machine given as the second argument
                su ${USER} -c "vmrun suspend '$2'"
        else
                vmrun list | grep vmx$ | while read VM; do
                        echo "Suspending $VM"
                        su ${USER} -c "vmrun suspend '$VM'"
                done
        fi
        exit 0
        ;;
  *)
        # Nothing to be done for restart
        exit 0
        ;;
esac

The machine list is simply the full path to the vmx file for the virtual machine, one per line. Example:

/media/raid/vmware/vm1/vm1.vmx
/media/raid/vmware/vm2/vm2.vmx
Next, the script that checks to see if VMs are running. I set the list to be the same one as the startup script above, but you can change it to a different one if you feel like it. I saved this in /root/cron/ and updated cron to have it run this script every 5 minutes:


#!/bin/bash
MLIST="/media/raid/vmware/machine.list" #same list as the startup script; change it if you like
LOCKFILE="/tmp/vmbackup.lock" #path is an assumption; must match the one in the backup script

if [ -f ${LOCKFILE} ]; then #checks for the lock file made by backup script
        exit 0
fi

# vmrun list | tail -n+2 | awk -F/ '{print $NF}' | rev | cut -d\. -f2- | rev ...or we can just ps grep :)
while read VM; do
        if ! ps ax | grep -v grep | grep "$VM" > /dev/null; then
                echo "$VM is down!"
                #include mail -s here if you don't receive output for cron jobs.
                #include "vmrun start $VM" if you want to start the VM automatically if down
                # - it might not work due to other factors (rebuilding vmware modules, etc)
        fi
done < $MLIST

Lastly, the backup script. Like with the checking script, you can set a different machine list for it if you want to back up other machines. Do note, this script tries to suspend each machine before backup and then start it after the backup completes; if your machine was stopped and you prefer it to remain stopped after the backup, you will have to change the logic of the script. Otherwise, it will start up. I saved this as /root/cron/ and set cron to run it every week on Sunday night:


#!/bin/bash
#the paths below are assumptions; adjust them to your environment
MLIST="/media/raid/vmware/backup.list" #machines to back up (can differ from the startup list)
LOCKFILE="/tmp/vmbackup.lock" #also checked by the monitoring script
CREDFILE="/root/.smbcredentials" #samba credentials file
SMB_DIR="//backupserver/vmbackups" #samba share to back up to
BACKUP_DIR="/mnt/vmbackup" #local mountpoint for the share
SUSPEND="/etc/init.d/vmwaresuspend" #the suspend/start script from above

if [ -f $LOCKFILE ]; then
        echo "A backup is currently in progress"
        exit 1
fi
touch $LOCKFILE

mount -t smbfs -o credentials=$CREDFILE $SMB_DIR $BACKUP_DIR

if mountpoint -q $BACKUP_DIR; then #did it mount? If not, bad things could happen to your tiny 60GB SSD drive
        find $BACKUP_DIR -mtime +30 | xargs rm -rf #remove backups over 30 days old
        datetime=$(date '+%d_%m_%y_%H_%M')
        mkdir $BACKUP_DIR/${datetime}
        cd $BACKUP_DIR/${datetime}
        while read VM; do
                mkdir $(basename ${VM})
                ${SUSPEND} stop $VM
                cp -R $(dirname ${VM})/* ./$(basename ${VM})/
                ${SUSPEND} start $VM
        done < $MLIST
        umount -l $BACKUP_DIR
else
        echo "Samba share failed to mount: $BACKUP_DIR"
fi
rm -f $LOCKFILE

Modify to your heart’s content.
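For reference, the cron schedules described above would look something like this in root’s crontab (the script filenames here are placeholders, since only the /root/cron/ directory was mentioned):

```
# check running VMs every 5 minutes
*/5 * * * * /root/cron/vmcheck.sh
# back up VMs every Sunday night at 11pm
0 23 * * 0 /root/cron/vmbackup.sh
```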

Setting up LDAPS to Active Directory in Subsonic/Airsonic

…using an internal CA. Ok, so Subsonic is a really awesome music streaming tool that is multi-platform and lets you stream music to all sorts of devices. The really neat thing I found with it is that it supports LDAP authentication, which means I don’t have to recreate user accounts and keep track of different passwords for all the users in my home network; I just leave that task to my domain controller. I’ve run into a few interesting issues trying to get it set up, though, so for the sake of my sanity if I ever need to set it up again, and for those of you out there who may be struggling with the same issues, I’ve decided to write up a little howto guide.


So before continuing, I am assuming the following about your setup:

  • AD (Active Directory) is currently installed somewhere on your network, or you can use an LDAP compatible server.
  • LDAPS is enabled for AD (if you set up an Enterprise CA, this process becomes much easier) or for your LDAP server.
  • You have created an account whose sole purpose is to read AD information (the number of times I’ve seen someone use a domain admin account for something like this…is the number of times I’ve owned a network in mere seconds 🙂 ). You’ll need some sort of LDAP binder user that has read capability.
  • You have a list of users in a group on AD/LDAP for which you are granting access to subsonic. This can be a group called <insert name here> or simply just any user in the “Users” group.
  • Subsonic is installed on your Ubuntu system and you have a local admin account that you created for it. If using a docker container, you’ll need to additionally overwrite the cacerts file in the docker image. e.g.
    • (airsonic) -v /etc/ssl/certs/java/cacerts:/etc/ssl/certs/java/cacerts:ro
    • (airsonic-advanced) -v /etc/ssl/certs/java/cacerts:/opt/java/openjdk/lib/security/cacerts:ro
  • You installed HTTPS on subsonic. It’s pretty pointless to use LDAPS without also using HTTPS. Or run it through some kind of reverse proxy like nginx.

Configure the java keystore

LDAPS by nature requires the proper use of certificates, for security reasons of course. In order to verify the certificate from the AD server, the subsonic machine and AD machine must have a common CA that they trust. If you are using an internal CA to issue your LDAPS cert for your AD machine, all it takes is installing the same CA onto the subsonic machine. Right? Well, partially.

You see, just like an application like Firefox will have its own individual keystore where it keeps the certificates it trusts, Java does the same thing. So just throwing the cert into /etc/ssl/certs/ won’t do you too much good. Fortunately, Ubuntu makes this process a bit easier.

Make sure you install the ca-certificates-java package (comes preinstalled with openjdk usually):

sudo apt-get install ca-certificates-java

Next, throw your CA cert into /usr/local/share/ca-certificates. Run the following command:

sudo update-ca-certificates

You should get output telling you that at least 1 new certificate has been added. This command will also automatically update your java keystore. You can check to see if your certificate has successfully been imported by running:

keytool -list -v -keystore /etc/ssl/certs/java/cacerts | grep "Your CA Name"

The default password for the java keystore is “changeit”.

Setting up Subsonic

Log into subsonic. Go to Settings -> Advanced and enable LDAP authentication. The biggest difference between LDAP and LDAPS here is the protocol (ldap:// vs ldaps://) and the port number (389 vs 636). For the LDAP information, you will have to substitute your own details. For me, I have a group of users called “Subsonic” who are allowed access to the application. By default, the LDAP URL that is autofilled for you includes all users in the “Users” group, which is not what I wanted, so I had to modify the LDAP search filter to include only users in the Subsonic group (if you want to include all users, you can just leave this alone):

LDAP URL:            ldaps://myADserver:636/cn=Users,dc=yourdomain,dc=com
LDAP search filter:  (&(sAMAccountName={0})(&(objectCategory=user)(memberof=cn=Subsonic,cn=Users,dc=yourdomain,dc=com)))
LDAP manager DN:     yourdomain\limiteduser

To explain further, the search filter looks for a return field of sAMAccountName which is the username of the user in the specific place you are looking. The Base DN starts looking at the root of the domain in the Users group (cn=Users,dc=yourdomain,dc=com) with the search filter of “objectCategory=user” (so any user account, but not computer or group accounts) AND a search filter of “memberof=cn=Subsonic…” which looks for the Subsonic group to be part of the user object in AD. Do note, there is a group object which has a list of users…but this is easier.
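If it helps to reason about the filter, the {0} placeholder is simply replaced with the login name before the search runs. A quick sketch (JavaScript, just for illustration, using the filter string from the config above):

```javascript
// Subsonic substitutes the username for {0} in the configured search filter.
var FILTER_TEMPLATE = "(&(sAMAccountName={0})(&(objectCategory=user)(memberof=cn=Subsonic,cn=Users,dc=yourdomain,dc=com)))";

function buildFilter(username) {
  return FILTER_TEMPLATE.replace("{0}", username);
}

console.log(buildFilter("alice"));
// (&(sAMAccountName=alice)(&(objectCategory=user)(memberof=cn=Subsonic,cn=Users,dc=yourdomain,dc=com)))
```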

LDAP configuration in subsonic

If you are interested with fooling around a bit more with possible search strings and base DN configs, check out JXplorer.

Uploading a file by .ajax() (jQuery) with MVC 3 using FormData

In one of my projects, I ran across a scenario in which I needed to upload a file and store it in a database. Sounds simple enough, right? Well, I like to be a bit masochistic as it turns out, and I developed a heavily AJAX-based application for performance and caching reasons. By default, AJAX calls are not designed to, and will not, include file input types as post data. Everything else, sure, just not files, which kind of sucks when my AJAX-heavy application required it.

Luckily, there were a couple of ways around this. You can either create an iframe which gets passed the information for the upload or you can use the HTML 5 FormData function call. The former does have limitations with callback functionality, however is more widely supported. The latter won’t work if you are using Internet Explorer. Fortunately, since my application will only be used by browsers supporting HTML 5 functionality, including XMLHttpRequest Level 2, it turns into a non-issue, at least for now. You can check and see if your browser supports that functionality as well.
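A check along these lines is enough to decide between the FormData path and an iframe fallback. This is just a sketch; the function takes the global object as a parameter, so in a browser you would pass window:

```javascript
// XHR2-style uploads need both FormData and XMLHttpRequest to exist.
function supportsAjaxUpload(root) {
  return !!root &&
         typeof root.FormData === "function" &&
         typeof root.XMLHttpRequest === "function";
}

// In a browser: if (supportsAjaxUpload(window)) { /* use the FormData path */ }
```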

AJAX file uploads using FormData aren’t much different from regular file uploads: you have to set up your view to have a form which supports multipart/form-data encoding, and you have to have an input of type file with a name attribute. Pretty simple.

In the view (create):

@model HIDDEN.Models.doc_screenshot


@using (Html.BeginForm("Create", "docScreenshot", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Add a screenshot</legend>
        @Html.LabelFor(model => model.image)
        <input type="file" name="upimage" />
        @Html.ValidationMessageFor(model => model.image)
        @Html.HiddenFor(model => model.lineItemID, "doc_lineItem")
        <p><input type="submit" value="Create" /></p>
    </fieldset>
}


The controller remains the same as well. In my project, I am receiving the function parameters as a Model type using JSON (which I enable in my main application by adding the JsonValueProviderFactory), therefore I actually have to find the file request and store it in a temporary variable. I accomplish this using the Request.Files[] list as shown below. Just to alleviate some confusion: I am using a custom Json response class in order to return debug or error information; replace with your own implementation.

In the controller:

public ActionResult Create(doc_screenshot doc_screenshot)
{
	JsonResponse res = new JsonResponse(); //I use a custom JsonResponse class to send back error/debugging info
	HttpPostedFileBase file = Request.Files["upimage"] as HttpPostedFileBase; //the "name" attribute of the file input, in this case "upimage"
	if (file == null)
	{
		res.Status = Status.Error;
		res.Message = "You need to add an actual image.";
		return Json(res);
	}
	Int32 length = file.ContentLength; //get the file length
	byte[] tempImage = new byte[length]; //create a buffer of that length so we can copy the contents
	file.InputStream.Read(tempImage, 0, length); //grab file stream contents and store them in the temp variable
	// You will probably want to add some checks here for filetype to make sure that you aren't getting something you don't want
	doc_screenshot.image = tempImage; //copy the image over to the model
	if (ModelState.IsValid)
	{
		try
		{
			//your persistence code (saving the model to the database) goes here
			res.Message = "Successfully created new screenshot";
			res.Status = Status.Ok;
			res.Id = doc_screenshot.id; //assumed property: the id of the newly saved record
		}
		catch (Exception e)
		{
			res.Message = e.Message;
			res.Status = Status.Error;
		}
	}
	else
	{
		res.Message = "Model State error.";
		res.Status = Status.Error;
	}
	return Json(res);
}

Alright, basics out of the way, now we get to the FormData. In the javascript section of my code, I prevent the form from being submitted when the user hits the “upload” button and instead inject my own AJAX functionality. I detect whether the form that is submitted has enctype of “multipart/form-data” and create a new FormData string if it does. If not, I treat it as a standard AJAX form. Oh and since I’m using the dialog jQuery UI component, I make all calls to the form by using “#dialog form” – replace this with your actual form name in your own implementation.

In the JavaScript:

$("#dialog form").submit(function (event) {
	event.preventDefault(); //stop the normal (non-AJAX) form submission
	var action = $("#dialog form").attr("action");
	var dataString, contentType, processData;
	if ($("#dialog form").attr("enctype") == "multipart/form-data") {
		//this only works in some browsers.
		//purpose? to submit files over ajax. because screw iframes.
		//also, we need to call .get(0) on the jQuery element to turn it into a regular DOM element so that FormData can use it.
		dataString = new FormData($("#dialog form").get(0));
		contentType = false;
		processData = false;
	}
	else {
		// regular form, do your own thing if you need it
		dataString = $("#dialog form").serialize();
		contentType = "application/x-www-form-urlencoded; charset=UTF-8"; //jQuery's default
		processData = true;
	}
	$.ajax({
		type: "POST",
		url: action,
		data: dataString,
		dataType: "json", //change to your own, else read my note above on enabling the JsonValueProviderFactory in MVC
		contentType: contentType,
		processData: processData,
		success: function (data) {
			//BTW, data is one of the worst names you can make for a variable
		},
		error: function (jqXHR, textStatus, errorThrown) {
			//do your own thing
		}
	});
}); //end .submit()
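One more FormData trick worth knowing: instead of wrapping the whole form element, you can build a FormData by hand and append extra fields (or files) before posting. A small sketch, where the field name is just an example:

```javascript
// Sketch: build a FormData manually and tack on extra values before $.ajax
var formData = new FormData();
formData.append("lineItemID", "42"); // extra text field rides along with the upload
// formData.append("upimage", fileInput.files[0]); // a File picked from an <input type="file">
```

This is handy when you need to send values that aren’t inputs in the form, such as ids computed in script.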

And that should get you rolling with using FormData in your MVC application for uploading files. If you need an alternative, the last resource I’d recommend is the uploadify jQuery plugin.

DEFCON 20: “Your network sucks” Skytalk

Note: the dialogue quoted here is a general representation of what was actually said in the presentation; it is not verbatim and may not reflect the speaker’s exact ideas, as he was kind of getting drunk.

“Your network sucks,” stated Anch in an almost slurring drawl as he began his DEFCON 20 Skytalk. A drink in his hand to cure the previous night’s hangover, he told the audience about the many networks he’s seen that suck by design. Sure, you can have network admins who can set up switches and firewalls, but there are a few points of security that you end up missing. Or, if you do have the security, you end up implementing it incorrectly, rendering it useless.

“Some people think that there is this magical device you can plug into your network which will automatically protect it from all hackers; this device is called an ‘IPS’; it is in fact a marketing term for an IDS that costs more and has cool blinking lights on it.” His main point ran clear: appliances don’t protect your network; you, the network administrator, protect your network. If you believe that you can just put a device on your network and leave it alone, thinking that it will protect your assets, you are horribly mistaken. First, no such device, whatever preventive measures it takes, can block every attacker or method of attack. There is always a way around it, though it may make life a little more difficult for the person trying to get in or around your appliance. New attacks and methods are bound to arise, and without configuring and updating that appliance for every possible vector, known and unknown, you cannot stop everything. And second, these devices usually work at layer 3 and don’t cover much at layer 2 of the network.

“DRINK!” the crowd yells as Anch looks at his cup in despair.

“I’m going to get so hammered…” as he takes another sip, “so people think that they can secure their networks by using VLANs…” Indeed, utilizing VLANs on your network can help segregate different machines and systems into logical network groups. Problem is, when they are set up, they are usually misconfigured. The issue of VLAN hopping also arises, and while technically harder for an attacker to pull off, it is still possible. “VLAN segregation is not an effective security measure, it is simply a separation method.”

“ACLs are great, if you take the time to set them up properly.” Problem is, most people don’t set them up properly. Also, like VLANs, they are not the “perfect” solution, meaning that they shouldn’t exist on their own as the only security mechanism. In a way, ACLs function like a rudimentary firewall – great for keeping out the known and preventing easy attacks by filtering packets. The key, of course, is filtering – while it may catch some or even most of the bad packets if maintained, it won’t catch everything.

“And lastly, there is 802.1X, which sucks because it doesn’t have client support.” If your network has all Windows machines and only Windows machines, you really don’t have to worry about this…for the most part (again, configuration is key). However, once you start including Linux in the scope, let’s just say that the complexity of configuring Linux clients becomes a whole different story. And for those using agent-based solutions, you’ll probably have to bug your vendor to release a stable agent for each distribution of Linux that you currently use (unless you can persuade them to hand over the source code…good luck, bring plenty of cash).

“Here, I have a refill for you,” one of the audience members says as he hands Anch a cup.

“What is this stuff, it tastes like crap!” “Oh, it’s Heineken; screw that, I’m not drinking this stuff!” Anch exclaimed as he went and poured himself a Fat Tire. “So where was I…oh yeah, how do you fix your network so it doesn’t suck anymore.”

“Organization and monitoring are key. Your network diagram is the first thing you have to fix because it probably sucks.” Indeed, if you are like every other network admin who uses Visio to make a diagram, you are selling yourself short on information, not to mention that any updates to your network make that diagram practically unmaintainable. Your network diagram, to be complete, must contain the following at a minimum:

  • IP addresses for devices and machines
  • MAC addresses for devices and machines
  • Ports on switches correlating to the MAC addresses for devices and machines

Those are the basics. Then, you group computers and devices into logical segments based on functionality and network traffic type. You have your sensitive servers in one area, you have your mail servers in another, and file servers in yet another. The main purpose behind grouping servers is to then monitor the traffic coming to and from the servers. Any odd traffic? Investigate it. Watch logfiles and obtain data for system events. Find the norm for the server and then you can filter any anomalies. Don’t stop there though; investigate the norm for the server as well – it may have already been compromised (in which case, establishing the norm traffic pattern for the server would include the suspicious traffic). This is the job of the network admin – “fancy blinking lights” of a costly network device will never replace that functionality.
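One way to keep those basics usable is to store them as machine-readable records rather than as shapes in a drawing. A sketch of what one such record might look like – the host names and values here are entirely hypothetical, and the field names are my own:

```javascript
// Sketch: one record per host, carrying the three diagram minimums
// (IP, MAC, switch port) plus the logical group used for monitoring.
var inventory = [
	{
		host: "mail01",                 // hypothetical host name
		ip: "10.0.20.11",
		mac: "00:1a:2b:3c:4d:5e",
		switchPort: "sw-core-1/Gi0/14", // switch and port this MAC hangs off of
		group: "mail"                   // logical segment, per the grouping above
	}
];
```

Data in this shape can be diffed when the network changes and cross-checked against what the switches actually report, which a static drawing can’t do.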

“The point I’m trying to get across,” Anch said while stumbling about a bit, “is that when it comes to your network, a condom is better than the Saran wrap you currently have…”

“It’s still better than tinfoil!” yelled an audience member.