Archive

Posts Tagged ‘Azure’

3 Things Azure ACS has done for me lately…

In my pet project, Crossfit Benchmarks, I have three very specific security requirements, and Azure ACS (Access Control Service) has allowed me to implement all three.

Here is the list of requirements:

  • Users must log in to access secure sections of the web application AND my application will not do any of the following
    • provide authentication & authorization services
    • store & manage user credentials
  • Data stored in my application must be partitioned by individual users – i.e. your data is yours, mine is mine, our data does not intermingle
  • The RESTful web services that serve the data to my applications must be secured AND my services will not do any of the following
    • provide authentication & authorization services
    • store & manage client application credentials / keys / or shared secrets

Yeah – I know it sounds too good to be true, right? But it was pretty easy to do all three.

User Authentication & Authorization

I was able to log in to my Azure portal, navigate to the Access Control section, and simply create a relying party application that relies on various third party identity providers (Google, Facebook, Yahoo, and Windows Live). In this particular scenario – the relying party application is my ASP.NET MVC Web Application.  After just a few clicks I had enabled authentication & authorization for all four providers.  Each of these identity providers manages credentials and provides authentication – NOT my web application.

So the moral of the story here is that – for my web application I don’t care about authenticating and authorizing users.  As far as my web application is concerned – if Azure ACS says that the 3rd party Identity Providers have authenticated & authorized the person trying to access the secured sections of my web application then that is secure enough for me.

If you wanted to – you could even integrate your on-premise Active Directory with cloud-based Azure ACS – but I have never had to do this and frankly I think this scares the daylights out of many IT organizations… can you say “private cloud” – oh wait… that’s a topic for another day.

Partitioning Data by Individual Users

The beauty of using the Identity Providers in Azure ACS is that they all give SAML tokens back to my application. SAML has been around since ~2005 and it has typically been complicated to implement. This article won’t get too much into SAML or tokens, but tokens are basically tidbits of user information that are handed out by identity providers. Tokens ARE NOT credentials – they are a set of “claims” made about a security context; for example, “name identifier” + “identity provider” can be used to uniquely identify a user accessing my system. I don’t know “who” this user is – just that the token is valid and came from a reputable source – and now I can partition data using the claims handed to my application.
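
To make that concrete, here is a minimal sketch of how my application can turn those two claims into a partition key once ACS has handed it a token.  This assumes the .NET 4.5 System.Security.Claims types (on the older WIF bits the equivalent types live in Microsoft.IdentityModel.Claims) and the identity-provider claim type that ACS typically issues – treat the exact claim URI as something to verify against your own tokens:

using System.Security.Claims;

public static class UserPartition
{
    // the claim type ACS uses to indicate which identity provider authenticated the user
    private const string IdentityProviderClaimType =
        "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";

    public static string GetPartitionKey(ClaimsPrincipal principal)
    {
        // "name identifier" + "identity provider" uniquely identifies the user,
        // without my application ever knowing who they actually are
        var nameIdentifier = principal.FindFirst(ClaimTypes.NameIdentifier);
        var identityProvider = principal.FindFirst(IdentityProviderClaimType);

        return string.Format("{0}|{1}", identityProvider.Value, nameIdentifier.Value);
    }
}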

So how does all this magic work? It’s pretty cool… and here is the short version of my understanding of how it works…

  • User tries to access a secure section of my app
  • My app redirects user to Azure ACS because my app is configured to expect Azure ACS tokens
  • User logs into their identity provider and grants my app access
  • Identity provider issues tokens and redirects user back to Azure ACS
  • Azure ACS forwards user to where they wanted to go in the first place (my application)
  • My application inspects the tokens to make sure they are valid
  • If the tokens are valid – we will either create new data for the user OR load their existing data

So what happens if Facebook, Google, Yahoo, or Windows Live change the way that they issue name identifiers? I don’t know – I suppose that if they change the name identifier issued for a user’s credentials – then all of the data on the internet relying on that name identifier value would be unreachable… so let’s hope they don’t do that…

So why is this cool again?

  1. I don’t have to store credentials for my users – I am in the business of storing data for my application, not your credentials
  2. I think it’s a major PITA for users to have to create account after account after account, etc, just to use a simple web app
  3. Each time a new application rolls out a new authentication & authorization scheme – the internet actually becomes less secure, because if users re-use the same username / password that they have in other systems, that secret has just been duplicated and there is now another means for hackers to get their data – in other words, the more times you share a secret, the less secure it actually is; at some point, someone will tell the secret again (i.e. get hacked)…

Securing Restful Services

In my pet project, I am planning on having an ASP.NET Web Client, Android Client, Windows 8 Client, and maybe an iOS Client. Each of these clients will use basic HTTP to talk to my services, and I want to make sure that only clients that are authenticated and authorized can access my services API. In previous lives I have built this by hand using WCF and SQL Server, plus API keys and HMAC validation. I really didn’t want to go there again.

So it turns out that Azure ACS has the concept of Service Identities. Basically, I can assign a set of shared secrets for each client. My client can then use the shared secret to request a security token from Azure ACS. Azure ACS will verify the shared secret and return a security token that can then be forwarded to my web service api.
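
Roughly, that token request is just an HTTP POST.  The sketch below is a rough approximation (not code from my project), assuming the OAuth WRAP v0.9 endpoint that an ACS 2.0 namespace exposes and a service identity configured with a password – the namespace, identity name, and realm are placeholder values:

using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Web;

public static class AcsTokenClient
{
    public static string RequestToken()
    {
        // placeholder namespace – use your own ACS namespace
        const string acsEndpoint = "https://mynamespace.accesscontrol.windows.net/WRAPv0.9/";

        var form = new NameValueCollection
        {
            { "wrap_name", "my-android-client" },        // service identity name (placeholder)
            { "wrap_password", "the-shared-secret" },    // shared secret assigned in the ACS portal
            { "wrap_scope", "http://api.example.com/" }  // the realm of the web service relying party
        };

        using (var client = new WebClient())
        {
            byte[] response = client.UploadValues(acsEndpoint, "POST", form);
            string body = Encoding.UTF8.GetString(response);

            // the response is form-encoded; pull out the token ACS issued
            return HttpUtility.ParseQueryString(body)["wrap_access_token"];
        }
    }
}

The client then forwards that token to my web service API (typically in an Authorization header), and the service only has to validate the token’s signature, issuer, and audience before handing out data.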

Is this 100% bulletproof? No, people can crack my apps in the app store and figure out the shared secret, and build their own apps with my shared secret. This is not the end of the world – I am not dealing with financial or medical data – so if a hacker is really that bored – then go nuts. If I catch it, I can simply change the shared secret in Azure and push updated builds to the various app stores.

So – for now, it is secure enough. Once I actually start building the Android, Windows 8, and iOS versions I may revisit this. Right now, I am working on the ASP.NET MVC flavor, so they would actually have to hack my Azure Role to get access to the shared secret.

In Summary

Compared to other things I have seen in my previous years of building software and implementing security solutions – using Azure ACS has been pretty easy and pretty cheap – especially considering the time it saves and the security it provides.  In future applications, if appropriate, I will continue to look for ways to use Azure ACS.


AppHarbor vs. Azure Web Sites

AppHarbor is Cool…

So I have a client’s web site running MVC4 hosted in AppHarbor.  It costs about $10 per month to host this simple little site on AppHarbor.  One of the coolest selling points of AppHarbor was that I could simply commit the changes to GitHub and the code would be automatically deployed to the web server.

AppHarbor says that the reason to use AppHarbor is – “AppHarbor lets you instantly deploy and scale .NET applications using your favourite versioning tool. Installing add-ons is just as easy.”

I totally agree – that is a great reason – and it’s very easy to auto deploy with AppHarbor.

But – I Think I Like Azure Web Sites Better…

Today I took the same web site, and enabled Azure Web Sites for my Azure account, and in < 10 minutes I had the same code running in Azure.  Pretty Amazing…

I simply push my local files to my Azure remote repo and bam – web site deployed.  When / if Azure can hook into private repos – I will likely go that route.

Setting up Azure Web Sites and integrating it with GitHub was even easier than using AppHarbor.  I spent more time setting up AppHarbor than I did Azure Web Sites.

So What…

The price of each platform is roughly the same – both are really easy to implement – so who cares?  The customer has a gym to run – not an IT infrastructure or platform to manage.  So – why would I suggest Azure over AppHarbor?

Based on the types of platform and infrastructure services that my customer is likely to consume – and knowing the kind of maintenance and support that they will need in the future – I think that aligning my customer with the Azure platform is probably a better fit for them.

I think that Azure has potentially more platform options and likely will offer more services (in the future) than AppHarbor or its affiliates.

With Azure everything (the platform & services) will likely be in one place and more centralized – whereas with AppHarbor or something else – it seems like many platform options and service offerings are fragmented.

To be fair to AppHarbor – I haven’t tried using any extra services – so it just appears fragmented.

The Last Question…

Now that the features of Azure are directly competing with what was cool about AppHarbor – I wonder if AppHarbor will remain relevant…

 

Online Artifacts…

Great Improvements to Azure Web Sites

Using Custom Domains, CNAMEs, and Aliases in Azure Web Sites

Must Read => Publishing a WebSite with Git

4 New & Exciting Releases from Microsoft

Recently Microsoft has released some new and exciting tools, frameworks, SDKs, and operating systems… i.e. new toys for us Microsoft engineers to play with.

Personally I am looking forward to incorporating these four items into my day-to-day engineering work.

Visual Studio 2012 & .NET 4.5

Visual Studio 2012 is exciting… mostly because it’s always great to get a new development environment and a new core framework.  The look of the new IDE sucks really bad – but it grows on you.  So far – it compiles faster and seems to run faster than VS2010 – but not by much.  At the end of the day it’s at least 100 million times better than Eclipse…

What’s new in Visual Studio 2012

.NET 4.5 has a host of new things that are worth a quick mention – additions to the Task Parallel Library, ASP.NET enhancements, WCF enhancements & simplifications, etc.

What’s new in .NET 4.5

 

ASP.NET MVC4 + WebAPI

I have been using MVC since v1.0 and it has come a long way.  Version 4 is even better.  The simplification of creating RESTful services is just fantastic.  Additional support for multiple mobile devices with mobile views makes creating apps for web-enabled mobile devices easy.
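
As a tiny illustration of the Web API side (the names here are made up, not code from one of my projects), a convention-routed controller like this is all it takes to stand up a RESTful endpoint:

using System.Collections.Generic;
using System.Web.Http;

// routed by the default convention to GET /api/benchmarks and GET /api/benchmarks/{id}
public class BenchmarksController : ApiController
{
    private static readonly List<string> Benchmarks = new List<string> { "Fran", "Murph", "Cindy" };

    // GET /api/benchmarks
    public IEnumerable<string> Get()
    {
        return Benchmarks;
    }

    // GET /api/benchmarks/5
    public string Get(int id)
    {
        return Benchmarks[id];
    }
}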

One of the coolest things that I like is that I built a web app using MVC4 when it was in early beta and it continued to work after RTM with zero code-breaking or re-configuration issues.

That effort was for Urban Survival Fitness – a pro-bono web site for a local recreation & fitness center located in Park Hill, CO.

Windows 8 UI

I got the chance to spend two weeks building a prototype application for my employer that used the Windows 8 UI HTML & JS technology.  We basically ported parts of an existing Cordova + HTML + JS application into the Microsoft-flavored Windows 8 UI.

This was an initial step at creating an app tailored for Microsoft Windows 8 UI and their tablet form factors – and presumably the next version of Windows Phone.

Getting Started with Windows 8 UI Apps

Azure SDK 1.7

At my office, and for the apps that I build on my own, I use a lot of Azure cloud resources.  Cloud computing coupled with SOA is really cool – IMO – and therefore it is something that I am really passionate about.

First & foremost – the management portal is finally getting the love it deserves. 

Other things that show a lot of promise that I haven’t yet had a chance to explore are these features:

– Enhancements to Azure Caching

– Windows Azure Web Sites

– Deploy from Version Control

– Service Bus Enhancements

… and many other cool features.

What’s new in Azure Tools

What’s new in Azure


Why Doesn’t Azure Diagnostics Log to the WADLogsTable?

For the longest time I have been trying to figure out why none of my Azure instances were transferring log data into the WADLogsTable of my Azure Storage Account.

Here is the reason why this was happening… “no data is being transferred to the WADLogsTable in Azure storage because nothing is being written to the local storage for Azure diagnostics.  Nothing is being written to local storage because the Trace.XXX statements have been removed from the code.”

So my next question is “Why are my Trace.WriteLine statements being removed from my compiled code?”  I suspect the answer to this is because my Azure packages are built with MSBuild.  My MSBuild file calls CorePublish on each Cloud Project (ccproj) that needs to be packaged.  When CorePublish calls Csc.exe, the /define:TRACE switch is left out, and therefore all Trace.WriteLine statements are removed from the compiled code.
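
For anyone wondering how the compiler can legally drop those calls: the System.Diagnostics.Trace methods are decorated with [Conditional("TRACE")], so a call like the one in the sketch below simply never makes it into the compiled output unless the TRACE constant is defined:

using System.Diagnostics;

public class WorkerHelper
{
    public void DoWork()
    {
        // Trace.WriteLine is marked [Conditional("TRACE")], so csc.exe strips this entire
        // call (including evaluation of its arguments) when /define:TRACE is not passed
        Trace.WriteLine("Starting work item...");
    }
}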

So, why am I calling CorePublish from MSBuild?  Well, I use TeamCity to build and ship my Azure cspkg and csdef files into Azure storage.

So, how did I fix the issue?  I added the value DefineConstants="TRACE" to the end of my MSBuild command line parameters.

Here are my command line arguments:

/p:Configuration=Azure-Dev;TargetProfile=Azure-Dev;SolutionName=MyCloudProject.WorkerRoles;DefineConstants="TRACE"

Here is what my build task looks like:

<Target Name="PackIt">
	<Message Text="Packing Azure with configuration=$(Configuration)" />

	<PropertyGroup>
		<PathToCloudProject>$(SolutionDir)\$(SolutionName).Cloud\</PathToCloudProject>
		<CloudProjectName>$(SolutionName).Cloud.ccproj</CloudProjectName>
	</PropertyGroup>

	<Message Text="PathToCloudProject: $(PathToCloudProject)" />
	<Message Text="CloudProjectName: $(CloudProjectName)" />

	<MSBuild Projects="$(PathToCloudProject)$(CloudProjectName)" Targets="CorePublish"
		Properties="Configuration=$(Configuration);Platform=Any CPU;OutputPath=$(AzurePackageOutputPath)" />

</Target>

I learned about the DefineConstants trick from this blog post… thank you.


Implementing YSlow Recommendations in ASP.NET MVC3

While working on the last release of My Inception Deck I decided to try to improve my YSlow score and ultimately improve the experience of the users of my site.

Some of the recommendations are just silly IMO; for example, an expires header set to 72 hours is not considered “far future”, so YSlow starts docking points.

The list below contains the items that I decided to address, as I believed they would give me the most benefit:

  • Use a CDN
  • Add expires headers
  • Compress components with gzip
  • Put CSS at top
  • Put JavaScript at the bottom
  • Minify JavaScript and CSS
  • Avoid HTTP 404 (Not Found) errors

After all of the things above are done, my release process looks like this…

  • Using Visual Studio, publish to the Azure staging slot
  • While publication is in progress, run my PowerShell scripts to publish my Scripts and Content to Azure Blob Storage
  • After the Azure publication is complete, do my testing, and flip the VIP

CSS & JavaScript Arrangement

This was easy.  In the code for My Inception Deck, all of the views derive from one of only a few layout pages.  In my layout pages I use the Razor @section feature to allow derived pages to inject stuff into various locations of the <head> and at various places of the <body>.

So I simply went into my master layout pages and reorganized some code.
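
For illustration only (this is not my actual layout), the reorganized layout boils down to something like this – stylesheets in the <head>, scripts just before the closing </body> tag, with @RenderSection hooks so each derived view can inject its pieces in the right place:

<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <link href="@Url.Content("~/Content/Site.min.css")" rel="stylesheet" type="text/css" />
    @* view-specific CSS lands in the head *@
    @RenderSection("Styles", required: false)
</head>
<body>
    @RenderBody()

    @* scripts load last so the markup renders before the javaScript downloads *@
    <script src="@Url.Content("~/Scripts/site.min.js")" type="text/javascript"></script>
    @RenderSection("Scripts", required: false)
</body>
</html>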

Minify & Obfuscate javaScript, Minify CSS & Combine CSS

This task was the most tedious issue to tackle.  Why?  Because I had many CSS files floating around AND when they were combined I had CSS collisions.  I also had to update all the views that were referencing specific CSS files to rely solely on the CSS for the site.  However, with the help of YUI Compressor .NET I was able to get my CSS into two files, one file for the entire site and one file for the print view.  The @media print option was not going to cut it for me.

That being said, I followed the same paradigm for my javaScript files; I have a stand-alone file for each view (not partial view).  For example, My Inception Deck has public pages, an internal dashboard view, and ten or so additional views for each step in the Inception Deck exercise.  So with the help of YUI Compressor .NET I was able to minify and compress each javaScript file.

I opted NOT to combine my javaScript files into a single file because I am a javaScript NOOB and I did not namespace-qualify my javaScript, so I can virtually guarantee that I would have function name and variable name collisions in the global scope.  I didn’t want to bite off that chunk of work just yet.

Implementation Details…

I created three MSBuild tasks to run in the post-build event.  One task to deal with the print CSS, one task to deal with the site CSS, and one task to deal with the javaScript.

This is what my post build tasks look like:

$(MSBuildBinPath)\msbuild.exe "$(ProjectDir)MSBuild\CompressSiteCss.xml" /p:CssOutputFile="$(TargetDir)..\Content\Site.min.css" /p:SourceLocation="$(ProjectDir)Content\\"
$(MSBuildBinPath)\msbuild.exe "$(ProjectDir)MSBuild\CompressPrintCss.xml" /p:CssOutputFile="$(TargetDir)..\Content\Print.min.css" /p:SourceLocation="$(ProjectDir)Content\\"
$(MSBuildBinPath)\msbuild.exe "$(ProjectDir)MSBuild\CompressJs.xml" /p:SourceLocation="$(ProjectDir)Scripts\\"

 

Notice that I created three CompressXXX.xml files; this is where the YUI Compressor .NET MSBuild configuration settings reside.

Notice the syntax being used for the JavaScriptOutputFile attribute… this allows me to minify and obfuscate each file into a new file with ONLY the javaScript from the source file.  In other words, if I have 10 javaScript files listed in the JavaScriptFiles item group, I get 10 files named XXX.min.js.

Here is that little nugget of code…

 

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
	<UsingTask
        TaskName="CompressorTask"
        AssemblyFile="..\..\Packages\YUICompressor.NET-MsBuild-Task.1.7.0.0\lib\net35\Yahoo.Yui.Compressor.MsBuildTask.dll" />
	<PropertyGroup>
		<JavaScriptOutputFile Condition=" '$(JavaScriptOutputFile)'=='' ">JavaScriptFinal.js</JavaScriptOutputFile>
	</PropertyGroup>


	<Target Name="MyTaskTarget">
		<ItemGroup>
			<JavaScriptFiles Include="$(SourceLocation)json2.js" />
			<JavaScriptFiles Include="$(SourceLocation)jquery-dateFormat-1.0.js" />
			<JavaScriptFiles Include="$(SourceLocation)jquery-format-utcdate.js" />
			<JavaScriptFiles Include="$(SourceLocation)jquery.cookie.js" />
			<JavaScriptFiles Include="$(SourceLocation)site.js" />
			<JavaScriptFiles Include="$(SourceLocation)xbreadcrumbs.js" />
			<JavaScriptFiles Include="$(SourceLocation)account\dynamicContentValidation.js" />
			<JavaScriptFiles Include="$(SourceLocation)account\login.js" />
			<JavaScriptFiles Include="$(SourceLocation)account\newUser.js" />
			<JavaScriptFiles Include="$(SourceLocation)admin\admin.js" />

			<JavaScriptFiles Include="$(SourceLocation)dashboard\boxDesign.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\cards.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\elevatorPitch.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\howBig.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\index.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\meetYourNeighbors.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\notList.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\risks.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\theSolution.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\tradeOffs.js" />
			<JavaScriptFiles Include="$(SourceLocation)dashboard\whyAreWeHere.js" />

			<JavaScriptFiles Include="$(SourceLocation)home\index.js" />

			<JavaScriptFiles Include="$(SourceLocation)print\printView.js" />

			<JavaScriptFiles Include="$(SourceLocation)WhatWillItTake\whatWillItTake.js" />

		</ItemGroup>

		<CompressorTask
            CssCompressionType="YuiStockCompression"
            JavaScriptFiles="@(JavaScriptFiles)"
            ObfuscateJavaScript="True"
            PreserveAllSemicolons="True"
            DisableOptimizations="Nope"
            EncodingType="Default"
            DeleteJavaScriptFiles="false"
            LineBreakPosition="-1"
            JavaScriptOutputFile="%(JavaScriptFiles.RelativeDir)%(JavaScriptFiles.Filename).min.js"
            LoggingType="ALittleBit"
            IsEvalIgnored="false"
            />
	</Target>
</Project>

 

Use a CDN

Up until this point all of my static content was chilling in the default Scripts and Content directories provided by MVC.  And all of this content was pretty much hooked up via the @Url.Content helper method that is provided by MVC.

So my overall strategy was to:

  • use 3rd party CDN for 3rd party script libraries
  • use my Azure CDN for my static content

So I whipped out VS.NET and created my own derivative of Nathan Totten’s MVC helper.

In addition to what Nathan did, if I am in DEBUG mode then I don’t want to use minified javaScript AND I only want to use the Azure CDN when I have the AZURE compiler directive set.  So, with a build configuration that defines DEBUG I can work locally, and with a build configuration that also defines my AZURE directive I can run and debug locally but use the Azure CDN.

The code below shows how I use my MVC Helper for CSS and javaScript:

<link rel="shortcut icon" href="@Url.StaticContent("~/Content/images/favicon.ico")" >
<link href="@Url.StaticContent("~/Content/themes/base/jquery-ui.css")" rel="Stylesheet" type="text/css" />
<script src="@Url.StaticContent("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.StaticContent("~/Scripts/jquery-ui-1.8.7.min.js")" type="text/javascript"></script>

In my version of the StaticContent helper, I add some configuration settings to Azure that allow me to enable / disable the CDN entirely, change my CDN namespace without having to re-deploy my web roles, and append a query string to the requests for my CDN content.  In my case I use the query string to manage versions, so for example when I do a release I will append ?v=20120329-0009 to the query string, which is my version of 03/29/2012 00:09.

Here is the code for my version of the @Url.StaticContent helper:

using System.Collections.Generic;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace System.Web.Mvc
{
    public static class StaticContentHelpers
    {

        private const string azureCdnUrl = ".vo.msecnd.net/cdn";
        private const string aspCdnUrl = "ajax.aspnetcdn.com/ajax";

        public static string StaticContent(this UrlHelper urlHelper, string contentPath)
        {

#if (DEBUG && !AZURE)
            return urlHelper.Content(contentPath);
#else
            if (String.IsNullOrEmpty(contentPath))
                throw new ArgumentNullException("contentPath");

            string[] parts = contentPath.Split('/');

            string file = parts[parts.Length - 1].ToLowerInvariant();

            if (cdnFiles.ContainsKey(file))
            {
                return String.Concat(urlHelper.RequestContext.HttpContext.Request.Url.Scheme, "://", aspCdnUrl, cdnFiles[file]);
            }

            string relativePath = urlHelper.Content(contentPath);

            string cdnNamespace = RoleEnvironment.GetConfigurationSettingValue("WindowsAzureCdnNamespace");
            bool azureCdnEnabled;
            Boolean.TryParse(RoleEnvironment.GetConfigurationSettingValue("WindowsAzureCdnEnabled"), out azureCdnEnabled);
            string cdnVersion = RoleEnvironment.GetConfigurationSettingValue("WindowsAzureCdnQueryStringParameter");

            if (azureCdnEnabled && ! String.IsNullOrEmpty(cdnNamespace))
            {
                var result = String.Concat(urlHelper.RequestContext.HttpContext.Request.Url.Scheme, "://", cdnNamespace, azureCdnUrl, relativePath);
                if (!String.IsNullOrEmpty(cdnVersion))
                {
                    result = string.Format("{0}?v={1}", result, cdnVersion);
                }

                return result;
            }

            return relativePath;
#endif
        }

        private static IDictionary<string, string> cdnFiles = new Dictionary<string, string>
        {
            // ASP.NET Ajax3.5
            { "microsoftajax.debug.js", "/3.5/MicrosoftAjax.debug.js" },
            { "microsoftajax.min.js", "/3.5/MicrosoftAjax.js" },

            // ASP.NET MVC3
            { "microsoftmvcajax.min.js", "/mvc/3.0/MicrosoftMvcAjax.js" },
            { "microsoftmvcajax.debug.js", "/mvc/3.0/MicrosoftMvcAjax.debug.js" },

            // JQuery
            { "jquery-1.6.4.js", "/jQuery/jquery-1.6.4.js" },
            { "jquery-1.6.4.min.js", "/jQuery/jquery-1.6.4.min.js" },
            { "jquery-1.5.2.js", "/jQuery/jquery-1.5.2.js" },
            { "jquery-1.5.2.min.js", "/jQuery/jquery-1.5.2.min.js" },
            { "jquery-1.5.1.js", "/jQuery/jquery-1.5.1.js" },
            { "jquery-1.5.1.min.js", "/jQuery/jquery-1.5.1.min.js" },
            { "jquery-1.5.js", "/jQuery/jquery-1.5.js" },
            { "jquery-1.5.min.js", "/jQuery/jquery-1.5.min.js" },
            { "jquery-1.4.4.js", "/jQuery/jquery-1.4.4.js" },
            { "jquery-1.4.4.min.js", "/jQuery/jquery-1.4.4.min.js" },
            { "jquery-1.4.3.js", "/jQuery/jquery-1.4.3.js" },
            { "jquery-1.4.3.min.js", "/jQuery/jquery-1.4.3.min.js" },
            { "jquery-1.4.2.js", "/jQuery/jquery-1.4.2.js" },
            { "jquery-1.4.2.min.js", "/jQuery/jquery-1.4.2.min.js" },
            { "jquery-1.4.1.js", "/jQuery/jquery-1.4.1.js" },
            { "jquery-1.4.1.min.js", "/jQuery/jquery-1.4.1.min.js" },
            { "jquery-1.4.js", "/jQuery/jquery-1.4.js" },
            { "jquery-1.4.min.js", "/jQuery/jquery-1.4.min.js" },
            { "jquery-1.3.2.js", "/jQuery/jquery-1.3.2.js" },
            { "jquery-1.3.2.min.js", "/jQuery/jquery-1.3.2.min.js" },

            // JQuery UI
            { "jquery-ui-1.8.11.js", "/jquery.ui/1.8.11/jquery-ui.js" },
            { "jquery-ui-1.8.11.min.js", "/jquery.ui/1.8.11/jquery-ui.min.js" },
            { "jquery-ui-1.8.10.js", "/jquery.ui/1.8.10/jquery-ui.js" },
            { "jquery-ui-1.8.10.min.js", "/jquery.ui/1.8.10/jquery-ui.min.js" },
            { "jquery-ui-1.8.9.js", "/jquery.ui/1.8.9/jquery-ui.js" },
            { "jquery-ui-1.8.9.min.js", "/jquery.ui/1.8.9/jquery-ui.min.js" },
            { "jquery-ui-1.8.8.js", "/jquery.ui/1.8.8/jquery-ui.js" },
            { "jquery-ui-1.8.8.min.js", "/jquery.ui/1.8.8/jquery-ui.min.js" },
            { "jquery-ui-1.8.7.js", "/jquery.ui/1.8.7/jquery-ui.js" },
            { "jquery-ui-1.8.7.min.js", "/jquery.ui/1.8.7/jquery-ui.min.js" },
            { "jquery-ui-1.8.6.js", "/jquery.ui/1.8.6/jquery-ui.js" },
            { "jquery-ui-1.8.6.min.js", "/jquery.ui/1.8.6/jquery-ui.min.js" },
            { "jquery-ui-1.8.5.js", "/jquery.ui/1.8.5/jquery-ui.js" },
            { "jquery-ui-1.8.5.min.js", "/jquery.ui/1.8.5/jquery-ui.min.js" },

            // JQuery Validate
            { "jquery.validate.js", "/jquery.validate/1.8/jquery.validate.js" },
            { "jquery.validate.min.js", "/jquery.validate/1.8/jquery.validate.min.js" },

            // ASP.NET MVC3
            { "jquery.unobtrusive-ajax.js", "/mvc/3.0/jquery.unobtrusive-ajax.js" },
            { "jquery.unobtrusive-ajax.min.js", "/mvc/3.0/jquery.unobtrusive-ajax.min.js" },
            { "jquery.validate.unobtrusive.js", "/mvc/3.0/jquery.validate.unobtrusive.js" },
            { "jquery.validate.unobtrusive.min.js", "/mvc/3.0/jquery.validate.unobtrusive.min.js" },
        };
    }
}
 
Uploading Static Content to Azure Storage

At this point I was in the last phase of the performance improvements.  My goals at this stage were to use PowerShell and the Windows Azure storage client library to do the following:

  • Compress each CSS and javaScript file using GZIP before uploading the file to blob storage
  • Set the content encoding on each blob being uploaded
  • Set the proper content type on each blob being uploaded
  • Set the expires header on each blob being uploaded

Microsoft has published a hands-on lab related to the Windows Azure CDN that demonstrates how to work with the CDN.  In this lab they provide a PowerShell script to publish files to blob storage.

I used this lab as a starting point.  In my version of the script, I load each file into a memory stream, compress the contents of the memory stream, and then push the compressed bytes into a blob.  In addition, I set the expiration to 72 hours (72 hours == 259200 seconds).  Finally I set the content type and content encoding values on the header.

One thing to watch out for is that if you read the compressed output before the gzipStream has been disposed, your file will end up missing bytes and it will do something crazy, like chop the last 2 bytes off of your minified and obfuscated javaScript files.  These missing bytes will leave you with all sorts of crazy javaScript errors.

The last thing to note is that the Configuration.xml file that comes with the hands-on lab needs to be modified with your storage account key and your storage account name.  You also need to set your source folder and container name.

My container names are “{myblobStorageHostName}/cdn/Scripts” and “{myblobStorageHostName}/cdn/Content”.  My source folder names are “C:\{pathToMyMvcSourceCode}\Scripts” and “C:\{pathToMyMvcSourceCode}\Content”.  So, you can see that the folder structure in blob storage mimics that of my local MVC folder structure.

If you are curious, here is the PowerShell script that I modified (the .css / .js branch of the UploadBlobs function is where the GZIP magic happens)…

function UploadBlobs($container, $directory, $virtualPath)
{  
    Write-Output "Uploading $directory files..."
    Foreach ($file in Get-ChildItem $directory)
    {
        if($file.PSIsContainer)
        {
            UploadBlobs $container $file.fullname ($virtualPath + $file.name + '/')
        }
        else
        {
            Write-Output "Uploading $file"
            $blob = $container.GetBlobReference($virtualPath + $file.name)
            $blob.Properties.ContentType = GetContentTypeFromExtension $file.extension
            $blob.Properties.CacheControl = "public, max-age=259200"
            
            if(($file.extension -eq ".css") -or ($file.extension -eq ".js"))
            {
                $input = New-Object System.IO.FileStream $file.fullName, ([IO.FileMode]::Open), ([IO.FileAccess]::Read), ([IO.FileShare]::Read)
                $buffer = New-Object byte[]($input.Length)
                $byteCount = $input.Read($buffer, 0, $input.Length)
                
                if ($byteCount -ne $input.Length)
                {
                    $input.Close()
                    Write-Host "Failure reading $($file.fullName)"
                    exit 2
                }
                
                $input.Close()
                $output = New-Object System.IO.MemoryStream 
                $gzipStream = New-Object System.IO.Compression.GzipStream $output, ([IO.Compression.CompressionMode]::Compress)
     
                $gzipStream.Write($buffer, 0, $buffer.Length)
                $gzipStream.Dispose()
                
                $compressedBytes = New-Object byte[]($buffer.Length)
                $compressedBytes = $output.ToArray()
                
                $blob.Properties.ContentEncoding = "gzip"
                $blob.UploadByteArray($compressedBytes)
                
                $output.Close()
            }
            else{
                $blob.UploadFile($file.fullName)
            }
        }
    }
}

function SetCacheControlNoCache($container, $resource)
{
    $blob = $container.GetBlobReference($resource)
    $blob.Properties.CacheControl = "public, max-age=0";
    $blob.SetProperties()
}

function GetContentTypeFromExtension([string]$extension)
{   
    switch ($extension)
    {
        ".png" { return "image/png" }
        ".htm" { return "text/html" }
        ".pfx" { return "application/x-pkcs12" }
        ".xml" { return "text/xml" }
		".css" { return "text/css" }
		".jpg" { return "image/jpeg" }
		".jpeg" { return "image/jpeg" }
		".bmp" { return "image/bmp" }
		".js" { return "text/x-javascript" }
		".zip" { return "application/zip" }
    }
	
    Write-Output "application/octet-stream"
}

$scriptDir = (split-path $myinvocation.mycommand.path -parent)
Set-Location $scriptDir

$sdkPath = resolve-path "$Env:ProgramFiles\Windows Azure SDK\v1.?\ref"

write-host $sdkPath

if ($sdkPath -is [Array]) 
  { $refFolder = $sdkPath[-1] }
else 
  {$refFolder = [string]$sdkPath}

[Reflection.Assembly]::LoadFile($refFolder + '\Microsoft.WindowsAzure.StorageClient.dll')

[xml] $xml = get-Content "./script-configuration.xml"
$subId = $xml.settings.subscriptionId
$storageAccount = $xml.settings.storageAccount.name
$storageAccountKey = $xml.settings.storageAccount.key
$containerName = $xml.settings.containerName
$sourceFolder = $xml.settings.sourceFolder

Write-Host "Uploading files..."

$credentials = New-Object Microsoft.WindowsAzure.StorageCredentialsAccountAndKey -ArgumentList $storageAccount, $storageAccountKey
$account = New-Object Microsoft.WindowsAzure.CloudStorageAccount -ArgumentList $credentials, $true
$client = [Microsoft.WindowsAzure.StorageClient.CloudStorageAccountStorageClientExtensions]::CreateCloudBlobClient($account)

$timeout = New-Object System.TimeSpan -ArgumentList 0, 10, 0
#set the timeout to 10 minutes. this allows us to upload large blobs.
$client.Timeout = $timeout

$container = $client.GetContainerReference($containerName)
$container.CreateIfNotExist()
# set public permissions, only if necessary
$publicPermission = New-Object Microsoft.WindowsAzure.StorageClient.BlobContainerPermissions
$publicPermission.PublicAccess = [Microsoft.WindowsAzure.StorageClient.BlobContainerPublicAccessType]::Blob
$container.SetPermissions($publicPermission)
UploadBlobs $container $sourceFolder ''
#SetCacheControlNoCache $container "images/Destination.jpg"          

Write-Host "Done!"

Ship build artifacts to Azure Storage with MSBuild

As of late, I have been doing a lot of work with Windows Azure.  One of the pain points is that I am the only person on the team that can deploy our builds to Azure.  So, I decided to remove this impediment.

The biggest challenge was shipping the Windows Azure packages from the build server’s artifacts directory to Windows Azure Cloud Storage at build time.  To solve this issue I created a custom MSBuild task that will copy files from my build server’s artifacts directory into an Azure Blob Storage container of my choosing at build time.

After all was completed, this is what my deployment process looks like.

  • Kick off the build
  • When the build is complete, login to the Windows Azure Portal and upgrade the environment in question

The image below shows the artifacts that are created by the build.

[screenshot: the build artifacts created by the build]

The next image shows the artifacts after they have been shipped to the Azure cloud storage container.

[screenshot: the build artifacts in the Azure cloud storage container]

And finally, the last image shows how I upgrade my roles using the Windows Azure Portal.

[screenshot: upgrading the roles from the Windows Azure Portal]

The steps that I follow when upgrading my hosted services from the Azure portal are:

  • Login to the Windows Azure Portal
  • Select the hosted service that I want to upgrade
  • Click the upgrade button
  • From the upgrade deployment screen, click the “Browse Storage” button
  • Navigate to the storage container where my build server has shipped its files
  • Select the appropriate build artifacts
  • Click ok
  • Wait

The code below outlines what my MSBuild Task looks like…

[screenshot: the custom MSBuild task code]
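
Since the screenshot does not reproduce well here, the sketch below is a rough approximation of that task – built on the Microsoft.Build utilities and the StorageClient 1.x API – with illustrative class and property names; the real implementation lives in the GitHub repo linked below:

using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class PublishArtifactsToAzureStorage : Task
{
    [Required] public string ConnectionString { get; set; }
    [Required] public string ContainerName { get; set; }
    [Required] public ITaskItem[] SourceFiles { get; set; }

    public override bool Execute()
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference(ContainerName);
        container.CreateIfNotExist();

        foreach (var item in SourceFiles)
        {
            // ship each build artifact into the storage container
            var blob = container.GetBlobReference(Path.GetFileName(item.ItemSpec));
            blob.UploadFile(item.ItemSpec);
            Log.LogMessage("Uploaded {0} to {1}", item.ItemSpec, blob.Uri);
        }

        return true;
    }
}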

The source code for the custom build task is located on github:  https://github.com/cbertolasio/Windows.Azure.Msbuild/

The Good, The Bad, and Windows Azure : On Blob Queues

So this is a quick post of my recent experience with Windows Azure Blob Queues.

Over the past few months I have been dealing with Azure and all of its glory…  Just like the weather, some days it’s clear skies and smooth sailing, and on others, well… thunder and lightning is followed by torrential downpours and occasionally there are even tornadoes…

The Good…
  • the blob queue API is pretty simple and easy to use (see the sketch after this list)
  • you can peek and read single or batches of messages
  • message bodies can be any data that can be UTF-8 encoded, so JSON or XML works great; you could probably even use BSON or Protobuf for your message body, but I haven’t confirmed this
  • limited to 8KB message size – so you are forced to keep your messages reasonably sized
  • if you need more than 8kb there is no reason that your blob message cannot store a pointer to a different blob resource
  • each message has a DequeueCount property, so you automatically know how many times the same message has been dequeued and put back into the queue
  • there is a REST API and a managed API
  • the queues are easy to access from web and worker roles
  • the queues are built on the Azure storage subsystem, which is independent from your compute services, so you tend not to have to worry about the infrastructure required to service the message queues
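
Here is the sketch mentioned above – the basic enqueue / dequeue / delete cycle using the StorageClient library.  The queue name and payload are made up, and error handling is omitted:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class QueueSketch
{
    public void Run(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
        queue.CreateIfNotExist();

        // the body is just UTF-8 text, so JSON works great
        queue.AddMessage(new CloudQueueMessage("{ \"benchmarkId\": 42 }"));

        var message = queue.GetMessage();
        if (message != null)
        {
            // DequeueCount tells you how many times this message has already been picked up
            if (message.DequeueCount > 3)
            {
                // treat it as a poison message – log it, park it somewhere else, etc.
            }

            // ... process the message, then remove it from the queue
            queue.DeleteMessage(message);
        }
    }
}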

The Bad…
  • you cannot peek or read an individual message using a query, you have to go in order
  • you can only peek or read 32 messages at a time, so if you want to see what messages are backed up after the first 32 you cannot
  • you cannot query messages based on metadata
  • limited to 8KB message size – can be a bad thing if you need to store large amounts of data
  • there doesn’t appear to be any out of the box monitoring on throughput and volume
  • this technology is very new for Microsoft and other competitors seem to offer more robust features
  • the APIs could be too simple
  • there is no way out of the box to set message priority so you would need to build this on your own

Some Practical Applications are…
  • if you want to reliably store a set of processing instructions that can be executed at a later time, blob queues are a good choice
  • if you want to reliably store a set of hooks or indicators regarding when certain events have taken place such that you can take actions on these events
  • blob messages can be saved into other storage mediums like SQL Azure such that they can be used again later, maybe for replay scenarios
  • blob queues are a good backbone for systems that want to implement the CQRS pattern
  • blob queues, when combined with asynchronous processing can be leveraged to increase overall system throughput

In Conclusion…

Working with blob queues over the last few weeks in practical applications has really exposed the good parts and some of the bad parts regarding blob messages and blob queues.  Thus far, I think there is a good balance between the simplest thing that could possibly work vs. having a bunch of features that you don’t need or won’t use.  Considering the age of Azure queues, I think this feature is in a good place and I will be using it in my next production apps.  Hopefully everything will go according to plan.

The Good, The Bad, and Windows Azure

Just a quick post that outlines some of the trials and tribulations that I have encountered over the last month of development with Windows Azure.

All in all it has been a big learning experience, and painful at times.  Documentation from MS is outdated and spotty at best, so you are left trawling the internet for solutions that you can trust.

Thankfully, I have had a Microsoft insider helping me out. 

MS provides the Azure SDK, which installs a Compute Emulator and a Storage Emulator.  These are intended to mimic the real cloud.  Getting these installed is fairly straightforward.  Understanding how they work, on the other hand, as well as attaching the debugger and getting diagnostics information, is a bit challenging.

Long story short, cloud platforms are great, and I still believe in Azure, but hopefully soon, they will do some things to make working with Azure easier for me, as an engineer.  Thus far I have spent 50% of my time dealing with behavioral inconsistencies between the local SDK and the real cloud, 25% idle time just waiting for packages to deploy and spool up, and maybe 25% of my time actually refactoring my code to work in a scalable high availability environment.

The nice parts…
  • If you structure your applications correctly, you can build highly available and highly scalable applications.
  • If you structure your applications correctly, you can click a button, publish a new instance of your application to a staging environment, test it, then click another button, and your application that was in “staging” is now in production.  That is srsly hot!
  • There are a lot of infrastructure resources at your fingertips, i.e. hosted services, cloud storage, multiple cores, huge disk volumes, etc.  The downside is that there is a boat load of stuff that you need to learn to get proficient with azure (but this is no different from rackspace or amazon).  The other downside is that you could inadvertently get a big bill if you aren’t careful about how you organize, orchestrate, and program your applications.

Things you should know…
  • You will need to create a “cloud project”, and, depending on your needs, add a “Web Role” and/or a “Worker Role” to this cloud project.
  • The cloud project can pull in existing “Web Projects” into web roles, and can also pull in existing “Class Libraries” as worker roles.
  • In order to “deploy” to your local development environment, you will need to set the cloud project as the startup project
  • If your web project that is associated with the web role contains virtual directories (aka virtual applications in IIS 7.x) you will need to add special configuration to the cloud project’s ServiceDefinition.csdef, here is a great post “Running Multiple Websites in a Windows Azure Web Role”
  • Your web applications may have trace diagnostics; well, they don’t show up in the compute emulator by default, so you may need to “Implement Tracing to the Azure Compute Emulator”
  • The out of the box storage emulator expects SQL Express to be installed.  I never install SQL Express; I normally install SQL Server Developer edition or better.  So in this case you will need to tweak your configuration settings for the storage emulator.  You can use the DSInit command from the Windows Azure SDK command prompt to configure SQL Server, read more here
  • You can use Visual Studio or 3rd party tools to browse your local storage account and diagnostics logs.  Check out this link to learn how to browse your development storage from the VS.NET IDE.  If you are into 3rd party tools, my favorites for viewing local and cloud-based storage and diagnostics information are Cerebrata Cloud Storage Studio and the Cerebrata Azure Diagnostics Manager

Things to avoid
  • virtual directories.  virtual directories were a pain in the neck before azure even existed, they are still very painful.  if your application has virtual directories, consider re-organizing your architecture, because it just gets more painful when you move from your local environment into the cloud.
  • mixing web services with web applications.  before the cloud, it was typically frowned upon to host your web services and your customer facing web site in the same application.  before you move to the cloud, you may want to make sure that your applications have proper separation of concerns.  you may want to add additional instances for your services, and if the services are hosted by the customer’s web application, then you could have a scaling issue.
  • don’t count on things like app pool names or file system paths – every time you deploy a new instance, the instance gets a new app pool name.  if you are expecting your files to live in a known location like C:\somedir, you may be surprised to find that sometimes your files end up in E:\some directory that microsoft chooses or F:\some other directory that microsoft chooses, etc.
  • losing your certificates – various aspects of azure require certificates.  my advice is to generate some certificates with exportable private keys, check them into version control, and share them with the other developers on your team.  if Joe doesn’t have the same certificate as me, and he mucks with say the storage account that is baked into the configuration files, and he checks in his code, he will break my development experience, and if either of us pushes to the real cloud, the cloud deployment will be broken.
  • writing to the local file system – many apps write to the local file system.  if you publish a new instance, you get a shiny new file system.  if you overwrite an existing publication, you get a shiny new file system, with all of your old files being MIA.  if you need to persist files that are critical to your app, use blob storage.
  • using assemblies that require 32 bit mode – if an app running in your web role has an assembly that requires 32 bit mode, then you will need to login to the remote desktop for your instance after you deploy it, hunt down the application pool associated with your virtual server or virtual application and set its “Enable 32 bit applications” property to “true”
  • trusting that VS.NET will publish all dependent assemblies into the proper publish directory – if your application has many dependent assemblies, like say, MS Sync Framework, MVC, SQL CE, etc, my advice to you is to create a directory just for dependent assemblies, and do this for each application that you need to deploy.  I literally spent hours tracking down assembly references that were non-standard .NET framework assemblies.

Things that need to be re-worked… seriously… no joke…

  • Logging, Monitoring and Instrumentation – the documentation on this is so fragmented it’s not even funny.  As a software engineer, I need to write a few lines of instrumentation code to see what my code is doing at runtime, and I need to be able to analyze that data.  MS has given us a flashy new SL portal that gives us a lot of power; how about giving us a few buttons to turn on / off Azure Diagnostics Tracing and Windows Event Logs?  Why the hell do I need to write a program, custom code, or buy a 3rd party tool to enable this feature?
  • Stopping and Starting + Deploying Instances is seriously slow.  I spend 30 minutes between integration test cycles just on performing an upgrade of an existing instance and getting the instance online again.  There has got to be a better way.  Actually, I have been logging into the remote desktop of my instance and making tweaks there, but this is a seriously bad practice.  I have also resorted to emailing / ftp-ing files that I need to get into the cloud to some http addressable location, and then downloading them onto my instance.  Once again, this is a seriously BAD practice because my local environment is not in parity with what’s in production.