My Nuget feed does not contain the domain name used in the request

If you ever host your own NuGet server with Cassini, you may find that the host name used to request the feed is replaced by the private IP address in the content returned by the feed.

To fix this issue, add this key:

<add key="aspnet:UseHostHeaderForRequestUrl" value="true" />

to the web.config for the NuGet server.

An associate of mine documented the issue in more detail over at the NuGetGallery site.

 


Installing Node on an Amazon Linux EC2 Instance

A simple reminder on how to do this the easy way.

Basically, instead of installing Git and compiling everything on the new EC2 instance, I simply do this:

I curl the binary over from the Node download site and unpack it with tar.

Then I create a file that loads whenever any user's profile is loaded…

So, here we go => ssh into your AWS EC2 Instance

cd /usr/lib
sudo su
curl https://nodejs.org/dist/v4.3.2/node-v4.3.2-linux-x64.tar.xz | tar -xJ

vi /etc/profile.d/node.sh
#add these two lines and save the file…
export NODE_HOME=/usr/lib/node-v4.3.2-linux-x64/bin
export PATH=$NODE_HOME:$PATH

Log out, log back in, then run $ node -v or $ npm -v

Thanks to my pair, Brad Matheson for helping me work through this.

Parsing “Date + Time + Time Zone Offset” in my very own DateTimeOffset

In one of my pet projects I let my users specify the date they did something, the time they did it, and I use Moment.js to inspect their browser’s time zone.  I take those three pieces of information and send them to the server so that C# can calculate the DateTimeOffset and save it into the database.

Sounds pretty simple?  Possibly I have overcomplicated it, or it really is not so simple, or possibly, because I have never done it quite like this, it seems harder than it really is…

So if you want to skip straight to the code then here it is…

Using the controls in my web browser – let’s say I pick a date of “03/01/2013”, I pick a time of “12:45 AM”, and my time zone (for whatever reason) is set to a non-standard offset like Mumbai UTC+05:30.

I need to get the value of “03/01/2013 12:45:00 AM +05:30” into a valid DateTimeOffset object and save it into my database.  The reason for doing this is that when I display dates to my user, I display them in “time ago” format, for example -> “you performed this action about 5 minutes ago”, “you did this action about a month ago”, “you did this action about 4 hours ago”, etc.  So if the current time is, say, 12:45 AM, the user enters today at 12:40 AM, and the browser says they did the action 5 hours ago – they get a little freaked out.
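Since the original gist is not reproduced here, below is a minimal sketch of the server-side piece, assuming the browser posts the date, time, and offset as three separate strings (the class, method, and parameter names are illustrative, not from the original code):

    using System;
    using System.Globalization;

    public static class DateTimeOffsetParser
    {
        // Combines "03/01/2013" + "12:45 AM" + "+05:30" into a single
        // DateTimeOffset representing "03/01/2013 12:45:00 AM +05:30".
        public static DateTimeOffset Parse(string date, string time, string offset)
        {
            var combined = string.Format("{0} {1} {2}", date, time, offset);

            return DateTimeOffset.ParseExact(
                combined,
                "MM/dd/yyyy h:mm tt zzz",
                CultureInfo.InvariantCulture);
        }
    }

From there the DateTimeOffset can be saved as-is – SQL Server’s datetimeoffset column type round-trips the offset.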

Anyhow – this is now forever in my journal – just in case I need to practice this black magic again…

[Screenshot: Add New - Date Picker (user picks a date)]

[Screenshot: Add New - Time Picker (user picks a time)]


How To: Use Projections with MVC 4, WebApi, and oData

Overview

A quick journal entry about how to expose your own types using oData, WebApi, and MVC 4.

I like the concept of oData.  Consistent URIs and query syntax are important when it comes to web services and SOA in general. My reservations thus far have been that every sample I have seen puts your database entities and too much of your internal database schema on the wire.

In my opinion that practice is a very bad one. A more appropriate approach may be to only surface the data that you need and to structure the data so that it makes sense to the service consumer.

Relational structures don’t always make sense to service consumers.  Exposing internal database details to your service consumer does not really make their life any easier.

In this quick sample I will show you how I was able to expose my own POCO / DTO through an oData endpoint, using MVC 4 & WebApi.

Key Points?

Here is the sample code for this article / demo

This is where I started from -> Getting Started with oData & MVC 4

Aside from the normal things that you have to do to get oData working with MVC 4, EF, & WebAPI – here is the short list of things that I had to do to expose my own DTO:

  • Define my own DTO
  • Use an EF Projection to select data from the database and put it into an IQueryable<YourDto>

The sample code defines a DTO named WorkoutLogSummary.
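The original class lives in the linked sample code; based on the projection shown below it is roughly a flat POCO like this (the property types are my assumption, not copied from the sample):

    using System;

    // Flat DTO surfaced by the oData endpoint (shape inferred from the projection below).
    public class WorkoutLogSummary
    {
        public int WorkoutLogId { get; set; }
        public string UserId { get; set; }
        public string UserName { get; set; }
        public string WorkoutName { get; set; }
        public string WorkoutType { get; set; }
        public string Notes { get; set; }
        public DateTime DateCreated { get; set; }
    }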

The snippet below shows how I use Entity Framework to select data from the WorkoutLog, User, and Workout tables and PROJECT that information onto the WorkoutLogSummary.


        // workoutLogs is the EF DbSet<WorkoutLog> exposed by the data context.
        [Queryable(PageSize = 5)]
        public override IQueryable<WorkoutLogSummary> Get()
        {
            return workoutLogs.Select(it => new WorkoutLogSummary
            {
                Notes = it.Note,
                DateCreated = it.DateCreated,
                UserName = it.User.IpNameIdentifier,
                UserId = it.UserId,
                WorkoutLogId = it.WorkoutLogId,
                WorkoutName = it.Workout.Name,
                WorkoutType = it.Workout.WorkoutType.Name
            });
        }

Reservations

At the end of the day, my simple projection and oData implementation works.  I still have some pretty big reservations about the Microsoft MVC 4 WebApi implementation of oData support. Here are just a few of them…

  • There is a lot of black magic that takes place between the HTTP URI and how that translates into a SQL Server query – controlling that stuff can be very important if your system has more than 20 records.  The caveat – I was on the ground floor of WCF, so having control is my background – WebApi hides a lot, and this makes me nervous.
  • PageSize is a named property of the Queryable attribute that supposedly controls server side paging – well it kind of does – but it only controls the number of records returned from the web service AND NOT the number of records returned from the database.  So you could still end up fetching a million records from the DB and returning those to the oData controller – where the PageSize attribute will only take the top N.
  • If you want to properly limit & page the number of records returned from the database you need to make sure the $top query operator makes it to the database – my testing shows that the client / service consumer always needs to pass in $top, or you will be returning ALL data from the database that matches the filter (see the sketch after this list).
  • This reservation exists even without oData, but my concern is that many people will just start supporting oData without thinking about their service consumers and their databases – they will not take the time to build good indexes in the database and they will not take the time to surface good & meaningful data structures. We will end up with a bunch of WebApi endpoints that support oData but leave databases using too many disk I/Os and CPU cycles. The service endpoints could end up being rejected by service consumers due to their performance and usability issues.
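As mentioned in the $top reservation above, one defensive option (my own sketch, not something from the linked sample) is to cap the query on the server side before the Queryable attribute ever sees it, so a TOP clause always reaches SQL Server regardless of what the consumer sends:

    // A hard server-side cap (a sketch, not part of the original sample).
    // Applying Take() to the IQueryable before returning it makes EF compose a
    // TOP clause into the SQL, even when the consumer never sends $top.
    [Queryable(PageSize = 5)]
    public override IQueryable<WorkoutLogSummary> Get()
    {
        const int maxRows = 100; // hypothetical hard ceiling per request

        return workoutLogs
            .OrderByDescending(it => it.DateCreated) // Take() needs a stable ordering
            .Select(it => new WorkoutLogSummary
            {
                Notes = it.Note,
                DateCreated = it.DateCreated
                // ...remaining properties projected as in the snippet above
            })
            .Take(maxRows);
    }

The tradeoff is that the cap is applied before any client $filter or $orderby, so consumers can only ever query within that window.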

The Good Parts

Yes – there were some good things to take away here:

  • All of this works in Azure, controller still uses ACS security
  • Much of this framework and translation stuff is hidden / encapsulated for me into the Queryable attribute and the EntitySetController : ODataController
  • My sample is very simple – so it works, and when I need oData to read data it’s easy to implement – I can still use my tried and true patterns alongside the oData Controllers

3 Things Azure ACS has done for me lately…

In my pet project Crossfit Benchmarks – I have three very specific security requirements and Azure ACS (Access Control Services) has allowed me to implement all three.

Here is the list of requirements:

  • Users must login to access secure sections of the web application AND my application will not do any of the following
    • provide authentication & authorization services
    • store & manage user credentials
  • Data stored in my application must be partitioned by individual users – i.e. your data is yours, mine is mine, our data does not inter-mingle
  • The Restful web services that serve the data to my applications must be secured AND my services will not do any of the following
    • provide authentication & authorization services
    • store & manage client application credentials / keys / or shared secrets

Yeah – I know it sounds too good to be true right? But it was pretty easy to do all three.

User Authentication & Authorization

I was able to log in to my Azure portal, navigate to the Access Control section, and simply create a relying party application that relies on various third party identity providers (Google, Facebook, Yahoo, and Windows Live). In this particular scenario – the relying party application is my ASP.NET MVC Web Application.  After just a few clicks I had enabled authentication & authorization for Facebook, Google, Yahoo, and Windows Live.  Each of these identity providers manages credentials and provides authentication – NOT my web application.

So the moral of the story here is that – for my web application I don’t care about authenticating and authorizing users.  As far as my web application is concerned – if Azure ACS says that the 3rd party Identity Providers have authenticated & authorized the person trying to access the secured sections of my web application then that is secure enough for me.

If you wanted to – you could even integrate your on-premises Active Directory with cloud-based Azure ACS – but I have never had to do this and frankly I think this scares the daylights out of many IT organizations… can you say “private cloud” – oh wait… that’s a topic for another day.

Partitioning Data by Individual Users

The beauty of using the Identity Providers in Azure ACS is that they all give SAML Tokens back to my application. SAML has been around since ~2005 and it has typically been complicated to implement. This article won’t get too much into SAML or Tokens, but tokens are basically tidbits of user information that are handed out by identity providers. Tokens ARE NOT credentials – they are a set of “claims” that can be made about a security context, i.e. “name identifier” + “identity provider” can be used to uniquely identify a user accessing my system. I don’t know “who” this user is, just that their token is valid, the tokens came from a reputable source, and now I can partition data using these tokens handed to my application.
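To make the partitioning concrete, here is a rough sketch (not the exact code from my project) of building a partition key from those two claims with the .NET 4.5 claims API; the identity-provider claim type URI is the one ACS issued at the time:

    using System.Security.Claims;

    public static class UserPartition
    {
        // Claim type ACS uses to indicate which identity provider authenticated the user.
        private const string IdentityProviderClaimType =
            "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider";

        // Combines "identity provider" + "name identifier" into a key that uniquely
        // identifies the user without my application ever storing credentials.
        public static string GetPartitionKey(ClaimsPrincipal principal)
        {
            string identityProvider = principal.FindFirst(IdentityProviderClaimType).Value;
            string nameIdentifier = principal.FindFirst(ClaimTypes.NameIdentifier).Value;

            return identityProvider + "|" + nameIdentifier;
        }
    }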

So how does all this magic work? It’s pretty cool… and here is the short version of my understanding of how it works…

  • User tries to access a secure section of my app
  • My app redirects user to Azure ACS because my app is configured to expect Azure ACS tokens
  • User logs into their identity provider and grants my app access
  • Identity provider issues tokens and redirects user back to Azure ACS
  • Azure ACS forwards user to where they wanted to go in the first place (my application)
  • My application inspects the tokens to make sure they are valid
  • If the tokens are valid – we will either create new data for the user OR load their existing data

So what happens if Facebook, Google, Yahoo, or Windows Live change the way that they issue name identifiers? I don’t know – I suppose that if they change the name identifier issued to my credentials – then all of the data on the internet relying on this name identifier value would be unreachable… so let’s hope they don’t do that…

So why is this cool again?

  1. I don’t have to store credentials for my users – I am in the business of storing data for my application, not your credentials
  2. I think it’s a major PITA for users to have to create account after account after account, etc, just to use a simple web app
  3. Each time a new application rolls out a new authentication & authorization scheme, the internet actually becomes less secure: if the user reuses the same username / password they have in other systems, that data is now duplicated and there is one more way for hackers to get at it – in other words, the more times you share a secret, the less secure it actually is; at some point, someone will tell the secret again (i.e. get hacked)…

Securing Restful Services

In my pet project, I am planning on having an ASP.NET Web Client, Android Client, Windows 8 Client, and maybe an iOS Client. Each of these clients will use basic http to talk to my services, and I want to make sure that only clients that are authenticated and authorized can access my services API. In previous lives I have built this by hand using WCF and SQL Server, plus API Keys and HMAC validation. I really didn’t want to go there again.

So it turns out that Azure ACS has the concept of Service Identities. Basically, I can assign a set of shared secrets for each client. My client can then use the shared secret to request a security token from Azure ACS. Azure ACS will verify the shared secret and return a security token that can then be forwarded to my web service api.
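Roughly, the token request from one of my clients looks like the sketch below. This is illustrative only: it assumes the ACS 2.0 OAuth WRAP endpoint, and the namespace, service identity name, secret, and realm are all placeholders rather than values from my project.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class AcsTokenClient
    {
        // All of these values are placeholders for illustration only.
        private const string AcsNamespace = "my-namespace";
        private const string ServiceIdentityName = "my-client-app";
        private const string SharedSecret = "shared-secret-issued-by-acs";
        private const string Realm = "http://my-api.example.com/";

        // Exchanges the shared secret for an ACS token; the returned token then
        // goes into the Authorization header of calls to the web service API.
        public static async Task<string> RequestTokenAsync()
        {
            using (var http = new HttpClient())
            {
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "wrap_name", ServiceIdentityName },
                    { "wrap_password", SharedSecret },
                    { "wrap_scope", Realm }
                });

                var url = string.Format(
                    "https://{0}.accesscontrol.windows.net/WRAPv0.9/", AcsNamespace);

                var response = await http.PostAsync(url, form);
                response.EnsureSuccessStatusCode();

                // ACS returns form-encoded name/value pairs; pull out wrap_access_token.
                var body = await response.Content.ReadAsStringAsync();
                return body.Split('&')
                           .Select(pair => pair.Split('='))
                           .Where(kv => kv[0] == "wrap_access_token")
                           .Select(kv => Uri.UnescapeDataString(kv[1]))
                           .FirstOrDefault();
            }
        }
    }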

Is this 100% bulletproof? No, people can crack my apps in the app store and figure out the shared secret, and build their own apps with my shared secret. This is not the end of the world – I am not dealing with financial or medical data – so if a hacker is really that bored – then go nuts. If I catch it, I can simply change the shared secret in Azure and push updated builds to the various app stores.

So – for now, it is secure enough. Once I actually start building the Android, Windows 8, and iOS versions I may revisit this. Right now, I am working on the ASP.NET MVC flavor, so they would actually have to hack my Azure Role to get access to the shared secret.

In Summary

Compared to other things I have seen in my previous years of building software and implementing security solutions – using Azure ACS has been pretty easy and pretty cheap, especially considering the time it saves and the security it provides.  In future applications, if appropriate, I will continue to look for ways to use Azure ACS.


6 software engineering tools that have been helping me go faster…

January 29, 2013

Early in 2013 I kicked off a new pet project.  The project is going full steam ahead, and, using the tools below, I am making pretty good progress:

kanbanize

This tool helps me go fast because it’s a quick & light kanban board.  This is my first real go-round with kanban.  So far kanban is working well for me.  Kanbanize was cheap & easy to set up, someone else hosts it, members of the team don’t seem to mind it, and we can all see the status of what is going on in the project.  So check it out if you want to jumpstart your kanban process.

github

It’s not like github is new or anything, and I have been using it for quite some time.  Thankfully – it is still helping me go fast.  In a matter of moments, I had 3 new distributed source code repositories set up for my project and all team members were given permissions to the repositories.  Win!  Thanks github!

azure web sites

Wow – so MS & Azure finally got a deployment model that doesn’t take 30 minutes.  I have to admit – setting up two new azure websites for my pet project was seriously EASY!  In just a few moments I was able to provision a web site for my web application and my web services application.  Even cooler – in one of the more recent patches to Azure Web Sites they enabled deployments from private github repositories.  This was super cool because now I can just push to my azure-deploy branch on github, and BAM – the code shows up running in the cloud moments later.

mvc4, webApi, twitter bootstrap, jQuery, knockoutJs

In November 2012 I got involved in a new contracting gig that required this technology stack.  MVC4 helps me build testable web apps really quickly.  With webApi I can build restful services quickly as well (but I kind of miss WCF).  Twitter bootstrap – the great thing about this is that my application just seems to work on the desktop in IE, FF and Chrome.  It also works great on windows 8, android tablets, iPads, plus apple and android phones!  So twitter bootstrap is a pretty cool UI framework to build with.  KnockoutJS helps me build MVVM web apps and widgets – jQuery helps me animate & poke around in the DOM and do ajax calls.

So for now – I think this is my “new” favorite technology stack!

SQL Server Data Tools for Visual Studio 2012

I always have been a database-first kind of guy.  So that means that I typically have a database long before I have any code.  Call me kooky, but oftentimes, using C# to define and create a real world, production ready, scalable, database driven application seems like a lost cause.  Code first IMO is great for green field, let’s-get-going-REALLY-quickly apps, but I don’t like to skimp on DB design.

So this means that I need to be able to go from local databases to cloud databases quickly, efficiently, and without breaking the world.  SQL Server Data Tools for Visual Studio 2012 let me work the way that I prefer.  Here is what my process looks like:

  • create a data project from my local DB
  • create a data project from my cloud DB
  • make a change to the physical local DB
  • schema compare the local db to the local project & update the local project
  • schema compare the local project to the cloud data project
  • apply the updates to the cloud data project
  • apply the updates to the physical cloud db

Yes – quite a few more steps than code first – but IMO it’s safer, you have more control, and it tends to work right the first time.  So – since it’s less error prone, and less subject to “oops – I shouldn’t have run that code”, I have more of a safety net – and thus time is saved.

NCrunch

So – NCrunch is the most recent piece of software that I thought was good enough to buy personally.  Yes – IMO it was that good.

NCrunch helps me go faster because it executes my tests in the background. It also compiles my code for me automatically.  It shows me visually if the code I am looking at is covered by a unit test.

So what – why does that matter?  To me it matters because it’s ALMOST like having an automated build & CI server built right into my IDE.  I know when I am done with a line of code if the code compiles, if the tests are passing, and also if the code I am writing is even covered by a test.

So – this gives me the confidence to simply push code straight from my local development branch into my remote azure-deploy branch, and I feel really confident that the code will work in production.

Now – that being said – if I were in an enterprise – I would likely not advocate pushing straight into production without going through a few other checkpoints, but it’s fine for my own little pet project.

So – in summary those are 6 things that are helping me go faster as a professional software engineer.

2 Killer Articles from MSDN Magazine

Recently I read 2 articles that I really liked.  The first was on Windows 8 UI – the second was on OData, Entity Framework, and Azure Access Control.

Don’t Get Me Started (Nov 2012) – I love this column

The first read is from David Platt – who writes the “Don’t Get Me Started” column for MSDN Magazine.  Basically – in this article he goes on to say that Windows 8 UI is NOT a magic bullet that will make your users love your app.

Here is my favorite quote: “To take whatever whiz-bangery Microsoft releases and just throw it at a user hoping it will somehow make him happy (because it makes you happy, you hopeless geek) is bad programming. It’s lazy programming. To use the most toxic word in the geek vocabulary, it’s stupid programming.”

Another great quote – “The magic comes from knowing who your users are: what they want, what they need, what they only think they want, and what they haven’t yet realized they need. You need to know which problems they’re trying to solve and what they would consider the characteristics of a good solution. Only then can you say, “I know who they are and what they need—how close can I get to that with the toolkit I have?””

Read more here – it’s a great one pager

OData + Entity Framework + Azure Access Control

This article from the Oct 2012 issue shows how to use out of the box tools provided by Microsoft to create easy to use CRUD web services that follow good URI patterns (OData) and provide a layer of security OUTSIDE of the web service using Azure Access Control Services.

It’s a pretty good sample of how to take out of the box services and build something quickly.

However – there are a few caveats that I might point out: 1) it might not be a great idea to put your entire entity model on the wire – it could make things more complicated; 2) I have seldom built SOA solutions that needed JUST CRUD operations – normally there are services that require asynchronous operations that span multiple 3rd party services and represent long-running, task-based business processes – so simply fetching the top 10 records from table X doesn’t cut it in the real world.

That being said – this is still a good example of how to use these three technologies together.

Read More Here…
