ASP.NET WebAPI External Logins Via Single Page Application

Implementing an external login with ASP.NET MVC is pretty much done for you if you use the out-of-box Visual Studio template. However, if you create a single page application (SPA), there are a few hoops you have to jump through.

I’ve published a simple example on my GitHub so you can get right into the code. Amazingly, I found tons of links explaining the process; however, I never really found actual code that worked (and no, Auth0 links don’t count). Since Web content explaining the OAuth2 process is abundant, I’m not going to dive into detail on that beyond the steps your application will take if you use the WebAPI AccountController code as of today’s date. This includes the special case needed to get an email address back from Facebook specifically.

First, I registered my application with Facebook, which gave me an AppId and an AppSecret value. Then I modified the Startup.Auth.cs file, where the default template has the Microsoft, Twitter, Google, and Facebook registration code commented out, to plug in those values.

var fb = new Microsoft.Owin.Security.Facebook.FacebookAuthenticationOptions {
    AppId = "xxxxx",
    AppSecret = "xxxxxxxxxxxxxxxxxxxxxxxx",
    Provider = new Microsoft.Owin.Security.Facebook.FacebookAuthenticationProvider {
        OnAuthenticated = context => {
            context.Identity.AddClaim(new System.Security.Claims.Claim("FacebookAccessToken", context.AccessToken));
            return System.Threading.Tasks.Task.FromResult(true);
        }
    }
};
fb.Scope.Add("email");
app.UseFacebookAuthentication(fb);

In the WebApiConfig class, I made changes to implement JSON formatting.

config.Formatters.JsonFormatter.SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/html"));
config.Formatters.JsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();
config.Formatters.JsonFormatter.SerializerSettings.Formatting = Newtonsoft.Json.Formatting.Indented;
config.Formatters.JsonFormatter.UseDataContractJsonSerializer = false;

Now, to get the email address back from Facebook, I had to import the Facebook SDK via NuGet and modify the RegisterExternal method of the AccountController:

if(info.Login.LoginProvider == "Facebook") {
   var identity = Authentication.GetExternalIdentity(DefaultAuthenticationTypes.ExternalCookie);
   var accessToken = identity.FindFirstValue("FacebookAccessToken");
   var fb = new Facebook.FacebookClient(accessToken);
   var o = (Facebook.JsonObject)fb.Get("/me?fields=email");
   model.Email = o["email"].ToString();
}

Now I’m ready to implement my SPA, except for one minor tweak to give me more flexibility to handle the redirect back from Facebook post-authentication:

//Providers\ApplicationOAuthProvider.cs file
public override Task ValidateClientRedirectUri(OAuthValidateClientRedirectUriContext context) {
    if (context.ClientId == _publicClientId) {
        // tweak/hack: accept the requested redirect URI instead of requiring
        // it to match the site root as the default template does
        context.Validated();

        //Uri expectedRootUri = new Uri(context.Request.Uri, "/");
        //if (expectedRootUri.AbsoluteUri == context.RedirectUri)
        //{
        //    context.Validated();
        //}
    }
    return Task.FromResult<object>(null);
}

Alright, now my very boring Angular 1.x SPA (which you can find in the Scripts folder) makes an HTTP call to get the list of external logins, which I then bind on the page as <a> tags, one per provider (Facebook in this case), each opening in a different window.

NOTE: please ignore the $http and other calls in the controller code; I put all of the example client app code into a single file for simplicity. Feel free to break things out into services.

$http({
     method: 'GET',
     url: 'http://localhost:2999/api/Account/ExternalLogins?returnUrl=http://localhost:2999/FacebookCallback&generateState=true'
 }).then(function successCallback(response) {
     vm.externalLogins = response.data;
 }, function errorCallback(response) {
     vm.error = response;
 });

<div ng-repeat="ext in vm.externalLogins">
 <a class="btn btn-primary" target="_blank" href="{{ext.url}}">{{ ext.name }}</a>
</div>

I have my redirect from Facebook (the URL I provided to Facebook) set to hit my FacebookCallbackController, which is actually an MVC endpoint. I need this because I’m expecting one of two cookies: .AspNet.ExternalCookie or .AspNet.Cookie. The first means I’m not yet registered; the second means that I am. So I simply pass that information back to my view:

public class FacebookCallbackController : Controller {
  public ActionResult Index() {
    ViewBag.MustRegister = Request.Cookies[".AspNet.ExternalCookie"] != null ? 1 : 0;
    return View();
  }
}

I concede, this might be a bit of a naive way of implementing the popup window mechanism, but I quite simply hate the idea of redirecting a user in a SPA, so I’ll take the popup window workaround, which could probably be cleaner. My MVC view for the popup window simply does the following to call back into the parent window and set the Angular route:

<script type="text/javascript">
   @Html.Raw(string.Format("window.opener.location.href = '/#/facebookCallback/{0}#' + window.location.hash;", ViewBag.MustRegister))
   window.close();
</script>

Now in my Angular application, I can use my Angular facebookCallbackController to read the external token from Facebook (it looks like some unintelligible string such as “access_token=xxxxxx&token_type=bearer&expires_in=999999&state=xxxxxxx”) as well as check my $routeParams to see if I need to register via the MVC view code above. If you have to register, then you must call RegisterExternal followed by another call to ExternalLogin:

var externalToken = $location.hash().split('=')[1].split('&')[0];
//...

if ($routeParams.register == 1) {
    $http({
        method: 'POST',
        url: 'http://localhost:2999/api/Account/RegisterExternal',
        headers: { 'Authorization': 'Bearer ' + externalToken }
    }).then(function successCallback(response) {
        //...

        $http({
            method: 'GET',
            url: 'http://localhost:2999/api/Account/ExternalLogin?provider=Facebook&response_type=token&client_id=self&redirect_uri=http://localhost:2999/#/test2',
            headers: { Authorization: header }
        }).then(function successCallback(extRes) {
            // This will be 'Bearer ZYXHASHYdeHashHasHas987....'

        }, function errorCallback(response) {
            //Error message
        });
    }, function errorCallback(response) {
        vm.error = response;
    });
}
else {
    // set the token and you're set
}

Now you can call your secure endpoints. Again, sample code is available at https://github.com/unboxedsolutions/WebApiExternalLoginSpa
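
For reference, a “secure endpoint” is nothing special here: any Web API controller (or action) decorated with [Authorize] will reject requests that do not carry the bearer token. A minimal sketch (the controller name and response are only illustrative, not part of the sample project):

using System.Web.Http;

// Requests without a valid bearer token get a 401 from this controller.
[Authorize]
public class ValuesController : ApiController
{
    public IHttpActionResult Get()
    {
        // User.Identity is populated from the claims carried by the bearer token.
        return Ok(new { message = "Hello, " + User.Identity.Name });
    }
}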

 

Electron Walkthrough on Windows

Recently I’ve been working with Electron to build an application which will hopefully run well on both Windows and the Mac. I’m currently more focused on making sure the Windows application will work since that’s what the majority of the users will need. However, I found some of the documentation (specifically having to do with packaging, deployment, and automatic updates) to be inconsistent, and some of the best tutorials are written using OS X. My hope is that this blog post will help anyone developing on Windows to save a lot of time once you get past the simplicity of the Quick Start tutorial: creating the package.json, main.js, and index.html files and running electron . from the command line interface (CLI).

In the interest of not assuming you already have Node, Node Version Manager (NVM), and Git installed (even just to download examples/quick-starts using the CLI), I’ll include what I installed. On Windows, these all have installers you can run.

First, let’s create a new Electron application following the Quick Start tutorial:

  • If you do not already have it, download and install Visual Studio Code
  • Create a directory for your project. I’m using D:\dev\SampleApp
  • Open your directory using Visual Studio Code

[screenshot: sampleapp1]

  • Follow the Quick Start tutorial by creating the package.json, main.js, and index.html files under the Write your First Electron App section.
  • Modify your quick-start package.json file to change the application name:
{
    "name" : "SampleApp",
    "version" : "0.1.0",
    "main" : "main.js"
}
  • Use the Node Package Manager (NPM) to install
npm install -g electron
npm install electron-prebuilt
  • From the CLI in your application directory, enter: electron . and your application should run.

[screenshot: sampleapp2]

Debugging w/ Visual Studio Code

To debug with Visual Studio Code, we can use the Debug view to set up a launch.json file.

  • Click the Debug view on the left pane. You will see there are no configurations selected.
  • Click the cog-wheel icon that says “Configure or fix launch.json” to create a launch.json file. Visual Studio Code will automatically create a new directory \SampleApp\.vscode
  • Modify the settings under “Launch” for runtimeExecutable and runtimeArgs
"runtimeExecutable": "${workspaceRoot}\\node_modules\\electron-prebuilt\\dist\\electron.exe",
"runtimeArgs": [
    ".",
    "--enable-logging"
 ]
  • Now you can set a breakpoint in main.js and press F5 to begin debugging.

[screenshot: sampleapp3]

[screenshot: sampleapp4]

Now you are able to debug your main process. As you continue your Electron development, you will also want to be able to debug the application running in the renderer process. Fortunately, as you’ve seen illustrated, the Chrome Developer Tools open due to the openDevTools() call in main.js. However, you can also install the Visual Studio Code extension for debugging Chrome (vscode-chrome-debug). To enable that debugger:

  • From the View menu, click Command Palette…
  • Select Install Extensions, search for “Debugger for Chrome”, and then click install.
  • A blue Enable button will appear. Click Enable and then when prompted to restart, click OK.
  • In your launch.json file, create a new entry under “configurations”
{
    "name": "Debug",
    "type": "chrome",
    "request": "launch",
    "runtimeExecutable": "${workspaceRoot}/node_modules/.bin/electron.cmd",
    "runtimeArgs": [
        "${workspaceRoot}/main.js",
        "--remote-debugging-port=9222"
    ],
    "webRoot": "${workspaceRoot}"
},

WARNING: this will piss you off…the Chrome debugger is VERY TOUCHY, so the safest (and probably only) way to use this debug configuration is to make sure that all chrome.exe processes have been shut down. Close all of your Chrome browser windows and use Windows Task Manager to kill any other chrome.exe processes.

  • Now let’s add some simple JavaScript to test our debug configuration. First, add a new folder named “app” in your application directory and then create an app.js file inside of the app directory. Modify your index.html file to add a button with a click event to call a function. For example:
//app.js
function test() {
    console.log('click...');
}

<!-- index.html -->
<button onclick="test()">Test</button>
<script src="app/app.js"></script>
  • Set a breakpoint on the console.log line of code and then press F5.
  • Click the Test button and you should hit the breakpoint.

[screenshot: sampleapp5]

Honestly, debugging the renderer process in Visual Studio Code is such a pain in the ass that right now it isn’t worth it. Better to use the Chrome tools by default, but for those of you interested in using Visual Studio Code instead, you also have that option.

Packaging and Deploying

By far the simplest way I’ve found to package and build an Electron application (so far) is to use electron-builder. Let’s create an installer for our application.

  • Create a folder named “build” in your application directory
  • You will need a valid 256×256 .ico file named icon.ico. You can open up Microsoft Paint, resize the canvas to 256×256 pixels (turn off Maintain Aspect Ratio), and save the file as a .png.

[image: icon]

  • Next, find an online ICO converter such as http://icoconvert.com/ to convert your file to a .ico, saving it to your build folder. An invalid file format will cause the builder to throw an exception.
  • Next, you can create or use an existing animated GIF for your installation. The file must be named install-spinner.gif.  I’m partial to “peanut butter jelly time” so I’m using that. 🙂
  • Use NPM to install electron-builder
npm install electron-builder
  • You must provide an “author” and “description” in your package.json file along with “build” and “scripts” sections. Modify your package.json file to look like the following:
{
    "name" : "SampleApp",
    "description": "This is a sample application.",
    "author": "Sean Chase",
    "version" : "0.1.0",
    "main" : "main.js",
    "build": {
        "appId": " SampleApp",
        "app-category-type": "public.app-category.productivity",
        "win": {
            "msi": false,
            "iconUrl": "https://raw.githubusercontent.com/unboxedsolutions/ElectronSampleApp/master/build/icon.ico"
        }
    },
    "scripts": {
        "postinstall": "install-app-deps",
        "start": "electron ./app --enable-logging",
        "dev": "NODE_ENV='development' npm run start",
        "pack": "build --dir",
        "dist": "build --x64 --ia32"
    } 
}
  • From the command line, run the following build command from your application directory
node_modules\.bin\build --win --x64

Your application should build into a “dist” folder. Your icon file should appear on the Setup EXE file and your animated GIF (the dancing banana if you are into “peanut butter jelly time”) should appear when you run the setup file.

[screenshot: sampleapp6]

[screenshot: sampleapp7]

If you have a code-signing certificate (which needs to come from a CA), you probably already know how to perform code signing. If not, and you’d like to create a self-signed certificate to understand the process, here are the basic steps. Code signing will be absolutely necessary if/when you publish your application, but for walking through the tutorial, you don’t need to do this.

makecert -r -pe -n "CN=UnboxedSolutions Development PreRelease" -ss CA -sr CurrentUser -a sha256 -cy authority -sky signature -sv prerelease.pvk prerelease.cer
certutil -user -addstore Root prerelease.cer
makecert -pe -n "CN=UnboxedSolutions PCS" -a sha256 -cy end -sky signature -ic prerelease.cer -iv prerelease.pvk -sv npcs.pvk npcs.cer
pvk2pfx -pvk npcs.pvk -spc npcs.cer -pfx npcs.pfx
signtool sign /v /f npcs.pfx "dist\win\SampleApp Setup 0.1.0.exe"

In a future blog post, I will tell you how to handle the Squirrel events, and we’ll create an endpoint for handling automatic updates using a simple REST service that serves data from the RELEASES file and returns the appropriate NuGet package (.nupkg file).

The files for this blog post are on my GitHub repo at https://github.com/unboxedsolutions/ElectronSampleApp

 

Microsoft .NET Core on OS X and Linux

The potential impact of Microsoft .NET Core running on OS X and Linux for creating modern Web apps, console applications, microservices, and libraries is vast. In creating this open source, cross-platform, modular .NET platform, Microsoft has rebuilt the old .NET and ASP.NET platforms so that you can create applications that will run not only on Windows but on Mac and Linux too. This includes RHEL, Ubuntu, Linux Mint, Debian, Fedora, CentOS, Oracle Linux, and openSUSE.

The changes will define .NET for the next decade, and they are aimed at solving today’s needs, with so much use of cloud applications and microservices. .NET Core, the .NET Framework, and Xamarin will all continue to evolve for cross-platform cloud and mobile as well as for Windows. Traditional ASP.NET will continue to be suitable for existing projects.

Code can be shared across the whole .NET family, and your skills carry over to any of them, so you can mix and match to suit your projects. Also, because the .NET Standard library is common to all .NET components, apps built using the .NET Framework, .NET Core, ASP.NET, and Xamarin will share a common set of APIs going forward.

To get started with .NET Core on OS X or Linux, you just need the .NET Core software development kit (SDK). The .NET Core home page will guide you to the correct SDK for the operating system you are on and give you the steps to get started.
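
As a rough idea of what those steps look like once the SDK is installed (the exact commands can differ slightly between SDK versions), creating and running a new console app from the terminal boils down to:

dotnet new
dotnet restore
dotnet run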

Visual Studio tooling is also available, but you will need to have Visual Studio 2015 Update 3 installed along with the .NET Core Tools for Visual Studio. There are also app tutorials at .NET Core Tutorials, so you will be creating apps in no time at all.

The .NET Core journey started about two years ago, when it became obvious that the technology wasn’t keeping up with the needs of its users. It has evolved quite dramatically in that time, all the while striving for something that could cope with more varied requirements and an expanding base of developers.

Microsoft also noticed that other major Web platforms were open source, which the .NET Framework was not. Developers were very keen on open source, and the .NET Framework was clearly not delivering what they needed. Now ASP.NET is open source from top to bottom, providing the capabilities that modern developers are looking for.

Thanks to all the users who ran the pre-1.0 .NET Core and ASP.NET Core releases and the feedback they provided, both the user experience and performance have improved, and the 1.0 release is much better than it otherwise would have been.

If you are a developer and have not tried .NET lately, give it a go. You can now use .NET on operating systems other than Windows with very few constraints, using familiar development tools. Its combination of power, productivity, open source, and the support of Microsoft is bound to let you create any application you can imagine.

The World’s Most Influential Software Engineers

Embarking on a career as a software engineer does not usually correlate with anything approaching the limelight, as most of the best-known engineers are far from household names. Nonetheless, a lack of fame and adoration from the public at large does not mean a paucity of influence, as becomes clear when discussing software engineers who have had a meaningful impact on the lives of many.

Alan Turing

No list of people who were influential in any aspect of computer technology could leave off Turing. He famously helped decrypt German communications for Britain during World War II, and went on to design the ACE (Automatic Computing Engine). Turing also developed the Turing Test, which was essentially designed to test whether a machine could think like a human being. In many different ways, Turing was ahead of his time, and though more than just an engineer, he still belongs as the first person to head any list of important individuals in the engineering field.

Watts Humphrey

Watts Humphrey was a pioneer in the world of software engineering, and was highly influential because his accomplishments were centered on the idea that process is an essential aspect of overall quality. Humphrey developed the Personal Software Process and the Team Software Process, and even developed the first software license while working at IBM. He received the National Medal of Technology in 2003.

Fred Brooks

Brooks is probably best known for his classic book The Mythical Man-Month. In it, he concludes something powerful and counterintuitive to most – that adding more computer programmers to a project which is behind schedule will only take it further behind schedule. Brooks went on to found the University of North Carolina’s department of computer science, and has been the winner of many prestigious awards, most notably the National Medal of Technology in 1985, and the A.M. Turing Award, which is regarded as the highest honor one can get in the computing field.

Others to Consider

Steve McConnell: The author of the book Code Complete, which is regarded as a bible of sorts for those in the software development field. It is a whopping 900 pages, but is still thought of as a must-read.
Linus Torvalds: The creator of Linux, which is used around the world. Torvalds has won a plethora of awards, including the Millennium Technology Prize, which is one of the most sought-after in the field.
Marc Andreessen: He was a developer of Mosaic, which was one of the earliest web browsers available, and then went on to co-found Netscape, which was later acquired by America Online for more than $4 billion.

Ionic Android Debugging: There was a network error

Debugging an Ionic app using Visual Studio (Cordova project) is very simple using Ripple; however, once you deploy to a device it becomes a lot more challenging. One way to help debug issues in an app deployed to your Android device is to use the livereload and consolelogs options:

$ ionic run android -l -c

This enables you to see console.log() statements in your code. Not as good as being able to step through the code in Visual Studio, but still very helpful for troubleshooting.

You will also need the whitelist plugin installed (Cordova for Android requires it for the app to make network requests):

$ cordova plugin add cordova-plugin-whitelist@1.0.0

As a bit of a head scratcher, once your app starts running on your Android device you can end up with an error that says, “There was a network error” (the app is served from the dev server on port 8100 by default). If that happens, open your index.html file and add the following META tag:

<meta http-equiv="Content-Security-Policy" content="default-src 'self' data: gap: https://ssl.gstatic.com; style-src 'self' 'unsafe-inline'; media-src *">

iOS Development with Swift

Several months ago I made the mistake of updating my iPhone OS and it was rendered almost unusable. So I decided that Apple could pound sand and bought a Samsung Galaxy S6 Edge. One of the nice things about Android for a developer is how easy it is to create and deploy an application. I ran through some tutorials using Android Studio and was very impressed, even though I’m not a professional Java developer. Coming from a very strong C# background, the switch was not difficult and was enjoyable. To make things even more fun, I started playing around with Ionic since I have been doing Angular development lately. The best bang-for-the-buck from a business perspective seems to be Ionic (hybrid apps); however, there is still a very solid demand for native Android and iOS development.

Having the opportunity to take a nice, long, much-needed Christmas and New Year’s vacation, I of course played the obligatory NHL 16 and Witcher 3 on the Xbone…but like when I decided to learn to play the piano, I also committed to chipping away at becoming a better mobile developer: specifically, native iOS/OS X development on a Mac using Xcode. I bought a MacBook Pro and immediately went to developer.apple.com. The most trusted advice that I’ve read is that the first place to start is learning Swift and then going back to learn Objective-C. In college, C was my favorite language and I haven’t felt the need to revisit it much since then, so this took some self-prodding. I managed to read through The Swift Programming Language (Swift 2.1) while trying out code examples using the Playground in Xcode, and while I doubt I retained everything I need to hit a running pace with Swift, I certainly think it was worth the time and effort. Besides that, this is exactly the same approach I’ve used with other languages that I’ve learned, including C# and SQL, and I am pretty darn good in that arena.

So if you are just starting out with OS X and Xcode because your entire life has been based in DOS, Windows, and Linux here are a few things that I found to make life easier for myself starting out:

Karabiner – this is absolutely amazing if you are like me, very proficient with hotkeys in Windows, and use them A LOT. Karabiner will let you map PC hotkeys such as CTRL-C, CTRL-V, END, HOME, etc. That said, if you are comfortable making the full switch to using the Command hotkeys in OS X, then by all means do that. Additionally, OS X has some built-in flexibility to swap the CTRL and Command keys; however, I found this lacking, and it ended up being painful for some applications as well as for using Microsoft Remote Desktop to get to my PC.

System Preferences > Mouse > Scroll Direction Natural – I turned this option off so that the scroll-wheel up and down worked the same as the PC (it is reversed in OS X). However, admittedly when I do not have a mouse and keyboard attached, I like all of the default OS X settings on the Macbook Pro trackpad and keyboard just fine. I still use a natural PC keyboard after many years when I’m sitting at my desk.

If you are starting out with iOS development, I highly recommend the advice from John Sonmez about starting with Swift. Swift is a really nice language if you come from a C# and JavaScript background, and you will appreciate the ability to do new things like return multiple values via tuples, and to create both static and class methods (similar to C# static methods, but class methods can be overridden in a derived class).

Happy New Year!

Programming For Yourself Using System.Diagnostics

In the day-to-day grind of software development with .NET, some minor concepts are sometimes forgotten…and no…I’m not talking about design, unit testing, and QA (isn’t that what customers are for??) but about implementing tracing or Assert functions. Remember Assert()??? No, you probably don’t, because you are releasing DEBUG versions of your assemblies instead of RELEASE (that’s what the pros do, right??).

Let’s pretend for a few minutes that you are working on your own software project where you compile for deployments using RELEASE builds. We know that structured exception handling is important, but when should you Assert? The simple answer is: when you make an assumption that a condition should be impossible.
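
The snippet the next paragraph refers to isn’t reproduced here, but the idea is a Debug.Assert along these lines (the amount variable and its calculation are only illustrative):

using System.Diagnostics;

// Illustrative values: the NaN multiplier forces the assert to fire in a DEBUG build.
double subtotal = 100.0, exchangeRate = double.NaN;
double amount = subtotal * exchangeRate;
Debug.Assert(!double.IsNaN(amount), "amount should never be NaN at this point");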

If the variable “amount” ends up NaN, your application will halt and you’ll get a nice dialog box with a stack trace and message (unless it is an ASP.NET application, for which you can configure a listener, as I will explain later). You should write Assert statements liberally in your code. The benefit is the additional documentation your code provides explaining its assumptions (and having those assumptions challenged and put in your face), and the asserts are removed when compiling in the RELEASE configuration. If you look at your project properties in Visual Studio on the Build tab, you’ll see that both the DEBUG and TRACE constants are defined for Debug builds, while Release builds define only TRACE.

Regarding tracing, I honestly prefer to simply implement log4net and call it a day; however, splitting hairs, there is technically a difference between tracing and logging, but do as you will…there are other options such as Elmah, or simply using a TraceListener. I’ve worked with other developers who don’t like having the log4net dependency in every assembly, so you could follow the anti-pattern of wrapping the ILog class. Another alternative is to use the Trace class, if you aren’t doing that already, because RELEASE builds include the TRACE definition by default. Just as log4net allows you to define multiple appenders, the built-in Trace class lets you register multiple listeners. This is very handy in ASP.NET because you can turn on trace output and view the results on the page.

As an example, I’m going to create a TraceListener that is implemented using the log4net rolling file appender. This way I can simply use the Trace methods and have log4net do all of the heavy lifting. While you can simply use Trace.Write or Trace.WriteLine, I prefer TraceInformation, TraceWarning, and TraceError, which give you the trace level as well as string formatting. For example, I could use the following along with argument placeholders ({0}, {1}, etc.):
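
The original snippet isn’t reproduced in this excerpt, but the calls look roughly like this (the messages echo the sample log output further below; the customerId value is only illustrative):

int customerId = 0; // illustrative
Trace.TraceInformation("Starting Main() function in {0}", typeof(Program).FullName);
Trace.TraceWarning("Remote service endpoint not found. Working offline.");
Trace.TraceError("Query for an existing customer resulted in an Id value less than or equal zero. Value was {0}.", customerId);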

So the first step will be to use NuGet to install log4net. Then we need to write our Log4netTraceListener class:
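
The class itself isn’t reproduced inline in this post, so here is a minimal sketch of what a log4net-backed TraceListener can look like; the real implementation in the download may differ:

using System.Diagnostics;
using log4net;

namespace Tracing
{
    // A sketch only: forwards Trace output to log4net so referenced assemblies
    // never need a log4net dependency of their own.
    public class Log4netTraceListener : TraceListener
    {
        private static readonly ILog _log = LogManager.GetLogger("Log4netTraceListener");

        // Plain Trace.Write/WriteLine calls end up here.
        public override void Write(string message)
        {
            _log.Debug(message);
        }

        public override void WriteLine(string message)
        {
            _log.Debug(message);
        }

        // Trace.TraceInformation/TraceWarning/TraceError route through TraceEvent,
        // so mapping the TraceEventType here gives us level-aware logging.
        public override void TraceEvent(TraceEventCache eventCache, string source,
            TraceEventType eventType, int id, string format, params object[] args)
        {
            // Honor the <filter initializeData="..."> element from the config file.
            if (Filter != null && !Filter.ShouldTrace(eventCache, source, eventType, id, format, args, null, null))
            {
                return;
            }

            var message = (args == null || args.Length == 0) ? format : string.Format(format, args);

            switch (eventType)
            {
                case TraceEventType.Critical:
                case TraceEventType.Error:
                    _log.Error(message);
                    break;
                case TraceEventType.Warning:
                    _log.Warn(message);
                    break;
                default:
                    _log.Info(message);
                    break;
            }
        }

        public override void TraceEvent(TraceEventCache eventCache, string source,
            TraceEventType eventType, int id, string message)
        {
            TraceEvent(eventCache, source, eventType, id, message, null);
        }
    }
}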

Once we have that, we can configure both <log4net> and <system.diagnostics> to work together:
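
The configuration isn’t reproduced inline here either, so the following is a sketch of the shape it takes in App.config (the listener type string must match your own namespace and assembly, and log4net still needs log4net.Config.XmlConfigurator.Configure() called once at startup, or the equivalent assembly attribute):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>

  <log4net>
    <!-- log4net stays at DEBUG so the Trace filter below is what actually gates output -->
    <appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
      <file value="logs\trace.log" />
      <appendToFile value="true" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level - %message%newline" />
      </layout>
    </appender>
    <root>
      <level value="DEBUG" />
      <appender-ref ref="RollingFile" />
    </root>
  </log4net>

  <system.diagnostics>
    <trace autoflush="true">
      <listeners>
        <!-- type = "Namespace.ClassName, AssemblyName" of the listener above -->
        <add name="log4net" type="Tracing.Log4netTraceListener, Tracing">
          <filter type="System.Diagnostics.EventTypeFilter" initializeData="Information" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
</configuration>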

And that’s that! Now any assembly you reference (for example, a class library named “LibraryWithoutDependencies”) can use the Trace class, and you can control the logging level from the application configuration file via the <system.diagnostics><trace><listeners><add><filter initializeData> attribute, as shown in the above example using the value “Information”, while log4net is left at the DEBUG level so everything will output. You can simply change Information to Warning when you release to production and modify it when you need to troubleshoot.

So for our combined code base, consisting of a DLL named “LibraryWithoutDependencies” and a console application named “Tracing” with our custom Log4netTraceListener, the code looks like the following. Notice how LibraryWithoutDependencies does not have any reference to log4net; it simply uses Trace.TraceError() in the exception handler.
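
The full listing is in the download; a condensed sketch of the library side, consistent with the sample log output below, looks something like this:

using System;
using System.Diagnostics;

namespace LibraryWithoutDependencies
{
    // No log4net reference here; the host application's listener decides where Trace output goes.
    public class Wtf
    {
        public void Go()
        {
            try
            {
                throw new InvalidOperationException(
                    "What could possibly have gone wrong? Seriously, who would call this method...much less instantiate this class?!");
            }
            catch (Exception ex)
            {
                Trace.TraceError("{0}, {1}", ex.Message, ex.StackTrace);
            }
        }
    }
}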

When we run the application, there are a number of exceptions and our log file is written out to the logs folder in our application root containing the following information:

2015-12-12 14:51:15,952 [9] INFO  - Starting Main() function in Tracing.Program
2015-12-12 14:51:15,969 [9] WARN  - Remote service endpoint not found. Working offline.
2015-12-12 14:51:17,644 [9] ERROR - What could possibly have gone wrong? Seriously, who would call this method...much less instantiate this class?!,    at LibraryWithoutDependencies.Wtf.Go() in LibraryWithoutDependencies\Wtf.cs:line 8
2015-12-12 14:51:17,652 [9] ERROR - Query for an existing customer resulted in an Id value less than or equal zero. Value was 0.
2015-12-12 14:51:19,652 [9] ERROR - Exception caught while processing stuff: Customer Id not found.
Parameter name: id,    at Tracing.Program.Main(String[] args) in Program.cs:line 28

The source code is available at the following URL if you are interested:  http://www.unboxedsolutions.com/wp-content/uploads/2015/12/Tracing.zip

 
