Dev Humor Part 2

I enjoyed the previous one so much, I thought I’d also share part 2.

Enjoy 🙂


Some Developer Humor

Wow, it’s been a while since I’ve posted.

A friend shared this video with me and I thought I’d break the article drought with some developer humor.

Enjoy 🙂

IEnumerable vs IQueryable – Part 2: Practical Questions

If you’re unsure of the difference between IEnumerable and IQueryable, please read Part 1 first.

In this article we’ll take a practical Q&A approach to test your understanding of these interfaces and how they work. Try to figure each question out before looking at the answers.

Setting the stage:

  • We’ll be using Entity Framework (EF)
  • with a SQL database
  • that has 1 table “Person”.
  • Assume we have a global DbContext instance called db

Here goes, good luck…

# Question 1

  1. On which line(s) does the database get queried?
  2. How many times does the database get queried?
Console.WriteLine("Some code");
IQueryable<Person> people = db.People.Where(p => p.Name == "Niels");
IQueryable<Person> activePeople = people.Where(p => p.IsActive == true);
Console.WriteLine("Some more code");

foreach (var person in people) 
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");


  • Line 6. Because of deferred execution, the database will only be hit once the IQueryable is used. This happens in the foreach loop, when we need to access the first person in the people collection.
  • Only once, at Line 6. It’s perfectly valid to build on previous IQueryable variables, appending filters as you go along. When the query eventually executes, only one query runs, taking all the filter expressions into account.

# Question 2

  1. On which line does the Database get queried?

Console.WriteLine("Some code");
IQueryable<Person> people = db.People.Where(p => p.Name == "Niels");
IQueryable<Person> activePeople = people.Where(p => p.IsActive == true);
Console.WriteLine("Hello World");


  • Never. Because of deferred execution, the query will not execute until it’s used. Since it’s never used, it will never execute and only remains “intent”.

# Question 3

Person table has 100 records. 60 of these records are active and 40 inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people or 60 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.ToList();
var activePeople = allPeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");


  • Line 1. ToList() forces the query to execute.
  • 100 records will be returned. Line 1 executed the query and loaded 100 results into memory. Line 2 then filtered the collection in memory and created a new in-memory collection with 60 records.
  • Yes there is: simply remove the ToList() in line 1. This means allPeople will be IQueryable, and only in the foreach (line 4) will the database be queried, returning just the 60 results into memory.

# Question 4

Person table has 100 records. 20 people’s names start with the letter “N”. Of these 20, 10 are active and 10 are inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people, 20 or 10 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.Where(p => p.Name.StartsWith("N"));
IEnumerable<Person> enumerablePeople = allPeople;
var activePeople = enumerablePeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");


  • Line 5. Even though we cast to IEnumerable (an in-memory collection), deferred execution still only executes the query once it’s needed (in the foreach).
  • 20 records will be returned. Even though the query only executed at line 5, we indicated in line 2 that “from here on out, we will work with the collection in memory”. So once the foreach executes the query, line 1’s query runs against the database, returning 20 results into memory. Then line 3 filters the 20 records in memory and creates a new in-memory collection with 10 records.
  • Yes there is: simply removing line 2 would keep everything IQueryable until it’s needed in the foreach (line 4). Then the database would return only the 10 results needed.

# Question 5

Person table has 100 records. 20 people’s names start with the letter “N”. Of these 20, 10 are active and 10 are inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people, 20 or 10 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.Where(p => p.Name.StartsWith("N"));
IEnumerable<Person> activePeople = allPeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");


  • Line 4. Even though we cast to IEnumerable (an in-memory collection), deferred execution still only executes the query once it’s needed (in the foreach).
  • 10 records will be returned. Even though we marked the collection at line 2 as IEnumerable, the IsActive filter is still applied to the IQueryable before it’s cast to an IEnumerable.
  • No. The query works fine. The IEnumerable at line 2 can be somewhat misleading, so ideally we’d replace it with var, but there is no performance difference.

# Question 6

  1. Will this query work?
  2. If it works, on which line does the Database get queried?
  3. If it works, will the database return all columns in the Person table or just a list of strings?
var activePeople = db.People
     .Where(p => p.IsActive == true)
     .Select(p => p.Name + " " + p.Surname);

foreach (var person in activePeople)
   Console.WriteLine(person);


  • Yes it works
  • Line 5. Deferred execution ensures that the query only executes once it’s needed (in the foreach).
  • The database will be queried with a statement something like SELECT Name + ' ' + Surname FROM..., and a list of strings will be loaded into memory (not all the Person columns).

# Question 7

Same as Question 6, except we’ve moved the Select part into a separate method.

  1. Will this query work?
  2. If it works, on which line does the Database get queried?
  3. If it works, will the database return all columns in the Person table or just a list of strings?
var activePeople = db.People
     .Where(p => p.IsActive == true)
     .Select(p => BuildPersonName(p));

foreach (var person in activePeople)
   Console.WriteLine(person);

private string BuildPersonName(Person p)
{
   return p.Name + " " + p.Surname;
}


  • No, it doesn’t work. A NotSupportedException will be thrown at runtime. Even though we’re doing exactly the same as in Question 6 and only moved the Select part into a separate method, this query fails. Why? Remember that IQueryable builds up an Expression Tree from our LINQ and then translates that into a SQL query. Since our LINQ references a method in our .NET code, how would the provider map .NET functions into SQL? It would try to generate something like this:
    SELECT BuildPersonName(p) FROM Person p... which can never work, since `BuildPersonName` is a .NET function and not a SQL function.

# Question 8

Same as Question 7, except we’ve added a .ToList() after the Where(...).

  1. Will this query work?
  2. If it works, on which line does the Database get queried?
  3. If it works, will the database return all columns in the Person table or just a list of strings?
  4. If it works, is there an obvious way to improve the query?
var activePeople = db.People
     .Where(p => p.IsActive == true).ToList()
     .Select(p => BuildPersonName(p));

foreach (var person in activePeople)
   Console.WriteLine(person);

private string BuildPersonName(Person p)
{
   return p.Name + " " + p.Surname;
}


  • Yes, it works. Unlike Question 7, this is perfectly valid. Since we return the database query results after the Where, the Select part is handled in memory, and therefore our LINQ can reference .NET functions.
  • Line 2. The ToList() executes the query with the IsActive filter and returns the results to memory.
  • The database will return all columns in the Person table, since the ToList() was called before the Select.
  • Yes there is. Returning all columns from Person is less optimal than returning just the appended name. Changing the code to look like Question 6 is best for performance.



The above questions tried to cover all the different ways that IEnumerable and IQueryable could be used. I believe if you understand why each query behaved the way it did in the above questions, you can figure out any query behaviour.

IEnumerable vs IQueryable – Part 1

What’s the difference between IQueryable and IEnumerable? This is probably the second most frequent question I’ve been asked (number 1 has to be understanding delegates).

The purpose of this article isn’t to formally define the interfaces, but rather to paint an easy-to-understand picture of how they differ. Then in Part 2 we get practical with 8 code snippet questions where we can test our understanding of the topic.

The Difference (Short answer)

  • IEnumerable – querying a collection in Memory
  • IQueryable – querying an External Data Source (most commonly a database)

What is IEnumerable?

The IEnumerable interface exposes a GetEnumerator method, which returns an IEnumerator that allows us to iterate through the collection. In plain English: an IEnumerable is a collection of items you can loop through.

Did you know even arrays are inherently IEnumerable? See the snippet below:

int[] nums = { 1, 2, 3, 4 };
bool isEnumerable = nums is IEnumerable<int>; // True

isEnumerable in the above code is true. This kind of makes sense, since an array (just like an IEnumerable) is a collection of items we can loop through.

What makes IEnumerable special?

The most special thing about IEnumerable is that we can query items using LINQ.

There are a bunch of LINQ methods (Where, Select, Take, OrderBy, First etc.) which are simply extension methods for the IEnumerable interface. As with all extension methods, just include the namespace (System.Linq) and the whole range of LINQ extensions becomes available to filter our collection.

Something else that’s very important to understand when using IEnumerable is Deferred Execution. In short, a LINQ query only captures the intent. It holds off as long as it can and only does the actual filtering when any of the following happens:

  • We iterate the results (e.g. with foreach)
  • We call ToList()
  • We get a single result from the query (e.g. .Count() or .First())

Understanding Deferred Execution is key to using IEnumerable correctly. Not understanding this can lead to unnecessary performance issues. Check out Part 2 to see if you understand it correctly.
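To make deferred execution concrete, here is a minimal LINQ-to-Objects sketch (plain in-memory collections, no database involved):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var numbers = new List<int> { 1, 2, 3 };

// Build the query. Nothing is filtered yet; only the "intent" is captured.
IEnumerable<int> bigNumbers = numbers.Where(n => n > 1);

// Mutate the source AFTER building the query.
numbers.Add(4);

// Only now, when we materialize the results, does the filter actually run,
// which is why the 4 (added after the query was built) is included.
List<int> results = bigNumbers.ToList();
Console.WriteLine(string.Join(", ", results)); // prints "2, 3, 4"
```

Because the filter only runs when ToList() is called, the 4 added after the query was built still shows up in the results.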

What is IQueryable?

Firstly, IQueryable inherits from IEnumerable. This means that, inherently, it is also a collection of items you can loop through, and we can also write LINQ queries against an IQueryable.

IQueryable is used when querying a data source (let’s say a database).  So if we are using Entity Framework (EF), we can write a LINQ Query as follows and it will actually produce a SQL query:
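As a sketch (assuming a DbContext instance db with a DbSet<Person> called People, as in Part 2; the exact SQL text EF generates varies by version), the query would look something like this:

```csharp
// Assumes an EF DbContext instance `db` with a DbSet<Person> called People.
IQueryable<Person> activePeople = db.People.Where(p => p.IsActive == true);

// When the query eventually executes, EF translates it to SQL roughly like:
//   SELECT ... FROM People WHERE IsActive = 1
```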


In the above, our LINQ query was translated to a SQL query. When it is executed, the query will be run against our SQL database and return results to memory. Remember, the filtering does NOT happen in memory, but in the database (read this sentence again to make sure you’ve got it).

How does LINQ suddenly become SQL?

There are 2 important properties on the IQueryable interface: Expression and Provider.


  • Expression – the Expression Tree built up from the LINQ query
  • Provider – tells us how to translate the Expression Tree into something else
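We can inspect both properties without a database by wrapping an in-memory array with AsQueryable() (a sketch; with EF the Provider would instead be one that generates SQL):

```csharp
using System;
using System.Linq;

int[] nums = { 1, 2, 3, 4 };

// AsQueryable wraps an in-memory collection as an IQueryable,
// so we can inspect both properties without a database.
IQueryable<int> query = nums.AsQueryable().Where(n => n > 2);

// Expression: the expression tree built up from our LINQ query.
Console.WriteLine(query.Expression);

// Provider: knows how to translate/execute that tree. With EF this would be
// a SQL-generating provider; here it is the in-memory LINQ provider, which
// simply compiles and runs the tree when the query is iterated.
Console.WriteLine(query.Provider.GetType().Name);
Console.WriteLine(string.Join(", ", query)); // prints "3, 4"
```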

In our case (using EF with a SQL database), this is what happened:

  • We created a simple LINQ query
  • This built up an Expression Tree
  • The Expression Tree gets passed to the Provider
  • Our provider translates the Expression Tree into a SQL query
  • As soon as we use our results (deferred execution), the SQL query executes against the database.
  • The results are returned and stored into memory.

Great uses for IQueryable

Think about the way IQueryable works for a moment. Let’s say we have a custom data source, like a file that appends data with separators we defined. If we find ourselves constantly reading these files and sifting through the text to get hold of data, we could instead create our own Query Provider. This would allow us to write LINQ queries to get data from our files.

Another popular place IQueryable is used is for ASP.NET WebAPI OData. We can expose our REST endpoints and allow the person using our Web Service to filter only the data they need without pulling all data down to the client first. OData is basically just a standard that allows us to use URLs to filter specific data.

Example: Let’s say our REST service returns a list of 100 000 people, but in our app we only want the people whose surnames contain the search text “Filter”.

Without the power of IQueryable and OData, we would either have to:

  • Pull all 100 000 people down to our client and then filter locally in memory for the 10 people with surname “Filter” that we actually need.
  • Or create an endpoint specifically for searching people by surname, passing a query string parameter “Filter”.

Neither of these are great. But using Web API with OData, we could create a controller that returns IQueryable<Person> and then allow our app to:

  • Send a custom URL:$filter=contains(Surname,'Filter')
  • On the server, the IQueryable Expression Tree is built up from the OData URL
  • The Provider translates the Expression Tree to a SQL query
  • The SQL executes against the database, getting only the 10 items from it
  • These 10 items are returned to the app in the client’s requested format (e.g. JSON)

So with the power of IQueryable and OData, we indirectly queried the database via a URL, without having to write server code, and we didn’t have to pull data we did not need (less bandwidth, less server processing and a minimal client memory footprint).

Side note: LINQ Query Syntax vs Extension Methods

Not directly related to the topic, but a question I’ve been asked several times as well: is it better to use Query Syntax or Extension Methods?

Query Syntax:

var result = from n in nums
             where n > 2
             select n;

Extension Methods:

var result = nums.Where(n => n > 2);

They both compile down to the extension methods in the end. The query syntax is simply a language feature added to simplify complex queries. Use whichever is more readable and maintainable for you.

I prefer to use Extension Methods for short queries and Query Syntax for complex queries.


If you missed everything, just remember this:

  • IEnumerable – queries a collection in Memory
  • IQueryable – queries an External Data Source (most commonly a database)

If you are comfortable with these interfaces, go to Part 2 and test yourself with 8 quick code snippet questions on the topic.

Should my code be “Technical” or “Domain” focused

  > How do we structure our solution?
  > What do we name our files?
  > How do we organize the folders in the project?
  > How do we structure our code regions?

It’s probably safe to say we’ve all sat with these questions and still do every time we expand our projects or create new ones.

The big question that this article addresses is whether we should organize our code based on the “Domain” or rather on “Technical” implementations.

Let’s quickly define both and see which is better.

Technical Focus

This approach organizes code with a technical or functional focus. This is the more traditional way of organizing an application, but it is still very much in use today. Let’s see how this would look practically:

Code Regions

Regions are defined according to the functional implementation. If it’s a method and it’s public, it goes into the Public Methods region, regardless of what the method does.
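As a sketch (the class and member names are made up for illustration), a technically organized class might look like this:

```csharp
public class HospitalService
{
    #region Fields

    private readonly List<Patient> _patients = new List<Patient>();

    #endregion

    #region Public Methods

    // Grouped here because they are public, not because of what they do.
    public void AdmitPatient(Patient patient) { /* ... */ }
    public void DischargePatient(Patient patient) { /* ... */ }

    #endregion

    #region Private Methods

    private bool ValidateAdmission(Patient patient) { return true; }

    #endregion
}
```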


Project Layout

For example, when creating an MVC application, File -> New Project lays out the folders with a technical focus. If you create a View, regardless of what it does, it goes into the Views folder.
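A rough sketch of such a layout (the controller and view names are hypothetical examples):

```
MyApp/
  Controllers/
    PatientController.cs
    FeedbackController.cs
  Models/
  Views/
    Patient/
    Feedback/
  Scripts/
  Content/
```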


Solution Architecture

The traditional layered architecture is a very common practice. This approach organizes the projects according to function. If we have a Business Logic or Service class, it goes into the Domain project, regardless of what it does.


In short it’s a “What it IS” approach

You’ll see that in each of the above cases, we’ve organised according to what something IS and not what it DOES. So if we’re developing a Hospital application, a Restaurant Management system, or even a Live Sport Scoring dashboard, the structure for these vastly different domains will look almost identical.

Domain Focus

This approach organizes code with a domain or business focus. The focus on the domain has definitely been popularized by designs such as DDD, TDD, SOA, Microservices etc. Let’s see how this would look practically:

Code Regions

Regions are defined according to the domain implementation. Anything related to “Admitting a patient to the Hospital” will go in the Admit Patient region, regardless of what it is.
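As a sketch (class and member names made up for illustration), a domain-organized class might look like this:

```csharp
public class HospitalService
{
    #region Admit Patient

    // Everything related to admitting a patient lives here, regardless of
    // whether it is a field, a public method or a private helper.
    private readonly List<Patient> _admissionQueue = new List<Patient>();
    public void AdmitPatient(Patient patient) { /* ... */ }
    private bool ValidateAdmission(Patient patient) { return true; }

    #endregion

    #region Discharge Patient

    public void DischargePatient(Patient patient) { /* ... */ }

    #endregion
}
```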


Project Layout

Taking the MVC example mentioned earlier for Project Layout, we would now see folders according to the specific domain. If we create something related to “customer feedback”, it goes in the CustomerFeedback folder, regardless of what it is (view, controller, script etc.).
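A rough sketch of such a domain-focused layout (the folder and file names are hypothetical):

```
MyApp/
  PatientAdmissions/
    AdmitController.cs
    Admit.cshtml
    admit.js
  CustomerFeedback/
    FeedbackController.cs
    Feedback.cshtml
    feedback.js
  Shared/
```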


Solution Architecture

Architecture would be based around a type of SOA or Microservices approach, where each domain exists independently in its own project. If we have a new domain in a live sport scoring app, such as “Cricket”, we would create a new project for Cricket, and everything related to it goes in there regardless of what it is.


In short it’s a “What it DOES” approach

You’ll see that in each of the above cases, we’ve organised according to what something DOES and not what it IS. So once again, if we’re developing a Hospital application, a Restaurant Management system and a Live Sport Scoring dashboard, the structure for these vastly different domains will look completely different.

So which is best?

Firstly, let’s just put it out there that there’s a “3rd approach” as well: a hybrid between the two. For example, we could have a Properties region (which is technical), and then an Admit Patient region (which is domain) for all domain-related methods.

So which is best? Well let’s see…

Why Technical is better than Domain

1. Every project’s layout and all page regions are identical.

We as developers are often very technically oriented, so this would feel right at home, as we can feel in control even if we’re clueless about the domain.

2. Fewer pieces

Since there are only so many technicalities within a project, once we’ve grouped by them, the number of regions, folders or projects will never grow.

3. Layer specific skills or roles

If the development team’s roles in a project are technical-specific, this approach is great. Each developer has their specific folder or project which they work on and maintain. For example, you could have one developer only creating views, another only doing domain-specific validations, another only focusing on data access etc.

Why Domain is better than Technical

1. We’re solving business problems

As technical as we developers can be, at the end of the day, if we’re not solving domain-specific problems, we’re failing as software developers. Since the business is our core and the technical side only the tool to get us there, organizing code, folders and projects by domain makes much more sense.

2. Scales better

When the application expands or the scope widens, the new implementations often don’t affect or bloat existing code, as each domain is “isolated” from the next (closer adherence to the Single Responsibility and Open/Closed principles).

3. Everything is together

Often developers are responsible for all, or at least most, layers of the technical implementation. If, for instance, we had to expand our Live Sport Scoring web dashboard to include tennis, we would very easily end up working with data access code, business rules and validations, view models, views, scripts, styles, controllers etc., and these are just for a typical web application; we could easily have a few more.

The point is, we often work with all of these while solving a single domain problem. So if, for example, we had a tennis folder where our tennis-specific scripts, styles, views, controllers etc. were together, that would already be much more productive.

4. Reusable

This only really affects architecture, but if a project is built and isolated by domain, it becomes reusable by different applications on its own. In an enterprise environment, this is really useful.

For example, if a large corporate business has internal procurement rules or procedures, but many different systems for its departments (the cafeteria, HR, finance etc.), then an SOA-type approach would enable you to have one project which handles all the procurement procedures. All the different flavours of applications can go through this procurement service, ensuring that the correct and the same procedures are used for every procurement in every department.


So I haven’t yet said which one is best. For me personally, my bias definitely lies with organizing projects around the domain.

Once again, there is no silver bullet answer or solution, but remember that there is most definitely a wrong approach for a specific project or problem. Here are some questions we should ask, testing our approach on existing systems:

  • Are there any areas where we suffer from a lack of productivity?
  • If so, would a different approach be better?
  • If so, would changing the approach be too great an adjustment for the benefits it would provide?

But the ultimate questions really are:

  • Are the business needs currently being met?
  • And are the developers happy and in consensus with the approach?

As the good old saying goes: “Don’t fix something that’s not broken”.

I’d love to hear thoughts from your experience with either approach, and any opinions, shortfalls or benefits you’ve experienced.

ASP.NET Nuggets – Tag Helpers

In ASP.NET Core 1.0 MVC (previously referred to as MVC 6), they’ve introduced Tag Helpers, which replace the old Html Helpers. The idea is that we can create standard Html markup but still allow the server to “enrich” this markup without being too obtrusive.

The old way (Html Helpers)

Let’s say we need to create a form that posts data to the Save action on a PatientController. We have an Html form, a label and an input TextBox:

@using (Html.BeginForm("Save", "Patient", FormMethod.Post, new { @class = "form-control", data_extraInfo = "myextrainfo" }))
{
   @Html.LabelFor(x => x.Name, "First Name", new { @class = "control-label" })
   @Html.EditorFor(x => x.Name, new { htmlAttributes = new { @class = "form-control" } })
}

Here’s some more code. We have 2 different anchor tags, the first being incorrect and the second correct. What should happen is that we create a link saying “Go Back”, which calls the GoBack action on the PatientController with PatientID as a parameter.

@Html.ActionLink("Go Back", "GoBack", "Patient", new { PatientID = Model.ID })

@Html.ActionLink("Go Back", "GoBack", "Patient", new { PatientID = Model.ID }, null)

There are several difficulties with this code above:

  1. We have no idea exactly how the Html actually renders (we have to run and inspect).
  2. The closing form tag is a curly brace, and on a large page it’s difficult to tell whether the curly brace we see closes the form or actually closes a `loop` or `if` statement.
  3. Simple Html attributes need to be created as anonymous types (not transparent).
  4. Since anonymous Html attributes are C# anonymous types, some attributes conflict with reserved C# keywords (class, for example, has to become @class).
  5. If we want some data-dash attributes for our client-side code to use, we have to use underscores, as we can’t use dashes in C# variable names.
  6. The `LabelFor` expects an htmlAttributes parameter, so we say `new { @class = "…" }`, but the `EditorFor` expects additionalViewData, so we’d have to nest the Html attributes like this: `new { htmlAttributes = new { @class = "…" } }`. Certainly this is a very error-prone approach.
  7. In the 2nd code snippet we can see how adding the null parameter at the end makes the action link behave correctly. This is because the first uses a different overload that actually omits the controller, so the “Patient” string is incorrectly passed through as `RouteData` and the `RouteData` as `HtmlAttributes` (so easy to get wrong, as it compiles fine).

The new way (Tag Helpers)


Html Helpers get the work done, but there’s now a much more efficient way. Here’s the same result using Tag Helpers:

<form asp-controller="Patient" asp-action="Save" method="post" class="form-control" data-extraInfo="myextrainfo">
   <label asp-for="Name" class="control-label">First Name</label>
   <input asp-for="Name" class="form-control" />
</form>

And here’s the action link using Tag Helpers:

<a asp-controller="Patient" asp-action="GoBack" asp-route-PatientID="@Model.PatientID">Go Back</a>

You’ll notice in the above 2 snippets, we’ve simply written standard Html markup, and the server enriches the parts prefixed with asp-. Introducing Tag Helpers has helped us overcome all 7 of the difficulties mentioned earlier.

The beauty of Tag Helpers is that they’re truly WYSIWYG (what you see is what you get). Now we have the benefit of enriching our Html with server code, while still just writing Html.

Exciting times for .NET developers

It’s definitely a good time to be a .NET developer. Microsoft has been around for a very long time and has often been labelled (rightfully, I suppose) as “slow” and “closed” in their approach, isolating their products and services solely to users on their platform. But this has changed drastically in recent years. There are many reasons to be excited.

They’ve gone Agile

Don’t believe me? See this interesting article from Steve Denning. A company of 128 000 employees not only adopting the agile approach but doing so very successfully is no small feat.

Much of their recent development is completely open-source on GitHub. Now anyone can see their progress, use or test pre-releases, provide feedback or even modify code on their behalf and commit it for review and approval. The earlier you get feedback on a product, the more solid the foundation and the sooner you end up with a stable release.

.NET Core is Cross-platform

Yip, you can now host your ASP.NET Core 1.0 web site on anything from a Mac, Linux or even a Raspberry Pi. How is this possible? .NET Core has been built to be completely modular, and the .NET assemblies can be deployed as NuGet packages without having to “install” the framework first. As for the runtime, .NET Core has what’s called the DNX, which hosts the application on any of the mentioned platforms and contains the CoreCLR, so we don’t lose the managed goodies like garbage collection.


Here are some other ways in which doors have opened for developers from vastly different technology backgrounds:

  • Visual Studio Code is a free version of Visual Studio running on Windows, OS X or Linux
  • There is built-in tooling for building cross-platform hybrid Cordova mobile apps (TACO) in VS; no more command-line compiling as in the past.
  • Native Windows Mobile or Store apps (UWP) can also be written with an HTML and JavaScript back-end (this enables pretty much every web developer to create native Windows apps without the steep learning curve of XAML and C#).
  • Visual Studio has first-class support for GitHub source control directly from VS.
  • Azure has support for pretty much any popular platform, development technology, source control etc.
  • VS also has built-in support for popular task runners such as Gulp or Grunt, and package managers such as bower and npm.
  • If you prefer creating sites with NodeJS, VS even has tooling for that.
  • Even though this has been around for quite some time, if you have a different language background such as Python or Ruby, you can create Desktop or Web projects from VS with these. For example, it blew me away that you can create a WPF application with a XAML front-end and Python code-behind. (This makes use of .NET’s DLR, which bridges the gap, allowing dynamically typed languages such as Python to run on the .NET framework.)

The point to take from this is that Microsoft’s focus is no longer an attempt at a form of monopoly, but creating platforms and tools that invite different developers to freely use their products, tools and frameworks (and I assume the goal is ultimately to get them onto Azure).

They went big with Azure

Microsoft’s cloud platform, Azure, is huge. I always thought Google’s cloud platform was big, with every second person having a Gmail account with up to 15 GB of free storage. But Azure has topped that, being bigger than both Google and Amazon Web Services (AWS) combined.

Azure seems to offer everything and the kitchen sink; there’s so much to it. From my experience, I’ve enjoyed the simple ways to host back-end mobile, web and data services for some applications, but I feel I haven’t even touched the tip of Azure’s iceberg of features.

It’s also a great platform to allow a local network to move to the cloud using “Infrastructure as a Service” (IaaS) or even “Platform as a Service” (PaaS). This obviously saves cost and time spent on hardware and software maintenance, updates, hotfixes etc.

The whole payment model is based on “pay for what you use” and allows easy scaling of resources up or down as needed. I’ve got a couple of tiny prototype applications running on Azure at the moment, and so far everything’s still free because I use less than 30 MB of database storage and have, well, probably no traffic at all to the sites.

Starting fresh

Haven’t we all had those projects where our great designs or approaches seem to get in the way years down the line as things change?

This is interesting, because if there’s any company that has years of “backward” compatibility caked into their software, which they’d rather wish they’d done differently as times changed and the way their APIs get used changed, it’s Microsoft. Backward compatibility means stability, but it also often means a lack of performance and scalability over time (especially if you’re still supporting legacy APIs from a decade ago).

Someone at Microsoft was bold enough to make the call for some rewrites. Off the top of my head, these are the things they’ve recently rewritten completely from the ground up:

  • The C# Compiler (Roslyn)
  • .NET Core
  • ASP.NET Core 1.0
  • Entity Framework Core 1.0

These are only the ones I know about, and they’re not small either. Besides Roslyn, nothing is directly “backward compatible”, but rather “conceptually” compatible, transferring existing concepts to the new frameworks rather than simply porting code as is.

In case you were wondering, ASP.NET Core 1.0 was initially called ASP.NET vNext and then became ASP.NET 5 with MVC 6, which ran on .NET Core 5 using EF 7. Now that’s a mouthful, so last week they announced it’s been renamed to Core 1.0 (it makes sense for a rewrite to start again at 1.0). So at least for now, it’s referred to as:

  • ASP.NET Core 1.0
  • .NET Core 1.0
  • Entity Framework Core 1.0

Performance matters

It's no longer fair to label Microsoft products as slow. A lot of smart people have put much effort into reducing memory footprints and optimizing performance. To name a few performance benefits I've picked up on recently as a developer:

  • If you're running .NET Native (such as UWP apps) you get the performance of C++ and the productivity of managed C#.
  • The RyuJIT compiler [link to other article] means your app will just be a bit faster without you doing anything, especially the start-up times.
  • And here's my favourite: ASP.NET Core 1.0 benchmarks compared to Google's NodeJS web stack.
    • On a Linux server, ASP.NET Core is 2.3x faster.
    • On a Windows server, it's more than 8x faster, with 1.18 million requests per second!


Want to see some code?

I've been exploring and keeping an eye on ASP.NET Core 1.0 as it goes through the pre-release phases. I've personally found it to be quite a big change from ASP.NET 4.6 and hope to share a few nuggets soon on some great features I've enjoyed, when I get the time.

Switching languages – Common mistakes

These days I generally work with C#, VB.Net, JavaScript and SQL. Switching between the different languages and their constructs has caught me out a few times with subtle bugs, so I thought I'd post a few simple little "gotchas" I've encountered.

VB.Net Nothing is not C# null

Most of my development experience is in C#, so this one was a little strange when I encountered it. Nothing is NOT the same as null. Nothing in VB.Net is actually the same as default(T) in C#.

'This is perfectly legal in VB.Net
Dim myGuid As Guid = Nothing
//Whilst this will not compile in C#
Guid myGuid = null;

// The actual conversion of the Nothing to C# is
Guid myGuid = default(Guid); // Which is the same as Guid.Empty

So here’s the gotcha that caught me:

Dim myGuid As Guid = Nothing
If myGuid = Nothing Then
'// This is True
End If
If IsNothing(myGuid) Then
'// This is False, careful!
End If

The first check, = Nothing, tests whether myGuid equals default(Guid), which it does. The one that caught me was IsNothing, which at first glance should do the same, but doesn't. That's because the IsNothing method is only intended for reference types. So IsNothing is probably the closest equivalent to the C# == null check.

From MSDN:

IsNothing is intended to work on reference types. A value type cannot hold a value of Nothing and reverts to its default value if you assign Nothing to it. If you supply a value type in Expression, IsNothing always returns False.

JavaScript boolean comparison

A simple gotcha that has unfortunately caught me more than once is a boolean comparison against a string value. Let's say we read a hidden input field whose value holds a boolean, and do a simple check on it:

var isValid = $("#hiddenValid").val(); // val() returns a string, e.g. "false"

if (isValid) {
   // ALWAYS TRUE, as any non-empty string (even "false") is truthy
}

if (isValid == true) {
   // ALWAYS FALSE, as we're comparing a string with a bool
}

if (isValid == "true") {
   // Correct: compare string with string
}

This is actually a very obvious mistake, as we're comparing different types, but it isn't so easy to track down once it's in the code. Working mostly in C#, the first two checks look perfectly correct at first glance.
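A minimal sketch of a safer approach is to normalize the string once instead of comparing it to a boolean. The parseBoolean helper name is mine, not from the original snippet, and a plain string stands in for the jQuery .val() call:

```javascript
// Hypothetical helper: turn the string value of a hidden field into a real boolean.
function parseBoolean(value) {
  // val() always gives us a string, so compare string with string.
  return String(value).toLowerCase() === "true";
}

var isValid = "true"; // stand-in for $("#hiddenValid").val()

console.log(parseBoolean(isValid)); // true
console.log(isValid == true);       // false: "true" coerces to NaN, true to 1
```

This way the type confusion happens in exactly one place, instead of at every comparison site.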

SQL Server Not Equals

A few months ago, whilst debugging a report, I stumbled on this one. Once again, it's a relatively simple little gotcha, but not at all obvious to track down. Let's say we have a table called Employee with the following data:

We have 7 employees. Steve and Adam are new interns and will only officially get a position once their 3-month probation is over (until then their Position is null).


We must produce a report listing all employees except the CEO. Sounds easy enough:

--This is wrong
SELECT * FROM Employee
WHERE Position <> 'CEO'

--This is right
SELECT * FROM Employee
WHERE (Position IS NULL OR Position <> 'CEO')

-- This is also right
SELECT * FROM Employee
WHERE ISNULL(Position, '') <> 'CEO'

The first SQL query looks perfectly fine: get everyone whose position is not equal to 'CEO'. However, since NULL means no value exists, we cannot do any kind of comparison against a non-existent value: any comparison against NULL (including <>) evaluates to UNKNOWN rather than TRUE, so the rows with NULL values are ignored.

This isn't a SQL-specific thing; it applies to other languages that have nullable types too. In C#, for instance, a design decision was made to let null take part in equality comparisons for ease of use (even though it's semantically not really correct).
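To make that contrast concrete, here's a small sketch (in JavaScript this time, since these gotchas span languages): unlike SQL's three-valued logic, an inequality check against null simply returns true or false.

```javascript
// In SQL, Position <> 'CEO' is UNKNOWN when Position is NULL, so the row drops out.
// In JavaScript (much like C#'s equality), null takes part in the check directly:
var position = null; // an intern with no position yet

console.log(position !== "CEO"); // true: null counts as "not the CEO"
console.log(position === null);  // true: null compares equal to itself
```

So the same "everyone except the CEO" filter, written in application code, would include the interns without any extra null handling.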

That’s it

Are there any simple gotchas or traps you’ve run into switching between languages? If so, please feel free to post in the comments.

Property Explosion with Roslyn API


Yip, we'll start at the end. This article aims to show the powers of Roslyn and hopefully inspire some great ideas to help grow the already massive ecosystem of free tooling for Visual Studio (VS).

The final product is this life-changing VS refactoring tool, called Property Explosion.


Go give it a try: in Visual Studio go to Tools -> Extensions and Updates -> search for Property Explosion in the Online section -> Download and Install -> (restart VS when done)

Now open up any C# project, click on a property (either an auto-property or a full property), hit Ctrl + . and you'll get a suggestion to either Explode the property (make it full) or Crunch it (convert it to an auto-property).


Life Changing Stuff! If you're interested in how we got here, read on.

This article will discuss installation, give a high-level overview of the code, and touch on VSIX deployment. My source code is available on GitHub (see links at the bottom).

Now the Beginning

Now that we've seen the future, we've taken away the dramatic climax (like watching The Village for the second time).

So, what is Roslyn? As quickly mentioned in a previous article, Roslyn is an open-source implementation of the C# / VB.Net compilers. One of its key features is the API it exposes, allowing us to create analysis and refactoring tools. We'll be using the Roslyn API to build a VS refactoring tool.


You'll need Visual Studio 2015 and the Roslyn SDK (if you don't have the SDK installed, you can install it directly through VS, which is pretty neat).

  • Open up Visual Studio 2015
  • File -> New Project
  • Under Visual C# go to Extensibility.

If you have installed Roslyn SDK:

  • Click on Code Refactoring (VSIX) Project and OK

If not:

  • Choose Install Visual Studio Extensibility Tools directly out of Visual Studio and hit OK


  • Then choose Download the .NET Compiler Platform SDK and hit OK to download it


  • Now you can choose Code Refactoring (VSIX) Project and OK

Getting started

Now that you've created a Code Refactoring project, you'll see there's some default plumbing set up.

There are 2 projects created.

  1. A Portable class library, which holds the refactoring entry point and code logic.
  2. A VSIX project, which only holds a vsixmanifest file and references the class library.

Make sure the .VSIX project is set as the startup project. A .VSIX file is basically a Visual Studio extension installation file. This is what gets deployed to the Visual Studio Gallery, enabling users to download the tool.

You should now be able to hit F5 and debug the project as is. This will open up another instance of VS 2015 with the VSIX installed, and attach the debugger to the new VS instance.

  • Open up a project (or create a new¬†Console App).
  • Now click on the class and hit¬†Ctrl + .
  • You'll see a suggestion pop up to reverse the class name, e.g. Program becomes margorP.

Debugging and stepping through the code to see what's happening is really easy:

  • Put a breakpoint in the ComputeRefactoringsAsync method in the CodeRefactoringProvider class
  • Now in the new VS instance, click on a class name again and hit Ctrl + . and the breakpoint will be hit.

How does code refactoring work?

  1. User hits Ctrl + . and VS will look for installed VSIX tools
  2. The Roslyn API will call the Entry Point in custom code (CodeRefactoringProvider)
  3. Custom code makes decisions on which Code Actions to register.
  4. User sees a little Context Menu pop up with Code Action(s) available
  5. User clicks on the Code Action, and code is refactored.

Property Explosion Code

It’s a bit out of scope to go through each piece of the Property Explosion code. We will however take a high level look at what was done and why.

Here's the PropertyExplosion repository on GitHub. Click Download ZIP, unzip, and open up the solution.

So now in the context of the Property Explosion code, a user hits Ctrl + . and the Entry Point in our custom code gets hit.

Registering Code Action(s)

The first thing we do at the entry point is check whether we're dealing with a property and, if so, whether it's an auto-property or a full property. If we have an auto-property, we register the Explode code action, which can expand the property. Similarly, if we have a full property, we register the Crunch code action to collapse it to an auto-property.

Now we need to rewrite our code

We could simply do some code refactoring directly in the code action method, but the preferred, scalable approach is to use a Syntax Rewriter. In our project we've got the PropertyCollapser and PropertyExploder rewriters. Syntax rewriters work on the Visitor design pattern. It's a tedious one to get used to if you've never worked with it before, but in the Syntax Rewriter you can see how and why it's very useful.

What that means in our case is that we call Visit on a node, and each element in the Syntax Tree will then be traversed and "visited". If the current element being visited is one we care about (for example the property in question), we can override the rewriter's Visit method for that specific node type and manipulate the Syntax Node.

public override SyntaxNode VisitPropertyDeclaration(PropertyDeclarationSyntax propertyDeclaration)
{
  if (propertyDeclaration == this._fullProperty)
  {
    //... Create a new property ...
    return newProperty;
  }

  return base.VisitPropertyDeclaration(propertyDeclaration);
}

See the code above, where we override the VisitPropertyDeclaration method. This method will be hit for every property traversed in the Syntax Tree. Since we only care about one specific property, we do a check: is this the property we're busy refactoring? If yes, we go on to build a new property and return it.

That's easy enough for replacing an existing node. But how do we add a new node with the Visitor pattern? Let's take adding a new field. In our Visit method we check whether we're busy with the property's parent; if so, we create a new field, insert it and return the "updated" parent. Easy as that.

Use the existing code as reference, put some breakpoints down and see how the Visit methods get called and used to refactor Nodes.

Now we simply return the new root (which is the one modified by the Rewriters) to the Document and our code is successfully refactored.
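The replace-on-visit idea can be sketched outside of Roslyn too. Here's a tiny JavaScript analogy (the node shapes, kind names and rewrite function are all invented for illustration; this is not the Roslyn API): the rewriter walks the tree, swaps out the one node it cares about, and rebuilds each parent along the way, producing a new root.

```javascript
// Invented sketch of a syntax rewriter: NOT the Roslyn API.
// Walk the tree; when we hit the target node, return its replacement;
// otherwise rebuild the parent with the (possibly rewritten) children.
function rewrite(node, target, replacement) {
  if (node === target) return replacement;
  if (!node.children) return node; // a leaf we don't care about: keep as-is
  return {
    kind: node.kind,
    children: node.children.map(function (child) {
      return rewrite(child, target, replacement);
    })
  };
}

var autoProp = { kind: "AutoProperty" };
var root = { kind: "Class", children: [autoProp] };
var fullProp = { kind: "FullProperty" };

// "Exploding" the property yields a brand new root; the old tree is untouched.
var newRoot = rewrite(root, autoProp, fullProp);
console.log(newRoot.children[0].kind); // "FullProperty"
console.log(root.children[0].kind);    // "AutoProperty"
```

Like Roslyn's syntax trees, nothing is mutated here; a fresh tree comes back, which is why we hand the new root back to the Document.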


Once you’re happy with the refactoring tool, it’s time to deploy. Deployment is really easy.

  1. Check that you're happy with the config in the vsixmanifest file
  2. Change the VS build type to Release, and build the solution.
  3. Go to the bin\Release folder and you'll find a .VSIX file.
  4. Login or register at the Visual Studio Gallery
  5. Click Upload and upload the .VSIX file
  6. Once uploaded, make sure to click Publish, and your tool is available for download immediately from the VS Gallery or directly from VS (Tools -> Extensions and Updates).

Things to consider

1) “Rewriting” code is more complicated than a simple statement

We need to think like a compiler rather than a developer. It's crucial to understand that we work with these 3 things to build up code: Syntax Nodes, Syntax Tokens and Trivia.

Let’s take this statement for example:

string myString = "Hello World";

That's as easy as it gets. We see a simple one-liner. The compiler sees Nodes, Tokens and Trivia:


It's very demoralizing realising the complexity of a simple statement. But it needn't be. If you've installed the .NET Compiler Platform SDK, go to View -> Other Windows -> Roslyn Syntax Visualizer. This window is a life saver, as it shows your current code's Syntax Tree. DON'T BE A HERO, USE THE VISUALIZER!!!

If you're wondering how something should be expressed, code it, click on it, and the Roslyn Syntax Visualizer will show you a granular breakdown of its Nodes, Tokens and Trivia.


Syntax Nodes

  • These are the main building blocks of the syntax tree (the blue ones in the above image).
  • Statements, Expressions, Clauses, Declarations etc.


Syntax Tokens

  • These are almost like little extras. They cannot be a parent to other Nodes or Tokens (the green ones).
  • Keywords, Literals, Semi-colons etc.


Syntax Trivia

  • These are there for formatting purposes and don't affect the code directly (the white/grey ones).
  • Comments, Whitespace, Regions etc.

2) How to find a Member

There are two ways to find something: via the Syntax Tree or via the Semantic Model.

Syntax Tree

  • Not much extra info on a node (e.g. you can't determine if and where a member is referenced)
  • Optimized for performance.

Semantic Model

  • Rich with extra compile-time info (e.g. references to a member)
  • Much slower than the Syntax Tree, because it often triggers a compilation of the code

Always try to use the Syntax Tree unless you can't; then use the Semantic Model. The Semantic Model is very powerful and is an important feature of the Roslyn API.

3) Treat the vsixmanifest file with care

I managed to mess up my vsixmanifest file. I was able to build the project, deploy it, and download and install it, but nothing happened. I thought the fault was in the code, so I tried debugging, which stopped working as well. No logs, no error messages, nothing worked any more.

You'll probably want to configure the compatibility of the VS and .NET Framework versions your tool can run on. So, much like with any config file, take care when making changes.

If you've messed it up, create a new Code Refactoring project and use the fresh vsixmanifest file as a reference to fix the existing one.

That’s it…

That's it for a high-level overview of building a code refactoring tool using the Roslyn API. I hope it gave you an idea of how to get started. Please feel free to download my code, step through it, use it, improve it, abuse it or sell it for millions.

Some links…

A world of change…

The world of technology and development looks vastly different now than it did 3 or 4 years ago. With all this change, it's bound to happen that new lingo is tossed around moments before another new best thing hits the development world. If we don't embrace the change and open our minds up to learn, we quickly feel like a fish out of water and get left behind.

About 2 years ago I was considering gaining some skills outside of .NET, especially as the market for open-source and cross-platform was becoming more demanding and making some noise. However, I'm delighted to see what Microsoft has been busy with lately; it seems they're embracing the market change as well and steering themselves in that direction. Let me touch on some of the changes to frameworks, compilers, application models and the IDE, and see if we can make sense of them.

A disclaimer: this is written from a .NET perspective, not about development as a whole.

.Net frameworks

After .NET 4.5, a couple of new frameworks have made an appearance. Why so many, and what are they?

  • .NET 4.5.1
  • .NET 4.5.2
  • .NET 4.6
  • .NET Core
  • .NET Native
  • .NET 2015

.NET 4.5.1

The biggest reason for this release is Windows 8.1. Both Windows 8.1 Store Apps and Windows 8.1 Phone Store Apps need .NET 4.5.1.

Some other smaller enhancements & features:

  • JIT improvements (Specifically for Multi-core machines)
  • 64-bit edit and continue (without stopping app)

.NET 4.5.2

There are 2 noticeable changes in .NET 4.5.2: one for ASP.NET and the other (yes, believe it) for Windows Forms.

ASP.NET: The bigger change in ASP.NET is probably HostingEnvironment.QueueBackgroundWorkItem. In the past, if you suggested to "fire a task on a separate thread and forget about it", serious red flags were raised. This was because IIS needs to recycle your application regularly, and if it happened to do so while your task was busy, the work would never complete. HostingEnvironment.QueueBackgroundWorkItem allows you to "fire and forget": return a response to the user, and the task can safely continue (up to a max of 90 seconds).

Windows Forms: As devices support higher and higher resolutions, scaling in WinForms became a problem. Things such as the little drop-down list arrow became absolutely tiny. .NET 4.5.2 introduced a feature to allow resizing for high resolutions, solving this problem.

.NET 4.6

This is the next full version of the .NET Framework. There are a whole bunch of new features and improvements. Some of the many new features include:

  • Better event tracing
  • Base Class Library (BCL) changes
  • New Cryptography APIs
  • Plenty of ASP.NET enhancements

But probably the most notable feature for me is the new JIT compiler, RyuJIT. This is a 64-bit JIT compiler optimized for 64-bit computing. The great thing is you'll get better performance without actually doing anything (on 64-bit machines).

.NET Core

This guy has made some headlines. Imagine a .NET Framework that could be deployed via NuGet: no need for specific framework prerequisites to be installed, but a framework that ships with the application. Imagine no more; this is what .NET Core has brought to the table. .NET Core is also modular, which means you don't need all parts of the framework, only those you care about.

The biggest feature, though, in my opinion, is that .NET is no longer limited to Windows. It is a cross-platform implementation of the .NET Framework. Yip, we can now deploy .NET applications on Linux or Mac. It's important to note that .NET Core does not yet have everything the full .NET Framework has.

.NET Native

I'm sure you've heard hard-core C++ junkies say "if you wrote this in C++ it would be much faster". So why don't we all switch to C++ for a little performance gain? That's easy: productivity almost always trumps performance. Does it really matter to the client that it took 350ms to execute instead of 150ms? What matters is that it took only 10 minutes to develop instead of 30! Not only that, but we can also rest assured that our memory is safely managed by the CLR's Garbage Collector.

Well, .NET Native is an interesting twist to this age-old tale. It allows you to compile code directly into native (machine) code instead of IL code (which only gets converted to native code at runtime by the JIT compiler). This way it avoids needing to run on the full CLR as usual .NET applications do, but it still includes a refactored runtime for garbage collection.

Can we still step through our code and edit and continue? Fortunately, yes we can. When "debugging", the code actually runs on the CoreCLR (part of .NET Core) and is not natively compiled. This also prevents extended compilation times each time we debug.

Some benefits of using .NET Native:

  • Faster startup times (the JIT doesn't need to convert to native code at runtime)
  • Smaller memory footprints (optimizations made to chuck out what we won't need at runtime)
  • C++ compilation with C# productivity.

Of course this coin also has 2 sides. Limitations:

  • Must compile for a specific architecture (since the JIT used to handle this, we must now make both x86 and x64 builds)
  • Limited (currently) to Windows Store development

.NET 2015

.NET 2015 is an umbrella term for these new .NET "components":



What's new on the compiler forefront, and why should we care? Whether you're indifferent to understanding the different compilers and their benefits or care deeply about the matter, knowing what's new and what it means for development is important. So what is new?

  • Roslyn
  • RyuJIT
  • .NET Native Compilation

Before I jump into these, we very briefly need to highlight how a .NET application compiles and executes.

  1. We write some C#
  2. Compile. This runs our code through a compiler (the C# compiler in our case – csc.exe), which outputs IL (Intermediate Language) code.
  3. When running our application, the JIT compiler (part of the CLR) converts IL code to native code as needed
  4. Native code is executed and cached (in memory)


This image was taken from this blog, which does a fantastic job at explaining the basics of the JIT compiler.


Roslyn

Roslyn is a rewrite (from the ground up) of the C# and VB.NET compilers. It's an open-source solution, allowing us as developers not only to view how things get compiled, but also to get our hands dirty customizing compilation (if needed). The compilers have been written in their own languages (the C# compiler code is C# and the VB.NET compiler code is VB.NET).

Probably the most important feature Roslyn brings to the table is a set of APIs that allow us to create some interesting things, such as customized IntelliSense or refactoring. The API allows us to do static analysis, which means we can analyse code without actually having to execute it. Roslyn can also compile code "on-the-fly". This means you don't need to recompile code before running it again, as the Roslyn compiler will do this for you in memory.
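As a loose analogy only (in JavaScript rather than via the Roslyn API), compiling source text at runtime looks like this: a string of code becomes callable without a separate build step.

```javascript
// Loose analogy to "on-the-fly" compilation; this is NOT Roslyn itself.
// new Function compiles a source string into a callable at runtime.
var source = "return a + b;";
var add = new Function("a", "b", source);

console.log(add(2, 3)); // 5
```

Roslyn does the real equivalent for C#: hand it source text and it produces a compilation in memory, no csc.exe round-trip to disk required.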

A silly refactoring example might be, in Visual Studio, clicking on a global variable, hitting Ctrl + . and then choosing our custom "Convert to Property". Our code written with the Roslyn API would then grab the variable, analyse it, perhaps adapt it to some naming standard and convert it to a property. We could then easily build and deploy our refactoring tool to NuGet, allowing others to easily download and use it.

Although Roslyn is probably not something most developers will get their hands dirty with, it will certainly open a flood-gate for Visual Studio productivity tools and extensions.


RyuJIT

Although JIT compilers are nothing new, the next-generation 64-bit JIT compiler for .NET has been released and dubbed RyuJIT. Its performance is a lot better than the previous 64-bit JIT compiler's. The heaviest workload of a JIT compiler is at startup, as it starts converting IL code to native code and caching it in memory. RyuJIT now starts up to 30% faster.

.NET Native Compilation

We've already touched on .NET Native and what makes it so valuable. The .NET Native compiler compiles all our code (including .NET Framework and 3rd-party code) directly into machine (native) code. We've discussed the advantages of this earlier; this is just to mention the new compilation chain .NET Native uses to magically convert code to machine code.

Application Models

So we've looked at the new frameworks and touched on the new compilers. There's one more area that also boasts change: the application model. The one making the most noise, IMO, is ASP.NET 5, but UWP for Windows 10 is still worthy of some attention:

  • Universal Windows Applications (UWP)
  • ASP.NET 5.

Universal Windows Applications (UWP)

UWP for Windows 10 has recently been released. What is UWP? Basically, it allows us to develop a single application which can be deployed to a whole range of Windows devices: Desktop, Mobile, Xbox, Surface Hub etc.

The great thing is UWP supports quite a variety of languages: C++, C#, VB, JavaScript, HTML, XAML. So whether you're from a web background (and familiar with HTML and JavaScript) or a WPF background (XAML and C#), you'll be able to comfortably develop apps to your strengths.

The biggest feature for me is that UWP is now optimized by the .NET Native runtime (I won't go through the benefits of this again; see the .NET frameworks section for why this is cool).


ASP.NET 5

ASP.NET 5 (previously referred to as ASP.NET vNext) has been released and boasts some great and interesting features and changes. First off, a lot has changed, and touching on all the changes is out of the scope of this article. I'll mention some things that stood out for me personally.

  • First and top of my list, ASP.NET 5 can run both on the full .NET Framework and on .NET Core. Running on .NET Core means it's now possible to host our sites on OS X or Linux.
  • ASP.NET no longer supports Web Forms, only ASP.NET MVC
  • No more VB yet? Only C# is supported at the moment.
  • Some great TagHelpers, which are closer to pure HTML, to be used instead of the usual Razor HtmlHelpers.
  • Support for some popular client-side tools such as GruntJS, NPM and Bower.
  • Built-in support for Dependency Injection

Plenty more info and goodies can be found on the ASP.NET 5 site.


Tying all the new features together, we have a new IDE: Visual Studio 2015.

Visual Studio 2015

Visual Studio 2015 RTM has been out since mid-to-late July (I think), and using it for a couple of weeks I've noticed some cool features:

  • We can now debug lambda expressions. This is VERY cool. (Quick-watch a collection, run a LINQ query and get results immediately.)
  • Visual Studio has built-in support for Cordova. Previously we needed to compile from the command line.
  • When running from one breakpoint to another, the elapsed time shows. (No more manual timer code to check how long a method took.)
  • Compiler support for C# 6 (and VB 14, of course).
  • VS Premium and Ultimate have merged into Enterprise. So if you previously had a Premium account, you'll now get "upgraded" to Enterprise. This finally allows use of the CodeLens feature (around since VS 2013 already).
  • Includes a built-in Android emulator which can be used to debug Xamarin / Cordova apps.


Seeing the effort and improvements Microsoft has put into some new products recently, and their shift towards a more open-source ecosystem and cross-platform intentions, I believe there are exciting times ahead. At least for the next while, in my opinion, it's looking both promising and safe to be a .NET developer!