Posted in .NET Development

Machine Learning – Tic Tac Toe in C#

A while back I created a Tic Tac Toe game in Python and trained an opponent (or bot) using an Artificial Neural Network (ANN). At that stage I really wanted to try out some reinforcement learning, but since it was a pet project on the side and I’m not as comfortable with Python, I never got around to it.

So… recently I thought, let me accept the challenge, but do it in C#. It certainly was a lot easier for me to implement in .NET. I also finally got my hands dirty with Blazor WebAssembly, which was great. Being able to run client-side code in C# (no JavaScript) is really a revelation. There were some limitations with running the MiniMax algorithm in WebAssembly on the client side, which you can check out in the Part 2 video (below).

I’m happy to have finally wrapped this up, so I hope you enjoy it as much as I’ve enjoyed making it.

Play the game here: https://tictactoe.filteredcode.com/
Code on GitHub: https://github.com/NielsFilter/TicTacToe.Net
YouTube videos: filteredCode – TicTacToe AI in C# Playlist

Part 1: Creating the game in C#

Creating the game and laying the foundations to create AI bots

Part 2: Creating a MiniMax algorithm bot

Creating a MiniMax algorithm bot

Part 3: Creating the Q-Learning bot

Creating a Reinforcement Learning (Q-Learning) bot

More bots coming?

My goal was Reinforcement Learning and it worked out really well. Now that I’ve ticked that box I’m happy to leave it as is (for now). All this is done in my own private capacity and spare time is a luxury I don’t have too much of at the moment.

But…

If I were to pick this up again in the future, 2 ideas that I wanted to play around with are:

  1. Supervised learning algorithm – Linear Regression (or something similar)
    • The thought is to capture all states from other games and “featurize” the various states. For instance features like “Occupies Middle”, “2 of yours with an empty space” (naming is hard…)
    • With decent features defined, the plan would be to make use of ML.Net to train
  2. Use a Neural Network to solve
    • Not as important to me since I’ve done this in Python already
    • But it would be a chance to test out a package like TensorFlow.NET and see whether the feature set and level of support are good enough to do production Neural Networks purely in .NET.

Posted in .NET Development

Using docker for development

Docker has become a standard in the way we package and deploy our applications, but it can provide some benefits beyond just “deployment”.

VS Code – Remote Containers

Now this is a revolutionary concept. An isolated, repeatable, sharable development environment.

You can connect to a container in VS Code, develop and debug as if you have all the dependencies installed locally and when you’re done, simply remove the container again.

Startup dependencies with docker compose

Developing and debugging services locally can be challenging when they make use of a lot of external services.

Here I share some challenges I faced and an approach I took with Docker Compose to startup the multiple dependencies I need to develop an API service.

Posted in .NET Development, News

What’s new in .NET 6 and C# 10

I’ve been looking at some of the new .NET 6 features that are currently in preview. There are a few neat little features that I wanted to share.

Minimal API

New ASP.NET projects have barely any “boilerplate” code; a lot of the noise has been removed, leaving us with only the code that really matters.

We don’t even have a Startup class, a Program class, nor a Main() entrypoint.
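As a rough sketch of what this looks like (the route and message below are purely illustrative), an entire ASP.NET Core 6 service can now be:

```csharp
// Program.cs - the whole application. No namespace, no Startup class and
// no explicit Main(): top-level statements act as the entry point.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// "/hello" is just an illustrative route
app.MapGet("/hello", () => "Hello, Minimal API!");

app.Run();
```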

DateOnly and TimeOnly

Finally it’s now possible in C# to indicate that we are using “only” dates using DateOnly without lugging the time along with us.

There’s also TimeOnly which is slightly different to TimeSpan since it indicates a time within a day.
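A quick sketch of both types (the dates and times here are arbitrary):

```csharp
using System;

// DateOnly carries a calendar date with no time component
var date = new DateOnly(2021, 11, 8);
Console.WriteLine(date.AddDays(1)); // the next day, still no time lugged along

// TimeOnly represents a time within a 24-hour day, so arithmetic wraps
var time = new TimeOnly(13, 30);
Console.WriteLine(time.AddHours(12)); // wraps past midnight to 01:30
```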

Enumerable Chunk

Paging Lists hasn’t been too bad (Skip and Take). Paging Enumerables with yield return has been tedious: we need to get the enumerator, move through it and keep track of items in order to page / chunk them.

The new Chunk extension on Enumerable makes this much simpler
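For example (the sizes and values below are arbitrary):

```csharp
using System;
using System.Linq;

var nums = Enumerable.Range(1, 7);

// Chunk splits any IEnumerable into arrays of at most the given size,
// so the last chunk may be smaller.
foreach (var chunk in nums.Chunk(3))
{
    Console.WriteLine(string.Join(", ", chunk));
}
// Output:
// 1, 2, 3
// 4, 5, 6
// 7
```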

Global usings

This is really neat! We can now mark using directives that are repeated across all our classes as global, and then we no longer need to declare them in each file.

A great way to keep our classes lean.
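A sketch of how this looks (the file name is just a common convention, not a requirement):

```csharp
// GlobalUsings.cs - declared once for the whole project
global using System;
global using System.Linq;
global using System.Collections.Generic;

// Every other file in the project can now use Console, List<T>, LINQ etc.
// without repeating these using directives at the top.
```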

New LINQ extension methods

LINQ has always been great. Getting some fresh extension methods makes my day.

Check out these great new extension methods with the ...By suffix.

e.g. MinBy & MaxBy
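A small sketch (the data is made up):

```csharp
using System;
using System.Linq;

var people = new[]
{
    new { Name = "Ada", Age = 36 },
    new { Name = "Linus", Age = 51 },
    new { Name = "Grace", Age = 45 }
};

// Unlike Min/Max, the ...By methods return the element itself,
// not just the projected key.
var youngest = people.MinBy(p => p.Age);
var oldest = people.MaxBy(p => p.Age);

Console.WriteLine($"{youngest!.Name} / {oldest!.Name}"); // Ada / Linus
```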

Defaults with defaults

The OrDefault extension methods have been great, since we haven’t had to do if (exists) checks before getting an entry.

Even better, we now can define our own defaults rather than having to rely on the data type defaults anymore!
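For instance (the fallback value is arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var nums = new List<int>();

// Before .NET 6 we were stuck with the type's default (0 for int)
var oldStyle = nums.FirstOrDefault();

// The new .NET 6 overloads let us pass our own fallback value instead
var newStyle = nums.FirstOrDefault(-1);

Console.WriteLine($"{oldStyle} vs {newStyle}"); // 0 vs -1
```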

This really is just the tip of the iceberg

These are just a few of the great new features with .NET 6 that stood out for me. There’s a lot more coming with .NET 6.

Also, as with every major .NET upgrade of late, it comes with a plethora of performance enhancements and optimizations.

Exciting times…

Posted in .NET Development, Tutorials

IEnumerable vs IQueryable – Part 2: Practical Questions

If you’re unsure of the difference between IEnumerable and IQueryable, please read Part 1 first.

In this article we’ll take a practical Q&A approach to test your understanding of these interfaces and how they work. Try to figure each question out before looking at the answers.

Setting the stage:

  • We’ll be using Entity Framework (EF)
  • with a SQL database
  • that has 1 table “Person”.
  • Assume we have a global DbContext instance called db

Here goes, good luck…

# Question 1

  • On which line(s) does the Database get queried?
  • How many times does the database get queried?
Console.WriteLine("Some code");
IQueryable<Person> people = db.People.Where(p => p.Name == "Niels");
IQueryable<Person> activePeople = people.Where(p => p.IsActive == true);
Console.WriteLine("Some more code");

foreach (var person in people) 
{ 
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");
} 

Answer:

  • Line 6. Because of deferred execution, the database will only be hit once the IQueryable is used. This happens in the foreach loop, when we need to access the first person in the people collection.
  • Only once at Line 6. It’s perfectly valid to build on previous IQueryable variables appending filters as you go along. Once executed eventually, only 1 query will be executed, taking into account all filter expressions added.

# Question 2

  1. On which line does the Database get queried?

Console.WriteLine("Some code");
IQueryable<Person> people = db.People.Where(p => p.Name == "Niels");
IQueryable<Person> activePeople = people.Where(p => p.IsActive == true);
Console.WriteLine("Hello World");

Answer:

  • Never. Because of deferred execution, the query will not execute until it’s used. Since it’s never used, it will never execute and only remain “intent”.

# Question 3

Person table has 100 records. 60 of these records are active and 40 are inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people or 60 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.ToList();
var activePeople = allPeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
{
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");
}

Answer:

  • Line 1. ToList() forces the query to execute
  • 100 records will be returned. Line 1 executed the query and loaded 100 results to memory. Line 2 then filtered the collection in memory and created a new collection in memory with 60 records.
  • Yes there is, simply remove the ToList() in line 1. This means that allPeople will be IQueryable and finally only in the foreach (line 4) the database will be queried returning only 60 results into memory.

# Question 4

Person table has 100 records. 20 people’s names start with the letter “N”. Of these 20, 10 are active and 10 are inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people, 20 or 10 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.Where(p => p.Name.StartsWith("N"));
IEnumerable<Person> enumerablePeople = allPeople;
var activePeople = enumerablePeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
{
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");
}

Answer:

  • Line 5. Even though we cast to IEnumerable (a memory collection), deferred execution will still only execute the query once needed (in the foreach).
  • 20 records will be returned. Even though the query only executed at line 5, we indicated in line 2 that “from here on out, we will work with the collection in memory”. So once the foreach executes the query, Line 1’s query runs against the database, returning 20 results into memory. Line 3 then filters those 20 records in memory and creates a new in-memory collection with 10 records.
  • Yes there is, simply removing Line 2 would keep everything IQueryable until it’s needed in the foreach (line 4). Then the database would only return back the 10 results needed.

# Question 5

Person table has 100 records. 20 people’s names start with the letter “N”. Of these 20, 10 are active and 10 are inactive.

  1. On which line does the Database get queried?
  2. Will the database query bring back 100 people, 20 or 10 people?
  3. Is there an obvious way to improve the query?
var allPeople = db.People.Where(p => p.Name.StartsWith("N"));
IEnumerable<Person> activePeople = allPeople.Where(p => p.IsActive == true);

foreach (var person in activePeople)
{
   Console.WriteLine($"{person.Id} : {person.Name} {person.Surname}");
}

Answer:

  • Line 4. Even though we cast to IEnumerable (a memory collection), deferred execution will still only execute the query once needed (in the foreach).
  • 10 records will be returned. Even though we marked the collection at line 2 as IEnumerable, the IsActive filter is still applied to the IQueryable before it’s cast to an IEnumerable.
  • No. The query works fine. The IEnumerable at Line 2 can be somewhat misleading, so ideally we should replace it with var, but there is no performance difference.

# Question 6

  1. Will this query work?
  2. If it works. On which line does the Database get queried?
  3. If it works. Will the database return all columns in Person table or just a list of strings?
var activePeople = db.People
     .Where(p => p.IsActive == true)
     .Select(p => p.Name + " " + p.Surname);

foreach (var person in activePeople)
{
   Console.WriteLine(person);
}

Answer:

  • Yes it works
  • Line 5. Deferred execution will ensure that we only execute the query once needed (in the foreach).
  • EF will query the database with a statement something like this: SELECT Name + ' ' + Surname FROM... and a list of strings will be loaded into memory (not all the Person columns)

# Question 7

Same as Question 6, except we’ve moved the Select part into a separate method.

  1. Will this query work?
  2. If it works. On which line does the Database get queried?
  3. If it works. Will the database return all columns in Person table or just a list of strings?
var activePeople = db.People
     .Where(p => p.IsActive == true)
     .Select(p => BuildPersonName(p));

foreach (var person in activePeople)
{
   Console.WriteLine(person);
}
...

private string BuildPersonName(Person p)
{
   return p.Name + " " + p.Surname;
}

Answer:

  • No it doesn’t work. A NotSupportedException will be thrown at runtime. Even though we’re doing exactly the same as Question 6 and only moved the Select part into a separate method, this query will fail. Why? Remember that IQueryable builds up an Expression Tree from our LINQ and then translates it into a SQL query. Since our LINQ references a method in our .NET code, how would SQL map .NET functions into the SQL query? It would try to do something like this:
    SELECT BuildPersonName(p) FROM Person p... which can never work since `BuildPersonName` is a .NET function and not a SQL function.

# Question 8

Same as Question 7, except we added a .ToList() after the Where(...)

  1. Will this query work?
  2. If it works. On which line does the Database get queried?
  3. If it works. Will the database return all columns in Person table or just a list of strings?
  4. If it works. Is there an obvious way to improve the query?
var activePeople = db.People
     .Where(p => p.IsActive == true).ToList()
     .Select(p => BuildPersonName(p));

foreach (var person in activePeople)
{
   Console.WriteLine(person);
}
...

private string BuildPersonName(Person p)
{
   return p.Name + " " + p.Surname;
}

Answer:

  • Yes it works. Unlike Question 7, this is perfectly valid. Since we return the database query results after the Where, the Select part will be handled in memory and therefore our LINQ can reference .NET functions.
  • Line 2. The ToList() will execute the query with the IsActive filter and return the results to memory.
  • The database will return all columns in the Person table, since the ToList() was called before the Select
  • Yes there is. Returning all columns from Person is less optimal than returning just the appended name. Changing the code to look like Question 6 is best for performance.

 

Conclusion

The above questions tried to cover all the different ways that IEnumerable and IQueryable could be used. I believe if you understand why each query behaved the way it did in the above questions, you can figure out any query behaviour.

Posted in .NET Development, Tutorials

IEnumerable vs IQueryable – Part 1

What’s the difference between IQueryable and IEnumerable? This is probably the second most frequent question I’ve been asked (number 1 has to be understanding delegates).

The purpose of this article isn’t to formally define the interfaces, but rather to paint an easy-to-understand picture of how they differ. Then in Part 2 we get practical with 8 code snippet questions where we can test our understanding of the topic.

The Difference (Short answer)

  • IEnumerable – querying a collection in Memory
  • IQueryable – querying an External Data Source (most commonly a database)

What is IEnumerable?

The IEnumerable interface exposes a GetEnumerator method, which returns an IEnumerator that lets us iterate through the collection. In plain English: an IEnumerable is a collection of items you can loop through.

Did you know that even arrays are inherently IEnumerable? See the snippet below:

int[] nums = { 1, 2, 3, 4 };
bool isEnumerable = nums is IEnumerable<int>; // True

isEnumerable in the above code is true. This makes sense, since an array (just like IEnumerable) is a collection of items we can loop through.

What makes IEnumerable special?

The most special thing about IEnumerable is that we can query items using LINQ.

There are a bunch of LINQ methods (Where, Select, Take, OrderBy, First etc.) which are simply extension methods for the IEnumerable interface. As with all extension methods, just include the namespace (System.Linq) and the whole range of LINQ extensions becomes available to filter our collection.
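For instance, with System.Linq in scope, a plain array can be queried directly (the filter here is arbitrary):

```csharp
using System;
using System.Linq; // brings the LINQ extension methods into scope

int[] nums = { 1, 2, 3, 4 };

// Where, OrderByDescending, First etc. are all extension methods
// defined on IEnumerable<T>
var largestEven = nums.Where(n => n % 2 == 0)
                      .OrderByDescending(n => n)
                      .First();

Console.WriteLine(largestEven); // 4
```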

Something else that’s very important to understand when using IEnumerable is Deferred Execution. In short, a LINQ Query only captures the intent. It holds off as long as it can and only does the actual filtering when any of the following happens:

  • Iterate the results (e.g. foreach)
  • Call ToList
  • Get a single result from the query (e.g. .Count() or .First())

Understanding Deferred Execution is key to using IEnumerable correctly. Not understanding this can lead to unnecessary performance issues. Check out Part 2 to see if you understand it correctly.
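A small sketch of deferred execution in action (the list and filter are arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var nums = new List<int> { 1, 2, 3 };

// Only the *intent* to filter is captured here; nothing runs yet
var query = nums.Where(n => n > 1);

// An item added before enumeration is still picked up...
nums.Add(4);

// ...because the filter only executes now, when Count() enumerates the query
Console.WriteLine(query.Count()); // 3 (the items 2, 3 and 4)
```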

What is IQueryable?

Firstly IQueryable inherits from IEnumerable. This means inherently, it is also a collection of items that you can loop through. We can also write LINQ queries against an IQueryable.

IQueryable is used when querying a data source (let’s say a database).  So if we are using Entity Framework (EF), we can write a LINQ Query as follows and it will actually produce a SQL query:

[Image: an EF LINQ query and the SQL query it is translated to]

In the above, our LINQ Query was translated to a SQL Query. When it is executed, the query will be run against our SQL database and return results to memory. Remember, the filtering does NOT happen in memory, but in the database (read this sentence again to make sure you’ve got it).

How does LINQ suddenly become SQL?

There are 2 important properties on the IQueryable interface: Expression and Provider

[Image: the IQueryable interface, showing its Expression and Provider properties]

  • Expression – This is the Expression Tree built up from the LINQ Query
  • Provider – Tells us how to translate the Expression Tree into something else

In our case (using EF with a SQL database) what happened:

  • We created a simple LINQ query
  • This built up an Expression Tree
  • The Expression Tree gets passed to the Provider
  • Our provider translates the Expression Tree into a SQL query
  • As soon as we use our results (deferred execution), the SQL Query will execute against a database.
  • The results are returned and stored into memory.
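We can peek at the captured Expression even without a database, by using AsQueryable (LINQ to Objects) as a stand-in provider:

```csharp
using System;
using System.Linq;

var nums = new[] { 1, 2, 3, 4 }.AsQueryable();

// The LINQ query builds an expression tree instead of executing immediately
IQueryable<int> query = nums.Where(n => n > 2);

// Expression holds the captured "intent"; a provider such as EF would
// translate a tree like this into SQL rather than running it in memory
Console.WriteLine(query.Expression);
```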

Great uses for IQueryable

Think about the way IQueryable works for a moment. Let’s say we have a custom Data Source, like a file which appends data with some separators we defined. If we find ourselves constantly reading these files and sifting through the text to get hold of data, we could instead use IQueryables and create our own Query Provider. This would allow us to write LINQ queries to get data from our files.

Another popular place IQueryable is used is for ASP.NET WebAPI OData. We can expose our REST endpoints and allow the person using our Web Service to filter only the data they need without pulling all data down to the client first. OData is basically just a standard that allows us to use URLs to filter specific data.

Example: Let’s say our REST service returns a list of 100 000 People: (http://mysite.com/People). But in our app we only want the people whose surnames contain the search text “Filter”.

Without the power of IQueryable and OData, we would either have to:

  • Pull all 100 000 people down to our client and then locally in memory filter for those 10 people with surname “Filter” that we actually need.
  • Or create an endpoint specifically for searching for people by surname, passing a query string parameter “Filter”.

Neither of these are great. But using Web API with OData, we could create a controller that returns IQueryable<Person> and then allow our app to:

  • Send a custom URL: http://mysite.com/People?$filter=contains(Surname,'Filter')
  • On the server, the IQueryable Expression Tree is built up from the OData URL
  • The Provider translates the Expression Tree to a SQL Query
  • The SQL executes against the database only getting 10 items from it
  • These 10 items are returned to app as the client’s requested format (e.g. JSON)
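On the server side, the controller itself can stay tiny. As a rough sketch (the class, context and property names are illustrative; this assumes the ASP.NET Web API OData package):

```csharp
// Hypothetical Web API OData controller (names are illustrative).
// Returning IQueryable<Person> with [EnableQuery] lets OData append the
// URL's $filter to the expression tree before the query hits the database.
public class PeopleController : ODataController
{
    private readonly MyDbContext _db = new MyDbContext();

    [EnableQuery]
    public IQueryable<Person> Get()
    {
        return _db.People;
    }
}
```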

So with the power of IQueryable and OData, we indirectly queried the database via a URL, without having to write server code, and we didn’t have to pull data we did not need (less bandwidth, less server processing and a minimal client memory footprint).

Side note: LINQ Query Syntax vs Extension Methods

Not directly related to the topic, but a question I’ve been asked several times as well: is it better to use Query Syntax or Extension Methods?

Query Syntax:

var result = from n in nums
             where n > 2
             select n;

Extension Methods:

var result = nums.Where(n => n > 2);

They both compile down to the Extension Methods in the end. The Query Syntax is simply a language feature added to simplify complex queries. Use whichever is more readable and maintainable for you.

I prefer to use Extension Methods for short queries and Query Syntax for complex queries.

Conclusion

If you missed everything, just remember this:

  • IEnumerable – queries a collection in Memory
  • IQueryable – queries an External Data Source (most commonly a database)

If you are comfortable with these interfaces, go to Part 2 and test yourself with 8 quick code snippet questions on the topic.

Posted in .NET Development, Architecture

Should my code be “Technical” or “Domain” focused

  > How do we structure our solution?
  > What do we name our files?
  > How do we organize the folders in the project?
  > How do we structure our code regions?

It’s probably safe to say we’ve all sat with these questions and still do every time we expand our projects or create new ones.

The big question that this article addresses is whether we should organize our code based on the “Domain” or rather on “Technical” implementations.

Let’s quickly define both and see which is better.

Technical Focus

This approach organizes code with a technical or functional focus. This is a more traditional way of organizing an application, but still very much in use today. Let’s see how this would look practically

Code Regions

Regions are defined according to the functional implementation. If it’s a method and it’s public it goes to Public Methods, regardless of what the method does.

[Image: code regions grouped by technical function]

Project Layout

For example, when creating an MVC application, File -> New Project lays out the folders with a technical focus. If you create a View, regardless of what it does, it goes into the Views folder.

[Image: MVC project folders grouped by technical function]

Solution Architecture

The traditional layered architecture is a very common practice. This approach organizes the projects according to the function. If I have a Business Logic or Service class, it will go into the Domain project, regardless of what it does.

[Image: layered solution architecture organized by technical function]

In short it’s a “What it IS” approach

You’ll see that in each of the above cases, we’ve organised according to what something IS and not what it DOES. So if we’re developing a Hospital application, a Restaurant Management system, or even a Live Sport Scoring dashboard, the structure for these vastly different domains will look almost identical.

Domain Focus

This approach organizes code with a domain or business focus. The focus on the domain has definitely been popularized by designs such as DDD, TDD, SOA, Microservices etc. Let’s see how this would look practically:

Code Regions

Regions are defined according to the domain implementation. Anything related to “Admitting a patient to the Hospital” will go in the Admit Patient region, regardless of what it is.

[Image: code regions grouped by domain]

Project Layout

Taking the MVC example mentioned earlier for Project Layout, we would now see folders according to the specific domain. If we create something that is related to “customer feedback”, it would go in the CustomerFeedback folder, regardless of what it is (view, controller, script etc.)

[Image: MVC project folders grouped by domain]

Solution Architecture

Architecture would be based around a type of SOA or Microservices approach, where each domain exists independently in its own project. If we have a new domain in a live sport scoring app, such as “Cricket”, we would create a new project for Cricket, and everything related to it goes in there regardless of what it is.

[Image: solution architecture organized by domain]

In short it’s a “What it DOES” approach

You’ll see that in each of the above cases, we’ve organised according to what something DOES and not what it IS. So once again, if we’re developing a Hospital application, a Restaurant Management system and a Live Sport Scoring dashboard, the structure for these vastly different domains will look completely different.

So which is best?

Firstly, let’s just put it out there that there’s a “3rd approach” as well: a hybrid between the two. For example, we could have a Properties region (which is technical) and then an Admit Patient region (which is domain) for all domain-related methods.

So which is best? Well let’s see…

Why Technical is better than Domain

1. Every project’s layout and all page regions are identical.

We as developers are often very technically oriented, so this would feel right at home as we can feel in control even if we’re clueless about the domain.

2. Fewer pieces

Since there are only so many technicalities within a project, once we’ve grouped by them, the number of regions, folders or projects will never grow.

3. Layer specific skills or roles

If the development team’s roles in a project are technical-specific, this approach is great. Each developer has their specific folder or project which they work on and maintain. For example you have one developer only creating views, another only doing domain specific validations, another only focusing on data access etc.

Why Domain is better than Technical

1. We’re solving business problems

As technical as we developers can be, at the end of the day, if we’re not solving domain-specific problems, we’re failing as software developers. Since business is our core and the technical is only the tool to get us there, organizing code, folders and projects by domain makes much more sense.

2. Scales better

When the application expands or the scope widens, it often means that the new implementations don’t affect or bloat existing code as each domain is “isolated” from the next  (closer adherence to the Single Responsibility and Open/closed principles).

3. Everything is together

Often developers are responsible for all, or at least most, layers of the technical implementation. If we had to, for instance, expand our Live Sport Scoring web dashboard to include tennis, we could easily end up working with data access code, business rules and validations, view models, views, scripts, styles, controllers etc., and those are just for a typical web application. We could easily have a few more.

The point is, we often work with all of these while solving a single domain problem. So if, for example, we had a tennis folder where our tennis-specific scripts, styles, views, controllers etc. lived together, that would already be much more productive.

4. Reusable

This only really affects architecture, but if a project is built and isolated by domain, it becomes reusable by different applications on its own. In an enterprise environment, this is really useful.

For example, suppose a large corporate business has internal procurement rules and procedures, but many different systems for its departments (the cafeteria, HR, finances etc.). An SOA-type approach would enable you to have one project that handles all the procurement procedures, and every flavour of application can go through this procurement service, ensuring that the correct (and the same) procedures are used for every procurement in every department.

Conclusion

So I haven’t yet said which one is best. For me personally, my bias definitely lies more with organizing projects around the domain.

Once again, there is no silver-bullet answer or solution, but remember that there most definitely is a wrong approach for a specific project or problem. Here are some questions we should ask when testing our approach against existing systems:

  • Are there any areas where we suffer under lack of productivity?
  • If so, would a different approach be better?
  • If so, would changing the approach be too great an adjustment for the benefits it would provide?

But the ultimate questions really are:

  • Are the business needs currently being met?
  • And are the developers happy and in consensus with the approach?

As the good old saying goes: “Don’t fix something that’s not broken”.

I’d love to hear thoughts from your experience with either approach, and any opinions, shortfalls or benefits you’ve experienced.

Posted in .NET Development, Nugget, Tutorials

ASP.NET Nuggets – Tag Helpers

In ASP.NET Core 1.0 MVC (previously referred to as MVC 6) they’ve introduced Tag Helpers, which replace the old Html Helpers. The idea is that we create standard Html markup but still allow the server to “enrich” this markup without being too obtrusive.

The old way (Html Helpers)

Let’s say we need to create a form, that posts data to the Save action on a PatientController. We have an html form, a label and input TextBox:

using (Html.BeginForm("Save", "Patient", FormMethod.Post, new { @class = "form-control", data_extraInfo = "myextrainfo" }))
{
   @Html.LabelFor(x => x.Name, "First Name", new { @class = "control-label" })
   @Html.EditorFor(x => x.Name, new { htmlAttributes = new { @class = "form-control" } })
}

Here’s some more code. We have 2 different anchor tags: the first is incorrect and the second correct. What should happen is that we create a link saying “Go Back”, which calls the GoBack action on the PatientController with PatientID as a parameter.

//Wrong
@Html.ActionLink("Go Back", "GoBack", "Patient", new { PatientID = Model.ID })

//Right
@Html.ActionLink("Go Back", "GoBack", "Patient", new { PatientID = Model.ID }, null)

There are several difficulties with this code above:

  1. We have no idea exactly how the html actually renders (we have to run and inspect).
  2. The closing form tag is a curly brace and in a large page, it’s difficult to tell if the curly brace we see closes the form or is it actually to close a `loop` or `if` statement.
  3. Simple Html attributes need to be created as anonymous types (not transparent).
  4. Since anonymous Html attributes are C# anon types, some attributes conflict with reserved C# keywords (such as class has to become @class).
  5. If we want some data-dash attributes for our client-side code to use, we have to use underscores as we can’t use dashes in C# variables.
  6. The `LabelFor` expects an htmlAttributes parameter, so we say `new { @class = "…" }`, but the `EditorFor` expects additionalViewData, so we’d have to nest the Html attributes like this: `new { htmlAttributes = new { @class = "…" } }`. Certainly a very error-prone approach.
  7. In the 2nd code snippet we can see how adding the null parameter at the end makes the action link behave correctly. The first call uses a different overload that omits the controller, so the “Patient” string is incorrectly passed through as `RouteData` and the `RouteData` as `HtmlAttributes` (so easy to get wrong, since it compiles fine).

The new way (Tag Helpers)


Html Helpers get the work done, but there’s now a much more efficient way. Here’s the same result using Tag Helpers:

<form asp-controller="Patient" asp-action="Save" method="post" class="form-control" data-extraInfo="myextrainfo">
   <label asp-for="Name" class="control-label">First Name</label>
   <input asp-for="Name" class="form-control" />
</form>

And here’s the action link using Tag Helpers:

<a asp-controller="Patient" asp-action="GoBack" asp-route-PatientID="@Model.PatientID">Go Back</a>

You’ll notice in the above 2 snippets, we’ve simply written standard Html Markup and the server enriched parts prefixed with asp-. Introducing Tag Helpers has helped us overcome all 7 of the difficulties mentioned earlier.

The beauty of Tag Helpers is that they’re truly WYSIWYG (What you see is what you get). Now we have the benefit of enriching our Html with server code, but still just write Html.

Posted in .NET Development, News

Exciting times for .NET developers

It’s definitely a good time to be a .NET developer. Microsoft has been around for a very long time and has often been labelled (rightfully, I suppose) as “slow” and “closed” in its approach, isolating its products and services solely to users on its platform. But this has changed drastically in recent years. There are many reasons to be excited.

They’ve gone Agile

Don’t believe me? See this interesting article from Steve Denning on forbes.com. A company of 128 000 employees not only adopting the agile approach but doing so very successfully is no small feat.

Much of their recent development is completely open-source on GitHub. Now anyone can see their progress, use or test pre-releases, provide feedback, or even modify code and submit it for review and approval. The earlier you get feedback on a product, the more solid the foundation and the sooner you end up with a stable release.

.NET Core is Cross-platform

Yip, you can now host your ASP.NET Core 1.0 web site on anything from a Mac to Linux or even a Raspberry Pi. How is this possible? .NET Core has been built to be completely modular, and the .NET assemblies can be deployed as NuGet packages without having to “install” the framework first. As for the runtime, .NET Core has what’s called the DNX, which hosts the application on any of the mentioned platforms and includes the CoreCLR, so we don’t lose the managed goodies like garbage collection.
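As a rough illustration of that modularity, the DNX-era project file (project.json, since replaced by the csproj format) pulled the framework in as ordinary NuGet packages. Package names and versions below are illustrative of that pre-release period:

```json
{
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final"
  },
  "frameworks": {
    "dnxcore50": { }
  }
}
```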


Here are some other ways in which doors have opened for developers from vastly different technology backgrounds:

  • Visual Studio Code is a free, lightweight code editor that runs on Windows, OS X or Linux
  • There is built-in tooling in VS for building cross-platform hybrid Cordova mobile apps (TACO), so no more command-line compiling as in the past.
  • Native Windows Mobile or Store apps (UWP) can also be written with HTML and JavaScript (this enables pretty much every web developer to create native Windows apps without the steep learning curve of XAML and C#)
  • Visual Studio has first-class support for GitHub source control directly from the IDE
  • Azure has support for pretty much any popular platform, development technology, source control system, etc.
  • VS also has built-in support for popular task runners such as Gulp or Grunt and package managers such as Bower and npm
  • If you prefer creating sites with NodeJS, VS even has tooling for that
  • Even though this has been around for quite some time, if you come from a different language background such as Python or Ruby, you can create Desktop or Web projects in VS with these. For example, it blew me away that you can create a WPF application with a XAML front-end and Python code-behind. (This makes use of .NET’s DLR, which bridges the gap by allowing dynamically typed languages such as Python to run on the .NET framework.)

The point to take from this is that Microsoft’s focus is no longer an attempt at some form of monopoly, but on creating platforms and tools that invite different developers to freely use their products, tools and frameworks (and I assume the goal is ultimately to get them onto Azure).

They went big with Azure

Microsoft is really putting a lot more emphasis on their cloud platform, Azure.

It’s also a great platform for allowing a local network to move to the cloud using their “Infrastructure as a Service” (IaaS) or even “Platform as a Service” (PaaS) offerings. This obviously saves cost and time spent on hardware and software maintenance, updates, hotfixes etc.

The consumption payment model “pay for what you use” is really attractive especially for start-ups and allows easy and flexible scaling. I’ve got a couple of tiny prototype applications running on Azure at the moment and so far, everything’s still free because of the low traffic.

Starting fresh

Haven’t we all had those projects where our great designs or approaches seem to get in the way years down the line as things change?

This is interesting, because if there’s any company that has years of backward compatibility caked into their software, things they’d rather have done differently as times changed and the way their APIs got used changed, it’s Microsoft. Backward compatibility means stability, but it also often means a lack of performance and scalability over time (especially if you’re still supporting legacy APIs from a decade ago).

Someone at Microsoft was bold enough to make the call for some rewrites. Off the top of my head, these are things they’ve recently rewritten completely from the ground up:

  • The C# Compiler (Roslyn)
  • .NET Core
  • ASP.NET Core 1.0
  • Entity Framework Core 1.0

These are only the ones I know about, and they’re not small either. Besides Roslyn, none of them are directly “backward compatible”, but rather “conceptually” compatible: existing concepts carry over to the new frameworks rather than code simply being ported as is.

In case you were wondering, ASP.NET Core 1.0 was initially called ASP.NET vNext and then became ASP.NET 5 with MVC 6, running on .NET Core 5 with EF 7. Now that’s a mouthful, so last week they announced it’s been renamed to Core 1.0 (it makes sense for a rewrite to start again at 1.0). So at least for now, it’s referred to as:

  • ASP.NET Core 1.0
  • .NET Core 1.0
  • Entity Framework Core 1.0

Performance matters

It’s no longer fair to label Microsoft products as slow. A lot of smart people have put much effort into reducing memory footprints and optimizing performance. To name a few performance benefits I’ve picked up on recently as a developer:

  • If you’re running .NET Native (such as UWP apps) you get the performance of C++ and the productivity of managed C#
  • The RyuJIT compiler [link to other article] means your app will just be a bit faster without doing anything, especially the start-up times.
  • And here’s my favourite: ASP.NET Core 1.0 benchmarks when compared to the NodeJS web stack (which is built on Google’s V8 engine).
    • On a Linux server ASP.NET Core is 2.3x faster
    • On a Windows server, it’s more than 8x faster with 1.18 million requests per second!


Want to see some code?

I’ve been exploring and keeping an eye on ASP.NET Core 1.0 as it goes through the pre-release phases. I’ve personally found it to be quite a big change from ASP.NET 4.6, and I hope to share a few nuggets on some great features I’ve enjoyed, when I get the time.

Posted in .NET Development, Pitfalls

Switching languages – Common mistakes

These days I generally work with C#, VB.Net, JavaScript and SQL. Switching between the different languages with their constructs has caught me a few times with subtle bugs. I thought I’d post a few simple little “gotchas” I’ve encountered.

VB.Net Nothing is not C# null

Most of my development experience is in C#, so this one was a little strange when I encountered it. Nothing is NOT the same as null. Nothing in VB.Net is actually the same as default(T) in C#.

'This is perfectly legal in VB.Net
Dim myGuid As Guid = Nothing
//Whilst this will not compile in C#
Guid myGuid = null;

// The actual conversion of the Nothing to C# is
Guid myGuid = default(Guid); // Which is the same as Guid.Empty

So here’s the gotcha that caught me:

Dim myGuid As Guid = Nothing
If myGuid = Nothing Then
'// This is True
End If
If IsNothing(myGuid) Then
'// This is False, careful!
End If

The first check, = Nothing, tests whether myGuid = default(Guid), which it does. The one that caught me was IsNothing, which at first glance should do the same, but it doesn’t. That’s because the IsNothing method is only intended for reference types. So IsNothing is actually the closest equivalent to the C# == null check.

From MSDN:

IsNothing is intended to work on reference types. A value type cannot hold a value of Nothing and reverts to its default value if you assign Nothing to it. If you supply a value type in Expression, IsNothing always returns False.

JavaScript boolean comparison

A simple gotcha that has unfortunately caught me more than once is a boolean comparison with a string value. Let’s say we read a hidden input field value which holds a boolean value and do a simple check on it:

var isValid = $("#hiddenValid").val();
if(isValid) {
   // ALWAYS TRUE, as any non-empty string (even "false") is truthy
}

if(isValid == true) {
   // ALWAYS FALSE, as we're comparing a string with a bool
}

if(isValid == "true") {
   // CORRECT
}

This is actually a very obvious mistake, as we’re comparing different types, but it isn’t so easy to track down once it’s in the code. Working mostly in C#, the first 2 checks look perfectly correct at first glance.
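One way to avoid this class of bug (a sketch, not the only approach; the helper name is made up) is to normalize the string to a real boolean in one place and only compare booleans after that:

```javascript
// Convert the hidden field's string value to a real boolean once, up front.
// Handles casing and surrounding whitespace; anything other than "true" is false.
function toBool(value) {
  return String(value).trim().toLowerCase() === "true";
}

var isValid = toBool("True ");   // e.g. the raw value read from the hidden input
if (isValid) {
  // Behaves as expected, because isValid is now a genuine boolean
}
```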

SQL Server Not Equals

A few months ago, whilst debugging a report, I stumbled upon this one. Once again, it’s a relatively simple little gotcha, but not at all obvious to track down. Let’s say we have a table called Employee with the following data:

We have 7 employees. Steve and Adam are new interns and will only officially get a position once their 3 month probation is over (until then their Position is NULL).

We must produce a report listing all employees except the CEO. Sounds easy enough:


--This is wrong
SELECT * FROM Employee
WHERE Position <> 'CEO'

--This is right
SELECT * FROM Employee
WHERE (Position IS NULL OR Position <> 'CEO')

-- This is also right
SELECT * FROM Employee
WHERE ISNULL(Position, '') <> 'CEO'

The first SQL query looks perfectly fine: get everyone whose position is not equal to ‘CEO’. However, in SQL any comparison against NULL evaluates to UNKNOWN, since NULL means no value exists and we cannot do any type of comparison against a non-existent value. The WHERE clause only keeps rows that evaluate to TRUE, so the rows where Position is NULL are silently excluded.

This isn’t a SQL-specific thing; it applies to all languages that have nullable types. But in C#, for instance, a design decision was made to include null items in comparisons for ease of use (even though it’s semantically not really correct).
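To see the contrast, here’s a quick JavaScript sketch (C# behaves the same way): an ordinary inequality against null evaluates to true, so null rows are kept rather than silently dropped as in SQL. The CEO’s name is made up for the example:

```javascript
// In JS (and C#), null !== 'CEO' is true, so the interns with a null position
// survive the filter; in SQL, NULL <> 'CEO' is UNKNOWN and those rows are dropped.
var employees = [
  { name: "Kate",  position: "CEO" },
  { name: "Steve", position: null },
  { name: "Adam",  position: null }
];

var report = employees.filter(function (e) { return e.position !== "CEO"; });
// report contains Steve and Adam; only the CEO was excluded
```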

That’s it

Are there any simple gotchas or traps you’ve run into switching between languages? If so, please feel free to post in the comments.