.NET RuntimeIdentifier vs RuntimeIdentifiers

Thursday, 5 December 2019

A Runtime Identifier (RID) is used to identify the target platforms where a .NET Core application runs. RIDs come into play when packages contain platform-specific assets (eg. native code for Linux, or 64-bit Windows).

You can specify a single RID using the <RuntimeIdentifier> element in the project file, or multiple RIDs using <RuntimeIdentifiers>.

Many dotnet commands also accept a --runtime (or -r) argument.

According to the documentation, if you only need to specify a single runtime then using <RuntimeIdentifier> will also result in faster builds.

I’ve noticed some other subtle differences between the singular and plural forms of this element.

Let’s create a simple .NET Core console app:

md rid
cd rid
dotnet new console

By default, the csproj (named rid.csproj in this case) looks like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

</Project>

If you look inside the obj directory, you’ll find a file named rid.csproj.nuget.dgspec.json. Its contents look like this:

{
  "format": 1,
  "restore": {
    "C:\\tmp\\rid\\rid.csproj": {}
  },
  "projects": {
    "C:\\tmp\\rid\\rid.csproj": {
      "version": "1.0.0",
      "restore": {
        "projectUniqueName": "C:\\tmp\\rid\\rid.csproj",
        "projectName": "rid",
        "projectPath": "C:\\tmp\\rid\\rid.csproj",
        "packagesPath": "C:\\Users\\david\\.nuget\\packages\\",
        "outputPath": "C:\\tmp\\rid\\obj\\",
        "projectStyle": "PackageReference",
        "fallbackFolders": [
          "C:\\Program Files\\dotnet\\sdk\\NuGetFallbackFolder"
        ],
        "configFilePaths": [
          "C:\\Users\\david\\AppData\\Roaming\\NuGet\\NuGet.Config",
          "C:\\Program Files (x86)\\NuGet\\Config\\Microsoft.VisualStudio.Offline.config"
        ],
        "originalTargetFrameworks": [
          "netcoreapp3.1"
        ],
        "sources": {
          "C:\\Program Files (x86)\\Microsoft SDKs\\NuGetPackages\\": {},
          "https://api.nuget.org/v3/index.json": {}
        },
        "frameworks": {
          "netcoreapp3.1": {
            "projectReferences": {}
          }
        },
        "warningProperties": {
          "warnAsError": [
            "NU1605"
          ]
        }
      },
      "frameworks": {
        "netcoreapp3.1": {
          "imports": [
            "net461",
            "net462",
            "net47",
            "net471",
            "net472",
            "net48"
          ],
          "assetTargetFallback": true,
          "warn": true,
          "frameworkReferences": {
            "Microsoft.NETCore.App": {
              "privateAssets": "all"
            }
          },
          "runtimeIdentifierGraphPath": "C:\\Program Files\\dotnet\\sdk\\3.1.100\\RuntimeIdentifierGraph.json"
        }
      }
    }
  }
}

If you supply a runtime identifier when running restore, like dotnet restore -r win10-x64, then two extra sections are added to this file. Firstly, under the "netcoreapp3.1" node:

          "downloadDependencies": [
            {
              "name": "Microsoft.AspNetCore.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.NETCore.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.WindowsDesktop.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            }
          ]

and secondly under the second "C:\\tmp\\rid\\rid.csproj" node, a runtimes section is added:

      "runtimes": {
        "win10-x64": {
          "#import": []
        }
      }

If you instead pass in -r linux-x64 then, predictably, those entries refer to linux-x64 instead of win-x64.

Adding <RuntimeIdentifier>win10-x64</RuntimeIdentifier> to the csproj and running dotnet restore has exactly the same effect as if you specified the RID on the command line.
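
For example, the csproj would then look something like this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <RuntimeIdentifier>win10-x64</RuntimeIdentifier>
  </PropertyGroup>

</Project>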

And now running dotnet build with the RID specified results in the compiled application being created in bin\Debug\netcoreapp3.1\win10-x64. Plus, since .NET Core 3, it defaults to creating a self-contained application (so you get an .exe as well as all the dependent assemblies, allowing you to run the application on a machine that doesn’t already have the runtime installed).

It’s a slightly different story if you use <RuntimeIdentifiers> though…

You can’t specify multiple RIDs on the command line (well, actually in .NET Core 2.2 you could for restore, but not in 3). So let’s change our csproj to have <RuntimeIdentifiers>win10-x64;linux-x64</RuntimeIdentifiers> and run

dotnet restore

The dgspec.json now contains entries for both platforms, eg.

          "downloadDependencies": [
            {
              "name": "Microsoft.AspNetCore.App.Runtime.linux-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.AspNetCore.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.NETCore.App.Host.linux-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.NETCore.App.Runtime.linux-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.NETCore.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            },
            {
              "name": "Microsoft.WindowsDesktop.App.Runtime.win-x64",
              "version": "[3.1.0, 3.1.0]"
            }
          ],

and

      "runtimes": {
        "linux-x64": {
          "#import": []
        },
        "win10-x64": {
          "#import": []
        }
      }

But now if you run dotnet build, something interesting happens… there are no bin\Debug\netcoreapp3.1\win10-x64 or bin\Debug\netcoreapp3.1\linux-x64 directories like you might be expecting. Instead there’s just the regular compiled assembly in bin\Debug\netcoreapp3.1! Almost as if you’d never set a RID at all.

What you can do now, though, is build for both platforms consecutively, eg.

dotnet build -r win10-x64
dotnet build -r linux-x64

and you get self-contained builds for both the win10-x64 and linux-x64 platforms! Plus, as you’ve already done a restore, you can make the builds faster by passing in --no-restore so they don’t bother trying to restore again.

So if you’re targeting a single platform, use -r on the command line or <RuntimeIdentifier>. If you’re targeting multiple platforms, use <RuntimeIdentifiers> and then use separate restore and build steps.
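
Putting that together, a typical multi-platform workflow would look something like this:

dotnet restore
dotnet build -r win10-x64 --no-restore
dotnet build -r linux-x64 --no-restore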

A simple 'Up Next' Dashboard using PowerShell

Thursday, 28 November 2019

Last Saturday, we ran DDD Adelaide 2019. When we were setting up the venue on Friday afternoon, I realised that there was a huge flat-screen TV in the open area (behind where the registration/info desk would be located) and we hadn’t made any plans to use it.

We could just pop a copy of the DDD logo on a USB stick and probably the TV could just show that in ‘slide-show’ mode. But then I thought maybe we could go one better. Wouldn’t it be nice if you could display a simple ‘What’s on now, and what’s coming up next’ dashboard?

Ok, it’s Friday night, and I really should have been heading to bed, but I’d been inspired - now to find something that would fit the bill. A quick search of GitHub didn’t reveal anything obvious, so could I write a simple application myself to do the job?

First question - WPF, WinForms? HTML+JavaScript? They’d all do the job, but I wanted something simple that I could get done quickly! I decided I’d give PowerShell a go - and I kind of liked the idea of making it “old-school” ASCII text too.

I copied over the conference agenda data and decided a simple ordered dictionary would suffice for the data structure, using the time as the key. Then just two queries - one to find the entry whose time is now, and a second to find the entry for what’s coming up next.
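
Here’s a minimal sketch of the idea (the session names and times are just made-up placeholders, not the real agenda):

# Agenda keyed by zero-padded 24-hour start time, so string comparison keeps the order.
$agenda = [ordered]@{
    '09:00' = 'Keynote'
    '10:10' = 'Session block 1'
    '11:20' = 'Session block 2'
}

$now = (Get-Date).ToString('HH:mm')

# 'On now' is the last entry that has already started;
# 'Up next' is the first entry that starts after the current time.
$onNow  = $agenda.GetEnumerator() | Where-Object { $_.Key -le $now } | Select-Object -Last 1
$upNext = $agenda.GetEnumerator() | Where-Object { $_.Key -gt $now } | Select-Object -First 1

"On now : $($onNow.Value)"
"Up next: $($upNext.Value)"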

To help with development, I added a -test mode that sped up time and made the clock run from 7am. Later on Saturday I realised I had an ‘off by one’ bug in the query logic - the test mode was useful to validate the fix.
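
One way to implement that kind of -test switch (a hypothetical sketch, not the actual script) is to derive the displayed time from an accelerated offset:

param([switch]$Test)

# With -Test, start the clock at 7am and advance one simulated minute for
# every real second, so the whole day plays back in a matter of minutes.
$testStart = Get-Date '07:00'
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

function Get-DashboardTime {
    if ($Test) {
        $testStart.AddMinutes($stopwatch.Elapsed.TotalSeconds)
    } else {
        Get-Date
    }
}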

One extra touch - I added a ‘current time’ display and used [Console]::SetCursorPosition() to locate it in the bottom right-hand corner. While I was at it, just to be fancy, I added some colour to the ‘DDD’ bit in the title.
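
Something along these lines (again, a simplified sketch rather than the real script) puts a clock in the bottom right-hand corner:

$time = (Get-Date).ToString('HH:mm:ss')

# Position the cursor so the time string ends just inside the bottom-right corner.
[Console]::SetCursorPosition([Console]::WindowWidth - $time.Length - 1, [Console]::WindowHeight - 1)
Write-Host $time -NoNewline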

To run the dashboard, I used Windows Terminal. That allowed me to run full screen and choose a nice font size.

Dashboard in use at DDD Adelaide 2019

The dashboard worked well and I heard a few compliments that people liked it. Not bad for something whipped up in an hour!

If I revisit the script in the future, I might see if I can incorporate a simple Tweet wall - either on the right-hand side, or maybe alternating every 30 seconds. There was a lot of Twitter traffic on the day and it would have been nice to showcase that.

The source is all on GitHub. Pull requests welcome!

DDD Adelaide 2019 Summary

Sunday, 24 November 2019

Phew! DDD Adelaide 2019 is done. Yesterday we had 150 people come along to UniSA’s MOD building in the city and see 16 speakers present some awesome topics on software development.

The feedback on the day was overwhelmingly positive. I’ll be catching up with co-organiser Andrew soon to debrief and also review the comments received from attendees.

Special thanks to the gold sponsors:

Here are a few of my highlights of the day:

Some kind people brought one or two donuts to share! Donuts

Crowd

Lars Klint was our keynote speaker, kicking off the day. Lars Klint

A fantastic range of speakers, including Ming Johanson Ming Johanson

and Liam McLennan (who incidentally was actually a speaker around 10 years ago at the last DDD Adelaide) Liam McLennan

Really yummy catering provided by Food Lore. Lunchtime

Afternoon tea

… and Coffee Cart provided by B3 Coffee and sponsored by Encode Talent

It was great to see so many software developers gathered together in Adelaide. Audience

Andrew drawing the prize winners (prizes sponsored by Octopus Deploy) and closing out the day. Andrew Best

Finally, special thanks to my wife and two eldest kids who also gave up their Saturday to help out as volunteers for the day. I really appreciate their support.

Trying Docker for Windows and Linux

Saturday, 16 November 2019

I’ve been spending a bit of time trying out Docker over the past few days, with the goal of making builds more reliable and repeatable.

Docker Desktop for Windows has the ability to run in Windows mode and Linux mode. Usually that means you can only run containers of one OS at a time.

However, if you configure Docker to enable ‘Experimental’ mode, then you can actually run both platforms simultaneously.

Interestingly, when you’re in this mode and you set ‘Windows’ as the default container platform, you don’t see an extra virtual machine listed in Hyper-V.

Here’s Docker running with Linux containers: Hyper-V Manager showing Docker virtual machine

And here’s Docker with Windows containers: Hyper-V Manager showing Docker machine not running

Notice the VM is still there, but it is not running, even though a Linux container was actually being built when that screenshot was taken.

So with experimental mode on, how can Docker also be running Linux containers in Windows mode?

Currently it uses something called LCOW - Linux Containers on Windows.
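
For example, with experimental mode enabled you can be explicit about which platform an image should use via the --platform flag (alpine here is just an arbitrary small Linux image for illustration):

docker pull --platform linux alpine
docker run --platform linux alpine uname -a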

I know Docker also has preview support for WSL2. I think the plan is that once that ships (presumably with Windows 10 20H1) then Docker will be able to leverage that for Linux execution.

So in theory, if you need to spin up Linux and Windows containers, then this is the technology that will make that happen.

There’s still a few rough edges - probably why it’s all behind ‘preview’ or ‘experimental’ flags.

I hit one issue where building a Node-based Linux image, which has a step that runs yarn, resulted in a weird internal error:

Step 14/22 : RUN yarn
---> Running in eb14f055a9aa
container eb14f055a9aaa23db5f35493feec9009b775c6688e3c488b26c6880517bdd9f1 encountered an error during CreateProcess: failure in a Windows system call: Unspecified error (0x80004005)
[Event Detail: failed to run runc create/exec call for container eb14f055a9aaa23db5f35493feec9009b775c6688e3c488b26c6880517bdd9f1: exit status 1 Stack Trace:
github.com/Microsoft/opengcs/service/gcs/runtime/runc.(*container).startProcess
/go/src/github.com/Microsoft/opengcs/service/gcs/runtime/runc/runc.go:580
github.com/Microsoft/opengcs/service/gcs/runtime/runc.(*runcRuntime).runCreateCommand
/go/src/github.com/Microsoft/opengcs/service/gcs/runtime/runc/runc.go:471
github.com/Microsoft/opengcs/service/gcs/runtime/runc.(*runcRuntime).CreateContainer

No idea what’s going on there, other than maybe I’ve hit some API that isn’t implemented yet?

It’s strange, as a different container (based on a different Linux distribution) didn’t have that problem.

So it has potential, but it’s obviously a work in progress.

PowerShell ErrorAction

Tuesday, 29 October 2019

Many PowerShell cmdlets have a common parameter, ErrorAction, that can be set to one of Continue, Ignore, Inquire, SilentlyContinue, Stop or Suspend.

I’d never really understood what the difference between Ignore and SilentlyContinue was until today. I’d run the following code:

Get-Service 'NonExistantService' -ErrorAction SilentlyContinue

and then happened to look in the $Error automatic variable and noticed the following:

Get-Service : Cannot find any service with service name 'NonExistantService'.
At line:1 char:1
+ Get-Service 'NonExistantService' -ErrorAction SilentlyContinue
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (NonExistantService:String) [Get-Service], ServiceCommandException
+ FullyQualifiedErrorId : NoServiceFoundForGivenName,Microsoft.PowerShell.Commands.GetServiceCommand
 

Whereas the error from the following does not get added to $Error:

Get-Service 'NonExistantService' -ErrorAction Ignore

That’s the difference! Ignore was added in PowerShell 3.0, and quoting the documentation page “Unlike SilentlyContinue, Ignore doesn’t add the error message to the $Error automatic variable.”
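
An easy way to see the difference for yourself is to clear $Error and compare $Error.Count after each call:

$Error.Clear()
Get-Service 'NonExistantService' -ErrorAction SilentlyContinue
$Error.Count    # 1 - the error was still recorded

$Error.Clear()
Get-Service 'NonExistantService' -ErrorAction Ignore
$Error.Count    # 0 - nothing was added to $Error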

So Ignore is probably a better option for most cases where ignoring the error is fine.