The big advantage of using self-hosted build agents with a build pipeline is that you can benefit from installing specific tools and libraries, as well as caching packages between builds.
This can also be a big disadvantage - for example, if a build before yours decided to install Node version 10, but your build assumes you have Node 14, then you’re in for a potentially nasty surprise when you add an NPM package that specifies it requires Node > 10.
An advantage of Microsoft-hosted build agents (for Azure Pipelines or GitHub Actions) is that every build job gets a fresh build environment. While they do come with pre-installed software, you're free to modify and update the environment to suit your requirements, without fear that you'll impact any subsequent build jobs.
The downside of these agents is that any prerequisites you need for your build have to be installed each time your build job runs. The Cache task might help with efficiently restoring packages, but it probably won’t have much impact on installing your toolchain.
This can be where Container Jobs come into play.
Whilst this post focuses on Azure Pipelines, GitHub Actions supports running jobs in a container too, so the same principles apply.
Containers provide isolation from the host and allow you to pin specific versions of tools and dependencies. Host jobs require less initial setup and infrastructure to maintain.
You could use a public Docker image, like `ubuntu:20.04`, or you could create your own custom image that includes additional prerequisites.

To make a job into a container job, you add a `container` property, like this:

```yaml
container: ubuntu:20.04

steps:
- script: printenv
```
With this in place, when the job starts, the image is automatically downloaded, a container using the image is started, and (by default) all the steps for the job are run in the context of the container.
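If only some steps should run in the container, Azure Pipelines also lets an individual step opt out via a step-level `target` property. A sketch, reusing the same `ubuntu:20.04` image:

```yaml
container: ubuntu:20.04

steps:
# Runs inside the ubuntu:20.04 container (the default for a container job)
- script: printenv

# Runs on the agent host instead of inside the container
- script: docker --version
  target: host
```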
Note that it's now up to you to ensure that the image you've selected for your container job has all the tools required by all the tasks in the job. For example, the `ubuntu:20.04` image doesn't include PowerShell, so if I had tried to use a `PowerShell@2` task, it would fail (as it couldn't find `pwsh`).
If the particular toolchain you require is closely tied to the source code you’re building (eg. C# 10 source code will require at least .NET 6 SDK), then I think there’s a strong argument for versioning the toolchain alongside your source code. But if your toolchain definition is in the same source code repository as your application code, how do you efficiently generate the toolchain image that can be used to build the application?
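As a sketch, a toolchain Dockerfile kept in the repository might look like this (the base image and SDK version here are illustrative, not taken from the sample project):

```dockerfile
# Dockerfile stored alongside the application source code
FROM ubuntu:20.04

# Install just enough to bootstrap the pinned .NET SDK (version is illustrative)
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates bash \
    && curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin -Version 6.0.302 -InstallDir /usr/share/dotnet \
    && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet \
    && rm -rf /var/lib/apt/lists/*
```

With this in place, bumping the SDK version happens in the same commit as the source change that requires it.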
The approach I’ve adopted is to have a pipeline with two jobs. The first job is just responsible for maintaining the toolchain image (aka building the Docker image).
The second job uses this Docker image (as a container job) to provide the environment used to build the application.
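Sketched as pipeline YAML (the registry, image and job names are illustrative; the sample project's actual pipeline differs):

```yaml
jobs:
- job: build_image
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: |
      docker build -t ghcr.io/example/toolchain:latest .
      docker push ghcr.io/example/toolchain:latest
    displayName: Build Docker image

- job: build_app
  dependsOn: build_image
  pool:
    vmImage: ubuntu-latest
  container: ghcr.io/example/toolchain:latest
  steps:
  - script: dotnet build
    displayName: Build application
```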
Building the toolchain image can be time-consuming. I want my builds to finish as quickly as possible. It would be ideal if we could somehow completely skip (or minimise) the work for this if there were no changes to the toolchain since the last time we built it.
I’ve applied two techniques to help with this:
If your agent is self-hosted, then Docker builds can benefit from layer caching. A Microsoft-hosted agent is new for every build, so there’s no layer cache from the previous build.
`docker build` has a `--cache-from` option. This allows you to reference another image that may have layers that can be used as a cache source. The ideal cache source for an image is the previous version of the same image.
For an image to be used as a cache source, it needs extra metadata included in the image when it is created. This is done by adding a build argument, `--build-arg BUILDKIT_INLINE_CACHE=1`, and ensuring we use BuildKit by defining the environment variable `DOCKER_BUILDKIT=1`.
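Putting both pieces together, the image build step might look something like this (the image name is illustrative):

```yaml
steps:
- script: |
    docker build \
      --build-arg BUILDKIT_INLINE_CACHE=1 \
      --cache-from ghcr.io/example/toolchain:latest \
      -t ghcr.io/example/toolchain:latest .
  displayName: Build Docker image
  env:
    DOCKER_BUILDKIT: 1
```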
Here's the build log showing that, because the layers in the cache were a match, they were used rather than re-executing the corresponding commands in the Dockerfile.
```
#4 importing cache manifest from ghcr.io/***/azure-pipelines-container-jobs:latest
#4 sha256:3445b7dd16dde89921d292ac2521908957fc490da1e463b78fcce347ed21c808
#4 DONE 0.9s
#5 [2/2] RUN apk add --no-cache --virtual .pipeline-deps readline linux-pam && apk --no-cache add bash sudo shadow && apk del .pipeline-deps && apk add --no-cache icu-libs krb5-libs libgcc libintl libssl1.1 libstdc++ zlib && apk add --no-cache libgdiplus --repository https://dl-3.alpinelinux.org/alpine/edge/testing/ && apk add --no-cache ca-certificates less ncurses-terminfo-base tzdata userspace-rcu curl && curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin -Version 6.0.302 -InstallDir /usr/share/dotnet && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet && apk -X https://dl-cdn.alpinelinux.org/alpine/edge/main add --no-cache lttng-ust && curl -L https://github.com/PowerShell/PowerShell/releases/download/v7.2.5/powershell-7.2.5-linux-alpine-x64.tar.gz -o /tmp/powershell.tar.gz && mkdir -p /opt/microsoft/powershell/7 && tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7 && chmod +x /opt/microsoft/powershell/7/pwsh && ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh && pwsh --version
#5 sha256:675dec3a2cb67748225cf1d9b8c87ef1218f51a4411387b5e6a272ce4955106e
#5 pulling sha256:43a07455240a0981bdafd48aacc61d292fa6920f16840ba9b0bba85a69222156
#5 pulling sha256:43a07455240a0981bdafd48aacc61d292fa6920f16840ba9b0bba85a69222156 4.6s done
#5 CACHED
#8 exporting to image
#8 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#8 exporting layers done
#8 writing image sha256:7b0a67e798b367854770516f923ab7f534bf027b7fb449765bf26a9b87001feb done
#8 naming to ghcr.io/***/azure-pipelines-container-jobs:latest done
#8 DONE 0.0s
```
Skip if nothing to do
A second thing we can do is figure out if we actually need to rebuild the image in the first place. If the files in the directory where the Dockerfile is located haven’t changed, then we can just skip all the remaining steps in the job!
This is calculated by a PowerShell script, originally from this GitHub Gist.
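The script in the sample project is PowerShell, but the idea can be sketched in shell (directory and file names here are illustrative): hash everything under the Dockerfile's directory and compare against the hash recorded by the previous build.

```shell
# Illustrative setup: a directory standing in for the Dockerfile's directory
mkdir -p toolchain-dir
echo "FROM ubuntu:20.04" > toolchain-dir/Dockerfile

# Hash file names and contents, so renames and edits both register as changes
hash_dir() {
  find "$1" -type f | sort | xargs sha256sum | sha256sum | cut -d' ' -f1
}

current=$(hash_dir toolchain-dir)
previous=$(cat last-build.hash 2>/dev/null || true)

if [ "$current" = "$previous" ]; then
  echo "No changes - skipping image build"
else
  echo "Changes detected - building image"
  echo "$current" > last-build.hash
fi
```

In a pipeline, the recorded hash would need to survive between builds (for example, as a label on the previously pushed image), since a Microsoft-hosted agent's file system does not.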
I’ve created a sample project at https://github.com/flcdrg/azure-pipelines-container-jobs that demonstrates this approach in action.
In particular, note the following files:
Some rough times I saw for the ‘Build Docker Image’ job:
| Phase | Time (seconds) |
| --- | --- |
| Initial build | 99 |
| Incremental change | 32 |
| No change | 8 |
The sample project works nicely, but if you did want to use this in a project that made use of pull request builds, you might want to adjust the logic slightly.
Docker images with the `latest` version should be preserved for use with `main` builds, or PR builds that don't modify the image.

A PR build that includes changes to the Docker image should use that modified image, but no other builds should use that image until it has been merged into `main`.
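One way to express that (a sketch only, with illustrative variable names; the sample project doesn't implement this) is to tag images from PR builds separately:

```yaml
variables:
  # PR builds get their own image tag; all other builds use 'latest'
  ${{ if eq(variables['Build.Reason'], 'PullRequest') }}:
    imageTag: pr-$(System.PullRequest.PullRequestId)
  ${{ if ne(variables['Build.Reason'], 'PullRequest') }}:
    imageTag: latest
```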
Is this right for me?
Measuring the costs and benefits of this approach is essential. If the time spent building the image, plus the time to download the image for the container job, exceeds the time to build without a container, then using a container job for performance alone doesn't make sense.
Even if you’re adopting container jobs primarily for the isolation they bring to your builds, it’s useful to understand any performance impacts on your build times.
In this post, we saw how container jobs can make a build pipeline more reliable, and some techniques to keep their impact on total build time to a minimum.
Here are my picks for Amazon Prime Day. I’ve included links to both Amazon.com and Amazon.com.au where available:
- Amazon Prime free trial amazon.com
- Western Digital 18TB WD Red Pro NAS Internal Hard Drive HDD - 7200 RPM, SATA 6 Gb/s, CMR, 256 MB Cache, 3.5” - WD181KFGX amazon.com amazon.com.au - wow 18, or even 20TB!
- Synology 2 Bay NAS DiskStation DS720+ (Diskless) amazon.com
- Microsoft Surface Laptop Studio - 14.4” Touchscreen - Intel® Core™ i7 - 32GB Memory - 1TB SSD - Platinum amazon.com
- Microsoft Surface Pro 8-13” Touchscreen - Intel® Evo Platform Core™ i7-32GB Memory - 1TB SSD - Device Only - Platinum (Latest Model) amazon.com
- And finally, make your own Lollybot with Chupa Chups Lollipops, 100 Pieces amazon.com.au or Chupa Chups Best of Mini Tube Small Lollipops, 50 Count amazon.com.au
(yes, these are all affiliate links)
I’m on leave this week and was listening to episode 835 of the RunAs Radio podcast “Updating Windows with Aria Carley” while out on my morning walk. I’ve been thinking about upgrading my main laptop to Windows 11 for a while now (Windows Update has indicated that it is compatible), but had been putting it off as my impression was that the initial release was possibly rushed out the door just a little bit early. Now that 22H2 is in the Release Preview channel, and scheduled for final release later this year, I figured it might not be too risky to give it a go given it’s had a bit more spit and polish applied.
Steps to upgrade from Windows 10 to Windows 11 Release Preview
If you’re at all cautious, make sure you have a good backup first. I verified that Synology Active Backup for Business had a current backup of this machine, and for good measure, I clicked on the Version button and locked the latest backup to preserve it in case I wanted to roll back to a known good state in the future.
From the Windows menu, launch Settings, then Windows Update and Windows Insider Programme (Yes, I’ve got the Australia/British English language settings). From here you can choose to join the Windows Insider Program. Click on Get started.
Click on Link an account
Select the account you want to use
Now you get to choose which Insider channel you want to join. I chose Release Preview but you could choose Dev Channel or Beta Channel if you prefer.
Now restart your computer!
After rebooting, you're still running Windows 10, but if you go to Windows Update again and click Check for updates, you'll now see a new section indicating that Windows 11, version 22H2 is available.
Because I’d clicked Check for updates, it automatically started to download the Cumulative Update Preview (as you can see in the image above). But clicking on the Download and install button halted that and instead Windows 11 started downloading.
Finally, the download completes and Windows Update is ready to restart to begin installing Windows 11.
After a few minutes (I grabbed some lunch at this point so I'm not sure exactly how many), you can now sign in to Windows 11. Just to confirm this, I launched `winver.exe` to check (if the centred Start menu wasn't already a clue!) that we are indeed running Windows 11.
For good measure, launch the Microsoft Store app, then Library and Get updates to bring all your store apps up to date.
If you use WSL, then run `wsl --update` to upgrade to the latest version. Before I did this, the output of `wsl --status` was:
```
Default Distribution: Ubuntu-20.04
Default Version: 2
Windows Subsystem for Linux was last updated on 28/03/2022
WSL automatic updates are on.
Kernel version: 184.108.40.206
```
After running `wsl --update`, it now displays:
```
Default Distribution: Ubuntu-20.04
Default Version: 2
WSL version: 0.61.8.0
Kernel version: 220.127.116.11
WSLg version: 1.0.39
MSRDC version: 1.2.3213
Direct3D version: 1.601.0
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22621.169
```
And with that, I'm up and running Windows 11 22621.169 (which was the latest version of Windows 11 available to the Release Preview channel at the time of writing).