-
Fixing my blog (part 2) - Broken links
My first attempt to use the Accessibility Insights Action returned some actionable results, but it also crashed with an error. Not a great experience, but looking closer I realised it seemed to be checking an external link that was timing out.
Hmm. I’ve been blogging for a few years. I guess there’s the chance that the odd link might be broken? Maybe I should do something about that first, and maybe I should automate it too.
A bit of searching turned up The Lychee Broken Link Checker GitHub Action. It turns out ‘Lychee’ is the name of the underlying link checker engine.
Let’s create a GitHub workflow that uses this action and see what we get. I came up with something like this:
```yaml
name: Links

on:
  workflow_dispatch:

jobs:
  linkChecker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Link Checker
        id: lychee
        uses: lycheeverse/[email protected]
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
        with:
          args: '--verbose ./_posts/**/*.md --exclude-mail --scheme http https'
          format: json
          output: ./lychee/links.json
          fail: false
```
Results are reported to the workflow output, but also saved to a local file. The contents of this file look similar to this:
{ "total": 5262, "successful": 4063, "failures": 1037, "unknown": 5, "timeouts": 125, "redirects": 0, "excludes": 32, "errors": 0, "cached": 381, "fail_map": { "./_posts/2009/2009-09-13-tech-ed-2009-thursday.md": [ { "url": "http://blog.spencen.com/", "status": "Timeout" }, { "url": "http://notgartner.wordpress.com/", "status": "Cached: Error (cached)" }, { "url": "http://adamcogan.spaces.live.com/", "status": "Failed: Network error" } ], "./_posts/2010/2010-01-22-tour-down-under-2010.md": [ { "url": "http://www.cannondale.com/bikes/innovation/caad7/", "status": "Failed: Network error" }, { "url": "http://lh3.ggpht.com/", "status": "Cached: Error (cached)" }, { "url": "http://www.tourdownunder.com.au/race/stage-4", "status": "Failed: Network error" } ],
(Note that there’s a known issue where the output JSON file isn’t actually valid JSON. Hopefully that will be fixed soon.)
The fail_map contains a property for each file that has failing links, and for each of those an array of all the links that failed (and the particular error observed). Just by looking at the links, I know that some of those websites don’t exist anymore, some sites might still be up but with changed content, and some failures could be transient errors. I had no idea I had so much link rot! Good memories Nigel, Mitch and Adam (the writers of those first three old sites)!
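Because the report is machine-readable, the failures can be enumerated rather than read by eye. Here’s a minimal C# sketch of the idea, assuming the file has been made parseable first (remember the known invalid-JSON issue above); the path matches the workflow config:

```csharp
using System;
using System.IO;
using System.Text.Json;

// Walk lychee's fail_map: one property per file, each holding an array of failed links.
var json = File.ReadAllText("./lychee/links.json");
using var doc = JsonDocument.Parse(json);

foreach (var file in doc.RootElement.GetProperty("fail_map").EnumerateObject())
{
    foreach (var link in file.Value.EnumerateArray())
    {
        var url = link.GetProperty("url").GetString();
        var status = link.GetProperty("status").GetString();
        Console.WriteLine($"{file.Name}: {url} ({status})");
    }
}
```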
Ok, let’s start fixing them.
But what do you replace each broken link with? Sure, some of the content might still exist at a slightly different URL, but for many the content has long gone. Except maybe it hasn’t. I remembered the Internet Archive operates the Wayback Machine. So maybe I can take each broken URL, paste it into the Wayback Machine, and if there’s a match, use the archive’s URL instead.
Except I had hundreds of broken links. Maybe I could automate this as well?
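One promising building block: the Internet Archive exposes a public availability API that returns the closest archived snapshot for a given URL. A rough C# sketch of the idea (error handling omitted; the endpoint and response shape are as publicly documented):

```csharp
using System;
using System.Net.Http;
using System.Text.Json;

var client = new HttpClient();
var brokenUrl = "http://blog.spencen.com/"; // one of the failures from the report

// Ask the Wayback Machine for the closest archived snapshot of this URL.
var response = await client.GetStringAsync(
    $"https://archive.org/wayback/available?url={Uri.EscapeDataString(brokenUrl)}");

using var doc = JsonDocument.Parse(response);
if (doc.RootElement.GetProperty("archived_snapshots").TryGetProperty("closest", out var closest))
{
    // A candidate replacement, e.g. http://web.archive.org/web/2009.../http://blog.spencen.com/
    Console.WriteLine(closest.GetProperty("url").GetString());
}
```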
Find out in part 3...
-
Fixing my blog (part 1) - Introduction
I’ve been revisiting web accessibility. I first learned about accessibility many years ago at a training workshop run by Vision Australia, back when I worked at the University of South Australia. The web has progressed a little in the last 15-odd years, but the challenge of accessibility remains. More recently I had the opportunity to update my accessibility knowledge by attending a couple of presentations given by Larene Le Gassick (who also happens to be a fellow Microsoft MVP).
I wondered how accessible my blog was. Theoretically it should be pretty good, considering it is largely text with just a few images, and there shouldn’t be any complicated navigation or confusing layout. Using tools to check accessibility, and in particular compliance with a given level of the Web Content Accessibility Guidelines (WCAG) standard, will not give you the complete picture. But it can identify some deficiencies and give you confidence that particular problems have been eliminated.
Ross Mullen wrote a great article showing how to use the pa11y GitHub Action as part of your continuous integration workflow to automatically scan files at build time. Pa11y is built on the axe-core library. Further research brought me to Accessibility Insights - Android, browser and Windows desktop accessibility tools produced by Microsoft. From there I found that Microsoft had also made a GitHub Action (currently in development), the Accessibility Insights Action, which, as I understand it, also leverages axe-core.
The next few blog posts will cover my adventures working towards being able to run that action against my blog. I thought it would be simple, but it turns out I had some other issues with my blog that needed to be addressed along the way. Stay tuned!
-
Snapshot testing Verify.MongoDB
Verify is a snapshot tool created by Simon Cropp. It takes inspiration from ApprovalTests and makes it easy to assert complex data models and documents (e.g. as part of a unit test). What I like about this technique is that if the data model or document differs from what the test expects, not only does the test fail, but for local development it can automatically launch familiar diff tools. One of my favourites is Beyond Compare, which makes it very easy to identify what the actual differences are.
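To make that concrete, here’s a minimal sketch of a snapshot test (xUnit shown; the model and names are made up for illustration, and the [UsesVerify]/Verifier API reflects the VerifyXunit versions current at the time of writing):

```csharp
using System.Threading.Tasks;
using VerifyXunit;
using Xunit;

[UsesVerify]
public class PersonTests
{
    [Fact]
    public Task VerifyPerson()
    {
        // Any serialisable object can be verified, however complex.
        var person = new
        {
            GivenName = "Ada",
            FamilyName = "Lovelace",
            Skills = new[] { "maths", "engines" }
        };

        // The first run produces a *.received.* file; approving it creates the
        // *.verified.* snapshot that subsequent runs are compared against.
        return Verifier.Verify(person);
    }
}
```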
In addition to the main Verify library, Simon and others have also created extension packages that add support for more specific cases (like Verify.AspNetCore, Verify.EntityFramework, Verify.ImageSharp, Verify.NServiceBus and more). I was doing some work with Azure Cosmos DB using the Mongo API with .NET, and thought it would be useful to be able to write some tests that capture the actual queries being sent over the wire.
I’d recently listened to an episode of The Unhandled Exception podcast where Dan Clarke interviewed Simon on snapshot testing. He gave the example of using the Verify.EntityFramework extension package to write unit tests that validate the SQL that Entity Framework generates. This made me wonder if I could do something similar for MongoDB. After reviewing how the Verify.EntityFramework extension worked, I took a closer look at the MongoDB .NET Driver library to see what hooks were available. After a bit of trial and error, I figured out how to make it work! You can write a unit test that includes code like this:
```csharp
MongoDBRecording.StartRecording();

await collection.FindAsync(
    Builders<BsonDocument>.Filter.Eq("_id", "blah"),
    new FindOptions<BsonDocument, BsonDocument>());

await Verifier.Verify("collection");
```
The verified file would have the following content:
```
{
  target: collection,
  mongo: [
    {
      Database: VerifyTests,
      Document: {
        filter: {
          _id: blah
        },
        find: docs
      },
      Type: Started,
      Command: find,
      StartTime: DateTimeOffset_1,
      OperationId: Id_1,
      RequestId: Id_2
    },
    {
      Document: {
        cursor: {
          firstBatch: [],
          id: 0,
          ns: VerifyTests.docs
        },
        ok: 1.0
      },
      Type: Succeeded,
      Command: find,
      StartTime: DateTimeOffset_2,
      OperationId: Id_1,
      RequestId: Id_2
    }
  ]
}
```
That’s a representation of what would be sent over the wire by the query in the test. It’s an ideal opportunity to confirm that the query is doing what you intended. For pay-per-use services like Cosmos DB, it’s critical that your queries are as efficient as possible. Otherwise, it might cost you too much, and your queries might end up rate-limited.
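For the curious, the hook in the MongoDB .NET Driver that makes this kind of capture possible is its command-event subscription. Here’s a simplified sketch of that wiring, based on my reading of the driver API (not necessarily exactly what the package does internally):

```csharp
using System;
using MongoDB.Driver;
using MongoDB.Driver.Core.Events;

var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");

// The driver raises an event for every command it sends; subscribing here lets
// a library record commands and replies as they cross the wire.
settings.ClusterConfigurator = cb =>
{
    cb.Subscribe<CommandStartedEvent>(e =>
        Console.WriteLine($"Started: {e.CommandName} {e.Command}"));
    cb.Subscribe<CommandSucceededEvent>(e =>
        Console.WriteLine($"Succeeded: {e.CommandName} {e.Reply}"));
};

var client = new MongoClient(settings);
```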
After confirming it worked as I’d hoped, I figured it could be something that others might find useful, so I created a NuGet package. I got in touch with Simon to find out how best to get it published on nuget.org. He was most helpful, and I’m pleased to report that the package is now available at https://www.nuget.org/packages/Verify.MongoDB/, and the source repository is at https://github.com/flcdrg/Verify.MongoDB.
If you’re building an application that’s using the MongoDB .NET Driver then this package will help you create some useful snapshot tests.
Check it out!