• Fixing my blog (part 1) - Introduction

    I’ve been revisiting web accessibility. I remember first learning about it many years ago at a training workshop run by Vision Australia, back when I worked at the University of South Australia. The web has progressed a little in the last 15-odd years, but the challenge of accessibility remains. More recently, I had the opportunity to update my accessibility knowledge by attending a couple of presentations given by Larene Le Gassick (who also happens to be a fellow Microsoft MVP).

    I wondered how accessible my blog was. In theory it should be pretty good: it’s largely text with just a few images, and there’s no complicated navigation system or confusing layout. Using tools to check accessibility (and in particular, compliance with a given level of the Web Content Accessibility Guidelines (WCAG) standard) will not give you the complete picture, but it can identify some deficiencies and give you confidence that particular problems have been eliminated.

    Ross Mullen wrote a great article showing how to use the pa11y GitHub Action as part of your continuous integration workflow to automatically scan files at build time. Pa11y can use the axe-core library as one of its test runners.
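    I won’t reproduce Ross’s workflow here, but as a rough sketch of the idea (the paths and versions are illustrative, and this uses the pa11y-ci command-line tool directly rather than a packaged action), a build-time scan might look like:

```yaml
# Hypothetical workflow: scan generated HTML for accessibility issues at build time
name: accessibility
on: [push]
jobs:
  pa11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Assumes the site has already been built into ./_site
      - run: npm install --global pa11y-ci
      - run: pa11y-ci ./_site/**/*.html
```

    The job fails if pa11y reports violations, so accessibility regressions surface in the pull request rather than in production.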

    Further research brought me to Accessibility Insights - Android, browser and Windows desktop accessibility tools produced by Microsoft. From there I found that Microsoft had also made a GitHub Action (currently in development), Accessibility Insights Action, which, as I understand it, also leverages axe-core.

    The next few blog posts will cover my adventures working towards being able to run that action against my blog. I thought it would be simple, but it turns out I had some other issues with my blog that needed to be addressed along the way. Stay tuned!

  • Snapshot testing Verify.MongoDB

    Verify is a snapshot tool created by Simon Cropp. It takes inspiration from ApprovalTests and makes it easy to assert complex data models and documents (e.g. as part of a unit test).

    What I like about this technique is that if the data model or document differs from what the test expects, then not only does the test fail, but for local development it can automatically launch a familiar diff tool. One of my favourites is Beyond Compare, which makes it very easy to identify what the actual differences are.
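    As a minimal sketch of the basic idea (assuming the Verify.Xunit flavour of the library, with a hypothetical Person type), a snapshot test looks something like this:

```csharp
using System.Threading.Tasks;
using VerifyXunit;
using Xunit;

public class Person
{
    public string Name { get; set; }
    public string Team { get; set; }
}

[UsesVerify]
public class PersonTests
{
    [Fact]
    public Task SerialisesAsExpected()
    {
        var person = new Person { Name = "Ada", Team = "Engineering" };

        // The first run produces a .received. file; once you approve it,
        // the .verified. snapshot is what future runs are compared against.
        return Verifier.Verify(person);
    }
}
```

    If the serialised object ever drifts from the approved snapshot, the test fails and (locally) Verify offers to launch your configured diff tool.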

    In addition to the main Verify library, Simon and others have also created extension packages to provide additional support for more specific cases (like Verify.AspNetCore, Verify.EntityFramework, Verify.ImageSharp, Verify.NServiceBus and more).

    I was doing some work with Azure Cosmos DB using the Mongo API with .NET and thought it would be useful to be able to write some tests to capture what actual queries are being sent over the wire.

    I’d recently listened to an episode of The Unhandled Exception podcast where Dan Clarke interviewed Simon on Snapshot Testing. He gave the example of using the Verify.EntityFramework extension package to write unit tests that would validate the SQL that Entity Framework was generating.

    This made me wonder if I could do something similar for MongoDB. After reviewing how the Verify.EntityFramework extension worked, I took a closer look at the MongoDB .NET Driver library to see what hooks were available. After a bit of trial and error, I figured out how it was possible!
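    For context, the relevant hook is the command event subscription that the MongoDB .NET Driver exposes through MongoClientSettings.ClusterConfigurator. This is a rough sketch of subscribing to those events (not the actual Verify.MongoDB implementation, and the connection string is a placeholder):

```csharp
using System;
using MongoDB.Driver;
using MongoDB.Driver.Core.Events;

var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");
settings.ClusterConfigurator = cb =>
{
    // Raised for every command the driver sends, including the full BSON command document
    cb.Subscribe<CommandStartedEvent>(e =>
        Console.WriteLine($"Started: {e.CommandName} {e.Command}"));

    // Raised when the server replies successfully
    cb.Subscribe<CommandSucceededEvent>(e =>
        Console.WriteLine($"Succeeded: {e.CommandName}"));
};
var client = new MongoClient(settings);
```

    Capturing these events is what allows the exact wire-level commands and replies to be recorded and then compared against a snapshot.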

    You can write a unit test that includes code like this:

    MongoDBRecording.StartRecording();
    
    await collection.FindAsync(Builders<BsonDocument>.Filter.Eq("_id", "blah"),
        new FindOptions<BsonDocument, BsonDocument>());
        
    await Verifier.Verify("collection");
    

    The verified file would have the following content:

    {
      target: collection,
      mongo: [
        {
          Database: VerifyTests,
          Document: {
            filter: {
              _id: blah
            },
            find: docs
          },
          Type: Started,
          Command: find,
          StartTime: DateTimeOffset_1,
          OperationId: Id_1,
          RequestId: Id_2
        },
        {
          Document: {
            cursor: {
              firstBatch: [],
              id: 0,
              ns: VerifyTests.docs
            },
            ok: 1.0
          },
          Type: Succeeded,
          Command: find,
          StartTime: DateTimeOffset_2,
          OperationId: Id_1,
          RequestId: Id_2
        }
      ]
    }
    

    That’s a representation of what would be sent over the wire by the query in the test. It’s an ideal opportunity to confirm that the query is doing what you intended. For pay-per-use services like Cosmos DB, it’s critical that your queries are as efficient as possible; inefficient queries can cost you more and may end up being rate-limited.

    After confirming it worked as I’d hoped, I figured it could be something that others might find useful, so I created a NuGet package. I got in touch with Simon to find out how best to get it published on nuget.org. He was most helpful, and I’m pleased to report that the package is now available at https://www.nuget.org/packages/Verify.MongoDB/, and the source repository is at https://github.com/flcdrg/Verify.MongoDB.

    If you’re building an application that’s using the MongoDB .NET Driver then this package will help you create some useful snapshot tests.

    Check it out!

  • What's new in NDepend 2022.1

    Is it really 6 years since I last wrote about NDepend? Apparently so!

    A lot has changed between v6 and the just-released v2022.1. The changes are listed at https://www.ndepend.com/whatsnew (it’s a long list), but I suspect there are even more than that.

    NDepend remains the preeminent tool for .NET dependency analysis.

    I thought I’d put the v2022.1 release through its paces by seeing what it makes of the .NET Interactive project from GitHub.

    I loaded up NDepend and configured it to analyse just the Microsoft.DotNet.Interactive* assemblies. By default, a HTML summary report is generated:

    NDepend HTML Report

    I find the real insights come with using the Visual NDepend application to drill into different parts of the analysis. Here’s the ‘Dashboard’ view for the project.

    Visual NDepend Dashboard

    I tend to look at the Rules summary first. You can click on the number to the right of a rule status (e.g. the ‘8’ next to ‘Critical’) to show the details of the 8 critical rule violations. You can then double-click on a rule to see where in the code it has been matched.

    Once I’ve had a look at all the critical rule matches, I’ll move on to the remaining rule violations.

    It’s important to remember that NDepend is just a tool. It’s up to you to decide whether a particular rule makes sense for your codebase. You might decide that a rule should be disabled, or you can even customise a rule so its behaviour is more appropriate for your use case.

    The new ILSpy integration could be useful for some. Unfortunately, it’s not a tool I use (I have the dotUltimate tools from JetBrains, so I tend to use dotPeek for decompiling).

    Another nice enhancement is that analysis is now done out-of-process by default - whether you’re using Visual NDepend, or Visual Studio. This is a good thing, especially for Visual Studio. The more things that run in separate processes, the less chance something adversely affects Visual Studio.

    A testament to the improved performance of NDepend is that it was quite tricky to take this screenshot showing the separate analysis process, as it had completed very quickly!

    NDepend processes in Task Manager

    There are also some nice improvements to .NET 6 support, with better handling of top-level statements and record types.

    Finally, the ability to export query results to HTML, XML or JSON (in addition to other formats) either through the UI, or through NDepend’s API could be really useful if you want to process the query results outside of NDepend. Maybe generate some custom reports that could be part of your continuous integration build pipeline.

    In closing, if you’re wanting to analyse your .NET code and learn more about how it is structured, check out NDepend. I’ll try not to leave it so long until my next post on NDepend!

    Disclosure: I received a complimentary license for NDepend as a Microsoft MVP award recipient.