  • Snapshot testing with Verify.MongoDB

    Verify is a snapshot testing tool created by Simon Cropp. It takes inspiration from ApprovalTests and makes it easy to assert complex data models and documents (e.g. as part of a unit test).

    What I like about this technique is that if the data model or document differs from what the test expects, not only does the test fail, but for local development it can automatically launch familiar diff tools. One of my favourites is Beyond Compare, which makes it very easy to identify what the actual differences are.

    In addition to the main Verify library, Simon and others have also created extension packages to provide additional support for more specific cases (like Verify.AspNetCore, Verify.EntityFramework, Verify.ImageSharp, Verify.NServiceBus and more).

    I was doing some work with Azure Cosmos DB using the Mongo API with .NET and thought it would be useful to be able to write some tests to capture what actual queries are being sent over the wire.

    I’d recently listened to an episode of The Unhandled Exception podcast where Dan Clarke interviewed Simon on Snapshot Testing. He gave the example of using the Verify.EntityFramework extension package to write unit tests that would validate the SQL that Entity Framework was generating.

    This made me wonder if I could do something similar for MongoDB. After reviewing how the Verify.EntityFramework extension worked, I took a closer look at the MongoDB .NET Driver library to see what hooks were available. After a bit of trial and error, I figured out how it was possible!
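    For those curious about the hook involved: the driver raises command events (such as CommandStartedEvent and CommandSucceededEvent) that you can subscribe to via MongoClientSettings.ClusterConfigurator. Here's a minimal sketch of listening to those events directly. This is the underlying driver mechanism, not the package's actual implementation, and the connection string is a placeholder:

    ```csharp
    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;
    using MongoDB.Driver.Core.Events;

    // Placeholder connection string - point this at your own server
    var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");

    // ClusterConfigurator exposes the driver's event subscription hook
    settings.ClusterConfigurator = cb =>
    {
        // Raised just before a command is sent over the wire
        cb.Subscribe<CommandStartedEvent>(e =>
            Console.WriteLine($"Started: {e.CommandName} {e.Command.ToJson()}"));

        // Raised when the server's reply is received
        cb.Subscribe<CommandSucceededEvent>(e =>
            Console.WriteLine($"Succeeded: {e.CommandName} {e.Reply.ToJson()}"));
    };

    // Any commands issued through this client will now be logged
    var client = new MongoClient(settings);
    ```

    An extension package can capture these same events into a list and feed them to the snapshot, rather than writing to the console.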

    You can write a unit test that includes code like this:

    // Start capturing the commands the driver sends over the wire
    MongoDBRecording.StartRecording();
    
    // Run the query whose commands we want to snapshot
    await collection.FindAsync(Builders<BsonDocument>.Filter.Eq("_id", "blah"),
        new FindOptions<BsonDocument, BsonDocument>());
        
    // The recorded commands are appended to the verified snapshot
    await Verifier.Verify("collection");
    

    The verified file would have the following content:

    {
      target: collection,
      mongo: [
        {
          Database: VerifyTests,
          Document: {
            filter: {
              _id: blah
            },
            find: docs
          },
          Type: Started,
          Command: find,
          StartTime: DateTimeOffset_1,
          OperationId: Id_1,
          RequestId: Id_2
        },
        {
          Document: {
            cursor: {
              firstBatch: [],
              id: 0,
              ns: VerifyTests.docs
            },
            ok: 1.0
          },
          Type: Succeeded,
          Command: find,
          StartTime: DateTimeOffset_2,
          OperationId: Id_1,
          RequestId: Id_2
        }
      ]
    }
    

    That’s a representation of what would be sent over the wire by the query in the test. It’s an ideal opportunity to confirm that the query is doing what you intended. For pay-per-use services like Cosmos DB, it’s critical that your queries are as efficient as possible; inefficient queries can cost more than they should, and may end up being rate-limited.

    After confirming it worked as I’d hoped, I figured it could be something that others might find useful, so I created a NuGet package. I got in touch with Simon to find out how best to get it published on nuget.org. He was most helpful, and I’m pleased to report that the package is now available at https://www.nuget.org/packages/Verify.MongoDB/, and the source repository is at https://github.com/flcdrg/Verify.MongoDB.

    If you’re building an application that uses the MongoDB .NET Driver, this package will help you create some useful snapshot tests.

    Check it out!

  • What's new in NDepend 2022.1

    Is it really 6 years since I last wrote about NDepend? Apparently so!

    A lot has changed between v6 and the just-released v2022.1. The changes are listed at https://www.ndepend.com/whatsnew (it’s a long list), but I suspect there are even more than that.

    NDepend remains the preeminent tool for .NET dependency analysis.

    I thought I’d put the v2022.1 release through its paces by seeing what it makes of the .NET Interactive project from GitHub.

    I loaded up NDepend and configured it to analyse just the Microsoft.DotNet.Interactive* assemblies. By default, a HTML summary report is generated:

    NDepend HTML Report

    I find the real insights come with using the Visual NDepend application to drill into different parts of the analysis. Here’s the ‘Dashboard’ view for the project.

    Visual NDepend Dashboard

    I tend to look at the Rules summary first. You can click on the number to the right of a rule status (e.g. the ‘8’ next to ‘Critical’) to show the details of those 8 critical rule violations, and then double-click on a rule to see where in the code it has been matched.

    Once I’ve had a look at all the critical rule matches, I’ll move on to the remaining rule violations.

    It’s important to remember that NDepend is just a tool. It’s up to you to decide whether a particular rule makes sense for your codebase. You might decide that a rule should be disabled, or you can even customise a rule so its behaviour is more appropriate for your use case.

    The new ILSpy integration could be useful for some. Unfortunately, it’s not a tool I use (I have the dotUltimate tools from JetBrains, so I tend to use dotPeek for decompiling).

    Another nice enhancement is that analysis is now done out-of-process by default - whether you’re using Visual NDepend, or Visual Studio. This is a good thing, especially for Visual Studio. The more things that run in separate processes, the less chance something adversely affects Visual Studio.

    A testament to the improved performance of NDepend is that it was quite tricky to take this screenshot showing the separate analysis process, as it had completed very quickly!

    NDepend processes in Task Manager

    There are also some nice improvements to .NET 6 support, with better handling of top-level statements and record types.

    Finally, the ability to export query results to HTML, XML or JSON (among other formats), either through the UI or through NDepend’s API, could be really useful if you want to process the query results outside of NDepend - for example, to generate custom reports as part of your continuous integration build pipeline.

    In closing, if you’re wanting to analyse your .NET code and learn more about how it is structured, check out NDepend. I’ll try not to leave it so long until my next post on NDepend!

    Disclosure: I received a complimentary license for NDepend as a Microsoft MVP award recipient

  • Failed to renew Microsoft Certified: Azure Developer Associate

    One of the traps with social media is that people tend to just post nice things. Or everyday things but with a filter applied to make it look better than it really is. But life is not always nice. Things don’t always work out the way you hoped. Some days are successful, other days not so much, and we don’t often hear about the latter.

    So allow me to redress the balance on my blog by following up my recent post about passing the Microsoft exam AZ-400, with how I subsequently failed to renew my Microsoft Certified: Azure Developer Associate.

    A lot of the newer certifications from Microsoft require an annual renewal. Rather than having to pay Prometric to sit another exam, the renewal assessment is hosted by Microsoft, doesn’t cost anything, has fewer questions and (importantly for me) can be retaken multiple times until you pass. You’re also still bound by a non-disclosure agreement.

    I’d had the notification that my Azure Developer Associate certification needed to be renewed before 18th July (12 months since I first earned the certification).

    The information about renewing the Microsoft Certified: Azure Developer Associate includes a summary of what will be tested:

    • Create a long-running serverless workflow with Durable Functions
    • Execute an Azure Function with triggers
    • Choose the appropriate API for Azure Cosmos DB
    • Optimize the performance of Azure Cosmos DB by using partitioning and indexing strategies
    • Control authentication for your APIs with Azure API Management

    To be honest, I kind of skimmed over that and just jumped straight in. Hey, I’d passed AZ-204 last year, so this should be easy, right?

    I quickly discovered that I’d forgotten a lot of the things that were being asked. And in the end, not surprisingly, I failed.

    In hindsight, looking back at that list of skills being measured, I think the problem is I haven’t actually been working with all of those technologies recently. Yes, I’ve been using Azure Functions, but not Durable Functions. I’ve been working with Cosmos DB, but just one aspect of it. Likewise, I haven’t done anything with API Management recently.

    So yeah, that was disappointing. But if the point of the assessment is to validate my knowledge of the skills listed above, then a ‘fail’ is unfortunately accurate.

    The good news in all this is that I can take the test again.

    But also, as part of the screen shown at the end of the test, you’re provided with a customised list of learning material to review, based on how you went in each of the skill areas. So I’ve got some homework to do, and then I’ll have another go.

    Sometimes things don’t work out the way you’d hoped. Sometimes there’s nothing you can do to change that. But sometimes, you do get a second (or third) chance.