• Passed AZ-400

    I’m pleased to report that today I passed Microsoft exam AZ-400: Designing and Implementing Microsoft DevOps Solutions, which, combined with the AZ-201 exam I took last year, now qualifies me for the Microsoft Certified: DevOps Engineer Expert certification.

    Microsoft Certified: DevOps Engineer Expert badge

    View my verified achievement from Microsoft

    The exam is quite broad in the content it covers:

    • Develop an instrumentation strategy (5-10%)
    • Develop a Site Reliability Engineering (SRE) strategy (5-10%)
    • Develop a security and compliance plan (10-15%)
    • Manage source control (10-15%)
    • Facilitate communication and collaboration (10-15%)
    • Define and implement continuous integration (20-25%)
    • Define and implement a continuous delivery and release management strategy (10-15%)

    Some areas I’d been working with for quite a few years, but others were new to me. To help prepare, I used a couple of resources.

    Nice to get that one dusted.

  • Happy 20th Birthday .NET!

    This week the .NET platform celebrates 20 years since it first launched publicly with Visual Studio .NET in 2002.

    .NET 20 Years image

    It doesn’t seem that long ago that I was working as a web developer at the University of South Australia. I’d started out writing web applications with Perl (and the ‘CGI’ module?), mostly on Linux. That changed when Gary (who had been the Uni’s webmaster) joined our team and showed me this thing from Microsoft called ‘Active Server Pages’ (ASP). I was a little sceptical about using a Microsoft technology, but had to admit that it did make creating server-side applications a lot easier (particularly when needing to query a database like SQL Server). I’d also previously done Visual Basic 6 desktop development, so VBScript was pretty familiar.

    We created some pretty cool applications with ASP. I even did a little C++ work creating a COM component that our ASP pages could call. It was so easy to make a mistake and leak memory!

    After the release of ASP 3.0 I started reading about a rumoured “ASP+” that would bring new features (though it wasn’t really clear what those would be).

    It must have been sometime in 2001 that Microsoft released the first public beta versions of the .NET Framework. As it turns out, it wasn’t just “ASP+” but a whole new ecosystem. Instead of VBScript, there was Visual Basic.NET (which was compiled, and similar, but not exactly the same as Visual Basic 6). There was also a new language “C#”, which looked a lot like Java (though at the time I don’t think Microsoft would ever publicly admit that).

    For some reason, the fact that it compiled your code made it feel to me like I was doing “proper” computer programming (in comparison to the interpreted scripting languages I’d been using). I started working with the beta release and in particular ASP.NET WebForms. Once the tooling support shipped in Visual Studio 2002, it was quite a productive time. Admittedly, in hindsight the WebForms model (with the infamous View State that could quickly lead to page bloat) did have some issues, but it worked and the visual designer (especially for folks who had used VB6) was pretty innovative.

    I dabbled a little with C#, if for no other reason than that you could add XML comments that would generate IntelliSense help for your code, which VB.NET didn’t have at the time. But given our background in ASP and VBScript, moving to VB.NET made sense. Soon most of our development was using .NET (though ASP never went away).
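
    For anyone who never saw them, those XML comments looked something like this. The class and method here are made up purely to show the syntax:

        public class CustomerFormatter
        {
            /// <summary>
            /// Returns the full display name for a customer.
            /// </summary>
            /// <param name="firstName">The customer's first name.</param>
            /// <param name="lastName">The customer's last name.</param>
            /// <returns>The first and last names separated by a space.</returns>
            public string GetDisplayName(string firstName, string lastName)
            {
                // The XML comments above are what Visual Studio uses to build IntelliSense tooltips
                return firstName + " " + lastName;
            }
        }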

    I moved on from UniSA and spent some time contracting. It was here that I made the switch to C#. I’ve often heard that C# developers found it difficult to switch to VB, but that VB folks could transition relatively easily to C#, probably because they were already used to reading C#, as it was often more prevalent in documentation and code samples.

    I was always interested in finding ways to improve the quality of the code being developed. I remember reading about, and then trying, some of the early unit testing frameworks (NUnit and MbUnit). My first visit to the Adelaide .NET User Group was to give a talk on unit testing with MbUnit. Little did I know that years later I would end up running the group!
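
    If you’ve never seen one, an early NUnit test wasn’t much more than a class with a couple of attributes. Something along these lines (the Calculator class is deliberately trivial and only there for illustration):

        using NUnit.Framework;

        // A deliberately trivial class to test - purely illustrative
        public class Calculator
        {
            public int Add(int a, int b) { return a + b; }
        }

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_TwoNumbers_ReturnsSum()
            {
                var calculator = new Calculator();
                Assert.AreEqual(5, calculator.Add(2, 3));   // classic NUnit assertion style
            }
        }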

    Over the years Microsoft sporadically released new versions of .NET and Visual Studio. Inevitably there were bugs. Sometimes there were hotfixes or service packs, but you had to find them. Often there was no other option but to wait until the next major release and hope the problem had been resolved. Things started to improve around the Visual Studio 2012 timeframe when service packs became more frequent (and even better now with monthly updates).

    The .NET runtime and languages continued to evolve - Generics and Language-Integrated Query (LINQ) are two that immediately stand out. The libraries also grew, particularly around the .NET 3.0 release with Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and CardSpace (whatever happened to that!). Even though .NET did start off looking pretty similar to Java, the regular innovation happening on the .NET side made me feel good about the investments I’d made in the ecosystem. From a distance it did seem like Java was stagnating, though my impression is that has since changed.
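
    To give a feel for why those two stood out, here’s a tiny, contrived example combining a generic List<T> with a LINQ query. This is the sort of thing that previously meant an ArrayList, casts and a hand-written loop:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                // A generic list - strongly typed, no casting required
                var scores = new List<int> { 72, 95, 88, 61 };

                // A LINQ query to filter and sort, instead of a hand-written loop
                var topScores = scores.Where(s => s >= 80)
                                      .OrderByDescending(s => s);

                Console.WriteLine(string.Join(", ", topScores));   // prints "95, 88"
            }
        }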

    There was also tension between 3rd party libraries (especially open source) and Microsoft-provided libraries. Rather than promoting existing libraries (like NUnit, NHibernate, log4net and the various IoC containers), Microsoft had a tendency to build their own. Things are now better than they were, but it’s still a point of contention.

    .NET Framework was always something built by Microsoft and shipped to customers. Sometimes you could dig in to see the source (via reflection, or later on through the reference sources they published), but you couldn’t easily contribute. It also only ran on Windows operating systems, which had been fine, but the Mac couldn’t be ignored, and Linux’s popularity was really growing.

    Fast forward to 2014 and Microsoft announced “.NET Core” - an open-source, cross-platform implementation of .NET, managed by the new .NET Foundation.

    Around this time I’d also become involved in organising the Adelaide .NET User Group. Later in 2015, I was awarded my first Microsoft MVP award. This allowed me to travel to Seattle and attend my first MVP Summit at Microsoft’s Redmond campus. It was an amazing experience, though curiously there weren’t heaps of secrets to learn, as much more planning and development was now taking place in the open.

    “.NET Framework” then became the legacy Windows-only thing (with just occasional bugfixes and virtually no enhancements). .NET Core was the future, and once it got to version 5 (surpassing .NET Framework’s last major version ‘4’), it became just “.NET”.

    With Microsoft’s acquisition of Xamarin, Mono (originally a separate cross-platform implementation of .NET) has now been brought into the fold. While Microsoft still plays the primary role in the implementation and direction of .NET, many of the recent performance and memory optimisations have been contributed by the community. There’s Visual Studio Code for editing, in addition to new releases of Visual Studio. F# (a functional language for .NET) is influencing the C# language. Blazor effectively enables .NET to run inside a browser, and at the other end of the scale we can run massive .NET deployments in the cloud.

    As I write this, .NET 6 is the latest long-term support (LTS) release. It won’t be long before we see the first preview releases of .NET 7, with even more advances in the runtime, libraries, languages and compilers.

    It’s been an exciting ride, and I’m looking forward to where .NET takes us in the future.

  • Don't let me be misunderstood

    A common goal of many people, myself included, is to be understood. To not only be heard, but listened to. For the person on the receiving end to ‘get’ what you’re saying.

    This is trickier with the written word compared to a verbal conversation. You can’t usually rely on quick feedback techniques like ‘reflective listening’ or similar to correct misunderstandings or gain clarification. Writing clearly can also improve the accessibility of your content.

    So if good written communication is your goal, then there are a few things you can employ:

    • Correct spelling
    • Appropriate grammar
    • Proofreading (ideally by another person)

    I was reminded of this recently when an old work colleague (hi Simon!) reached out to let me know that I had a typo in my GitHub ‘About Me’ page. He knew me well enough to know that I love this kind of feedback! It caused me to review the text again myself, and I discovered a second error, so it was good to get them both fixed.

    Another example that comes to mind is an ebook I purchased a couple of years ago from a well-known book publisher. I won’t name the title, but it relates to .NET, and almost from the first page I was encountering grammatical errors. Sentences didn’t flow, and comprehension was more of a challenge than it should have been. I don’t blame the author for this, but rather the publisher; my understanding is that these things should be caught during the editing phase of publishing. It doesn’t reflect well on the publisher (or the editor) that they somehow missed them, and it makes me more cautious about buying other books from them. In the end I gave up reading the book, as the errors were too distracting.

    A little while ago, I migrated my blog from Blogger to self-hosting on GitHub using Jekyll, with all the posts now written in Markdown. I’ve recently been going back and running cspell over the older posts, and it’s embarrassing to find numerous spelling errors in posts that have been sitting there for years. At least I can now fix them. For newer posts, I have the Code Spell Checker extension installed in Visual Studio Code, so spelling errors should be flagged in the editor as I write.
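
    For anyone wanting to do something similar, cspell can be pointed at the Markdown files and taught project-specific words via a cspell.json file in the repository root. The sketch below is illustrative only; the words and paths are examples rather than my actual configuration:

        {
          "version": "0.2",
          "language": "en-GB",
          "words": ["Blazor", "Xamarin", "Jekyll", "UniSA"],
          "ignorePaths": ["_site/**", "node_modules/**"]
        }

    Running something like npx cspell "_posts/**/*.md" then reports any words that aren’t in the standard dictionaries or the custom words list.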

    Microsoft Word has had grammar checking built in for quite a while, and I thought I’d have a search to see if there was something similar for Visual Studio Code (the editor I write my blog posts in). It looks like Rahul Kadyan has written an unofficial Grammarly extension. I just installed it, and as you can see in the screenshot below, it added all sorts of extra squiggles as I was writing this post!

    Do take the time to review Grammarly’s privacy policy. It runs “as a service”, so any text you check is sent to their servers; make sure you’re comfortable with that.

    Screenshot of Grammarly warnings

    Look, it’s spotted another repeated word (“an an old work”) already!

    The extension has some limitations. Some of the corrections are only available to paid Grammarly users (it took me a bit to figure that out - signing in with a free account doesn’t seem to have any benefit).

    It is interesting to compare that to copying and pasting the text into Microsoft Word. Fewer squiggles, but it has flagged the repeated word.

    Screenshot of Microsoft Word grammar/spelling warnings

    Tools are great, but the skill is knowing when to use them and when it is ok to ignore them.

    Getting at least one extra pair of eyes to proofread your text is probably the best idea. Only this week, I asked a colleague to review something I’d written, to confirm that my intentions were being conveyed correctly before sharing it more widely. That’s less practical for my blog, the posts being just my own thoughts, but I have done it in the past. While my blog content is hosted on GitHub, the repository is private, as I sometimes have future posts or drafts that aren’t ready to be publicly viewable.

    In conclusion, my goal is to create clear and understandable content. Do reach out in the comments if you find cases where I’ve fallen short of that - I’m sure there are many (probably some I’ve still overlooked in this post!). With your help, I hope you find it easier to understand what I’m trying to say.