One of the traps with social media is that people tend to just post nice things. Or everyday things, but with a filter applied to make them look better than they really are. But life is not always nice. Things don’t always work out the way you hoped. Some days are successful, other days not so much, and we don’t often hear about the latter.
So allow me to redress the balance on my blog by following up my recent post about passing Microsoft exam AZ-400 with the story of how I subsequently failed to renew my Microsoft Certified: Azure Developer Associate certification.
A lot of the newer certifications from Microsoft require an annual renewal. Rather than having to pay Pearson VUE to sit another exam, the renewal assessment is hosted by Microsoft: it doesn’t cost anything, has fewer questions and (importantly for me) you can retake it multiple times until you pass. You’re also still bound by a non-disclosure agreement.
The information about renewing the Microsoft Certified: Azure Developer Associate includes a summary of what will be tested:
- Create a long-running serverless workflow with Durable Functions
- Execute an Azure Function with triggers
- Choose the appropriate API for Azure Cosmos DB
- Optimize the performance of Azure Cosmos DB by using partitioning and indexing strategies
- Control authentication for your APIs with Azure API Management
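To give a flavour of the partitioning topic in that list: a common Cosmos DB strategy is to use a synthetic partition key that combines two properties, so writes are spread across many logical partitions while the main query pattern stays single-partition. Here’s a minimal sketch of the idea in plain Python (no Azure SDK involved; the property names `tenantId` and `createdAt` are invented for illustration):

```python
# Sketch of a synthetic partition key strategy for Azure Cosmos DB.
# A good partition key spreads writes evenly and matches the dominant
# query pattern. Combining a tenant id with a date bucket means one busy
# tenant doesn't overload a single logical partition, while a query like
# "all orders for tenant X on day Y" can still target one partition.
# (The document shape below is made up for this example.)

def synthetic_partition_key(tenant_id: str, iso_timestamp: str) -> str:
    """Combine tenant id and day into a single partition key value."""
    day = iso_timestamp[:10]  # "2022-02-14" from "2022-02-14T09:30:00Z"
    return f"{tenant_id}_{day}"

order = {
    "id": "order-1001",
    "tenantId": "contoso",
    "createdAt": "2022-02-14T09:30:00Z",
}
# The computed value would be stored on the document and configured as
# the container's partition key path.
order["partitionKey"] = synthetic_partition_key(order["tenantId"], order["createdAt"])
print(order["partitionKey"])  # contoso_2022-02-14
```

The same shape works for other combinations (region + month, device + hour, and so on); the trade-off to weigh is write distribution versus how many partitions your common queries have to fan out across.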
To be honest, I kind of skimmed over that and just jumped straight in. Hey, I’d passed AZ-204 last year, so this should be easy, right?
I quickly discovered that I’d forgotten a lot of the things that were being asked. And in the end, not surprisingly, I failed.
In hindsight, looking back at that list of skills being measured, I think the problem is I haven’t actually been working with all of those technologies recently. Yes, I’ve been using Azure Functions, but not Durable Functions. I’ve been working with Cosmos DB, but just one aspect of it. Likewise, I haven’t done anything with API Management recently.
So yeah, that was disappointing. But if the point of the assessment is to validate my knowledge of the skills listed above, then a ‘fail’ is unfortunately accurate.
The good news in all this is that I can take the test again.
But also, as part of the screen shown at the end of the test, you’re provided with a customised list of learning material to review, based on how you went in each of the skill areas. So I’ve got some homework to do, then I’ll have another go.
Sometimes things don’t work out the way you’d hoped. Sometimes there’s nothing you can do to change that. But sometimes, you do get a second (or third) chance.
I’m pleased to report that today I passed Microsoft exam AZ-400: Designing and Implementing Microsoft DevOps Solutions, which, combined with AZ-204 that I took last year, now qualifies me for the Microsoft Certified: DevOps Engineer Expert certification.
The exam is quite broad in the content it covers:
- Develop an instrumentation strategy (5-10%)
- Develop a Site Reliability Engineering (SRE) strategy (5-10%)
- Develop a security and compliance plan (10-15%)
- Manage source control (10-15%)
- Facilitate communication and collaboration (10-15%)
- Define and implement continuous integration (20-25%)
- Define and implement a continuous delivery and release management strategy (10-15%)
Some areas I’d been working with for quite a few years, but others were new to me. To help prepare I used a couple of resources:
Nice to get that one dusted.
This week the .NET platform celebrates 20 years since it first launched publicly with Visual Studio .NET in 2002.
It doesn’t seem that long ago that I was working as a web developer at the University of South Australia. I’d started out writing web applications with Perl (and the ‘CGI’ module?), mostly on Linux. That changed when Gary (who had been the Uni’s webmaster) joined our team and showed me this thing from Microsoft called ‘Active Server Pages’ (ASP). I was a little sceptical about using a Microsoft technology, but had to admit that it did make creating server-side applications a lot easier (particularly when needing to query a database like SQL Server). I’d also previously done Visual Basic 6 desktop development, so VBScript was pretty familiar.
We created some pretty cool applications with ASP. I even did a little C++ work creating a COM component that our ASP pages could call. It was so easy to make a mistake and leak memory!
After the release of ASP 3.0 I started reading about a rumoured “ASP+” that would bring new features (though it wasn’t really clear what those would be).
It must have been sometime in 2001 that Microsoft released the first public beta versions of the .NET Framework. As it turns out, it wasn’t just “ASP+” but a whole new ecosystem. Instead of VBScript, there was Visual Basic .NET (which was compiled, and similar to, but not exactly the same as, Visual Basic 6). There was also a new language, “C#”, which looked a lot like Java (though at the time I don’t think Microsoft would ever publicly admit that).
For some reason, the fact that it compiled your code made it feel to me like I was doing “proper” computer programming (in comparison to the interpreted scripting languages I’d been using). I started working with the beta release and in particular ASP.NET WebForms. Once the tooling support shipped in Visual Studio 2002, it was quite a productive time. Admittedly, in hindsight the WebForms model (with the infamous View State that could quickly lead to page bloat) did have some issues, but it worked and the visual designer (especially for folks who had used VB6) was pretty innovative.
I dabbled a little with C#, if for no other reason than that you could add XML comments that would generate IntelliSense help for your code, and VB.NET didn’t have that. But given our background in ASP and VBScript, moving to VB.NET made sense. Soon most of our development was using .NET (though ASP never went away).
I moved on from UniSA and spent some time contracting. It was here that I made the switch to C#. I’ve often heard that C# developers found it difficult to switch to VB, but VB folks could transition relatively easily to C#, probably because they were used to reading C#, as it was often more prevalent in documentation and code samples.
I was always interested in finding ways to improve the quality of the code being developed. I remember reading about, and then trying, some of the early unit testing frameworks (NUnit and MbUnit). My first visit to the Adelaide .NET User Group was to give a talk on unit testing with MbUnit. Little did I know that years later I would end up running the group!
Over the years Microsoft sporadically released new versions of .NET and Visual Studio. Inevitably there were bugs. Sometimes there were hotfixes or service packs, but you had to find them. Often there was no other option but to wait until the next major release and hope the problem had been resolved. Things started to improve around the Visual Studio 2012 timeframe when service packs became more frequent (and even better now with monthly updates).
The .NET runtime and languages continued to evolve - Generics and Language-Integrated Query (LINQ) are two that immediately stand out. The libraries also grew, particularly around the .NET 3.0 release with Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and CardSpace (whatever happened to that!). Even though .NET did start off looking pretty similar to Java, the regular innovation happening on the .NET side made me feel good about the investments I’d made in the ecosystem. From a distance it did seem like Java was stagnating, though my impression is that has since changed.
There was also tension between 3rd party libraries (especially open source) and Microsoft-provided libraries. Rather than promoting existing libraries (like NUnit, NHibernate, log4net and the various IoC containers), Microsoft had a tendency to build their own. Things are now better than they were, but it’s still a point of contention.
.NET Framework was always something built by Microsoft and shipped to customers. Sometimes you could dig in to see the source (via reflection, or later on through the reference sources they published), but you couldn’t easily contribute. It also only ran on Windows, which had been fine, but the Mac couldn’t be ignored, and Linux’s popularity was really growing.
Fast forward to 2014 and Microsoft announced “.NET Core” - an open-source, cross-platform implementation of .NET, managed by the new .NET Foundation.
Around this time I’d also become involved in organising the Adelaide .NET User Group. Later in 2015, I was awarded my first Microsoft MVP award. This allowed me to travel to Seattle and attend my first MVP Summit at Microsoft’s Redmond campus. It was an amazing experience, though curiously there weren’t heaps of secrets to learn, as much more planning and development was now taking place in the open.
“.NET Framework” then became the legacy Windows-only thing (with just occasional bugfixes and virtually no enhancements). .NET Core was the future, and once it got to version 5 (surpassing .NET Framework’s last major version ‘4’), it became just “.NET”.
With Microsoft’s acquisition of Xamarin, Mono (originally a separate cross-platform implementation of .NET) has now been brought into the fold. While Microsoft still maintains the primary role in the implementation and direction of .NET, many of the performance and memory optimisations in recent .NET releases have been contributed by the community. There’s Visual Studio Code for editing, in addition to new releases of Visual Studio. F# (a functional language for .NET) is influencing the C# language. Blazor is enabling .NET to effectively run inside a browser, and on the other side we can run massive .NET deployments in the cloud.
As I write this, .NET 6 is the latest long-term servicing release. It won’t be long before we see the first preview releases of .NET 7, with even more advances in the runtime, libraries, languages and compilers.
It’s been an exciting ride, and I’m looking forward to where .NET takes us in the future.