Saturday, 13 October 2018

A Jon Skeet Meetup retrospective

This week we hosted Jon Skeet at the Adelaide .NET User Group. Jon demonstrated some of the new language features coming in C# 8, and it was probably the biggest attendance we've had in a really, really long time.

Because we had so many registrations, I lined up some extra help from my two oldest kids (conveniently on school holidays). They helped set up the room, liaised with the pizza delivery guy, and helped pack everything up. I think they even found some of it familiar: Jon's use of Fibonacci sequences in one example, and the similarities between programming languages (my eldest daughter has been doing some Python coding at school).

Jon lives in the UK, and we're in Australia, so whilst we would have loved to have Jon in person, the next best thing was to have him present remotely. We've had remote presentations before using Google Hangouts, but this time I opted to use Skype (and made use of the new recording feature).
I don't use Skype a lot, but we got the call up and running without too much difficulty.

I set up a webcam in the meeting room, along with a boundary microphone (an MXL AC404 USB Conference Microphone). The intention is that the presenter can see the audience, and the boundary microphone allows people to comment and ask questions from a fair way away (eg. right at the back of the room) and still be heard. This seemed to work pretty well - people up the back were able to ask questions and Jon seemed to hear them ok. Jon's video feed was pretty good. I think the feed going back to Jon might have been a bit jumpy, but for the most part it was ok.

I left it to the last minute to arrange for access to the WiFi network at our venue. I ended up using my phone's 4G data for the call, which worked well. It was only after we'd finished that I discovered that an email had come through just before the start of the meeting with the WiFi details. At least I've got them for next time.

We also picked up a meeting sponsor this month in Simon Cook from Encode Talent Management. Being able to cover the cost of pizza without charging attendees was great, and hopefully this relationship can continue in the future.

Snapshot from Skype recording, showing Jon Skeet at the top and the audience at the bottom


Skype notes:
  • The recording doesn't include the extra buttons/overlays/controls (which is good)
  • The recording will show a split-screen of webcams if both are active. If you just want one webcam in the recording, you'll need to disable the other. When screen sharing, only the shared screen is recorded.
  • Taking a 'snapshot' follows similar rules as for recording. eg. a snapshot would include both webcams if they're active, or just the shared screen if one is being shared.
  • If the presenting person isn't using a headset, then to avoid echo it's probably best to mute the microphone (and just un-mute it for questions)
  • Making the overlay controls slide out of the way was inconsistent. Sometimes moving the mouse off the screen worked, but one time it didn't. Would be good to figure this out.
  • F11 makes Skype go full screen
  • Skype now supports NDI. This means it should be able to talk to software like OBS Studio if you wanted more control over broadcasting or recording. I don't know if the feed it sends is just the video/shared screen or if that also includes the control overlays etc.
Other notes:
  • Our regular room can hold a decent crowd if necessary.
  • People will eat as much pizza as there is available to eat
  • People don't drink much water (but this might change if it was warmer weather)
  • Organise wifi access earlier!
  • Windows 10 has simple video editing via the Photos app
The recording of Jon's talk is up on YouTube. I won't be giving up my day job to become a YouTube broadcaster anytime soon, but it's nice to have a record of a great presentation.


Thursday, 13 September 2018

Speaking at .NET Conf - Put your C#, VB and F# projects and packaging on a diet

I'm really excited to have been selected as one of the community speakers for .NET Conf.

Title slide for .NET Conf talk
.NET Conf is a free “virtual” conference organised by Microsoft and the .NET developer community that is streamed live around the world. Being virtual, it means organising travel and accommodation is remarkably easy!

My talk is titled “Put your C#, VB and F# projects and packaging on a diet”. It drills into the new project system for .NET and shows how you can use it even with older projects that target the .NET Framework. It starts at 04:00 UTC on Friday 14th September (check your local time). Go to https://www.dotnetconf.net/ to watch the live stream.

All the demos from my talk and links to other resources can be found in the GitHub repo https://github.com/flcdrg/project-system-diet

Monday, 27 August 2018

Converting a SQL Server .bacpac to a .dacpac

Microsoft SQL Server has two related portable file formats - the DACPAC and the BACPAC. Quoting Data-tier Applications:
A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC.
A BACPAC is a related artifact that encapsulates the database schema as well as the data stored in the database.
When they say related, they're not kidding! Both of these file formats are based on the Open Packaging Conventions (a fancy way of saying it's a .zip file with some other bits), and cracking them open you discover that a bacpac file is basically a dacpac with a few extra files and a couple of different settings. Knowing this, it should be possible to manually convert a bacpac to a dacpac.

First, unzip the .bacpac file (using 7-zip, or rename to .zip and use Windows File Explorer’s Extract Archive).

Now do the following actions (you could do these programmatically if this is something you need to do repeatedly - see the sketch after these steps):
  1. Edit model.xml
    1. Change the SchemaVersion attribute on the DataSchemaModel element to 2.4
  2. Edit Origin.xml
    1. Change ContainsExportedData to false
    2. Change ModelSchemaVersion to 2.4
    3. Remove ExportStatistics
    4. Recalculate the SHA256 checksum for model.xml and update the value stored in the Checksum element (under Checksums) that has Uri='/model.xml'
  3. Remove directories _rels and Data
Now re-zip the remaining files and change the file suffix to .dacpac
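
If you do end up scripting this, the only step that really needs code is the checksum recalculation. Here's a rough PowerShell sketch of the tail end of the process - the paths are examples, and it assumes the XML edits above have already been made:

$work = "C:\temp\extracted-bacpac"   # example path where the .bacpac was unzipped to

# Recalculate the SHA256 checksum for model.xml. Get-FileHash returns the hex
# string that goes into the Checksum element in Origin.xml
$hash = (Get-FileHash -Path (Join-Path $work "model.xml") -Algorithm SHA256).Hash
Write-Host "New checksum for /model.xml: $hash"
# Paste that value into the Checksum element for /model.xml in Origin.xml, then:

# Remove the directories that aren't part of a dacpac
Remove-Item (Join-Path $work "_rels"), (Join-Path $work "Data") -Recurse -Force -ErrorAction SilentlyContinue

# Re-zip (Compress-Archive or 7-zip) and rename to .dacpac
Compress-Archive -Path (Join-Path $work "*") -DestinationPath "C:\temp\converted.zip" -Force
Rename-Item "C:\temp\converted.zip" -NewName "converted.dacpac"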

To verify that the .dacpac is valid, try using SSMS with the Upgrade Data-tier Application wizard. Run it against any database; if you can proceed without error to the "Review Upgrade Plan" step, you should be good to go.

Monday, 13 August 2018

Create a temporary file with a custom extension in PowerShell

Just a quick thing I wanted to record for posterity. The trick is using the -PassThru parameter with the Rename-Item cmdlet so that this ends up a one-liner thanks to PowerShell's pipeline:

$tempNuspec = Get-ChildItem ([IO.Path]::GetTempFileName()) | Rename-Item -NewName { [IO.Path]::ChangeExtension($_, ".nuspec") } -PassThru
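
Rename-Item with -PassThru returns the renamed item, so $tempNuspec ends up as a FileInfo pointing at the new file. For example (the path shown is illustrative):

$tempNuspec.FullName   # e.g. C:\Users\me\AppData\Local\Temp\tmpA1B2.nuspec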

Saturday, 28 July 2018

Microsoft LifeCam Studio stops working with Windows 10

I have a Microsoft LifeCam Studio webcam that I bought a few years ago for the Adelaide .NET User Group for when we have remote presenters. It's been pretty good (although not long after I bought it, Scott Hanselman tweeted that actually the Logitech 930e was worth considering, with possibly a wider shot).

I went to use it the other day, and it just plain refused to work. My laptop has a built-in webcam and that was showing up, but no app (eg. Microsoft Teams or the Windows Camera app) would show the LifeCam. It was strange as it did show up as an audio device, but not as a video device.

I brought up Device Manager, and looked in the Cameras node, but it wasn't there. I tried unplugging it and re-plugging back in (and rebooting Windows) to no avail.

Device Manager showing Cameras node
I then tried the webcam with a different PC, and it worked, so at least I knew the device wasn't faulty. Firing up Device Manager on the second PC revealed something interesting though. The LifeCam wasn't under Cameras, it was listed under Imaging devices. Who would have guessed!

Device Manager showing Imaging devices node


Switching back to my laptop, in Device Manager, I went to the View menu and selected Show hidden devices. Looking under the Imaging devices revealed something unexpected. There were two device drivers listed for the LifeCam! I right-clicked on both devices and selected Uninstall device.

I then plugged the webcam back into the laptop, and now Windows registered that a new device was attached and indicated it was installing the device drivers. After a short wait, it was now working correctly!

Mystery solved 😁

Monday, 23 July 2018

Creating VSTS Service Hooks with PowerShell using the REST API

Service Hooks are Visual Studio Team Services' way of integrating with other web applications, automatically sending them events when specific things happen in VSTS, like a build completing or code being committed.

These are what I used in my earlier post about integrating VSTS with TeamCity. If you just have one service hook to set up then using the web UI is fine, but if you find yourself doing something again and again then finding a way to automate it can be really useful.

Interacting with service hooks via the VSTS REST API is documented here. Web Hooks are a particular service hook 'consumer', suitable for sending HTTP messages to any web endpoint.

I'm going to create a PowerShell script which requires the following parameters

Param(
   [string]$vstsAccount,
   [string]$projectName,
   [string]$repositoryName,
   [string]$token
)

Using the VSTS APIs requires authentication, so the first thing is to encode a Personal Access Token (PAT) so it can be set as an HTTP header. (You create PATs from the Web UI by clicking on your profile picture and selecting Security)

$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "",$token)))

There's a whole lot of VSTS events that you can choose as a trigger when creating a service hook. In this example, I'm interested in being notified when a Git pull request is created. In order to use this particular API, I need to also know the ids of the VSTS Project and Repository that I want this service hook associated with. I'll use API calls to find those out.

$uri = "https://$($vstsAccount).visualstudio.com/_apis/projects?api-version=5.0-preview.1"
$result = Invoke-RestMethod -Uri $uri -Method Get -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}

$projectId = $result.value | Where-Object { $_.name -eq $projectName } | Select-Object -ExpandProperty id

$uri = "https://$($vstsAccount).visualstudio.com/_apis/git/repositories?api-version=5.0-preview.1"
$result = Invoke-RestMethod -Uri $uri -Method Get -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}

$repositoryId = $result.value | Where-Object { $_.name -eq "$repositoryName" } | Select-Object -ExpandProperty id

As far as the REST API is concerned, we're creating a new 'subscription'. Create operations use a POST and usually also require JSON data to be sent in the body. We'll use PowerShell to model the data and then convert it back to a JSON string:

$body = @{
    "publisherId" = "tfs"
    "eventType" = "git.pullrequest.created"
    "resourceVersion" = "1.0"
    "consumerId" = "webHooks"
    "consumerActionId" = "httpRequest"
    "publisherInputs" = @{
        "projectId" = $projectId
        "repository" = $repositoryId
        "branch" = ""
        "pullrequestCreatedBy" = ""
        "pullrequestReviewersContains" = ""
    }
    "consumerInputs" = @{
        "url" = "https://servicetonotify"
        "basicAuthUsername" = ""
        "basicAuthPassword" = ""
        "resourceDetailsToSend" = "all"
        "messagesToSend" = "none"
        "detailedMessagesToSend" = "none"
    }
}

$bodyJson = $body | ConvertTo-Json

Obviously you will need to customise the url value to point to your particular web service that should be notified. If that service requires authentication, you can supply a username and password in the basicAuthUsername and basicAuthPassword values. You can also control how much detail VSTS will send by setting the three *ToSend values. In my case I only needed resourceDetailsToSend but not the other two.

$uri = "https://$($vstsAccount).visualstudio.com/_apis/hooks/subscriptions?api-version=5.0-preview.1"
Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Body $bodyJson
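
If you save everything above into a single script file (the file name here is just an example), running it would look something like this:

# Hypothetical script name and values - substitute your own account, project, repository and PAT
.\New-VstsServiceHook.ps1 -vstsAccount "fabrikam" -projectName "MyProject" -repositoryName "MyRepo" -token "pat-goes-here"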

Using the VSTS REST API is pretty straightforward, and gives you great access to query and modify your VSTS environment.

Monday, 16 July 2018

Boxstarter and Chocolatey tips


Two big things happened earlier this year on the Chocolatey front. First off, Boxstarter (the tool created by Matt Wrock that allows you to script up full Windows installations, including handling reboots) is now being managed by Chocolatey. Boxstarter.org still exists, but the source repository is now under the Chocolatey org on GitHub.

The second is that Microsoft are contributing Boxstarter scripts in a new Github repo – https://github.com/Microsoft/windows-dev-box-setup-scripts

If you’re looking to use Boxstarter to automate the software installation of your Windows machines, there’s a few tricks and traps worth knowing about.

Avoid MAXPATH errors


It’s worth understanding that Boxstarter embeds its own copy of Chocolatey and uses that rather than choco.exe. Due to some compatibility issues Boxstarter currently needs to embed an older version of Chocolatey. That particular version does have one known bug where the temp directory Chocolatey uses to download binaries goes one directory deeper each install. Not a problem in isolation, but when you’re installing a lot of packages all at once, you soon hit the old Windows MAXPATH limit.
A workaround is described in the bug report – essentially using the --cache-location argument to override where downloads are saved. The trick here is that you need to use this on all choco calls in your Boxstarter script – even for things like choco pin. Forget those and you still may experience the MAXPATH problem.

To make it easier, I add the following lines to the top of my Boxstarter scripts

New-Item -Path "$env:userprofile\AppData\Local\ChocoCache" -ItemType directory -Force | Out-Null
$common = "--cacheLocation=`"$env:userprofile\AppData\Local\ChocoCache`""

And then I can just append $common to each choco statement. eg.

cinst nodejs $common
cinst visualstudiocode $common 
choco pin add -n=visualstudiocode $common

Avoid unexpected reboots

Detecting and handling reboots is one of the great things about Boxstarter. You can read more in the docs, but one thing to keep in mind is it isn’t perfect. If a reboot is initiated without Boxstarter being aware of it, then it can’t do its thing to restart and continue.

One command I’ve found that can cause this is using Enable-WindowsOptionalFeature. If the feature you’re turning on needs a restart, then Boxstarter won’t resume afterwards. The workaround here is to leverage Chocolatey’s support for the windowsfeatures source. So instead of this

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All

Do this

choco install Microsoft-Hyper-V-All -source windowsfeatures $common

Logging

If you have a more intricate Boxstarter script, you may run into some problems that you need to diagnose. Don’t look in the usual Chocolatey.log as you won’t see anything there. Boxstarter logs all output to its own log, which by default ends up in $env:LocalAppData\Boxstarter\Boxstarter.log. This becomes even more useful when you consider that Boxstarter may automatically restart your machine multiple times, so having a persistent record of what happened is invaluable.
The other thing you might want to make use of is the Boxstarter-specific logging commands, like Write-BoxstarterMessage (which writes to the log file as well as the console output) and Log-BoxstarterMessage (which just writes to the log file).
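
For example (the messages themselves are just placeholders):

Write-BoxstarterMessage "Installing developer tools..."    # console output and Boxstarter.log
Log-BoxstarterMessage "choco exit code: $LASTEXITCODE"     # Boxstarter.log only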

Find out more about these and other logging commands by running help about_boxstarter_logging.

My scripts

I keep a few of my Boxstarter scripts at https://gist.github.com/flcdrg/87802af4c92527eb8a30. Feel free to have a look and borrow them if they look useful.

Find out more

If you’re really getting in to Chocolatey and Boxstarter, you might also be interested in Chocolatey Fest, a conference focusing on Windows automation being held in San Francisco on October 8th.

Thursday, 5 July 2018

Not all SSDs are the same

(Or why I should stick with software, rather than hardware!)

I'd ordered some larger SSDs from MATS Systems this week to upgrade a couple of laptops that were running out of room. I'd scanned through the list and saw Samsung EVO 500GB. Yep, "add to cart" x 2.

Job done (or so I thought).

They arrived promptly yesterday, and near the end of the day I disassembled the first laptop to extract the existing smaller-capacity SSD so I could put it in the disk duplicator. I then ripped open the box of the newly purchased Samsung SSD and to my horror, it didn't look anything like the old one!

In fact it looked a lot like this:

Samsung EVO 860 SSD
"But David", you say, "that's an M.2 SSD!"

Well yes, yes it is, and that's exactly what it turns out I ordered - not realising that "M.2" doesn't just mean "fast" or "better" but it's an indication of the actual form factor.

I now understood that what I should have ordered was the 2.5" model - not the M.2 one.

So what was I going to do? First step, post to Twitter and see if I get any responses - and I did get some helpful advice from friends:


Twitter conversation

Twitter conversation

Twitter conversation

Unfortunately I'd ripped open the box so it wasn't in a great state to return. Instead I sourced one of these Simplecom converter enclosures to see if I could use it in the 2.5" laptop slot after all.

As Adam had mentioned on Twitter, one important thing was to identify what kind of key the SSD I had was using. You can tell that by looking at the edge connector. Here's the one I had:

Showing edge connector of M.2 SSD

This is apparently a "B+M" key connector (as it has the two slots). The specs for the Simplecom enclosure say it's suitable for either "B" or "B+M" so I was good there.

Unpacking the enclosure, there's a tiny screw on one side to undo, then you can pry open the cover.

Enclosure, with side screw and screwdriver

With the cover off, there are four more screws to extract before you can access the mounting board.

Unscrewing mounting board from drive enclosure

Now it's just a simple matter of sliding in the SSD and using the supplied screw to keep it in.

SSD mounted on mounting board in enclosure

Then reassemble the enclosure and it's ready to test.

I tried it out in a spare laptop - pulling out the existing SSD and using the duplicator to image that onto the new SSD (and taking extra care to make sure I had them in the correct slots in the duplicator. It would be a disaster getting that wrong!)

Then pop the new SSD back in the laptop and see if it boots up.. Yay, it did!

The great news is MATS were able to arrange to swap over the other SSD (the one I hadn't opened yet) with a proper EVO 860 2.5" model. And I learned that if I had been more careful opening the box on the first one, that probably could have been swapped with just a small restocking fee too.

So after feeling like I'd really messed up, things ended up not too bad after all :-)

Monday, 2 July 2018

2018-2019 Microsoft Most Valuable Professional (MVP) award

I first received Microsoft's MVP award in October 2015. My most recent renewal just occurred on July 1st (aka the early hours of July 2nd here in Adelaide), which was a really nice way to start the week. My 4th consecutive year of being an MVP.

Microsoft MVP Logo


To quote the confirmation email, it was given "in recognition of your exceptional technical community leadership. We appreciate your outstanding contributions in the following technical communities during the past year: Visual Studio and Development Technologies"

For me, that's leading the Adelaide .NET User Group, occasional blogging here, speaking at user groups (and the odd conference) and open source contributions. I like to think that the things I do that have been recognised are things that I would be trying to do in any case.

It isn't something I take for granted. A number of MVPs I know didn't make the cut this year - and it's always a bit of a mystery why some continue and some don't.

I'm also aware that should my own (or Microsoft's) priorities change in the future, then it may no longer be for me. But for now, I really appreciate receiving the award and hope I can make the most of the opportunities it gives me.

Friday, 22 June 2018

Migrating Redmine issues to VSTS work items with the REST API

Redmine is an open-source project management/issue tracking system. I wanted to copy issues out of Redmine and import them into a Visual Studio Team Services project.

Extracting issues can be done by using the "CSV" link at the bottom of the Issues list for a project in Redmine. This CSV file doesn't contain absolutely everything for each issue (eg. attachments and custom data from any plugins). Another alternative would be to query the database directly, but that wasn't necessary for my scenario.

To migrate the data to VSTS you can use a simple PowerShell script, making use of the VSTS REST API.

You'll need to create a Personal Access Token. Be aware that all items will be created under the account linked to this token - there's no way that I'm aware of that you can set the "CreatedBy" field to point to another user.

Notice in the script how we handle different fields for different work items types (eg. Product Backlog Items use the 'Description' field, whereas Bugs use 'Repro Steps'), and for optional fields (eg. not all Redmine issues had the 'Assignee' field set).

The full set of fields (and which work item types they apply to) is documented here. If you have more fields in Redmine that can be mapped to ones in VSTS then go ahead and add them.
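
The script itself is embedded in the original post; as a rough sketch of the approach (the CSV column names, field mappings and API version here are illustrative, so adjust them to match your Redmine export), it might look something like this:

Param(
    [string]$vstsAccount,
    [string]$projectName,
    [string]$csvPath,
    [string]$token
)

$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f "", $token)))

foreach ($issue in (Import-Csv $csvPath)) {
    # Map Redmine trackers to VSTS work item types (illustrative mapping)
    $type = if ($issue.Tracker -eq "Bug") { "Bug" } else { "Product Backlog Item" }

    # Bugs use 'Repro Steps', Product Backlog Items use 'Description'
    $descriptionPath = if ($type -eq "Bug") { "/fields/Microsoft.VSTS.TCM.ReproSteps" } else { "/fields/System.Description" }

    $operations = @(
        @{ op = "add"; path = "/fields/System.Title"; value = $issue.Subject },
        @{ op = "add"; path = $descriptionPath; value = $issue.Description }
    )

    # Optional fields - not every Redmine issue has an assignee
    if ($issue.Assignee) {
        $operations += @{ op = "add"; path = "/fields/System.AssignedTo"; value = $issue.Assignee }
    }

    # Work item create is a PATCH with a JSON Patch body
    $uri = "https://$($vstsAccount).visualstudio.com/$projectName/_apis/wit/workitems/" + '$' + [uri]::EscapeDataString($type) + "?api-version=4.1"
    Invoke-RestMethod -Uri $uri -Method Patch -ContentType "application/json-patch+json" `
        -Headers @{Authorization = ("Basic {0}" -f $base64AuthInfo)} `
        -Body (ConvertTo-Json -InputObject $operations -Depth 5)
}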

Monday, 11 June 2018

Get programming in F#

I’m really interested in learning more about functional programming. It isn’t something I knew much about, but the benefits of reducing mutability (and shared state) promoted by functional languages and functional style are enticing.

To that end, I recently bought a copy of Isaac Abraham’s new book “Get programming in F#. A guide for .NET Developers”.



I have no background in functional languages at all, so I was looking for a “gentle” introduction to the F# language, without getting hung up on a lot of the functional terminology that seems to make learning this stuff a bit impenetrable for the newcomer. This book delivers.

The structure of the book is in 10 “units”, which in turn are broken down into separate “lessons” (each lesson is a separate chapter).

Here's my notes from each unit:

Unit 1 – F# and Visual Studio

  • Introduces using the Visual Studio IDE for F# development, and recommended extensions. Surprisingly for a book published in 2018, most of the book is based on using Visual Studio 2015. I can only presume this is an artifact of the time it takes to write a book. I understand the initial release of 2017 did have some tooling regressions for F#, but I am under the impression those are now resolved, seeing as at my time of writing the 7th update for 2017 has just been released, including specific enhancements for F#.
  • Throughout the book, comparisons are made to equivalent C# language constructs, and here too, the text is already a bit dated. An unfortunate downside of a printed book I guess.
  • One thing to note that is different from many other languages – the file order in F# projects is significant. You can’t reference something before the compiler has seen it, and the compiler processes files in project order.
  • The REPL is also a big part of F# development.

Unit 2 – Hello F#

  • The ‘let’ keyword is introduced. It’s more like C#’s const than var, seeing as F# defaults to things being immutable rather than mutable.
  • Scoping is based on whitespace indentation rather than curly braces.
  • Diving into how the F# compiler is much stricter because of the way the F# type system works, and how that can be a good thing.
  • A closer look at working with immutable data, and how you can opt in to mutable data when absolutely necessary, and how to handle state.
  • C# is statement based, whereas F# likes to be expression based.
  • The ‘unit’ type is introduced. It’s kind of like void, but is a way for expressions to always return a value (and means the use of those expressions is always consistent).

Unit 3 – Types and functions

  • Tuples, records
  • Composing functions, partial functions, pipelines,
  • How do you organise all these types and functions if you’re not using classes? Organising code through namespaces and modules

Unit 4 – Collections in F#

  • Looking at the F#-specific collection types – List, Array and Seq, the functions you can use with those collections. Immutable dictionaries, Map and Sets. Aggregation and fold.

Unit 5 – The pit of success with the F# type system

  • Conditional logic in F#, pattern matching
  • Discriminated unions

Unit 6 – Living on the .NET platform

  • How to use C# libraries from F#. Using Paket for NuGet package management
  • How to use F# libraries from C#

Unit 7 – Working with data

  • Introducing Type Providers. Specific use cases with JSON, SQL and CSV.

Unit 8 – Web programming

  • Asynchronous language support
  • Working with ASP.NET WebAPI 2
  • Suave – F#-focussed web library
  • Consuming HTTP data

Unit 9 – Unit testing

  • The role of unit testing in F# applications
  • Using common .NET unit testing libraries with F#
  • Property-based testing and FsCheck
  • Web testing

Unit 10 – Where next?

  • Further reading and resources to take your next steps.

Sunday, 7 January 2018

VSTS and TeamCity – Wrapping up

Part 4 in a series on integrating VSTS with TeamCity

Wouldn't it be great if TeamCity and VSTS had full built-in support for each other? Well yes, yes it would! Maybe that will happen soon.

If I knew Java well, I could probably have a go at writing a TeamCity add-in that encapsulates most of what the pull request server does - but the idea of spending a few weeks getting up to speed with Java/TeamCity development doesn’t excite me that much.

TeamCity 2017.2 adds VSTS Git support to the Commit Status Publisher build feature. I haven’t been able to try this out yet (due to some other bugs in 2017.2 preventing me from upgrading), but it is possible this could remove or reduce the requirement for the build completion handler.

VCS post-commit hook

Now you've seen how to use the APIs for TeamCity and VSTS, you might also want to implement another optimisation - adding a VCS post-commit hook. You add an additional service hook in VSTS that notifies TeamCity when there's a code change, so that TeamCity knows it should grab the latest commit(s).
  1. In VSTS Project Settings, go to the Service Hooks tab
  2. Click '+' to add a new service hook
  3. Select Web Hooks
  4. In Trigger on this type of event, select Code pushed
  5. Optionally, review the Filters and just check the Repository (and branch) that should trigger the event.
  6. In the URL, enter something like https://www.example.com/app/rest/vcs-root-instances/commitHookNotification?locator=vcsRoot:(type:jetbrains.git,count:99999),property:(name:url,value:%2Fbuildname,matchType:contains),count:99999
    the locator can vary depending on your individual requirements
  7. Enter the username and password to authenticate with TeamCity
  8. Set Resource details to send, Messages to send and Detailed messages to send to None
  9. Click Test to confirm that everything works (you can also test the TeamCity endpoint directly, as sketched below).
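
If you want to sanity-check the TeamCity endpoint before wiring it up in VSTS, you can POST to it directly. A quick PowerShell sketch (the server name, credentials and locator are examples only):

$pair = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("teamcity-user:password"))
$uri = "https://www.example.com/app/rest/vcs-root-instances/commitHookNotification?locator=vcsRoot:(type:jetbrains.git),property:(name:url,value:myrepo,matchType:contains)"
Invoke-RestMethod -Uri $uri -Method Post -Headers @{ Authorization = "Basic $pair" }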

The nice thing about this is that rather than TeamCity blindly polling VSTS, VSTS is telling TeamCity when it has something of interest.

Wednesday, 3 January 2018

VSTS with TeamCity – Configuration

Part 3 in a series on integrating VSTS with TeamCity

Now that we've created the pull request server, we need to configure VSTS and TeamCity so that they can send event messages to it.

VSTS

If you followed the steps in the sample tutorial, this will be familiar.
  1. Go to the Service Hooks tab for your project
  2. Click on the + icon
  3. Choose Web Hooks and click Next
  4. Select Pull request created.
    New Service Hooks Subscription dialog window screenshot
  5. If appropriate, select a specific repository and click Next
  6. In URL, enter the URL that VSTS will use to connect to the pull request server, including a query string that defines which TeamCity build should be queued.
    Eg. If the pull request server is hosted at https://www.example.com/pullrequestserver and the TeamCity build type id is My_CI_Build then you’d use https://www.example.com/pullrequestserver?buildTypeId=My_CI_Build
  7. In Username and Password, enter the credentials that will be used to authenticate with TeamCity
  8. Leave Resource details to send as All.
  9. Set Messages to send and Detailed messages to send to None
  10. Click on Test to try it out.
  11. Click on Finish to save this service hook.
  12. Repeat these steps to create another service hook for Pull request updated, also setting the Change filter to Source branch updated.
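
If you would rather script these subscriptions than click through the UI, the REST API approach from the 'Creating VSTS Service Hooks with PowerShell' post elsewhere on this blog works here too. The main differences are the eventType (git.pullrequest.updated for the second hook) and the consumerInputs, which would look something like this (the values are examples):

    "consumerInputs" = @{
        "url" = "https://www.example.com/pullrequestserver?buildTypeId=My_CI_Build"
        "basicAuthUsername" = "teamcity-user"
        "basicAuthPassword" = "********"
        "resourceDetailsToSend" = "all"
        "messagesToSend" = "none"
        "detailedMessagesToSend" = "none"
    }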
With the service hooks in place, you can now go to the Branches page, and click on the … (more actions) icon and choose Branch policies.
Selecting branch policy from more actions menu


The Add status policy button should be enabled, and clicking on that you should be able to find the pull request server listed in the drop down.

TeamCity

To allow TeamCity to call the pull request server, you will need to install the Web Hooks plugin for TeamCity. With that in place, go to the build configuration page in TeamCity, and you’ll see a new WebHooks tab.
  1. Click on add build Webhooks, then Click to create new WebHook for this build and add a new web hook for the project
  2. In the URL, enter the URL that TeamCity will use to connect to the pull request server.
    Eg. If the pull request server is hosted at https://www.example.com/pullrequestserver, you would use https://www.example.com/pullrequestserver/buildComplete.
  3. Set the payload format to Legacy webhook (JSON)
  4. Clear all the trigger events except On Completion Trigger when build successful and Trigger when build fails
  5. Click Save

In the VCS Root settings for the VSTS Git repository, set Branch Specification to
+:refs/heads/(master)
+:refs/pull/*/merge


We don't need TeamCity to trigger the builds for the pull request branches as the pull request server will be queuing those builds, but we do still want TeamCity to trigger the master builds.

In the build configuration VCS Trigger, set the Branch filter to +:<default>

With all that configuration done, creating a new pull request in VSTS should now trigger a branch build in TeamCity. When the build completes, the status is posted back to VSTS, allowing the pull request to be completed by merging the changes into master.