If you are an ASP.NET Core backend developer, you might have heard the term “microservices” and you might be wondering what it’s all about. If you haven’t heard of it, well, you should, since it’s one of the most popular software architecture styles these days.
In essence, microservices are a way to create modern distributed applications that small, focused teams can build and deploy independently. They bring tons of benefits for teams of all sizes, but they also involve many new challenges.
If you are interested in microservices and would like to know how to get started building them with .NET, you should check out my free Building Your First Microservice With .NET online course. This is a step-by-step, didactic, 2+ hour course that will take you from zero to a fully working .NET microservice, with a complete REST API and a NoSQL database hosted in Docker.
A common problem I have seen across the teams I’ve worked on that use Azure Pipelines for building and releasing code is not having enough pipeline agents to handle the increasing number of builds/releases that need to run simultaneously. Most teams start by just using Microsoft-hosted agents, which is super straightforward, with no setup required. However, they come with a few downsides:
They only allow one parallel job at any given time for private projects unless you purchase more parallel jobs (public projects get 10 free parallel jobs).
They use the Standard_DS2_v2 VM size, which gives you only 2 vCPUs, 7 GiB of memory, and 8,000 IOPS, along with at least 10 GB of storage.
They come with a long list of preinstalled software that rarely matches the exact set you need, which is usually a small subset of what they provide. See the software list for Windows and Linux.
To overcome those issues you would usually stand up your own self-hosted agents, where there is no limit on parallel jobs and you can decide what VM size to use and what software to install. But this comes with its own downsides:
You have to figure out the entire provisioning of the VM yourself, which can involve a lot of error-prone steps if done manually. If you automate it instead, you have to figure out how to quickly create and delete VMs depending on team needs
Provisioning a VM can take several minutes, even longer if setup scripts run after provisioning
You have to keep the VM well maintained and updated
One way to find a middle ground among all these issues is to turn the agents into Docker containers and then have Kubernetes orchestrate the provisioning of those containers. Yes, you still have to stand up and maintain a k8s cluster, but then you get a bunch of benefits:
You can easily scale up/down the number of agents as needed. No parallel job limits
You get to choose the k8s node VM size
You get to pick and customize the Docker image to use, so it only has the software you need
You let k8s deal with all the provisioning work. It will ensure the required number of agents is always available
Provisioning is pretty fast, especially after provisioning the first agent
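To make the idea concrete, the k8s side can be as simple as a Deployment that keeps a fixed number of agent pods registered against your pool. This is just a sketch: the names and image are placeholders, while AZP_URL, AZP_TOKEN, and AZP_POOL are the environment variables the Azure Pipelines agent container expects:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-pipelines-agent
spec:
  replicas: 3                 # scale agents up/down by changing this number
  selector:
    matchLabels:
      app: azure-pipelines-agent
  template:
    metadata:
      labels:
        app: azure-pipelines-agent
    spec:
      containers:
      - name: agent
        image: myregistry.azurecr.io/pipelines-agent:latest   # your custom image
        env:
        - name: AZP_URL
          value: https://dev.azure.com/your-organization
        - name: AZP_POOL
          value: your-agent-pool
        - name: AZP_TOKEN     # PAT with the Agent Pools (read, manage) scope
          valueFrom:
            secretKeyRef:
              name: azp-token
              key: token
```

Scaling up is then just a matter of bumping `replicas` (or letting an autoscaler do it), and k8s will recreate any agent pod that dies.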
A bunch of people have come to this conclusion already, so when I found this open source project a while ago I gave it a try and it worked pretty well. However, since then the Azure DevOps team stopped supporting the Docker image used in that project, and you are now expected to come up with your own image. Plus, the old VSTS went through a few changes. Therefore I forked the project, updated a few things to match the latest guidelines, and added more guidance on how to create the Docker image and the Kubernetes cluster.
To get started you can go to my azure-pipelines-kubernetes-agents GitHub repo and follow the steps described there. Here I’ll just summarize what you’ll end up doing to quickly stand up your own k8s-hosted Azure Pipelines agents:
Create a Personal Access Token (PAT) with the Agent Pools (read, manage) scope
Create your pipelines agent pool
Create and publish your pipelines agent Docker image (a sample is provided in the repo)
Create your k8s cluster. The repo provides steps for Azure Kubernetes Service (AKS)
Install the Helm chart
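For the agent image, the Dockerfile follows the pattern from the Azure DevOps self-hosted agent docs: a slim base plus a `start.sh` script that downloads the agent package and registers it with the pool using the AZP_* environment variables. The packages below are illustrative; install only what your builds actually need:

```dockerfile
FROM ubuntu:22.04

# Only the tooling your builds actually need goes here
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl git jq libicu70 \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /azp

# start.sh (a sample is in the repo) downloads the agent package and
# registers it with the pool using AZP_URL/AZP_TOKEN/AZP_POOL
COPY ./start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]
```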
So, for instance, once you have the pipelines pool and the k8s cluster created and the Docker image published, this is all I did to provision my pipelines agents:
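In case you want a rough idea before opening the repo, the provisioning boils down to two commands along these lines (the chart path, release name, and value names are illustrative; check the repo for the real ones):

```shell
# Store the PAT the agents will use to register with the pool
kubectl create secret generic azp-token --from-literal=token=<your-PAT>

# Install the chart, pointing it at your image, organization, and pool
helm install pipelines-agents ./chart \
  --set image.repository=myregistry.azurecr.io/pipelines-agent \
  --set azp.url=https://dev.azure.com/your-organization \
  --set azp.pool=your-agent-pool \
  --set replicas=3
```

A minute or so later the new agents should show up as online in your Azure Pipelines agent pool.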
From time to time I get approached by an overwhelming number of recruiters for software engineering positions in locations across the US, a few of them remote too. This is probably not unusual for folks working in software engineering, especially if you work on cloud tech. Some of these positions are actually really interesting. It looks like HBO keeps growing in the Seattle area ahead of the HBO Max service launching in spring 2020, Amazon is expanding into Cupertino, CA, Mexico City, and Vancouver, B.C., and Volkswagen is building its own Automotive Cloud in Redmond, WA.
However, since I’m not interested in switching jobs these days (I’m having some good fun on the Microsoft Project xCloud team) and I keep replying “not interested” to all these recruiters, I thought I might as well share these opportunities here so others get to know about them.
You can check out the new jobs section here, and if you are interested feel free to drop me an email or comment on this post with your updated LinkedIn profile so I can pass it along.
When software developers start collaborating on a project, one of the main things to address is how to prevent build and test breaks. This is where Continuous Integration (CI) shines, and one tool that enables it and has worked fairly well for me in the past is Azure Pipelines.
I just published a tutorial on how to enable CI with Azure Pipelines:
The tutorial will teach you:
How to enable continuous integration (CI) with Azure Pipelines
What a YAML-based pipeline is and why to use it
How to create a pipeline that runs on any change to your GitHub repo
How to diagnose and fix issues detected by the pipeline
How to report the status of the pipeline on GitHub
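As a taste of what the tutorial covers, a minimal YAML pipeline that builds and tests a .NET project on every push could look like the sketch below (the trigger branch, SDK version, and project layout are assumptions):

```yaml
trigger:
- main                      # run on any push to main

pool:
  vmImage: ubuntu-latest    # Microsoft-hosted agent

steps:
- task: UseDotNet@2
  inputs:
    version: 3.0.x          # match your project's target SDK
- script: dotnet build --configuration Release
  displayName: Build
- script: dotnet test --configuration Release
  displayName: Test
```

Checking this file into the repo as `azure-pipelines.yml` is what lets the pipeline definition evolve alongside the code it builds.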
Last week I published a video on how to deploy an ASP.NET Core 3.0 Web API to a local Kubernetes cluster. This week I thought I would move one step forward and show how to deploy the same Web API container to Azure Kubernetes Service (AKS). So here you go:
In this new video you will learn:
How to create a container registry and an AKS cluster
How to push your Web API container to a container registry
How to generate Kubernetes YAML files to describe your deployment and service using Visual Studio Code
How to deploy your Web API container to AKS
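For reference, the generated Kubernetes YAML for a Web API like this boils down to a Deployment plus a Service along these lines (the image name, labels, and port are placeholders, not the exact files from the video):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapi
  template:
    metadata:
      labels:
        app: webapi
    spec:
      containers:
      - name: webapi
        image: myregistry.azurecr.io/webapi:v1   # image pushed to your registry
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapi
spec:
  type: LoadBalancer      # exposes the API on a public IP in AKS
  selector:
    app: webapi
  ports:
  - port: 80
    targetPort: 80
```

Applying both with `kubectl apply -f` is all it takes to get the API running and reachable in the cluster.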
Please leave me a comment here or on the video itself with any feedback you might have.
It’s been ages since I wrote anything here, but recently I decided it’s time to start sharing a few of the things I have learned over the past few years. Also, since .NET Core 3.0 was released today and I’ve been working with containers for a while, I thought it would be appropriate to start with a video on how to containerize an ASP.NET Core 3.0 app, specifically a Web API, since that’s what I’ve mostly been using to build microservices. So here it is:
There I talk about:
• How to create an ASP.NET Core 3.0 Web API project
• How to add Docker artifacts with Visual Studio Code, including the generation of the Dockerfile
• How to build and run the ASP.NET Core project as a Docker container
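The generated Dockerfile follows the usual multi-stage pattern for .NET Core 3.0: build and publish with the SDK image, then copy the output into the smaller ASP.NET runtime image. A sketch (the project and assembly names are placeholders):

```dockerfile
# Build stage: restore and publish with the full SDK
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Runtime stage: only the ASP.NET Core runtime, much smaller image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWebApi.dll"]
```

Copying the .csproj and restoring before copying the rest of the sources lets Docker cache the restore layer, so rebuilds after code-only changes are much faster.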
Let me know your thoughts on this video, either here or in the video comments section. I’d appreciate any feedback so I can incorporate it into upcoming videos.