© Wouter de Kort 2016

Wouter de Kort, DevOps on the Microsoft Stack, 10.1007/978-1-4842-1446-6_9

9. Implementing Continuous Integration

Wouter de Kort

(1)Ordina Microsoft Solutions, GRONINGEN, The Netherlands

If you’ve followed along until now, you have already accomplished quite a lot. You learned how to use the Agile Project Management tools, looked into increasing the quality of your code, and learned how Visual Studio and Visual Studio Team Services support you in these endeavors. On your road to DevOps and continuous delivery, the next step to take is setting up continuous integration. What is continuous integration and why do you need it? This chapter will show you what continuous integration is and how you can configure it for your projects. You will also learn about SonarQube, a specialized tool for measuring the quality of your code on a continuous basis.

Why do we talk about continuous integration? Well, you hope that developers check the quality of their code locally before sending changes to the central repository. But you can’t guarantee this. A developer in a hurry to get home could decide to quickly check in his changes and leave. And what about deployments? Do you ask one of the developers to run a deployment from her local workstation? If these things sound familiar, you need a continuous integration build. Imagine that every time a developer uploads a change to version control, a process gets triggered that compiles the latest version of the code, runs unit tests, performs code analysis, and then delivers a package in a specified location ready to be released. That is continuous integration.

This means that whenever a developer checks in some malfunctioning code, the central continuous integration build fails. This shortens the feedback loop for your developers. A failing build means that one of the checks failed and that the current version is not working correctly. Continuous integration is a process where you encourage developers to integrate as often as possible and where you continuously validate the integrated version of the code.

Implementing a continuous integration build is not hard. VS Team Services gives you some great tools and you can be up and running in minutes. The difficult part is making your team feel responsible for the build. When the build fails and team members keep checking in code without anyone fixing the broken build, having a continuous integration build adds absolutely no value. You need to work on making your team feel responsible for the build. One way to achieve this is to make the quality of the build easily visible to everyone. A simple monitor in your team room that shows the status of the latest build does wonders. Some teams even take this a step further. I once found a video showing a team that had built a rotating machine gun that fired tennis balls at the person who broke the build. Another company uses a giant teddy bear that gets placed on the desk of the person who broke the build and only moves on when someone else breaks the build. This is a process that has to grow, but remember: instilling in your team the importance of always fixing a broken build is crucial.

Configuring a Continuous Integration Build

VS Team Services offers built-in capabilities for a continuous integration build. You create what’s called a build definition in the web portal of VS Team Services. You then configure this build definition to trigger on every check-in, on a specific schedule, or manually. VS Team Services takes care of the rest. Of course, the builds that are triggered need to run somewhere. On your local environment, you have installed Visual Studio and probably a bunch of other tools and SDKs that are required to build your application.

When running a build in VS Team Services, these same tools need to be available. To help you with this, Microsoft runs what’s called a hosted build agent. This is a preconfigured machine that has the most common tools already installed and is available on demand for your builds. Figure 9-1 shows the capabilities of the hosted agent. You can view these capabilities on your own VS Team Services account by navigating to the settings of your account and then choosing Agent Pools ➤ Hosted.

A346706_1_En_9_Fig1_HTML.jpg
Figure 9-1. Capabilities of the hosted build agent

In the following section, you will look at installing a build server of your own if the hosted build agent doesn’t meet your requirements. Whenever possible, I recommend that you use the hosted build agent since Microsoft keeps these servers up to date for you and does any other required maintenance work. This saves you a lot of time and energy.

A build server won’t start running builds by itself. You need some kind of instruction set to tell the build server what to do. Figure 9-2 shows a default build template that ships out of the box with VS Team Services. This build template builds your code, runs unit tests, and then stores the build output in VS Team Services. These steps are executed on the build server.

A346706_1_En_9_Fig2_HTML.jpg
Figure 9-2. A build definition in VS Team Services

The build definition consists of a list of tasks that you want to execute. You can easily add build steps to your build and configure each step. For example, the Visual Studio Build step that’s selected in Figure 9-2 lets you configure the solution you want to build, arguments you want to pass to MSBuild, the platform and configuration for your build, whether to start with a clean workspace on every new build, if NuGet packages need to be restored on build, and which version of Visual Studio you want to use to run the build.

The infrastructure behind the build system is cross-platform and easily extensible. The build steps are a combination of JavaScript and a platform-specific script that does the actual work. The build agent itself is a cross-platform application built on Node.js that executes the build steps and keeps track of things like errors and logging. This means that the build system is not only capable of building .NET applications on Windows. It can also build and test cross-platform apps, like an iOS Xamarin-based mobile app on a Mac, a Core CLR-based ASP.NET application, or a Java application on Linux. These are all available out of the box. If you want, you can add your own tasks or use the default tasks that run a script of your own. In addition to the set of build steps, which forms the main part of your build definition, there are other options that you can configure.

You create a build through the Web Access portal of VS Team Services. Each team project has its own set of builds. If you open a team project, you see the Build hub in the top navigation bar. You can then select the green + icon to create a new build definition (see Figure 9-3).

A346706_1_En_9_Fig3_HTML.jpg
Figure 9-3. The Build hub is a part of Web Access

You can choose from a couple of templates when you create a new build definition. Figure 9-4 shows the standard list of templates. The Universal Windows Platform template requires an agent that has both Visual Studio and the tools for creating Universal Apps installed. The Visual Studio template creates a build definition that requires only an installation of Visual Studio to compile and test your project. The Xamarin templates can be used when you are creating cross-platform mobile apps with Xamarin; these templates can compile an Android or iOS app. The Xcode template can build code on a Mac. You need to set up your own Mac with a build agent to use this template. Finally, you can also start with an empty template that you then configure by adding tasks. All templates are customizable and you don’t limit yourself to anything by starting with a template.

A346706_1_En_9_Fig4_HTML.jpg
Figure 9-4. Default definition templates for a build definition

After clicking Next, you get the window shown in Figure 9-5. In this case, I’ve selected the Visual Studio build definition template. You can then configure the repository source, the agent to run on, and whether or not this is a continuous integration build: a build that runs on every check-in. The Repository type lets you choose between Git and TFVC. You can also build externally hosted applications from GitHub (or another Git repository) or Subversion. Finally, you click the Create button to create your new build definition.

A346706_1_En_9_Fig5_HTML.jpg
Figure 9-5. Additional configuration options when creating a new build definition
Note

For more information on building GitHub projects, see https://msdn.microsoft.com/en-us/Library/vs/alm/Build/github/index .

The Build tab of your new build definition lists the steps that compose your build, as you’ve seen in Figure 9-2. Here you can add new steps and configure each step to perform your build. The Options tab lets you run your build under one or more configurations (for example, you may want to create both a Debug and a Release build). You can also choose to create a work item on failure. This makes sure that every broken build shows up as a work item assigned to the person who executed the build. You can choose what type of work item you want to create and whom to assign it to, and then set any additional fields that are important to you. Finally, you can choose whether build scripts can interact with other parts of VS Team Services by using an authorization token.

You can see the Repository tab in Figure 9-6. Here you define how the build agent receives the sources of your project from version control. For TFVC, you can map and cloak folders just as you do on your own development PC. The Clean option lets you configure whether the build agent should get all sources every time and start with a clean slate or whether it can get only the updates that were made since the previous build. With large projects, cleaning your repository and downloading all the code on every build sometimes takes too much time. In such scenarios, you can choose to disable the clean. Labeling sources means that each build gets assigned a label that you can later use to retrieve the exact set of files that were used in that build.

When you’re working with a Git repository, you don’t get the Mappings part of Figure 9-6. You do get the option to select which branch you want to build. You also get an option to check out submodules. A submodule in Git is a way to reuse code from another project without copying the code. Instead, a subfolder of your project points to another Git repository. As you can see, the build system is quite powerful.

A346706_1_En_9_Fig6_HTML.jpg
Figure 9-6. Repository configuration for a build

The fourth tab, Variables (see Figure 9-7), lets you define name/value pairs of properties that you can then use in your build definition. By default, you get properties that specify the configuration and platform to build. You can use these variables with a $(VARIABLE_NAME) syntax. So, $(BuildPlatform) will be substituted with the value of that variable by the build agent. This is a handy feature to avoid spreading all kinds of configuration options throughout your build template. Instead, you specify them once and then use them in multiple places.
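As a sketch, the default variables could be passed along in the MSBuild Arguments field of the Visual Studio Build step like this (the /p property names are an assumption for illustration; the step also has dedicated Platform and Configuration fields that accept the same $() syntax):

```
/p:Platform=$(BuildPlatform) /p:Configuration=$(BuildConfiguration)
```

Changing the variable value on the Variables tab then updates every place the variable is used, without touching the individual steps.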

A346706_1_En_9_Fig7_HTML.jpg
Figure 9-7. Configuring variables for your build definition

The Triggers tab (Figure 9-8) allows you to specify when your build should run. If you don’t configure any triggers, you can always start the build manually. You can also configure automated builds that run on a specific schedule or that run on every check-in. The option to batch changes can be important when you have a busy repository. Imagine that multiple users check in code while a build is still running. Without this option, every check-in would queue a build, thus creating a long waiting list. If you batch changes, all the check-ins that happen while a build is running are scheduled for the next build. If the build fails (which is hopefully rare), the builds are run separately so you know exactly which check-in caused the failure.

A346706_1_En_9_Fig8_HTML.jpg
Figure 9-8. You can configure multiple triggers for your build definition

The General tab lets you configure a couple of options, as you can see in Figure 9-9, such as a description, build number format, and timeout. The agent queue determines where your build is going to run. You’ll dive into this in the next section when you see how to configure your own agents. The Badge option lets you show the status of your build on an external web site. This is nice if you have an overview page where you want a simple image that shows your build’s status. Beneath these options you see a list of demands. Demands are used by VS Team Services to figure out on which agent your build can run. Agents have capabilities and VS Team Services matches those to the demands of your build definition. In the example in Figure 9-9, the machine needs to have Visual Studio installed. Installing Visual Studio automatically installs MSBuild and VSTest, so all capabilities are matched.

A346706_1_En_9_Fig9_HTML.jpg
Figure 9-9. General options for a build definition

The Retention tab (Figure 9-10) determines how long your builds are kept once they’re finished. When you have a busy team project, you will create a lot of builds. Keeping all those builds around clutters your environment and takes up a lot of space in VS Team Services. By default, there is a rule that deletes everything after 30 days. You can add rules and remove this default rule if you want. Finally, the History tab allows you to inspect previous versions of your build definition and roll back to them if you want to undo certain changes.

A346706_1_En_9_Fig10_HTML.jpg
Figure 9-10. Retention options configure how long your build is stored in Team Services

Now that you have a build definition, you can use it to queue a new build. You can queue a new build directly from Web Access. When your build enters the queue, VS Team Services will launch a hosted build agent for you and start the build. While the build is running, you get a real-time log of the build output. This way, you can track progress and monitor for errors. After the build is finished, you can view the log files in your browser, or if they’re too big, download them to your local machine to view them. Figure 9-11 shows how to queue a build.

A346706_1_En_9_Fig11_HTML.jpg
Figure 9-11. Queue a new build

When queuing a build manually, you configure the options shown in Figure 9-12. You select a queue and configure the variables and demands that are required. You can also select a shelveset if you want to run a build that uses the specific code in that shelveset.

A346706_1_En_9_Fig12_HTML.jpg
Figure 9-12. Configuring a build to be queued

When you’re running on the hosted agent pool, you have to wait until an agent is available to start your build. You get one free build agent in the hosted queue. You can have more (for example, to run builds in parallel), but then you have to pay for the additional agents. If the wait time for an available agent is slowing your team down, that could be a reason to switch to a dedicated build machine.

The build log that gets created is very detailed. It shows real-time output of what the individual tasks in your build definition are doing. These logs are especially useful when you’re trying to fix a broken build. The build details show you a timeline of the individual steps (Figure 9-13) and let you download any artifacts that were created during the build. The timeline is particularly handy when you are trying to speed up a build. By checking the durations, you can easily see which steps take the most time and focus your efforts there.

A346706_1_En_9_Fig13_HTML.jpg
Figure 9-13. The Timeline shows how long each individual step took in your build

Installing and Configuring Build Agents

In the previous section, you looked at the default hosted build agent to run your builds. Microsoft maintains these hosted agents and they decide which software is available on the server. If you have specific requirements for the build server, be it security, installed software, or performance, you can run your own build servers and connect them to VS Team Services.

Build agents are grouped in pools. These pools are defined at the account level of your VS Team Services account. This means that the pools are available to all projects within your account. You can create as many pools as you want and each pool can contain a set of agents. An agent can exist in only one pool. The pools are linked to queues, which are defined at the collection level. A queue is what you select when you create a new build definition. This means that a new build is put in a queue. The queue is linked to a pool of agents and one of these agents will pick up your build. This allows you to put certain boundaries in place. By limiting the queues a project can use and placing your agents in separate pools, you control which build accesses which agent. Figure 9-14 visualizes this configuration.

A346706_1_En_9_Fig14_HTML.jpg
Figure 9-14. The build infrastructure uses queues, pools, and agents

In addition to the pools and queues, an agent is also selected based on its capabilities. System capabilities—such as environment variables and specific settings like the .NET Framework version or the installed editions of Visual Studio—are detected automatically. These capabilities are used to find the correct agent to run your build on. Capabilities are requested by the build definition based on the tasks you add. For example, adding the Visual Studio Build task requests a capability that Visual Studio is installed on the build machine. You can also add your own capabilities, which are simple key/value pairs. This way, you can specify custom software that’s installed on your build machine or other specific settings. By requesting those capabilities in your build definition, you make sure that your build runs on the correct agent.
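As a sketch, a demands section in a task manifest could request both a detected system capability and a user-defined one. The MyCustomTool capability name below is hypothetical; the -equals comparison syntax is an assumption for illustration:

```json
{
    "demands": [
        "VisualStudio",
        "MyCustomTool -equals 1.0"
    ]
}
```

An agent only picks up the build if every listed demand matches one of its capabilities.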

If you want to install your own agent, you can navigate to the Account settings shown in Figure 9-15. Here you see the default pools that are available. You also see an option to download the agent.

A346706_1_En_9_Fig15_HTML.jpg
Figure 9-15. Configuring agent pools

Once you download the agent, you can copy the ZIP file to the machine where you want to install it. After extracting the files from the ZIP archive, you run a PowerShell file named ConfigureAgent.ps1. When running this file, you’re asked for the URL of your account (be it on-premises or in VS Team Services), the pool you want to add the agent to, a work folder, and the authentication details. You’re also asked whether you want to run the agent as a Windows service. If you want to use the agent to run Coded UI tests (see Chapter 11 on testing for more information), you need to answer no. This installs the agent as a regular desktop program capable of launching and interacting with other programs. Figure 9-16 shows the configuration process when installing a new agent. Once your agent is configured, it shows up in VS Team Services under the pool you specified during configuration. You can then start using this agent to run builds.

A346706_1_En_9_Fig16_HTML.jpg
Figure 9-16. Installing a new build agent

Creating Custom Tasks

Out of the box, the build system ships with quite a lot of tasks. However, there will always be a time when you miss something. A simple extensibility point that you can use to run custom tasks is the task that executes a PowerShell script. You control the PowerShell script and you can let it do whatever you want. Figure 9-17 shows the PowerShell task and its configuration options. You can specify the script location and any arguments that you want to pass to the script. This allows for easy extensibility of your build system.

A346706_1_En_9_Fig17_HTML.jpg
Figure 9-17. A PowerShell task

When passing arguments to your build task, be it PowerShell or another task, you have access to a number of variables inside the PowerShell script. These variables are created by the build system and contain information ranging from the current working directory to the account you’re working on. You can find the complete list of variables at https://msdn.microsoft.com/en-us/Library/vs/alm/Build/scripts/variables .
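Because the agent is cross-platform, a script can read these build variables from its environment. As a quick sketch (the variable names BUILD_SOURCESDIRECTORY and BUILD_BUILDNUMBER are assumptions based on the convention of dots becoming underscores; check the variables list at the link above for the authoritative names):

```shell
# Sketch: read build variables from the environment on the agent.
# The names below are assumed mappings, not guaranteed by this book.
sources_dir="${BUILD_SOURCESDIRECTORY:-<not set>}"
build_number="${BUILD_BUILDNUMBER:-<not set>}"
echo "Sources directory: $sources_dir"
echo "Build number:      $build_number"
```

When the script runs locally instead of on an agent, the variables are simply absent, which is why the sketch supplies a fallback value.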

If you find yourself using the same PowerShell script repeatedly, you can choose to encapsulate it in a custom task. That way, you can just add your custom task with its own configuration options and you’re done. All the build tasks are open source and you can inspect them to find out how they work. The basis of a task is the task.json file.

Note

You can find all the tasks at GitHub: https://github.com/Microsoft/vso-agent-tasks .

The following listing shows the task.json file for the PowerShell task from Figure 9-17.

{
    "id": "E213FF0F-5D5C-4791-802D-52EA3E7BE1F1",
    "name": "PowerShell",
    "friendlyName": "PowerShell",
    "description": "Run a PowerShell script",
    "helpMarkDown": "[More Information](http://go.microsoft.com/fwlink/?LinkID=613736)",
    "category": "Utility",
    "visibility": [
                  "Build",
                  "Release"
                  ],    
    "author": "Microsoft Corporation",
    "version": {
        "Major": 1,
        "Minor": 0,
        "Patch": 5
    },
    "demands": [
        "DotNetFramework"
    ],
    "groups": [
        {
            "name":"advanced",
            "displayName":"Advanced",
            "isExpanded":false
        }
    ],
    "inputs": [
        {
            "name": "scriptName",
            "type": "filePath",
            "label": "Script filename",
            "defaultValue":"",
            "required":true,
            "helpMarkDown": "Path of the script to execute. Should be fully qualified path or relative to the default working directory."
        },
        {
            "name": "arguments",
            "type": "string",
            "label": "Arguments",
            "defaultValue":"",
            "required":false,
            "helpMarkDown": "Arguments passed to the PowerShell script. Either ordinal parameters or named parameters"
        },
        {
            "name": "workingFolder",
            "type": "filePath",
            "label": "Working folder",
            "defaultValue":"",
            "required":false,
            "helpMarkDown": "Current working directory when script is run. Defaults to the folder where the script is located.",
            "groupName":"advanced"
        }
    ],
    "instanceNameFormat": "Powershell: $(scriptName)",
    "execution": {
        "PowerShellExe": {
            "target": "$(scriptName)",
            "argumentFormat": "$(arguments)",
            "workingDirectory": "$(workingFolder)"
        }
    }
}

This JSON file starts with metadata on the task such as the name, description, author, and version. The demands section requests the capabilities that need to be present on an agent. This allows the build system to match agents to build definitions and thus to individual tasks. The file then specifies the input parameters that users can supply through the interface. You see the script name, arguments, and working folder. The last part is execution. This node specifies what the task does when it runs on an agent. As you can see, the PowerShell task just executes PowerShell.exe and passes the input arguments to it.

More complex tasks use a similar JSON file but in the execution node call a PowerShell (or other type of script) that’s included with the task. This script then does the actual work. For example, the VSBuild task has the following execution node:

"execution": {
     "PowerShell": {
        "target": "$(currentDirectory)\VSBuild.ps1",
        "argumentFormat": "",
        "workingDirectory": "$(currentDirectory)"
    }
}

All it does is call VSBuild.ps1 and pass it the working directory. The VSBuild.ps1 script performs the actual work like NuGet restore, getting the sources, and running the build. The build system is cross-platform and JavaScript-based. You use gulp to compile the tasks and produce the output files. The gulp build step creates a tasks.loc.json file and an English strings file. You can use this to create localized versions of your task. You then package the output of the build and upload the package to VS Team Services.

Note

Learning gulp is not required for creating build tasks. However, gulp is very powerful so learning it is definitely something you should look at. For more information on gulp, see http://gulpjs.com/ .

To create a custom build task, you need a couple of tools. First, you need to have Node.js installed. This also installs the Node Package Manager (npm), which you can then use to install the tool needed to create build tasks: tfx-cli. As you can see in Figure 9-18, installing tfx-cli through npm downloads all the dependencies and makes sure you can run the package locally.
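The installation boils down to a single npm invocation; the -g flag installs the tool globally so the tfx command is available on your PATH:

```
npm install -g tfx-cli
```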

A346706_1_En_9_Fig18_HTML.jpg
Figure 9-18. You can use npm to install tfx-cli
Note

Visual Studio looks at different places for your Node.js installation. If you get an error that states that you’re not using the latest Node.js version, go to Options ➤ External Web Tools settings and make sure that your PATH environment variable is at the top.

Now that you have the tools, you can create a skeleton of your new task by running:

tfx build tasks create

When running this command, you need to enter a value for the short task name, friendly name, description, and author, as shown in Figure 9-19.

A346706_1_En_9_Fig19_HTML.jpg
Figure 9-19. Create a skeleton build task by running tfx build tasks create
Note

If you run into errors while creating the files for your task, make sure that you are running the correct version of Node.js. At the time of writing, the newest version of Node.js is not yet supported. Changing back to an older Node.js version fixes the problems. You can use a tool like nvm-windows ( https://github.com/coreybutler/nvm-windows ) to run multiple Node.js versions simultaneously.

This command creates a couple of files for you:

  • icon.png: A sample icon for your extension

  • sample.js: The JavaScript version of your task that can run cross platform

  • sample.ps1: The PowerShell version of your task that can run on Windows

  • task.json: The manifest file of your task that describes its settings and how to run it

You can then modify these files and create your task. Once finished, you need to upload the task to VS Team Services. To do this, you need a special token that you can use to authenticate from the command line. You can get such an access token by using Web Access and navigating to your own profile properties. Figure 9-20 shows the Security tab of your profile. Here you can choose to create a new personal access token. You need to specify a name, a duration period, and the scope of your token. After creating the token, you see it only once. It’s not stored in VS Team Services, so you need to copy it and keep it safe.

A346706_1_En_9_Fig20_HTML.jpg
Figure 9-20. You can create a personal access token for uploading your new build task

After you have the token, you can run the following command to upload your task:

tfx build tasks upload --task-path <path>

You need to enter the URL of your collection ( https://youraccount.visualstudio.com/defaultcollection ) and the personal access token you just received. (You won’t see the characters appear; just paste the token in and press Enter.) Figure 9-21 shows a successful upload. After this, you can verify your upload by navigating to your list of tasks. Figure 9-22 shows the Hello DevOps task I created and uploaded.

A346706_1_En_9_Fig21_HTML.jpg
Figure 9-21. Upload your build task to VS Team Services
A346706_1_En_9_Fig22_HTML.jpg
Figure 9-22. A newly uploaded task is visible in VS Team Services

If you want to remove a task, you can run the following command, where the ID comes from the task.json file in your task directory:

tfx build tasks delete --id <id>

Using SonarQube

As you’ve seen in the chapter on code quality, managing technical debt is important. Visual Studio offers some great features for this, such as Code Metrics, Code Analysis, and Unit Testing. These tools run on a developer’s computer. When working on your continuous integration pipeline, an important step is to run these same quality checks on the central build server. This way, you start tracking your code quality on every check-in. You can then analyze trends and set minimum quality gates for allowing code to be checked in.

SonarQube is a product from SonarSource that helps you with this. Microsoft has partnered with SonarSource to make sure that VS Team Services and SonarQube work great together. There is now support for installing SonarQube on a Windows Server, analyzing C# code with the new Roslyn analyzers, and integrating this fully into the VS Team Services build system. To get a feeling of what SonarQube offers you, you can go to a free demo environment running at http://nemo.sonarsource.org/ . Figure 9-23 shows you the dashboard of SonarQube Nemo.

A346706_1_En_9_Fig23_HTML.jpg
Figure 9-23. The SonarQube dashboard gives you a quick overview of the status of your projects

To see more of what SonarQube can do, you can navigate to the analysis of the Roslyn compiler project by clicking on the icon in the top-right area of the Microsoft Roslyn .NET tile in the All Projects panel. See Figure 9-24.

A346706_1_En_9_Fig24_HTML.jpg
Figure 9-24. The Roslyn project is also analyzed in this demo environment

If you look at the resulting dashboard, you see something called a SQALE Rating. The SQALE method is implemented by SonarQube to evaluate the amount of technical debt in a project in an objective way. The result of this analysis is the amount of time it will take to fix all the technical debt in a project. These estimates are based on rules, where each rule has a remediation time attached to it based on the SQALE analysis model.

A SonarQube analysis of your project gives you a wealth of information. Not only do you see where the problem areas are, you also get immediate information on how much time it’s going to cost you to fix your technical debt. This is a huge advantage when making decisions on when to incur or pay down technical debt. If you look further at the dashboard (shown in Figure 9-25), you see information on the size of your project (lines of code, number of files, classes, and functions). You also see information on code duplication, issues found in your code, and the amount of technical debt over time.

A346706_1_En_9_Fig25_HTML.jpg
Figure 9-25. The SonarQube dashboard for the Roslyn project shows a wealth of information

The Issues list is what’s most important. This is a complete list of issues detected by validating all the rules that SonarQube has installed. Issues have a severity and a full description of what the violation is all about and how to fix it. In every big project, there will be false positives so you can also mark items as something you won’t fix. Figure 9-26 shows the Issues page for Roslyn.

A346706_1_En_9_Fig26_HTML.jpg
Figure 9-26. The Issues page in SonarQube shows you all the technical debt in your code

SonarQube integrates with .NET, Java, Objective-C, and Swift builds. Additional plugins are available from SonarSource. There is a free edition of SonarQube, but it doesn’t contain the SQALE rating. You can easily install SonarQube on a virtual machine that you create in Azure or host somewhere else. The ALM Rangers have a detailed installation guide that you can find at http://aka.ms/vsartdsq . After you have installed the server, you can start using it from within your builds.

When integrating with .NET builds, you use the SonarQube Runner for MSBuild. This runner needs to be started at the beginning of your build and stopped at the end. The Begin step contacts your SonarQube server and requests information on how you want to run your analyses (specifically, the quality profile and rulesets). When stopping the SonarQube runner, the results are published to SonarQube and you can view them in your dashboard.

The Begin step needs a couple of parameters :

  • SonarQube endpoint: A configured service endpoint for your SonarQube server

  • SonarQube project properties: The unique identifiers for the SonarQube project where you want to store the analysis results
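Outside of the build tasks, the same begin/build/end flow can be sketched in pseudocode. The runner executable name and the /k (key), /n (name), and /v (version) flags are assumptions based on the MSBuild runner of this era; check your SonarQube documentation for the exact syntax:

```
MSBuild.SonarQube.Runner begin /k:<project-key> /n:<project-name> /v:<version>
msbuild YourSolution.sln /t:Rebuild
MSBuild.SonarQube.Runner end
```

The Begin and End build tasks wrap exactly this pattern around the Visual Studio Build step for you.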

The hosted agent can then run your build while communicating with your SonarQube installation for the analysis. The communication is done through a service that you define in VS Team Services, as shown in Figure 9-27. You create a new generic service with a name, endpoint, and credentials.

A346706_1_En_9_Fig27_HTML.jpg
Figure 9-27. Create a new generic service to link SonarQube to VS Team Services

Now that the service is defined, you can add the SonarQube Begin and End steps to your build definition, as shown in Figure 9-28. To configure the Begin task, you select the generic endpoint you just created as the SonarQube endpoint value. Then enter a key and name for your project so you can find the results in SonarQube. After configuring these options, you can start your build and the analysis details will be available in SonarQube.

A346706_1_En_9_Fig28_HTML.jpg
Figure 9-28. The SonarQube steps for your build are available out of the box

Summary

This chapter introduced you to the benefits of continuous integration. You learned how to create a continuous integration build running on VS Team Services. You’ve seen how easy it is to add your own build agents and create custom tasks that you can use in all your builds. You’ve also configured a SonarQube server and set up a build that integrates with it. By using the correct plugins, you can now easily analyze your code. This helps you manage the quality of your code and avoid technical debt.

In the next chapter you’re going to look at another exciting feature of VS Team Services: package management. You will see how you can use VS Team Services as a centralized repository to share code with others within your organization and how to keep track of the packages that are used.
