Getting started with Jenkins 2

A Pluralsight course on Jenkins 2, by Wes Higbee. The idea is to learn a great new feature of Jenkins 2: Pipelines.

Review

A short course that scratches the surface of Jenkins 2 Pipelines (and a bit of Blue Ocean, the new graphical interface).

The pace of the course feels a bit slow, especially at the beginning. Toward the end, the concepts get more complex, and the tools required to run the exercises are not at all easy to install and use. But Wes does a great job and makes it as easy as possible, striking a fine balance between complexity, usability, interest,…

At the beginning there are a lot of basic concepts; I would skip them if I already knew some Jenkins. The final part of the course is considerably more interesting.

What have I learned?

Notes (English)

Setting up Jenkins

Getting started with Pipelines

Jenkins is like CRON, but on steroids

It’s awesome to automate things, mundane things, boring things

Download a `.war` file from the downloads page, place it in a directory of your choice, and run:

```shell
java -jar jenkins.war
```

If needed, export the `JAVA_HOME` environment variable first:

```shell
export JAVA_HOME=/usr/lib/jvm/default-java
```

You should have Java (version 1.7 or greater) installed and on your path.

The first time, it’ll give you a first administrator password. When you visit http://localhost:8080/ it will ask you for that password. After entering the password, you’ll be asked to install some plugins.
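If you lose track of that initial password, it is also stored on disk (path shown assuming the default `~/.jenkins` home directory used by the `.war` install):

```shell
# print the generated initial admin password
cat ~/.jenkins/secrets/initialAdminPassword
```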

Everything is stored in the `~/.jenkins` folder. No database. If you delete this folder, you'll start from scratch.

Then, you need to create the admin account.

Go to Manage Jenkins > Configure Global Security to configure authorization and authentication

Go to Manage Jenkins > Manage Users to add/remove users

Docker

That's a very convenient way of running Jenkins. There is a hub at http://hub.docker.com with Jenkins images, and there is also documentation about how to install and run Jenkins in a Docker container.
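A sketch based on what the course shows (the `jenkins_home` volume name and the `2.7.1` tag are choices, not requirements):

```shell
# map the web UI port (8080) and the agent port (50000),
# and persist the Jenkins data directory in a named volume
docker run -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins:2.7.1
```

Without the volume mapping, the Jenkins home directory is lost when the container is destroyed.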

Creating application builds

Clone the repo https://github.com/g0t4/jenkins2-course-spring-boot.git

Go to spring-boot/spring-boot-samples/spring-boot-samples-atmosphere. You’ll build that project with Maven

Install Maven, then:

- `mvn compile` to compile the Java code
- `mvn test` to run the tests
- `mvn package` to package your app in a `.war` or `.jar` file

In the project configuration, go to Source code management, and enter the URL for the repo

Then, go to Build and add a build step, Maven targets, specifying `compile` as the Maven target and `spring-boot-samples/spring-boot-sample-atmosphere/pom.xml` as the target POM file

Project view: the page where options about the project or job are shown

Build view: the page where you can see options for a specific build

Console view: to see what commands are executed

Workspace: source code of our project. JENKINS_HOME/jobs/<job name>/workspace

From the Project view, you can navigate the project’s workspace

The workspace is shared across builds of the project, so files may be modified from one build to the next

A way to keep interesting files (for example, a `.jar` file with our app) is to configure a Post-Build step to Archive artifacts

The archived artifact will then be shown in the Project view. Where it shows up depends on the type of artifact and which plugin you use

What’s the difference between a Job and a Build? A Job is defined by a configuration file. A Job can be seen as a step in the build process of our application. A Job can have multiple Builds associated with it. A Build is the result of executing a Job.

Testing and CI

Add build steps to your project: execute shell script, run maven target,…

Add build triggers or build post-build actions

But that way is the old way. We’re going to take a look at the modern way, the Pipeline way.

When creating a Pipeline project, there is no build step, or post-build actions step. Instead, there are advanced options and pipeline steps.

Pipelines are described in Groovy scripts.

In Pipelines, there is a command, step, that includes all standard steps for Free Style projects.

```groovy
node {
    git branch: 'master', url: 'https://github.com/g0t4/jenkins2-course-spring-boot.git'

    def projectPath = 'spring-boot-samples/spring-boot-sample-atmosphere'
    dir(projectPath) {   // run commands in a custom working directory
        sh 'mvn clean package' // run shell command

        archiveArtifacts 'target/*.jar'  // archive .jar files
    }
}
```

“Master Agent model” sub-chapter is really interesting.

Another interesting thing about pipelines is the Stage view. You can add a stage to the Pipeline script with:

```groovy
stage 'Stage name'
...
```

Absolutely fantastic! In the project view, you can see a graph/table/visual-thing about which stages have been successful, which ones have failed, how much time they took to run,… Impressive!

Email notification in a Pipeline

Choose the `emailext` step

One can define functions in Pipeline scripts. Awesome! You can define a notify method:

```groovy
def notify(status) {
  emailext(
    to: 'rchavarria@whatever.com',
    subject: "Job ${env.JOB_NAME}",
    body: "${status}"
  )
}
```

Notify on errors

As the pipeline script is just a Groovy script, we can use try-catch blocks. Hmmm, this sounds weird to me, but ok.
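A sketch of the pattern (the `notify` helper is the one defined above; the build command is illustrative):

```groovy
node {
    try {
        sh 'mvn clean package'
        notify('SUCCESS')
    } catch (err) {
        // notify on failure, then rethrow so the build is still marked as failed
        notify("FAILED: ${err}")
        throw err
    }
}
```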

Visualizing test results: through the traditional step of “Publish JUnit…”
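In a Pipeline, that traditional publisher can be invoked through the generic `step` command (the path assumes Maven's default Surefire report location):

```groovy
// publish JUnit-format test results so Jenkins can track and visualize them
step([$class: 'JUnitResultArchiver',
      testResults: '**/target/surefire-reports/*.xml'])
```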

Finding and managing plugins

Integrating code coverage

For a Java project managed with Maven, just run `mvn verify` to generate an HTML code coverage report under `target/site`.

Publishing HTML reports

Pipeline command:

```groovy
publishHTML(target: [
    allowMissing: true,
    alwaysLinkToLastBuild: false,
    keepAll: true,
    reportDir: 'target/site/jacoco/',
    reportFiles: 'index.html',
    reportName: 'Code Coverage',
    reportTitles: ''
])
```


BlueOcean UI plugin

A new user interface for Jenkins

It's an experimental plugin for now. You need to go to the `Advanced` tab in `Manage plugins` and add a new source for plugins.

# Building continuous delivery pipelines

## Backup and restore

Just gzip and copy the Jenkins home directory. Then, paste it wherever you want, extract it, and you're good to go:

```shell
tar -czf jenkins-home.tar.gz ~/.jenkins
tar -xzf jenkins-home.tar.gz -C <another/path>
```


### Stashing pipelines

There is a pipeline command, `stash`, to save a *copy* of some files to reuse them in the rest of the pipeline. It's like archiving, but it only lasts for the duration of the pipeline.

### Browsing workspace in Pipelines

Workspaces live inside nodes. Every pipeline must reserve a node. To go to the workspace: go to *Pipeline steps*, select the step *Node* and you'll see the workspace.

### Allocating a second node

Each time we allocate a node, we are not guaranteed to get the same workspace. Usually we do, but it's not guaranteed.

To share the workspace, we use the `stash` step.

To get a shared workspace (a *stashed* one), we can `unstash <stash-name>`.
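A sketch of the pattern (the stash name `everything`, the label, and the file patterns are illustrative):

```groovy
node {
    git url: 'https://github.com/g0t4/jenkins2-course-spring-boot.git'
    // save a copy of the workspace for reuse later in this pipeline
    stash name: 'everything', includes: '**', excludes: 'test-results/**'
}

node('agent1_label') {
    // restore the stashed files into this node's workspace
    unstash 'everything'
}
```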

Create a new Node in Jenkins, a new Agent:

- Name
- # of executors
- Remote root dir: it can be `/tmp/jenkins-<agent-name>`
- Labels: to identify this agent in Pipelines
- Launch method: launch agent via Java Web Start can be a good option

When allocating a node in a Pipeline, with `node`, you can pass some parameters, for example, one label we gave the Agent before: `node('agent1_label')`

### Executing and monitoring parallel builds

The following code will run tests in **parallel**, each one will allocate a new node:

```groovy
// create a new stage
stage 'Browser testing'

// run different tests in parallel
parallel chrome: {
    runTests('Chrome')
}, firefox: {
    runTests('Firefox')
}, phantomjs: {
    runTests('PhantomJS')
}

def runTests(browser) {
    // each run will allocate a new node
    node {
        sh 'rm -rf *'
        unstash 'everything'
        // run tests
        sh "npm run test-single-run -- --browsers ${browser}"
        // archive test results
        step([$class: 'JUnitResultArchiver', testResults: 'test-results/**/test-results.xml'])
    }
}
```


### Manual approval step

The `input` step is a way to ask the user to enter some data by hand, so it can work as a manual approval: a human must say "Yes" to continue the execution of the job

Do not use `input` inside a `node` step: it will pause an executor until a human presses a button. A common pattern to use with `input` is the following:

```groovy
// first, notify (it must be done in a node)
node {
    call_my_notify_method('You must click to continue')
}

// outside any node
input 'Is it ready to continue?'
```


There are heavyweight executors and lightweight executors. Heavyweight ones run code inside a `node` step; they're regular executors. Lightweight ones run Pipeline code outside `node`, but they can't run `sh` steps (or at least are not designed to).

Lightweight executors belong only to the *master* node.

### Deploy to staging

```groovy
// concurrency 1 means we only want one of these stages running at a time;
// only the newest will run, the rest will be cancelled by Jenkins
stage 'Deploy', concurrency: 1
```

Well, nowadays it seems `stage` requires a block syntax `{}` and no longer accepts a parameter called `concurrency`, so there must be a new way to accomplish that.
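My understanding (an assumption, not from the course) is that the `milestone` step combined with the `lock` step from the Lockable Resources plugin replaces the old `concurrency: 1` behavior:

```groovy
stage('Deploy') {
    // discard older builds that reach this point after a newer one has
    milestone()
    // ensure only one build deploys to staging at a time
    lock(resource: 'staging') {
        sh './deploy.sh staging'  // illustrative deploy script
    }
}
```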

Jenkinsfile

You can take the content of the Pipeline script and make it part of the project's source code, in a file called `Jenkinsfile`.

Summary

Advantages:

  1. Split jobs into different stages
  2. If Jenkins crashes, a pipeline can resume where it left off
  3. Manual verification steps

Resources:

Transcript

Course Overview

Hello and welcome to my course, Getting Started with Jenkins 2. My name is Wes Higbee. I am a huge fan of having automation to take software from development through to a production environment. There is nothing as cool as being able to check in code and see it flow automatically through to at least some staging environments and maybe even into production and having all the sanity checks along the way to make sure that everything is okay, this really makes me feel confident about the software that I’m creating and the software that I’m helping my customers create. No matter where you’re at right now, this course will help you get up and running to quickly automate builds, deployments, testing, and many other things with Jenkins 2 and one of the major things we’ll be covering is the new pipeline functionality that’s built in right out of the box. By the end of this course, you’ll know how to set everything up so that when a developer checks in code, that code can be compiled, tested, packaged, and then deployed to staging environments, perhaps further testing, and even the ability to deploy that into production environments. I’ve designed this course for anybody that has an interest in or perhaps even a background in automating builds and deployments of software, so no matter where you’re at, whether you’ve used Jenkins in the past or you’re completely new to it, this course will help you get up and running quickly. If this sounds like what you’re looking for, join me as I take you through Getting Started with Jenkins 2.

Setting up Jenkins

Intro

As I was taking the time to create the materials for this course, I thought back to one of the very first build tools that I had used in my own work and that was CruiseControl. I remember setting up CruiseControl to do some pretty crazy things including building my software every time we checked in, through to deploying that software into staging environments so I always had a demo environment ready to go in case a customer called up and would like to see some of the newest features we were working on. CruiseControl alone cut out several hours of time every week manually building software and pushing it out into environments. Since the early days of CruiseControl, there have been many more tools that have popped up in this space and Jenkins is one of those tools that falls into my list of favorites. I use Jenkins in my own production environment for hosting my own sites, for performing backup and even doing maintenance activities including renewing certificates, as you can see here, but to be honest, when I first used Jenkins years ago, I really wasn’t that happy with the experience. Here’s a diagram that can help explain what I was initially frustrated about with older versions of Jenkins. This is a Build Pipeline view from the Build Pipeline plugin. Now do not confuse with Jenkins 2 and the Pipeline plugin; they’re different things entirely. This Build Pipeline plugin was an attempt to visualize chaining jobs together using what I kind of like to refer to as the Legacy job style in Jenkins. So you have individual jobs that do small units of work. For example, you might have one job that compiles your code and spits out a package, so you might have a JAR file come out of this or a NuGet package. This step performs the basic compilation and maybe some unit testing just to make sure that the state of the application is healthy and you run this every time somebody checks in. 
After this then you might have another job that performs static analysis, so code coverage, maybe looking for code style violations, and then you might have another job that’s involved in performing some functional testing, so you might need to deploy to perform your functional testing and there might be different environments that you deploy to and then you need to run those tests thereafter and then maybe you need to do some performance testing thereafter. What I didn’t like was the process to stitch together all of these individual jobs was rather cumbersome; you had to create each job independently and there was a bunch of configuration to wire these jobs together as upstream and downstream dependencies and that was just the beginning. You also had to add a lot of overhead to pass artifacts between jobs and you had to find all the right plugins to be able to make this possible because out of the box, this type of pipeline just wasn’t possible. And of course, trying to figure out what plugins worked well together was never fun, but I am happy to report that as of v2, a totally new approach is available out of the box and of course, you could’ve used this in prior versions of Jenkins as well, but now it’s available as a suggested plugin right out of the box and you can do some really neat advanced pipelines and these pipelines can now be defined in code as a script, a Groovy script to be specific, a script that can also be checked in diversion control right alongside all the other files that we have for our project. I really, really like this approach and that’s why I’m excited to share with you Jenkins 2 in this course and I really want to help you get up and running so that you can take advantage of this new approach.

Why Jenkins

If you’re new to Jenkins, you might be wondering, why would I use it? What is it meant to do? Well, I like to think of Jenkins and many of the similar or related tools as simply CRON on steroids. This analogy is helpful because at the end of the day, all we’re doing is taking tasks that we would otherwise manually perform and we’re automating them and then we’re setting them up to be scheduled or triggered so that we don’t have to do anything to intervene and kick off these automatic jobs. That’s a lot like what you can do with CRON, but then Jenkins goes way beyond CRON by collecting feedback, centralizing it for you, sending it to you perhaps over email, and giving you a nice graphical interface to work with and also giving you a whole entire toolset, an ecosystem of plugins that help you get your manual tasks done with very little coding or configuration on your part. So the key here, Jenkins is designed to help you automate the mundanity of your day-to-day work and that might be developing software through to releasing it into a production environment. Maybe you’re looking to invest into continuously integrating your development process. You’d like to get feedback when people check in, to make sure that the code still compiles, still packages, and perhaps some testing still works, or maybe you’re going beyond continuous integration into the continuous delivery space and you’re looking to automate the entire process from dev to production. The other key with Jenkins again is getting reliable, fast, feedback so when you kick off all these jobs, you shouldn’t have to kick them off yourself, they should be executed automatically and if something goes wrong, you should be able to find out about it quickly so that you can do something about it, when the problem is still fresh in your mind, not weeks and months later. Another benefit of using Jenkins, the setup process, is dead simple. As we’ll see in this course, it’s as easy as downloading a JAR file. 
Data is stored on disks so all you have to have is a hard drive and you’re good to go and in version 2, the process is even simpler because there are a lot of suggested plugins out of the box that help you cover the common use cases that you’ll encounter. But I think the last and perhaps most important reason to consider using Jenkins is that it can ultimately help you be confident in what you’re doing, by making sure your tests are always run. It can help you be confident in performing deployments the exact same way every time.

History of Jenkins

Let’s talk briefly about the history of Jenkins to help you understand how things got to be the way they are today. So as I mentioned earlier, CruiseControl was one of the first attempts at providing a system that could help us automate many of the mundane tasks around developing software. CruiseControl started back in 2001 and largely focused on building software, making sure that the app compiled, making sure that maybe some unit tests were passing. CruiseControl was initially developed for Java apps, but also there was a version called CCNet meant for .NET developers. One of the drawbacks for CruiseControl was that it relied upon manipulation of XML configuration files; it was not a lot of fun to set things up. In the summer of 2004, work began on a project called Hudson. Hudson was started by Kohsuke then at Sun Microsystems and the first release of Hudson was in 2005 and by 2007, most people were switching over from CruiseControl to Hudson. In 2009, amid the recession, the Oracle acquired Sun Microsystems and in 2010, that acquisition was complete and of course, at this point in time, Oracle got involved in the development of Hudson, as this was a project that had been developed at Sun Microsystems. In late 2010, after the acquisition was complete, tensions began to rise. There were many issues around governance between the contributors to the Jenkins project and Oracle. In December of 2010, right at the end of the year, that dispute was amplified when Oracle applied for a trademark for Hudson, so Oracle was laying claim to the name Hudson. In January 2011, because of the tensions, the community around Hudson took a vote and decided to rename the project to Jenkins and since this time, there has been a debate about who actually forked the project because now they’re actually two separate projects, one for Hudson and one for Jenkins and that’s because Oracle continued the development of Hudson while the community moved on to develop the project called Jenkins. 
And the rest is history because by 2014, most of the community that had been using Hudson was now using what was called Jenkins and you can find both of these projects to this day. In 2014, CloudBees shifted from a PaaS focus to focus on Jenkins. It’s worth noting that Jenkins is an open source project; CloudBees is a primary provider of an Enterprise version of Jenkins including support and in 2014, Kohsuke, the creator of Hudson, became the CTO for CloudBees, and then recently in April 2016, Jenkins 2 was released.

Course Overview

This course is oriented toward a getting-started approach and thus I’ve designed this course for you to be able to follow along. You’ll get the most out of it that way because I’ve designed a bunch of examples for you and I’d encourage you to first work through these examples, even if they’re not in the language or platform that you develop with on a day-to-day basis. There is nothing like having good experience using other languages and other tools than the ones you use on a daily basis, because doing this often demystifies the process. Once you see that things are pretty much the same no matter what language and platform you’re working with, that can be very empowering to get this working with your own applications, which would the last thing you’ll want to do then. Once you have worked through the examples here, then I’d encourage you to try and apply the examples to your own projects. This course is divided up into five modules. In this very first module, the next thing we’ll do is just get started by setting up Jenkins, so I’ll walk you through the installation process and also how to add yourself as a user to Jenkins. Once we have a lab environment set up, we’ll move into creating application builds and although I’ve been talking up the new pipeline job type, we’ll first start out with the older freestyle project job type. There is nothing wrong with this job type; it’s been around since the beginning of Jenkins and you’re likely to put it to use yourself in certain situations so I want you to start there and have a good understanding of that job type before we move on to understanding the pipeline job type. After we’ve set up a basic build process for an application, we’ll look at a more rounded, continuous integration process by bringing in testing, triggering, and notifications and we’ll do this using the new pipeline job type. 
As we work on our CI process, we’ll start to realize that we might like to have a few additional plugins to give us some more visualizations inside of Jenkins, and then we’ll wrap up this course by extending our continuous integration process into a continuous delivery pipeline.

Installing and Running Jenkins

First we need to get a copy of Jenkins downloaded to be able to run it and it just so happens that there are many ways to run Jenkins and thus many options for downloading. To check out these different approaches, you can click the Download button here right in the center of the page or you can go up to the bar and click the Download button and you’ll be presented with two separate options. On the left-hand side you have the LTS Release, which is a long-term support or stable release of Jenkins and the on the right-hand side you have the Weekly Release. We’ll be using the LTS Release in this course, specifically version 2.7.1. By default we have a button here that downloads the war file release; however, you’ll also see some other choices in the dropdown list here for various different distributions. For example, Mac OS, and the difference here is that these various different distributions oftentimes include installers that set up a service so that Jenkins persists across reboots, it comes up automatically, so basically there’s a Daemon behind the scenes to make sure that Jenkins is always running. For now though, we’re going to focus on the war file download because it’s the simplest way to get up and running so go ahead and just click on 2.7.1 and grab the .war release file. I can hop over to a terminal and make sure I’m in the directory where I downloaded that file and right from this directory, all I have to do is type java -jar and then Jenkins.war. So the one additional piece of software you’ll need here is to make sure that you have Java installed on your system. Java 8 is recommended, but you can also use Java 7 and you’ll also need to make sure that Java is in your path. Okay, now we can go ahead and execute Java here. Jenkins will start up and begin the setup process. Join me in the next clip where we walk through the installation.

Initial Setup and the Data Directory

As Jenkins is starting up, I’m going to split the screen here and in a browser, I’m going to pull up localhost 8080. 8080 is the default port that Jenkins runs on and as Jenkins is starting up, you’ll see a status message that indicates it’s getting ready to work. Take note. Up above in the terminal there’s an admin password that’s been generated for you. You’ll need this to sign in to this installation of Jenkins to be able to walk through the setup process. Now take note. Down below, you can see an Unlock screen comes up when Jenkins is ready for you to sign in. This gives you a secured Jenkins installation out of the box, so if this is running on a public facing site, that’s okay because you’ll need the admin password to get in. So hop over to the terminal and copy that password, paste it in, and then go ahead and click Continue to continue the installation process. After a few seconds you’ll be presented with the option to either customize Jenkins or you can go ahead and install the suggested plugins. So before I do that, I’m going to pop this down to the lower half of the screen so you can see some of the output as we install plugins and then I’m going to click to install and you’ll notice that we’ve got some more output coming out of our running instance of Jenkins and that’s because the terminal output we had running from the war file, just comes right out to whatever terminal you’re connected with. So you can check the status here if you have any problems, or you can go ahead and just maximize this window and watch as each of the little checkmarks show up, indicating that that plugin is done installing. One of the things that’s great about version 2 of Jenkins is that the list of suggested plugins is much more robust. Back in v1, there weren’t very many and you often had to start out installing plugins just for some very basic uses cases; that’s no longer the case. 
As you can see here, there are quite a few plugins available out of the box and most notably of these is the Pipeline plugin, but there’s also a Git plugin, which is pretty prevalent today; there’s a GitHub plugin, and then there are various other plugins. For example, you’ll see some Build tool plugins and in Gradle. One other helpful piece of information as this is starting up, Jenkins uses a data directory on disk to store everything, there’s no database, so all the config and all the build information goes into the file system. If I hop back over to the terminal here and split the screen, the default location because we’re running from JAR file, is going to be in my user directory in a dot folder. If you look in here you’ll see various different configuration elements and other aspects of Jenkins. So just to let you know what’s being modified on your system, this is the folder that’s created. So if you wanted to wipe things out here, you could just wipe out this folder and then when you start Jenkins back up, it’ll run through the installation process again, so this is also a folder you can back up if you’re working on something and don’t want to lose it. Now you can see in the screen behind here for the installation process, we’re now presented with a screen to create an admin user. So we have Jenkins up and running now. Join me in the next clip where we’ll walk through a brief tour of the web UI.

Default Security

Once all the plugins are installed, you’ll be presented with a screen here to set up an admin user. Now you can go ahead and continue as admin, but let’s take a moment here and create an account that we can use instead and that’s because you’ll want to have your users sign in individually so they can have their own views and configuration inside of Jenkins. So users will have individual user accounts. So go ahead and set up a password. Once you’ve got that then you can go ahead and click Save and Finish. With that your user is set up so go ahead and click on Start Using Jenkins. This landing page you see here is the Jenkins web UI. You’ll be coming out here quite often to find out the status of builds and various jobs that you’ve set up, so let’s take a quick moment to learn about this interface and see how we can navigate around. First off, you’ll notice that I’m signed in as that user that I set up and if I want to log out, I can click Log Out here. When I do that or when people just come to the Jenkins site initially, they’ll be presented with the log in screen, at which point in time they need to put in their username and password. So by default the Jenkins web UI is secured for you, but let’s go ahead and check out how that’s configured so you can start to find your way around. This default landing page here is the portion of Jenkins where you can get information about jobs that run. It’s like the user portal. There’s also an administration portal and you can get to it over on the left-hand side here under Manage Jenkins, and then there will be a big long list here of different aspects that you can configure, come into the Configure Global Security item, and in here you’ll see the default configuration. So right now we have security enabled and then by default we’re using Jenkins’ own user database. 
Notice there are other options here, for example LDAP, and these options that are listed under Security Realm have to do with authentication, so knowing what users can authenticate with the system, how they authenticate, and then also what groups they belong to. You might also take some of these other options like allowing users to sign up, which will add a form for users to request an account, and then down below under Authorization, this is the second half of Security. First we authenticate to specify who we are and then this is the privileges that we have access to and so you can see right now, once somebody logs in, it’s a free-for-all; they can do whatever they would like to the system and that’s fine as you’re learning, but you’ll want to lock this down once you go to release this into a production environment and of course, there are some options here you might care about like allowing anonymous access, so we could set up allowing anonymous read access and take a look at this in this course. You’ll definitely want to look at the matrix-based security and project-based matrix authorization if you want fine-grained control over security, otherwise, just having people log in is probably enough. So that’s the default security. Let’s go ahead and save our allowing anonymous read access, and then I want you to scroll down here and I want to show you how you can set up users. So you can go under Manage Users. If you want to add additional users, you can come in here and add them right on this form and that’s because we’re using Jenkins’ own user database in this course. So you could come down and create an additional user if you would like. In the next clip, let’s take a look at that read-only access that we just enabled in the security settings.

Anonymous Read Access

By setting the read-only security to true over in Configure Global Security to allow anonymous read access, when we now log out, you’ll see that we have a different interface on the landing page. We’re no longer prompted to log in. We instead have what looks like a minimalistic user interface. This is the read-only access that we’ve enabled. This can give people the ability to poke around and look at the status of things inside of Jenkins without changing things, and of course, if you click Log In then, you’ll be taken to a form that looks like we used to sign in. After which you’ll see many more menu items show up. It’s very likely if you use Jenkins internally that you’ll want to at least turn on read-only access, especially in situations where the information isn’t sensitive and convenience is more important. So now that we have Jenkins up and running, let’s take a minute and talk about one other way that you’re very likely to want to run Jenkins and that’s via Docker.

Running Jenkins with Docker

One of the easiest ways to run Jenkins, avoiding the need to understand the different installation types for different operating systems, is to simply run Jenkins in a Docker container. If you go out to the Docker Hub, you’ll find that there’s an official repository for Jenkins that serves as a great starting point if you’re already familiar with Docker. If you scroll down, you can read through the description of how to use this image, including the port mappings and a volume mapping, so that you can open up the correct ports and persist the Jenkins home data directory across container restarts and the destruction of your container. You are more than welcome to check out the volume mappings. I’m just going to grab the first command here and use it to start up another instance of Jenkins. With that, I can hop over to my terminal; this assumes that you already have Docker installed. I can paste in that command and then make one small change to it: since we already have Jenkins running on port 8080, I’ll change the Docker container version of Jenkins to run on port 8090 on the host. This then maps to the internal port 8080, and on the end we could add the tag for 2.7.1 to make sure we’re using the same version. So let’s go ahead and run our docker run command. You’ll see similar output to what we saw before; the only difference is that we’re running Jenkins inside of a container, which really is not much different than running it as an application on our computer. We can copy the generated password and then come out to a browser and point at port 8090 (make sure you use whatever port you mapped). You’ll see the standard message that Jenkins is getting ready to work, and you’ll land on the Unlock page, where you can paste in that password and follow the same process we used to set up Jenkins locally.
The last thing I have to say about using Jenkins inside of a container is that you’ll be responsible in this course for installing the appropriate software into that container, so that’s something you need to be knowledgeable about; if you aren’t already, I’d encourage you to just follow along with Jenkins running locally, and later on you can explore how to run Jenkins inside of a Docker container. One of the things I really like about running Jenkins inside a Docker container is that I can have multiple versions running on my computer, no problem; they won’t conflict with each other. It’s very easy to spin up instances of Jenkins, test them out, and spin them down, which is invaluable when you’re learning and also when you’re testing new releases of plugins or even Jenkins itself. As you peruse the web, you’re going to find a lot of people talking about using Docker with Jenkins, including what we’re doing here, running Jenkins inside of a Docker container. What we have here is our master node, our server node. You’ll also find people talking a lot about running slave or agent nodes, which are nodes you can use to scale out a pool of workers to perform the various jobs that you have. Since you see a lot of this, Docker is probably a skill set you’re already looking at acquiring if you don’t have it already, so I’d encourage you to try it out. Jenkins is very Docker friendly. Alright, now that Jenkins is set up and running, let’s move on to the next module where we’ll start building our application code.
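The Docker steps above can be sketched as a couple of shell commands. This is a minimal sketch: it assumes Docker is installed, and the image name and tag (jenkins:2.7.1) come from the course's era, so check Docker Hub for the image name and tag you actually want today.

```shell
# Start Jenkins in a container, mapping host port 8090 to the container's 8080
# so it doesn't clash with a Jenkins instance already running on the host.
# The named volume keeps the Jenkins home directory across container restarts.
docker run -d --name jenkins2 \
  -p 8090:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins:2.7.1

# The generated initial admin password appears in the container log
docker logs jenkins2
```

Then browse to http://localhost:8090 and paste in the password, exactly as with the local install.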

Creating Application Builds

Anatomy of the Build

Now that we have Jenkins up and running, let’s put it to use to build some application code. Before we dive into this demo, let’s first walk through the anatomy of a build process. First we’re going to clone our application source code from GitHub. We’ll clone that into a local workspace, a folder that we create on our hard drive. Once we have that code, then we’ll compile it to make sure that it compiles. We’ll run unit tests on it to make sure it’s healthy code, and then we’ll package up the result and in this case we’ll produce a JAR file. This is the process we’re going to work with, first manually so we can see the commands that are involved and then we’ll port those manual commands over to automation inside of Jenkins. So let’s get started by cloning our code from GitHub.

Cloning the Sample Project

I have a Java example that we’ll begin with. It’s based on Spring Boot, and the reason I chose this is that it’s a rather easy project to compile, test, package, and even execute. So the first thing I need you to do is head over to my GitHub account and take a look at a repository called jenkins2-course-spring-boot. You could also grab the official Spring Boot project, but that might contain a newer version of the code, so I went ahead and forked it so that it stays static in time and you can always use the same version that I used. Inside of here there are some samples, and we’ll be focusing on one of them in this module: spring-boot-sample-atmosphere. So the first thing you’ll want to do is clone this repository locally. Come over to the terminal and make a directory; I’ll call this jenkins2. I’ll cd into there and then clone that repository. Once that’s done, we can take a look at this project, so we’ll change into the directory we cloned, list the contents there, and you can see the folders we saw out on that GitHub repository. You’ll want to change into the spring-boot-samples folder, and inside of there are all the samples. We’re looking for the atmosphere sample; that’s where we want to change into next. If I clear the screen and list the contents, you’ll see here we have a source folder and a POM file. This is a Java app that’s using Maven as a build tool, so join me in the next clip where we download Maven and work on manually building this project.
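The clone steps above look roughly like this. The repository URL is an assumption built from the repository name mentioned in the course, so substitute the clone URL shown on the actual GitHub page:

```shell
# Make a working directory and clone the course's fork of Spring Boot
mkdir jenkins2
cd jenkins2
git clone https://github.com/<github-user>/jenkins2-course-spring-boot.git

# Drill down to the atmosphere sample used in this module
cd jenkins2-course-spring-boot/spring-boot-samples/spring-boot-sample-atmosphere
ls   # per the notes: a src folder and a pom.xml
```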

Manual Compilation with Maven

So we’ve downloaded our sample project now and just taking a look at the contents here, we have the POM file, which is used by Maven to figure out how to compile, test, and package up this application. Inside of the source folder we have the code for the application and then we also have some tests for this app. If we were to go through the process of manually building this, we’d first need Maven to kick this process off. Now if you already have Maven, that’s great. If you don’t have Maven and have never used Maven, don’t be afraid; it’s a tool that’s pretty straightforward to use and even if you don’t think you’ll ever be working on building Java applications, I’ve found that it’s really helpful to learn how to build different types of projects that involve different languages and platforms because that really helps wrap my mind around the commonalities that are involved in building, testing, packaging, and deploying software. So let’s walk through the process of what this looks like with Maven. The first thing you’ll need to do is install Maven on your computer and there are some instructions out here on maven.apache.org, which basically involves downloading a zip file and then extracting that zip file so that you can use the Maven executable that’s inside. So you’ll see down here at the end of these steps we’ll be using the Maven command. There are going to be other ways to install this. If you have a package manager for your system, I’d encourage you to use that. For example, I’m going to use brew here on a Mac to install Maven. You can do the same on a Mac. If you’re on Windows you could check out Chocolatey for a simple install process or you could go through the steps that are on that page. Once that’s installed, we can come back to our project and we should be able to type mvn -v to get the version information about Maven. 
Now I’m using version 3.3.9, so if we take a look again at the structure of our project, you can see right now we just have a source folder. So let’s just say we’re at the point in time where the code is complete and we’d like to run it through a compilation process: we’d like to compile our Java source code and generate class files. This is a lot like compiling a DLL or an executable on other platforms and languages. To do this, all we need to type is mvn to invoke Maven, and then we pass the compile phase to Maven. Long story short, Maven will compile the Java source code into class files, and if everything went successfully, you should see BUILD SUCCESS in the output. Then if I look at the contents of this folder again, you’ll see an entirely new folder called target, so we now have our compiled application, and here are those class files that I was talking about. So the very first step of building an application is compiling it. Before we move on though, what are some other steps that you have performed in the process of building an application? What can you think of that we might like to do next? Join me in the next video where we continue the process of building this software.
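As a sketch, the install-and-compile sequence from this clip, assuming a macOS machine with Homebrew (use your own package manager, or the zip from maven.apache.org, otherwise):

```shell
# Install Maven via a package manager
brew install maven   # macOS; on Windows, Chocolatey offers: choco install maven

# Verify the installation; this prints the Maven version, e.g. 3.3.9
mvn -v

# From the folder containing pom.xml, compile the Java sources to class files
mvn compile

# The compiled classes land in a new target/ folder
ls target
```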

Manually Testing, Packaging, and Running the App

So we’ve compiled our application. The next thing we might like to do is perform some tests on it. You can pass the test phase to the Maven command to execute any tests that are associated with this project. Once the tests are done running, you should see some output indicating that one test was run, with no failures, no errors, and nothing skipped, and you’ll also see that the build was successful. After testing our application, the next thing I can think of is packaging it up. In Maven land there is a package phase that you can include; as you can see, Maven is a pretty all-encompassing build tool. Oftentimes other platforms have separate tools for the various phases we’re talking about here. For example, in .NET land, you might use a tool like NUnit to run tests and then a tool like NuGet to package up the results, but in Java land with Maven there’s a package phase, and that will typically generate a WAR or a JAR file. In this case we’ll be generating a JAR file. In the output here, right at the end of calling mvn package, you’ll see that our tests were run again, and that’s because tests run as part of the packaging phase. The new piece of output you’ll see is the building of a JAR file. This is a lot like a zip file; in fact, you can rename it to a zip file and get access to the files inside of it. It’s a package that contains our application, all of its dependencies, and the resources and various things we need for that application to run. So let me clear the screen, and if we look inside of the target folder, that’s where that JAR file was generated. Join me in the next clip where we talk about the beginning of automating this process.
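The full manual sequence from these two clips, run from the sample's folder, comes down to a couple of Maven phases (each later phase runs the earlier ones too):

```shell
mvn test      # compiles (if needed) and runs the unit tests
mvn package   # runs the tests again, then bundles a JAR into target/

# Inspect the packaged artifact; the exact file name depends on the
# project version in the POM, so a wildcard is the safe way to find it
ls target/*.jar
```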

Creating a Jenkins Job and Configuring a Git Repo

Now that we’ve seen how to build this particular project from the command line, we can start to relate the commands we were running to an automated build that we set up inside of Jenkins. Keep these commands in mind as we’re working through Jenkins, and always remember that Jenkins is more or less just taking care of running these commands for us. So let’s come back over to Jenkins and log in, and as a very first step, let’s just work on what it takes to compile our application. Because this is a new installation, we have no jobs inside of it, and thus we get this nice welcome message to create a new job. A job in Jenkins, also referred to as an item, just represents a task that we’d like to be executed. It could just be a series of commands; in this case, a git clone followed by an mvn compile. So when we create a job, we first have to give it a name. Let’s call this atmosphere, and after we give it a name, we need to come down and pick from a series of choices for the type of job or item we’re going to create. In this module we’re going to focus on freestyle projects. This is one of the original styles of jobs in Jenkins, so we’ll start here so you have a good understanding of this basic job type, and later in this course we’ll move into pipelines, which are actually an evolution of a series of freestyle projects. We’ll get into that later though; for now, we’re starting with a freestyle project. So select that and click OK. Once you’ve done that, you’ll be taken to a job configuration screen. We’ll get into the details of all the various components of this, but for now, let’s focus on the two tasks we have at hand. Number one, we’d like to set up to clone our repository, and if you look across the top here, there are tabs that can jump you down to different parts of this configuration.
Let’s click on Source Code Management because that sounds like what we’re working on right now. Out of the box, with those suggested plugins that come with Jenkins, we have support here for both Git and Subversion, but you can use other repository types as well; you’ll just need to install plugins for those. We have Git out of the box, so we can pick that, as we’re working with a GitHub repository. If I come over to the command line and search through my history, I should be able to find the URL that we cloned from, copy it from right there, bring it back over to Jenkins, and paste it into the Repository URL field, and that’s all we have to do. This repository is public, so we don’t need any credentials. By default, this will build the master branch. So that’s the source code management; that will perform our git clone for us, and then the next thing we want to do is compile our application.

Compiling in Jenkins

Now we want to move on to compiling our source code. If you come to the Build tab here, this will scroll you down to the Build Environment; by the way, these tabs are just a shortcut for manually scrolling through this. So we can come down to the Build section and click this dropdown to add a build step. We have a couple of choices here, and these are the ones installed out of the box. In this case it just so happens that there’s a special build step for Maven projects, so we’ll choose that option, though most of the time you’d start with executing a batch file on Windows or a shell script on Unix, Linux, macOS, etc. In this case we happen to have a special type that will invoke Maven for us, so let’s go ahead and select that. This is asking us for a set of Maven goals; in this case we want to compile our source, so we just type in compile there, and what we’ve set up here is exactly what we did at the command line with mvn compile; there’s really no difference, and we’ll see that in a moment in the output when we execute this job. Now we’re not quite done yet. Can you think of what we might be missing versus what we did at the command line? It just so happens that we’re working with a folder that’s not in the root of the repository. We’re inside of spring-boot-samples and then the atmosphere sample, so let’s copy that and bring it over, and then click Advanced; many of these build steps have a series of advanced options that are hidden by default because you usually don’t need them. It’s not typical that you would put the POM file in a nested folder, except in an example like this where a git repository has many projects in it. So in this case we need to specify the location of our POM file: it’s in that folder and then pom.xml. There are some other options you could set, but we’ll leave those exactly as they are; we just needed to specify the location here.
With that set, we can go ahead and hit Apply or Save. Let’s choose Save, which will close out the configuration and take us to the Project view. Since the job type we chose was a freestyle project, we’re taken to a Project view; depending on the type of job you set up, you’ll be taken to different views, and later on we’ll see a Pipeline view. So this is a view of our job, and we have the configuration over here on the left side that we can click to get into. Let’s check that out real quick: you can click Configure to get back into our settings, and whenever you save, you’ll be brought back to this project view. Now there are a bunch of options, but let’s focus for now on just compiling our application. So how do we run those commands that we set up? We basically have a git clone and an mvn compile. Over on the left-hand side here we have a Build Now option, and that makes sense because we are building our application, so let’s go ahead and build our app. If you click on that, you’ll see Build Scheduled show up, and down here there’s a Build History widget where a record just showed up with #1 on it, with a little progress bar spinning along like a barber pole, so something’s going on. We can click on this to get more information, and when we do, we’re brought to a different view, what’s called a Build view. This gives us the status of this particular application build, and if you come into the Console Output on the left-hand side, you’ll see the commands that are firing off. Now it looks like this build just completed. If we scroll here, and maybe I’ll zoom in a little bit to help out, let’s talk through some of this. So this is the Console Output. A couple of things.
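Under the hood, the Maven build step we just configured is equivalent to a single command run from the workspace root; the POM path here follows the repository layout described earlier:

```shell
# What Jenkins effectively runs for this build step: point Maven at the
# nested POM with -f, then invoke the compile phase
mvn -f spring-boot-samples/spring-boot-sample-atmosphere/pom.xml compile
```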
First off, we’re cloning the Git repository that we set up, and we happen to be cloning it into this folder. This folder is what’s known as a workspace in Jenkins. A workspace is just a location where we check out code and perform other operations to build our application, just like the workspace we checked out into when we were manually building our application. We have quite a bit of output here showing the progress of cloning the repository, and once that’s done, you’ll see some output with regards to compiling our application. Focus specifically on the line that has mvn, then -f with our POM file, and then compile. So here’s our mvn compile, and it just so happens we specify an extra flag to Maven to point out the location of our POM file. All the output down below will be exactly like what we saw when compiling from the command line, and you should see a BUILD SUCCESS below, assuming that succeeded. You’ll also see that three source files were compiled into our target folder, so there’s that same target folder we created manually at the command line; that’s now created by Jenkins as well. So we’ve got our app compiling now; join me in the next video where we take a look at where all this is happening and start to understand what we would then do with our compiled application.

Peeking into the Jenkins Workspace

So we’ve just mirrored the entire command line process in Jenkins and we can see the output here, and that’s nice, but where exactly are these files? Well, I mentioned that workspace before. We can actually copy this path and go take a peek there; that’s where Jenkins performed this build. Do you notice anything familiar about the path? You should see that we’re working from our Jenkins home directory, which we talked about in the previous module. Inside of there, there’s a workspace folder, and inside of that is a folder called atmosphere, and it’s called atmosphere because that’s what we called our job. Each job that you set up has its own separate workspace, so the files for each job stay isolated. Let’s go ahead and copy this. We can hop over to the command line; I’ll change to my home directory and clear this out, and if you look in here, we have that .jenkins folder. If I list the contents, you’ll see the workspace folder way at the bottom. So we can cd into that, clear the screen, and list the contents, and you’ll see the atmosphere folder, so we can change into that as well. Now I have a question for you: what do you think is going to show up when we list the contents of this atmosphere folder? If we list the contents, you’ll see a lot of files. These are the files that were cloned from our Git repository, so just like the copy that we manually cloned, this is the location of our automated clone. Inside of here is spring-boot-samples, and inside of there is the atmosphere project, and if we look there we’ll see the same three elements: our POM file and our source and target folders. Let’s clear the screen and run a tree command on this whole folder, just like we did on the folder that we manually pulled down, and you can see we have the same output here.
At this point you’ll see that we only have class files in the target folder. We haven’t packaged up our app yet so we don’t have the JAR file, nor have we tested our app so we don’t have any of the testing files either. Now while it’s nice to come and poke around on the disk to understand what Jenkins is doing, you don’t have to do that. Join me in the next clip where I show you how to browse the workspace inside of Jenkins.
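To poke around the same way yourself, the on-disk layout sketched in this clip looks like this; the paths assume the default ~/.jenkins home directory and the job name atmosphere used in this module:

```shell
# Each job gets its own workspace under the Jenkins home directory
cd ~/.jenkins/workspace/atmosphere

# The workspace mirrors the manually cloned repository
ls spring-boot-samples/spring-boot-sample-atmosphere
# per the notes: pom.xml, src, and (after a build) target
```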

Browsing the Workspace in Jenkins

I think it’s great to dig around in the file system and see what Jenkins is doing behind the scenes, but chances are we wouldn’t want to have to do this in a production environment all the time because it might be an extra step and hassle to ssh into or connect to this remote machine to browse the files. If you hop back over to Jenkins we can find this workspace here and browse it inside of the browser. We want to click this Back to Project link because right now we’re in what’s known as the build view. I call this the build view. It’s all the elements with regards to a particular build. Let’s hop back up to the project and on the project level, you’ll see on the left-hand side a link to the workspace, but you’ll also see this on the primary page here for the project. You can click on either of these and you’ll be taken to a folder view that will allow you to peruse through the files just like we did at the command line. You can click then on the samples and then if you click atmosphere, you’ll see the same files we just looked at and here’s that target folder and inside of here are the class files that we generated. So this is a quick way inside of Jenkins you can browse through the files and this is really convenient because the next thing you’ll probably want to do is package up the application and if you package up the application you’ll probably want to keep a copy of it and at this point we haven’t done anything inside of Jenkins to keep any of our workspace so the next time we build our application, we’ll lose everything we built here and we probably don’t want that. We’d like to keep some history of our application. So join me in the next clip where we set up packaging so that we can then archive the package.

App Packaging in Jenkins

For this next task I want to open up the workspace here inside of Finder so you can see files show up as Jenkins is executing, so I’m going to open up the samples folder and then the atmosphere sample, split the screen, put this on the bottom, and then on top come back to Jenkins. Right now we’re in the Workspace view. I’d like to set it up so we can package our application, so do you remember when we typed in mvn package? Let’s get that working now. So my question to you: how do we go about doing that? Well, to change the configuration we need to come over to the configuration element inside of our project, and since right now we’re in the project view, we can just click Configure right here. Maybe I’ll make this big so that we can configure it, and I’ll scroll down or use the tabs; this is where these tabs are really helpful. I’ll change the goal from compile to package and click Apply, and that’s changed the configuration. So what we’ve done here is change our job configuration. Next, if we want to run this configuration, you tell me: what do we need to do to test this out? We need to perform another build of our job, and that means we will generate another build associated with our job, so keep in mind that builds and jobs are separate things. A job is a lot like a template; builds are instances of that template. It’s kind of like an ice cube tray: that’s your job configuration. When you pour water in the tray and put it in the freezer, you’ll have ice cubes come out. Those ice cubes are like a build. Each time you put water in that tray, you generate a new set of ice cubes; that’s like a new build. So the job is like the tray, the ice cubes are like the builds. Now that we’ve saved our configuration, we can click Save to close this if we want. We’re brought back to the project view, and over on the left side we can click the Build Now button.
Now watch on the lower left and our Build History will have a number 2 show up and we could sit here and wait for this to complete or we can click into this and check the status as it’s executing. By the way, if you want updates, you can click this ENABLE AUTO REFRESH here and the UI will refresh periodically. Now it looks like our build is actually complete at this point and you can tell that because of a couple of things. First off, it says, Started 29 seconds ago, and it took 19 seconds to complete, but we also have this blue ball here and this indicates the status of our job and in this case, blue means everything’s okay. You would see red if something went wrong. So we now know because this ball is not flashing too, you’ll see a flashing ball when the job is still executing, we now know our job is complete. Where can we go to figure out what happened as this was executing? If we wanted to dig into what we’d see at the command line? Well, we’d come over here to the Console Output, just like we did before, and if we scroll down here now, we’ll see additional steps were executed and way at the bottom you’ll see we had our test executed and you’ll also see that a JAR file was created and if everything was okay, you should see BUILD SUCCESS. By the way, if you’re having any problems with Maven, perhaps not finding the Maven tool, it just might be that you don’t have Maven in your path, so make sure that the user that Jenkins is running as has a path that includes the path to Maven. Later in this module I’ll show you how to troubleshoot problems like this. Now if we flip over to our finder folder and we expand out the target folder, you’ll now see we have that JAR file, so that’s the same JAR file that we generated at the command line. 
Now that we have our JAR file, let’s talk about what we might need to do to keep a copy of this around and to have that conversation, let’s next turn to the topic of cleaning our workspace, which will help us understand the lifecycle of this workspace and whether or not this file in here would be good enough for historical purposes.

Archiving Artifacts

So we’ve got this workspace in Jenkins. What do you think will happen with this workspace if we run another build? Let’s say our application changed and a new version came in through version control, what will happen with this workspace then? Let’s go ahead and split the screen here and find out. So come over to Jenkins in your Project view and click Build Now, but before you do that, take note of the date modified timestamp, in this case 3:55 PM was the last time that this file was modified, the JAR file. So let’s click Build Now and if I scroll down here you’ll see in the Build History a number 3 shows up, but I can click on that, so this is Build #3 that’s executing and if we give it a second to complete, you can see it’s done now and do you see what happened with the file down below? Look at the timestamp on it. We’ve now got 4:03, so this workspace is not specific to a particular build; it’s specific to our project or our job. It’s tied to the job and it’s not tied to the build which means that we lose the files in here every time we build our application and if all we need is the latest version of our app, that would probably be fine, but likely you want to keep a history around and so the next thing you want to do out of this process of building your app is to grab a copy of the package and in this case that’s a JAR file. For other projects it might be something like a zip file, so whatever artifacts come out of your build process, you might want to keep a copy of those. Now to do that we need to come back to Jenkins and go up to our project level and configure our project. In addition to the build steps that we added, we can have steps that we add post build, so these are actions that are executed after the build completes and from this list we can choose a step that will archive artifacts and then all we have to do is specify which files we’d like to archive. 
Now in this case in our workspace, remember we’re drilled into this atmosphere sample and then inside of there we’re looking at target and then this jar file. So we can come back to Jenkins here and open up the Advanced option and grab the path that we used for the POM file, scroll down here, and we can paste this in to the files to archive and then we don’t want the POM file, we want inside of target, and then we want to grab that jar file. Now we could come paste the name in here, but that was rather long. You can also use wildcards and by the way, if you’re ever uncertain what you can use, a lot of these inputs have little question marks next to them that you can click and you can get additional information including often links that will take you to some documentation. So in this case we can use wildcards. So we can type *.jar to grab all the jar files inside of that target folder. So keep in mind these little question bubbles are really helpful and you’ll see them everywhere here, including for that POM file, though sometimes you’ll see that there’s an error if the relevant help file was not found and then you might want to refer to the official documentation instead. Now that we’ve set up this archive step we can click Save, and now to test this out, we’ll have to run another build, so we won’t be able to archive what we already have, but when we run a new build, we will be able to archive the new build results. Let’s go ahead and click Build Now. Down below we can see in our Build History that Build #4 is executing here. Now pay attention to this project view here. We’re just going to sit on this view and let it refresh and we’ll wait for #4 to complete, but pay attention. Right now we only have workspace, ah, and now we have something new; we have Last Successful Artifacts, so in addition to the workspace, it looks like we’ve got some artifacts that we’re keeping and there’s specifically one jar file that we were looking to grab. 
We have a little information about it. It’s 14 MB. We can click to download a copy of it. What we now have is a copy of this stored inside of Jenkins and if I come to Build now and click that again, we can kick off another build, #5 this time, and now we have a new build executing so we should have a new jar file that comes out of that process. Whatever the latest jar file is, that will show up on your project view here. You can see 5 is complete now so let’s drill into these and let’s see, first let’s take a look at #3. When you come to that build and this is the build view, you’ll see there’s nothing on here about an artifact. Now over here on the navigation on the left-hand side at the very bottom, this is something really convenient, you can jump between builds here with these little arrows. If you click the right arrow, you’ll increase to build #4. You’ll notice in the UI that #4 shows here. Also this is in the URL. So in Build 4 now, it looks like we have our artifact, so when I come back here and do Previous Build, you see there’s no build artifacts. Next build I have the build artifact and next build I have the build artifact, so 5 has this as well. Now we’re keeping a history of our artifacts and as changes come into our application and we build a new copy of our application, we can go grab old copies now. Jenkins becomes a place for you to store a history of artifacts and of course, you can integrate other tools as well if you’re doing that somewhere else. Now join me in the next clip where we talk about cleaning.
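Jenkins also exposes archived artifacts over stable permalink URLs, so other tools can fetch the latest build output without clicking through the UI. A hedged sketch, assuming the default port, the job name atmosphere, and the archive pattern we configured; the jar file name at the end is illustrative, not the real generated name:

```shell
# Fetch the most recently archived JAR via the lastSuccessfulBuild permalink.
# The path after /artifact/ mirrors the "files to archive" pattern.
curl -O "http://localhost:8080/job/atmosphere/lastSuccessfulBuild/artifact/spring-boot-samples/spring-boot-sample-atmosphere/target/spring-boot-sample-atmosphere.jar"
```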

Cleaning up Past Builds

Thus far we have not done anything to clean out the files each time we perform the build, and if you have been performing builds for a while now, you probably know that it’s a good idea to start from a clean slate, especially considering that most build tools will not rebuild elements like a class file or a dll or an executable if the underlying source code hasn’t changed; however, that detection mechanism can potentially be buggy, so it’s usually a good idea just to start with a clean slate. I’ve opened up a new window here in my terminal and I’ve changed back to that manual location we were working with. I just want to dump out the contents here and show you that we have this target folder and we also have the source file and the POM file. Inside of this target folder, we have elements that won’t necessarily be updated when we run Maven commands again, so when we run Maven compile, we’re not always updating the files inside the target folder, and that can be potentially dangerous. If I clear the screen here and run ls with -alR to recursively list the contents of the target folder, you’ll see most of these files were built around 3:08, the last time we ran Maven. It’s been a while now. Let’s see what happens when I run mvn package again, and this is inside of the manually cloned repository, not Jenkins right now. I’ll pause the video and let this finish, and it’s now complete. The first thing you might notice is that the process did not take nearly as long as it did the first time, and that’s because many elements are cached. If I clear the screen now and run that ls command again on the target folder, do you notice anything interesting as I scroll through these files? Take a look at the times on these files. Even though it’s much later now, about 4:17, most of these files still have the original times. Many of these files were not updated.
In fact, I’m not seeing very many files that are updated aside from that jar file we saw being rebundled; the rest of this hasn’t changed, so we haven’t recompiled anything. We’ve just packaged up the same compiled class files. Now in most cases that’s not going to be a problem, and if you are just testing things out in dev and you want really fast results, you don’t necessarily need to clean, but in the production process, before releasing an app, you should start from a known working, clean slate. From the command line here, I’m listing the contents so you can see we have the target folder right now. If I run mvn and then clean, this is the clean phase. It didn’t take very long, but you’ll see that a bunch of files were deleted, and you’ll notice in the output that the most important part is that we’re deleting that target folder where we had all those class files and the jar file, etc. If I list the contents now, you’ll see the target folder is entirely gone. So we should do this as part of our build process as well. Back inside Jenkins, we can come up to the project level, come to Configure, and then if we click this little Build section, we can hop back down and add another phase, clean, to the Maven build process. We can go ahead and click Save to apply this change, and now I’m going to split the screen again. If I come over here and click Build Now, watch what happens down below. If I scroll down, you’ll see in the Build History that #6 starts up. Do you see that?
The target folder is being wiped out and now it’s being recreated, so just like that we cleaned, and now we’re rebuilding all of the artifacts of our application. In this case we’re generating that jar file, and the jar file is the last piece that was generated. Now Build #6 is complete, and if I click on that, you’ll see the new jar file, but most importantly, you’ll see that all of the files in here have now been updated, so they’ve all been recreated. We’re not starting from anything that was part of a past build.
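The command-line portion of this clip can be sketched as the following sequence, run from the root of the manually cloned repository. This assumes Maven is installed and uses the nested module path that appears later in the course; adjust it if your layout differs.

```shell
# Recursively list target/ with timestamps to see what Maven did (or didn't) rebuild:
ls -alR spring-boot-samples/spring-boot-sample-atmosphere/target

# Delete all generated output (removes the target folder entirely):
mvn -f spring-boot-samples/spring-boot-sample-atmosphere/pom.xml clean

# Rebuild everything from scratch; the phases can also be combined in one run:
mvn -f spring-boot-samples/spring-boot-sample-atmosphere/pom.xml clean package
```

Running clean and package together is exactly what the Jenkins job is configured to do after this change.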

Build Time Trend

Now that we have our process set up, I want to talk about some of the Jenkins UI to help you find the information you’ll typically be looking for. One of the first things you might be interested in: if you take a look at Build #6, the one we just performed, even though we cleaned and wiped out that target folder, it still only took 19 seconds. That seems pretty snappy. If we come back to the first build, and you can do that by navigating in the URL here, you’ll see that one took 44 seconds, and that one was cloning our repository and compiling, so how is it possible that we’re now packaging faster than we can clone and compile? Well, it’s because even though we clean out that workspace, we don’t wipe out the repository files, so we don’t have to clone again; there are some things that aren’t cleaned up. The Maven clean command only targets the files that are created as part of the Maven process, and that’s usually okay. You usually won’t have problems with that. It’s okay to check out the source and only patch over the latest version of your application, and as long as you’ve cleaned up the output files, that’s usually a pretty safe process; however, if you’re paranoid, you can go in and turn on settings to wipe out even more. So if we go back to the project level, come to Configure, and scroll down here, you’ll see some additional settings, including the option to delete the workspace before the build starts. So there’s still some caching going on, and that’s why things are fast. But you might be wondering, how fast is the build process? Once you have a build process that’s locked down and not changing, and you’ve run it a few times, you’ll have access to this timeline, and it should help you find out if the amount of time has skyrocketed or dropped. So you can see a history here of how long the build takes.
So ignoring the first couple ones where we set things up, 4, 5, and 6 all seem to indicate a build time that’s approximately the same, so even adding clean here in step 6 didn’t add much time.

The Jenkins Dashboard

Now inside of our project view and I know we’re in the project view because we have Jenkins and then atmosphere, if you click back to dashboard or the Jenkins homepage here, you’ll see where we used to have that Create New Job link, we now have a little table that’s shown up and we now have one record in that table and that represents our atmosphere job and you can see the name there. You see a few icons here. The blue ball indicates that the last build was successful and the other icon here is a weather indication. You could think of the weather as more of a long term indication of the stability of this build and in this case we’ve had a lot of successful builds so the stability here is pretty rock solid and it’s kind of like having good weather so a sun shows up. If things aren’t so good, you’ll see some clouds; if things are really bad, you’ll see what looks like a storm. You can get a little bit of information about this particular job, the last successful build, the last failed, and in this case we don’t have any failed builds, and then the last duration and then you also have a button here and that’s the button that will trigger a new build. So you can click that and a new build will kick off and you’ll see down in this little widget down here on this dashboard, you’ll see that the atmosphere #7 build is executing. This happens to be an executor status indication. This is telling you where this build is executing at and in this case, this is just running on our local computer. If we have more than one machine as a part of our Jenkins cluster, we could see the status of where things are running at. You can always run more than one build. If I click this a bunch of times here, watch the Build Executor Status in the lower left, you’ll notice we have two now, one that’s running in the executor slot and one that’s in our queue. We can always cancel these by clicking the little red X and that will stop that build. 
Now there’s usually not a visual indication of that other than it usually disappears rather quickly and if you have disabled Auto Refresh, make sure you refresh to see the status here. Now in this case we have a gray ball, which means that the build was cancelled or aborted and if we click into atmosphere now, on the lower left side you’ll see the build history and #9 was cancelled so it’s gray, but we have #8 that also just succeeded and #7. So the dashboard is helpful to navigate at a high level through the various different applications that you have set up to be built in Jenkins.

Troubleshooting Build Failures

I’ve got a challenge for you. In between clips here, I manipulated our configuration so that our job now has a problem, and I went ahead and ran the build again. You can see here that we’ve got a red ball, which is the status of the last build, so we had a failed build. You’ll also notice now that the weather is not sunny but partly cloudy, and if you look at the little popup, you’ll see that the build stability shows one out of the last five builds has failed. What can I do to figure out what happened here? That’s my question to you. Take a moment, pause the video, and think about how we could figure out what went wrong. Well, let’s drill in and see what’s going on. If I click on atmosphere, I’m brought to the Project page, so this gets me a step closer. I can then click on the build, Build #10. By the way, I could also go back to the dashboard and get here too. You can see there the last failure was #10; I can click right there and get to the same place. So that jumps me right into Build #10. Now from right here, if you haven’t thought of anything yet, stop and think about what we could do to figure out what went wrong. One thing that’s really interesting to me is that we have a Build Artifact even though our build failed. That’s interesting. What does that tell you? The next thing that comes to mind, and probably the first thing that always comes to mind for me, is to jump into the Console Output. I don’t have any information here about what happened, so let me jump into the output and see if that can help me out at all, and as I scroll through, boom! I can see something red, and if I zoom in, you’ll see the red message says that we have an unknown lifecycle phase, cleans, and that indicates what I did to change this project. So if I come back over to the project level and go into Configure, I can fix this and change this back to clean instead of cleans and go ahead and save.
Now the reason we still have the artifact: even though our build failed, and let’s take a look at that again, so let’s go into #10 here, we still have this artifact, and that’s interesting. If we go into the Console Output again, we’ll realize that because we didn’t clean with Maven, our files were still around, and this points out something interesting about post-build steps: they’ll still run even if the build fails, so in this case we still archived the old artifact. That’s something you’ll need to watch out for. What makes this less troubling is that if we come back to the build level, we know this is a failed build, and we’d never deploy artifacts from a failed build. Let’s take a look at one other problem you’re likely to have. If I come back to the command line here and I uninstall Maven, and then come back to Jenkins and go up to the project level, what do you think will happen when I click Build Now? So I click that, and in the Build History, even though we’ve fixed using clean instead of cleans, we still have a failed build. We can click on that to drill in. If we come back to the Console Output, though, this is different this time. We don’t have red indicating that there’s a problem, so you’re not always going to get an easy visual indication that there’s a problem, and this is a problem you’re likely to run into as you’re starting out and downloading tools that you need to make available to Jenkins; in this case we just can’t find the Maven program, and that’s obviously because we uninstalled it. Sometimes it’s easy to gloss over an error like this because it’s not red. Now while we’re at it, I’ll hop back up to the Dashboard level so that I can show you the icon now. We don’t even have a sun, and that’s because only 60% of the last five builds have passed, so now things are a little bit worse, and if I run this again, you’ll see it’s raining now, and if I run it again, you’ll see there’s lightning.

Challenge and Importing Job config.xml Files

I’ve got a challenge for you. Now there are a couple of ways you can access the files for this challenge. First you can come out to my Gist out on GitHub using my username here, g0t4, or I’ve prepared a link here, git.io/vKSVZ, the last four are capitalized. You can click on that and be taken to this page as well and then down here, under module 2, you’ll see there’s a What am I config.xml. If you click on that, this is the file that you’ll need to be able to work on this challenge. What I’ve done is prepared a config.xml file and it just so happens to be that Jenkins behind the scenes stores all the job configuration in an xml file. So I’ve prepared an additional xml file here for a new job and I want you to load up this job and I want you to get it running, but I’ve broken a few things inside of this job so you’ll need to figure out what these are, fix them, and then get the build running. That will be your indication that you’ve succeeded with this challenge. So I’ve already copied this file down to my computer; you’ll need to do the same. Make sure it’s named config.xml. I want to show you now how to load this into Jenkins. So if I hop over to the command line and I’m inside of the .jenkins folder here, so this is the Jenkins home directory and I’m in the Jobs folder and if I list the contents here, you’ll see a job folder for the atmosphere project that we just created and if we take a look inside of the atmosphere project, you’ll see a couple additional folders and files. Notably you’ll see a config.xml file similar to the one that you just downloaded. You’ll also see a folder that has builds inside of it and then you’ll see some links to the lastStable and lastSuccessful builds as well as the nextBuildNumber. I’m sure you can guess what most of this is. The part that I want to focus on is that config.xml file. To make a new job we don’t even have to use the Jenkins UI, we can just make a new directory in here. 
We’ll call this atmosphere2, and then we need to copy the file that we downloaded, wherever you downloaded it, into the new atmosphere2 folder. Now if we look inside of atmosphere2, you’ll see we just have the config.xml file. That’s all we have to do to get the file into a location where Jenkins can load this as a new project. Those other folders like builds? You don’t need to worry about creating those; Jenkins will create them when it loads this job. So now we need to switch back to Jenkins, and if we reload the UI here, you’ll see that we don’t see any atmosphere2 job, and that’s because we need to tell Jenkins to reload the configuration from the drive. So we can come out to Manage Jenkins and then click into Reload Configuration from Disk. We’ll get a prompt confirming this, click OK, and within a few seconds, Jenkins will have reloaded the configuration and we should see our new job, and we do indeed see our new atmosphere2 job. Now if you had any trouble doing this, for example, if you’re running Jenkins as a different user and you have a permission issue, you’ll want to come into Manage Jenkins, take a look at the system log, and click into All Jenkins Logs; if you look for atmosphere2 in there, you’ll see any issues related to the job, though in this case it doesn’t look like anything shows up. So that’s where you’ll want to look, and the indication that you have a problem is that on the dashboard the job will not have been loaded. But we have the job loaded now, so now I want you to do the same thing: try to build this job and work through the issues that you run into until you have a successful build. You can then check the end of this course for the solution to this exercise, where I’ll show you the problems you would’ve run into and how to fix them.
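From the shell, the whole import boils down to a few commands. This is a sketch that assumes the default ~/.jenkins home directory; the stub config.xml written inline below stands in purely for illustration, and you would copy the real downloaded file there instead.

```shell
# JENKINS_HOME defaults to ~/.jenkins unless you've configured it otherwise.
JENKINS_HOME="${JENKINS_HOME:-$HOME/.jenkins}"

# The folder name under jobs/ becomes the job name in the Jenkins UI.
mkdir -p "$JENKINS_HOME/jobs/atmosphere2"

# Put config.xml in place (minimal stub for illustration only; in the
# challenge, copy the file you downloaded here instead):
cat > "$JENKINS_HOME/jobs/atmosphere2/config.xml" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<project>
  <builders/>
  <publishers/>
</project>
EOF

# Jenkins creates builds/, nextBuildNumber, etc. itself when the job loads.
ls "$JENKINS_HOME/jobs/atmosphere2"
```

After this, Manage Jenkins > Reload Configuration from Disk is what makes the new job appear.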

Anatomy of the Job

What’s the difference between a job and a build? If you walk away from this module with one thing in mind, I want it to be the difference between these two concepts. A job is a process that we define. It could be something as simple as compiling, testing, and packaging our application, as we saw in this module. As you saw, a job is defined at this point by a freestyle project job type, and that job definition is composed of a job configuration; behind the scenes, that config is indeed stored in xml files, just like CruiseControl, but we have a nice interface in Jenkins to configure it graphically. Each time we kick off the job by pushing Build Now, we take the job configuration and we spit out a build of our application, and if we click the Build Now button again, that will result in another build of our application, again running the entire process defined by the job configuration. Each time we run this, we spit out artifacts that are the result of whatever process we define. So the key difference between a job and a build is that a job can have many builds associated with it, and a build represents the result of executing a job. Even with the new pipeline job type that we’ll take a look at next, we’ll still have builds that represent the result of executing that job type.

Testing and Continuous Integration

Continuous Integration

We now have a process in place inside of Jenkins that allows us to, at the push of a button, take our application source code, clone it into a workspace, compile it, test it, and package it up. We can capture the packaged artifacts and keep a history of those, and if something goes wrong along the way, we can find out about it, dig into the problem, and fix it pretty quickly; however, this is just the beginning of what we might like to have. If you are like me at all, you’re probably not going to remember to come out here to Jenkins every day or every time you update your software and commit in version control. You’re probably not going to remember to come out here, run the build, and then wait for the output, especially if your build process takes some time. So in this module we’re going to look at rounding out our build process, fleshing it out into a continuous integration process, a process we can use to quickly get feedback about problems in an automated fashion. We’ll set up triggers to execute our jobs automatically. We’ll set up notifications so we can find out about problems, and we’ll also see how we can expand the reporting here to get some high-level overviews of testing, because right now, while we’re running the tests, the only place we can go to see the test results is inside of the Console Output. It would be nice to have the test results parsed and summarized at a high level to give us some additional information, so we’ll see that as well, along with some other static analysis tools that we might like to plug into the process.

Adding Steps to Our Freestyle Project

As we round out our build process to add additional steps and some automation, we could come into the existing freestyle job that we have and configure it to add additional build steps. We could come down here to the Build steps first and add some new build steps; for example, we might want to use a generic shell or Windows batch build step to execute some other program besides just Maven. You might have some other tool that gives you code coverage in your application or performs some other part of the build process, so you could add these as well and have as many steps as you’d like. There are even plugins that expand the list of choices here for build steps beyond what we have out of the box. We can also come up to the Build Triggers section and set up triggers, and then we can come down to the Post-build Actions to add recording; for example, we might like to have JUnit test results parsed and reported back in the UI in a nice graphical fashion. We can use the Publish JUnit test result report post-build action to do that. We might also like to send an email notification, and again there are also plugins that can add to this list of post-build actions. For example, there are tools that can help you with other test result formats besides JUnit. There’s an XUnit plugin that provides generic support for quite a few different testing frameworks, or you might get something like a code coverage reporter to plug in. There are many different things you could plug in here, in build steps or in post-build actions, inside of this freestyle project, but this is really the old way of doing things, and with Jenkins 2 the emphasis shifts to the Pipeline plugin, as it’s much more flexible. So that’s what we’re going to do in this module.
We’ll look at setting up the same build process we have right now in a Pipeline job instead of a freestyle project and then we’ll start to round it out by adding the additional things in that we just discussed, but I do want you to know that for historical purposes you could continue to use the freestyle project. There’s nothing wrong with that; however, pipeline projects tend to be more flexible, so let’s take a look at that next.

Creating a Pipeline Job to Execute Maven

Now we could start out with some long-winded explanation of what exactly these pipeline job types are, but I prefer instead to take the freestyle project, our atmosphere freestyle project that we set up, and just convert it over into a pipeline job type. After we’ve done this we can circle back and explain some of the reasons why we would use the new pipeline job type, after you see one in action and get your hands on something that you can relate to this previous example that we built. So let’s get started by creating a new item and instead of choosing the freestyle project, this time we’ll choose the pipeline job type, so just select that and then come up and put a name in and we’ll just call this atmosphere pipeline and then if you click off this box, the OK button will show up. You can click that and you’ll be presented with a configuration screen much like with the freestyle project, but if you scroll down here you’ll notice there are some different sections. Actually, if you look at the tabs up at the top, we don’t have many of the sections that we had before. We still have General and Build Triggers, but instead of Build Steps we have Advanced Project Options and most importantly this Pipeline section. So really the heart of a pipeline job type is this Pipeline section and what’s special about it is that it’s just a script and it’s a script written in Groovy. Now don’t worry if you don’t know Groovy; it’s built on top of Java so if you do have Java familiarity, you’ll have a leg up here. However, even if you don’t know Java or Groovy, that’s fine. It’s pretty easy to get up and running and in fact, one of the best ways to learn about this is to click this Pipeline Syntax option down below, which will open up a tab and show you some sample syntax and you might be looking at this and saying, where is the syntax at? Well, it turns out we have an interactive form here that can help guide us through the process. 
Now in the case of Maven, what we’ll do is come to the Sample Step dropdown here and pick from the various different steps that we can get help for. We can scroll down here and look for a shell script step. If you are running on Windows instead of Unix or Mac or Linux, you’ll want to use the batch step instead, so just keep that in mind. If you’re on Windows, use the batch step type instead of the shell type, all the way throughout this course. And the reason we’re going to use a shell script is because we’re just going to call Maven from the command line. As we saw in the last module, Maven can be executed from the command line with mvn and then we just specify the phases and in our case that’s clean and package that we’d like to execute. Then we can come down below and hit Generate Groovy and you’ll see the snippet that we can copy, and as I said, this is pretty straightforward and intuitive. Let me zoom in a little. All we’re really doing is calling a shell function and passing in the command we’d like to execute. So we can copy that and we can bring that back over to our pipeline and paste that in as the very first thing that we do, so that Pipeline Syntax form is a great way to get a little bit of help. So that sets up our compilation process and actually packaging. Join me in the next clip where we walk through archiving in a Pipeline script.
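The snippet the generator produces at this point is a one-liner; sketched out, with the Windows alternative noted, it looks like this:

```groovy
// Execute Maven via a shell step; on Windows, use bat('...') instead of sh.
sh 'mvn clean package'
```

This single call is the whole Pipeline script so far; the checkout and archiving steps get added around it in the following clips.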

Archiving in a Pipeline

I’m going to split the screen here so we can see the Pipeline on the left and the old freestyle job on the right. Another thing I’m going to do to help you out, I’ve got a style set up that will make that text area for the script a little bit bigger and it will also get rid of that dropdown box that was in the upper right. That dropdown box in the upper right, by the way, can generate some samples for you if you’d like to try out something more complex and then I’m going to scroll down here so we can have pretty much just a textbox for our Pipeline on the left and then we have the old freestyle job on the right to be able to refer to. Now right now I want to scroll down here into the post-build actions so that we can copy over the step to archive artifacts and then I’m going to come over to the left-hand side here so we can set this up. Let’s use that Pipeline Syntax Generator so that we can walk through a form to fill this out. Now you’ll notice that the very first sample step is actually an archive step and you can use that. That’s a new archive step though. I actually prefer coming under step here and then picking from this dropdown to archive the artifacts, because this version of archiving is exactly the same version we have on the right-hand side here. In fact, if you click Advanced here and one of the reasons I like this is it has some more options, you’ll notice the options are exactly the same; the form is exactly the same, actually. These generic build steps, which are prefixed with the step function, refer to many of the build steps you’re used to in freestyle jobs. Any compatible freestyle build steps will be brought over into this dropdown for you here so you can pick from these, which is nice because then we don’t have to really change anything; we can just copy this value right over, paste this into the files to archive. We won’t set any advanced options right now. 
We can then click Generate Groovy, and you’ll notice this time it’s a little more helpful to have this generator because the syntax is a bit more verbose. So let’s copy that over, bring it to our Pipeline script, and just paste it in right after we run Maven. One thing we could do, since the project path spring-boot-samples/spring-boot-sample-atmosphere is so long, is extract it out into a variable, and this is one of the really nice things about having a script. We could just yank this out, come right up above this step, and define a variable, maybe projectPath, and set that to the value; even if it wraps off the screen, it’s less important to see what’s on the end there. We can then come in here and change from single quotes to double quotes so that we can use string interpolation, and just inject in the project path. Remember, we’re working with a Groovy script at the end of the day, so we can use anything that’s a feature of the Groovy language, and string interpolation is one of the things we can take advantage of. So that’s how we can set up archiving. Join me in the next clip where we talk about setting up source control, which is another step we had up toward the top: defining our Git repository.
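Put together, the script at this stage reads roughly as follows — a sketch using the module path from the course’s sample repository and the variable name described above:

```groovy
// Long nested path extracted into a variable for readability.
def projectPath = 'spring-boot-samples/spring-boot-sample-atmosphere'

// Compile, test, and package with Maven.
sh 'mvn clean package'

// Same ArtifactArchiver the freestyle post-build action uses;
// double quotes enable Groovy string interpolation of ${projectPath}.
step([$class: 'ArtifactArchiver', artifacts: "${projectPath}/target/*.jar"])
```

Single-quoted Groovy strings are literal; only double-quoted strings interpolate, which is why the quote change matters.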

Checking out a Git Repository in a Pipeline

The next thing I want to set up is grabbing our source code; we have a definition here that points at a GitHub repository for our source code, and we need to get that set up in our pipeline as well. And again, as always, we want to use the Pipeline Syntax Snippet Generator. Now, what I’d encourage you to do, now that you’ve seen me use this a little bit: pause the video and see if you can figure out how to set up the source control. Take a look in the sample steps and see if you can find something to help you out, see if you can plug in what you need and generate the appropriate Groovy, and then join me back in this clip and I’ll show you how to walk through this. Okay, so let’s work through setting up our source control access. If you scroll up in the list here, there should be a step specific to using a Git repository. You’ll also see one specific to svn, if you’re using svn. We can click on Git. We then have a form to fill out that looks quite a bit like the form we have on the right-hand side, so let’s copy our repository URL, bring it over, and paste it into the same box. We can leave the branch as master, and we don’t need any credentials; we’ll just access this anonymously, and there we go. That’s all we need to do. Click Generate Groovy and scroll down here. Pretty basic, actually: it’s just a git function passing in our repository, in this case because we’re using defaults for the master branch and credentials. So we can copy this, hop back over to our Pipeline, and just paste it in, except we should bring it up to the top of our Pipeline because this is the very first step we need to perform before we can perform any other actions. Okay, we’re getting pretty close here. Let’s go over to the right-hand side and just scroll through all of our settings. First we had Source Code Management and we took care of that. We have Build Triggers.
We didn’t define any of those yet so we can skip that and likewise with Build Environment, we didn’t set anything up there. Then we had the Build section where we invoke Maven and specify the phases of clean and package and then if we come down to Post-build Actions, we have our archival of the jar file and we’ve got that mapped over as well. So it seems like everything is okay; however, I am missing one small detail. Do you remember what that is? Think about that for a minute, what’s missing and join me in the next clip where we wire that in and run this pipeline for the first time.
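For reference, the generated checkout step is a single call. The URL below is a placeholder, not the course’s actual repository address; use the repository URL you copied from the freestyle job:

```groovy
// Checks out the repo into the workspace.
// Defaults apply: branch master, no credentials. URL is a placeholder.
git 'https://github.com/your-user/your-repo.git'
```

This goes at the very top of the Pipeline script, before the Maven and archive steps.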

Changing Directories in a Pipeline

So here is what’s missing. If we go back up to our Build steps and click Advanced, sometimes this is not so obvious when we set some of these advanced options, but we had specified a location for the POM file that’s in a nested folder inside of our project path, so we need to specify that as well over on the left-hand side. Now if you remember, when we saw the command-line output for Maven, we noticed that there was a -f flag to specify the location of this file, so we could actually just copy this whole path and paste it in here to point at the POM file and pass that to the Maven command when it runs. Let me make this big here, and this will give us what we need; however, this is not exactly pretty, and every time we want to reference something inside of our project, we’re having to reference the nested path. It turns out there’s a little bit of help we can have here with the pipeline. If we come back to the Syntax Snippet Generator, scroll up, and pick a sample step type of dir for changing the current directory, we can change the directory by simply specifying the path here. So I’ll paste in that path, take off the pom.xml, and this way we’ll be running all of our steps inside of this nested folder. I can click Generate Groovy, and now you’ll notice something a little bit different. Let me copy this over first, come back to our Pipeline, and paste this in right after our Git repository is checked out. Now you’ll notice that unlike before, where we just execute some function like shell or step, we now have a block denoted by curly braces, and you’ll see “some block” as a comment in there. What we need to do is grab all of the steps that we want to execute in this subdirectory and paste them in. You could think of this like a scope in code: everything inside of here will execute inside of the changed directory, and once we’re done, everything thereafter will execute in the root of our repository.
Now a couple things we can clean up here. We don’t need the project path defined here anymore because we’ll already be in that directory, but let’s bring that variable up here so we can still make use of it; it’s a nice name for our project path and gives some meaning to that path. Then we can get rid of specifying the POM file, because we’ll already be in the directory where the POM file we’d like to use is, so we can now just call Maven clean and package directly. Likewise with the Artifact Archiver, we don’t even need to use string interpolation anymore. With this in place we should have what we need to go ahead and execute our pipeline, so let’s save this, and you’ll see we’re brought back to the same view we had with a project, but now it’s called the Pipeline view. So instead of Project we have Pipeline here, and just like before, we can click Build Now to execute this, so let’s go ahead and do that now. You’ll see a build was scheduled and eventually it shows up down here; however, it looks like we have a problem, so let’s drill into this. We can click on this just like a build with a freestyle project, and if we click on the Console Output, we should be able to get some more information about what happened here. If I zoom in just a little bit and highlight this error for you, you’ll see we have a little bit of an issue.
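Pulled together, the script at this point might look something like the following sketch (the repository URL and nested path are placeholders for whatever your fork uses, and the archiver call is written the way the snippet generator emits it):

```groovy
// Sketch of the pipeline script at this stage (before a node step is added).
// The URL and nested path below are placeholders; substitute your own.
def project_path = 'spring-boot-samples/spring-boot-sample-atmosphere'

git url: 'https://github.com/your-user/your-repo.git'

// Everything inside this block runs with the nested folder as the current
// directory, so the steps no longer have to repeat the nested path.
dir(project_path) {
    sh 'mvn clean package'
    step([$class: 'ArtifactArchiver', artifacts: 'target/*.jar'])
}
```

As the console output is about to show, this version still fails: these steps need a workspace, which is what the node step covered next provides.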

The Master Agent Model

We’ve just about got our pipeline working, but we’ve got a little bit of a problem: we’re being told here that we’re missing something that’s referred to as a node step. So let’s go back to our configuration, and if we scroll down to the Pipeline here, we can now work on fixing this. Let’s come over to the Pipeline Syntax Snippet Generator, click down in this dropdown, and select node. Now one of the nice things about this snippet generator is that you can get some help about the relevant steps that you select, and you’ll notice here in the description that a node step will allocate an executor on a node. So the problem we have is that we’re trying to check out source code, compile it, package it, and archive it, and that requires that we’re executing on a node somewhere that has a workspace; if we don’t use a node step to allocate a node, then we won’t have a workspace to work inside of. Remember the workspace with the freestyle project type? Without a node step, we’re not defining a node to run on, so we don’t have a workspace, and we can’t perform these advanced operations like checking out source code and compiling it. Just to make sure you’ve got your mind wrapped around what needs to happen here with this node allocation, let’s take a minute and talk about the Master Agent Model in Jenkins. Thus far, and for all of this course, we’ll only work with a single master node in Jenkins. This master node is the computer that you’re running Jenkins on. On this master node there is a pool of executors that can execute builds.
So every time we click Build Now, we have to have an executor to be able to do that work, to perform the associated tasks. Our master node at this point has a pool of executors, two by default, though we can configure this number to be more or less. You can think of these executors as worker bees: in some ways the master node is like a boss that doles out work, and executors are like the employees that do the work. So right now we’ve only focused on having a single master node in our setup, and that’s because we’re just working to get started here. At some point you’ll find, though, that one master node is just not enough; the tasks that you’re performing will be recurring so frequently that you’ll overwhelm the capacity of that master node, and you’ll want to start scaling out so that you have additional nodes that can help you do work. In Jenkins, these additional nodes are referred to as agents (historically, they were referred to as slaves). When you click Build Now, if your master is overwhelmed, all of its task slots are filled, which means all of its executors are busy, so the master can send out tasks to be executed by executors on the agent nodes, and it can distribute those tasks across the agents. It can even have agents running multiple tasks, because agents can have multiple executors as well. In a pipeline, when we’re talking about allocating a node, we’re really talking about finding a free executor to execute some build steps. Once we find that executor and send it the commands to run, a workspace will be created and the relevant operations will be performed. So this node step that we’re going to look at helps us allocate an executor on one of our nodes. Now you might be wondering, why would we need this? We didn’t need this with the freestyle job. Well, it turns out that in Pipeline jobs, one of the degrees of flexibility comes from the fact that we can allocate multiple nodes, and we can do a couple of different things with that.
We can execute tasks in parallel, for example to split up our unit tests and run them across multiple machines. We can also split up a serial chain of tasks and pause in between for some human intervention, perhaps to approve a deployment into a staging environment, so in that case we’d first want to allocate a node perhaps to build and compile and package our application. Once that’s done then we would wait for human interaction to approve deploying that build, during which time we wouldn’t want to hold one of our executor slots so we would free it up and once somebody approves the deployment, we can go ahead and allocate a new node, perhaps on a different agent with a different executor to go ahead and perform the deployment. Okay, now that you have a little bit of that understanding in place, let’s switch back to configuring this node step inside of our build pipeline.
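As a rough sketch of that last idea, assuming a hypothetical stash name and a hypothetical deploy script, the shape would be something like:

```groovy
// Hypothetical sketch: build on one executor, release it while waiting
// for approval, then allocate a (possibly different) executor to deploy.
node {
    sh 'mvn clean package'
    stash name: 'app', includes: 'target/*.jar'
}

// No executor is held while the pipeline waits for a human here.
input 'Deploy this build to staging?'

node {
    // possibly a different executor, even on a different agent
    unstash 'app'
    sh './deploy-to-staging.sh'   // placeholder deploy command
}
```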

Allocating a Node and Workspace in a Pipeline

Fortunately, all we have to do here is leave the defaults, just select node and hit Generate Groovy, and you can see what this looks like. It’s another block statement, so let’s copy that. Let’s go back to our Pipeline view. Then what we need to do is come all the way up to the top here, because we need all of this code to run inside of a node block so that we have a workspace allocated. Let’s go ahead and tab this in here, so now all of these steps will run inside of an allocated node, which will have a workspace. There are some operations you can perform without a workspace, but most operations require one. Okay, let’s go ahead and save this and click Build Now, and with any luck, things should work this time. You’ll see down in the lower left we have Build #2, and it looks like we might have a little bit of a problem, so let’s drill in. It looks like we got further along this time at least. Let’s come into the console output, and actually it looks like things are still okay. The flashing red ball here just indicates that the previous build had failed; don’t get confused, because that doesn’t mean this build is currently failing. This will take a little while to clone down the repository for the very first time into the workspace, and once that’s done, the project will start executing the build steps that we’ve defined. The next one will be Maven and then the one thereafter will be archiving the package. You can see here that Maven has kicked off. Our project is building for the very first time. We’re compiling right now. It looks like our tests are now running. It looks like our tests are complete and they actually passed, and it looks like the artifacts were archived. Okay, so it looks like our pipeline ran successfully this time. Let’s scroll way up to the top to make sure, though; the ball is still flashing, sometimes that happens. Refresh here and you can see the ball is now blue.
What does blue for a ball mean when it comes to the status of a build? Don’t forget that blue means everything was fine. It’s kinda weird. We’ll see later in this course how we can get green balls set up instead of blue. So that’s the very basics of setting up a pipeline.

High-level Progress with Pipeline Stages

When we’ve been running our builds, we’ve been coming into the Console Output just to figure out what’s going on, and while that works, it’d be nice to surface some high-level information so we don’t have to drill in. This is a first step toward getting some feedback a little bit quicker about what’s going on. It’d be nice to leave the console output for when we have a problem; instead, we should be able to come up to the build level, this status page here, or maybe even the project level, and see something that could tell us what’s going on. Here on the status page for the project or pipeline, we’ve got this little warning in the middle that says, “the pipeline ran successfully, but it doesn’t define any stages.” So what I want to show you next is how we can define some stages and get a high-level overview of what’s going on, and this is one of my favorite features of the Pipeline. So let’s come in and configure this pipeline again and scroll down to that block that defines our script. Now I’ve got a question for you. I just told you that there is something called a stage in a pipeline. If you wanted to figure out a little more about what that does, how could you figure that out right here inside of Jenkins? Hopefully you guessed it. We can come over to the Pipeline Syntax Snippet Generator, select stage from the dropdown here, and then we’re presented with some options. We’re just going to focus on the stage name for now, and for the stage name, we can put in whatever we’d like. For example, we might have a stage for checking out our code, so how about checkout? We can click Generate Groovy then, and you’ll see this is actually a pretty basic function to call. We can switch over to our Pipeline and paste this right above our call to Git to check out our repository. So this is the checkout stage. Now these groupings that you can use with stages are entirely up to you.
You probably won’t get as granular as I am here; just know that you can define these however you like to get a high-level overview. So checkout might be one stage, and then perhaps we want another stage for compiling, testing, and packaging our application; some people will just refer to that as build. Then we could also have a stage here for archival. Again, entirely up to you. We can click Save and now let’s click Build Now, but before I do that, what do you think will show up here in this stage view? Just take a guess. Okay, click Build Now, and in a second we should see #3 pop up here, but look at this over on the right-hand side: with Build #3 we now have some stages showing up, and this is the very first time that we ran through our stages. Each of the stages will show up as it’s hit in the Pipeline. Now our old builds, Build #1 and #2, don’t have any stages defined in them, so they don’t map to anything here, but you can see that we’ve already successfully checked out, and that’s what the green here indicates. We can even click and get some log information if we want about just the checkout phase of our pipeline, and right now we’re running through testing. So far, things are okay. We can click logs to get some information here as well, and we can scroll through, in this case, our call to Maven clean and package, and it looks like everything was successful there, so we can get some log output relevant to each stage, and it looks like Archival completed successfully as well. We can get logs for that too. So this is a much better way of getting a high-level overview of what’s going on, and then we can drill in when there’s a problem.
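With those stages in place, the script reads roughly like this (scripted-pipeline stage syntax from the Jenkins 2.0 era; the URL and nested path are placeholders):

```groovy
node {
    stage 'Checkout'
    git url: 'https://github.com/your-user/your-repo.git'          // placeholder

    dir('spring-boot-samples/spring-boot-sample-atmosphere') {      // placeholder path
        stage 'Compile, test and package'
        sh 'mvn clean package'

        stage 'Archival'
        step([$class: 'ArtifactArchiver', artifacts: 'target/*.jar'])
    }
}
```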
For example, if I come here and configure this project, let’s come in and break something: come down to the pipeline and just mess up the repository, save that, and now let’s go ahead and build this. You’ll see right now we’ve started Build #4, we’re in the checkout phase, but that never succeeded, so that’s red now, and the whole build is actually red. We can click here to get some logs, and if we scroll here, you’ll see that there was an error fetching the remote origin, so we’ve got a problem with our repository. We can then dig into that part of our Pipeline and know exactly where the problem is. Much better workflow than trying to just dig through that console output and find wherever the problem is. So let’s go ahead and fix that problem and save that. Let’s click Build Now and you’ll see what it looks like when things are successful again; we’ve now passed through the checkout phase. I’ll go ahead and pause recording, and join me back in the next clip where we move on and talk about more Pipeline features.

Triggering Automatic Builds

It would be nice not to have to manually run this Pipeline or freestyle project every time we have a change. Instead, it’d be nice to automate this process when people commit and push a new change to a source code repository, so let’s take a look at setting that up next. Now this is not specific to Pipeline job types; this will work with freestyle as well. If you come into the configuration and click Build Triggers, you’ll notice there are a couple of options here. We’re going to focus on the Poll SCM option, which sets up Jenkins to poll your source code repository, in this case my GitHub repository. Jenkins will poll that repository periodically and check whether there are changes, and if there are, it will kick off the build process. This is one common style. It’s great for when you’re getting started, but eventually you won’t want to overwhelm your VCS system, especially if you have a lot of repositories, and in that case you can set up a push notification where GitHub, for example, can reach out and tell Jenkins that there’s a change, so that Jenkins doesn’t have to poll. This also speeds up the process, because the second that you push to GitHub, GitHub can push to Jenkins; it’s a push-based approach instead of polling. There are also other approaches like scheduling, so building periodically, and you can do things like triggering a build remotely, for example if you have a script somewhere that you’d like to kick off a build process from. These are just the options that we have out of the box because of the default plugins that we’ve set up, and you can grab plugins that extend these options as well. So for now, let’s go ahead and click Poll SCM, and then we need to specify a schedule. This is one of the areas where I say, hey, refer to the help, because I never remember the exact format. In the help you’ll see that we’re using a syntax similar to scheduling something with CRON, and there are even some examples down below if you scroll down.
I’m just going to put in five asterisks here, which will mean every minute check for changes and when I click out of that box, you’ll notice right down below the box, we get a little bit of help, telling us what it is that we configured, so Jenkins tries to tell us what our CRON tab expression represents because come on, let’s be honest; CRON expressions are not exactly intuitive. So this will tell us here that, and it’s actually warning us, did you mean every minute, and yes I did, so that’s what I’ll go with and then the nice thing is, it will also tell us the next time that this should run. First off, this would have last run at this time and next will run at this time, which is a really helpful way to make sure that you’ve entered the right expression. You can close the help by clicking the question mark again. So that’s it for configuring the option. I’m going to click Apply to save that. Apply will save these changes without exiting this form and then I’m going to scroll down because what I want to point out is if you want to follow along with this demo, you’ll need to go out and fork my GitHub repository, you’ll need to fork it so that you have your own copy of it so that you can make changes to it because the only way we can test this is by pushing a commit. So let’s hop over to the command line and I’m in the checked-out repository here and I’m also inside of the nested folder for the atmosphere sample and if I do a git diff here, you can see I’m actually working on some changes, so it just so happens that I have something I’d like to check in. So if I run a git status, you’ll see that the POM file is modified, so let’s do a git add pom.xml to add those changes and then let’s do a git commit -m and let’s specify a message here. Something nice to see come up in Jenkins because this will allow us to also see finally some changes coming through Jenkins. 
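For reference, the five fields and a couple of example expressions (the H form is a Jenkins extension that spreads load by hashing the job name, so every job doesn’t poll at the same instant):

```
# MINUTE HOUR DAY-OF-MONTH MONTH DAY-OF-WEEK
* * * * *       poll every minute (what I use in this demo)
H/15 * * * *    roughly every 15 minutes, offset per job
H 2 * * *       once a day, some time during the 2 AM hour
```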
This is the first time we’ve committed something since we set up our project to build in Jenkins, so now we should be able to see the correlated change log. We’ll add a message, adding in JaCoCo code coverage. Okay, then I’m going to clear the screen here and let’s just push this. I’m getting a warning back that I need to set the upstream, so I’ll copy this, paste it, and now we’ve pushed our changes out. We should be able to come back to Jenkins, and I’ve already saved the trigger, so that should be working now. Let’s come to the Pipeline page here and just wait for this to kick in. Build #6 is kicked off. You’ll even notice on the right-hand side here in the stage view that #6 is running at this point in time. You’ll get estimates of how long each of the stages should take, so 2 seconds for checkout, 25 for compiling, testing, and packaging, and then archival, wow, that’s really fast at 250 milliseconds. Also, these little blue bars in here indicate the amount of the total progress bar that’s taken up by that stage. So you can see most of the progress bar is taken up running through the compilation and packaging stage, and you can see right at the end a tiny little bit for archival, and right at the beginning a little bit for checkout. Let me close my drawing tool there, and you can see everything is actually done now. Also notice the times shown here. Now let’s take a minute and look at a few things. First off, let’s come over to changes, and you’ll see here there’s only one change, and that’s because we’ve only changed one time since we set up this Pipeline job. You can see it’s got the commit message that I provided, adding in the JaCoCo coverage. It even shows who did that, and it gives me a link that I can follow to take a look at those changes; in this case, I’m brought right over to GitHub and right into the actual commit itself, where I can see what changed in each of the files, so that’s pretty slick.
So you’ll have that once changes start flowing through. Now this is just the change log on the level of the Pipeline and we can also drill into an individual step here in #6 and now inside of the status page, so remember we’re on the status page right here, you can see we have changes and in this case we just have one change which is adding in the code coverage. So this change output comes right in the middle of our other summary information about this build and you’ll notice if I go to the previous build, there are no new changes associated with this, so this was just rerunning whatever the previous build had built. Now keep in mind here with our Pipeline, we’re also getting our build artifacts just like before and then one unique thing here, you’ll see here in the build status, that this particular build was triggered by an SCM change and if I go back to the previous build, the one that I pushed the button for, you’ll see that it says it was started by user Wes Higbee, which means it was manually started. So you can tell here what triggered this particular build to be executed. The nice thing is here, now we don’t need to click the Build Now button when a developer checks in changes. Those changes will automatically be executed and in this case because we’re polling, it will take at most a minute to figure out that there are changes.

Configuring an Email Server

So now that we have automated triggers to build our application every time somebody checks in and pushes a change, it would be nice to have something to tell us, so that we don’t have to come to the Jenkins UI all the time, and we’ll be doing that with email. So let’s pop open a new tab here; you can get to this one of two ways. You can go to Manage Jenkins and pick Configure System, or I’d like to show you this quick little search utility: you can just type in config here and hit Return, and you’ll be taken to the Configure System page as well. You’ll notice this is Jenkins and then Configuration. Just to help you out in case you are using the other navigation route, this is the Configure System option under Manage Jenkins. If you scroll way down to the bottom, you’ll notice some email settings. Now just be careful and make sure you’re filling out the Extended Email Notification settings, not the Email Notification settings. We need to come in and tell Jenkins where our email server is. Now if you have something like Gmail, you could wire up your Gmail credentials in here, but I think that’s somewhat of a hassle; I can never seem to get the right ports and authentication settings to work with Gmail, so instead I like to run a local SMTP server. I don’t want to take much time to show you that; however, if you are running Docker or know how to run Docker, you can start up a container based on the MailHog image; it’s under the MailHog organization, in the MailHog repository, out on Docker Hub. You can spin this puppy up, and it has two ports: 1025 is the SMTP port, and 8025 is the web port where you can go in via the web and see what emails have been received. So I’m going to fire this up with Docker. You can do the same if you want, or you could just skip following along with this demo and watch what I show you here. We can come back then to Jenkins. We can plug in the location of the server, so localhost is the SMTP server.
Then you need to click Advanced to get access to the port and the port is 1025. Under the default content type I can flip that over to HTML and I think that’s enough for now, though you’ll notice there are a bunch of other settings you can use. Go ahead and click Apply or you can click Save. Save will bring you back to the home page. We can close that now because we already have our Pipeline script open and then if you want to see if MailHog is working, localhost 8025, you’ll notice that there’s nothing that’s been received yet. This section down here will contain emails. So that’s configuring the email server. Join me in the next video where we set up a notification in our pipeline.
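If you want to follow along, the MailHog container can be started with something along these lines (image name as published on Docker Hub; the container name is just my choice):

```
# SMTP listens on 1025 (what Jenkins sends to); web UI is on http://localhost:8025
docker run -d --name mailhog -p 1025:1025 -p 8025:8025 mailhog/mailhog
```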

Email Notification in a Pipeline

Now we’re ready to wire up the notification, and this is a step just like any other, so we can use the Syntax Snippet Generator to help us out. I’m not going to use it here, but I do want to point out that you should be careful when you’re looking through the step choices. I’m going to be using emailext because that’s what we just configured. You’ll notice there is also a mail option, and if you go under step, that general step, there is a mail option in there as well, Email Notification. These all wire up to different email notifications; just make sure if you’re following along that you use emailext like I am. I already happen to have a little snippet set up that I’m going to paste in here, and I actually have a function defined, so what I’m going to do, right after the node, is paste in this function and make a reusable function to send emails, so that I can send notifications from wherever I’d like. The first thing I’d like to do is send a notification when we kick off the process. So how about we come up here and call our notification function and pass in a status of started. So that will call notify and pass in the status here. We’ll email this to myself. This is beyond the scope of this course, but you can set up recipient lists and use those instead; for now I’ll just email this to myself. I’ll set up a subject here with the status first, a colon, and then the job name, followed by the build number. Now if you’re wondering where these job name and build number variables, which by the way I am interpolating into the string, come from, that’s also something you can get help with from the Pipeline Snippet Generator. It’s over here on the left-hand side. There’s a Global Variables Reference, and inside of here you’ll find an env variable.
The options in here will expand based on the plugins you have installed, but these are the defaults with the default plugins, and you’ll see in here there are many environment variables we have to choose from: the build number, the build ID, the job name, many different things that we could include in this email. Then I’m also setting up a body here, repeating most of the information, but the most important part is that I’m including the build URL, so you can click right from the email to get more information. So it’s a very basic email, and for now we’ll just send one when we start the job. Let’s go ahead and click Save or Apply. I’ll click Apply so we can keep this window open, and then I’m going to come up here to the atmosphere build. I’m going to open this in a new tab, click Build Now, and watch the notifications in the upper right. Did you see that come in? This is MailHog, another nice reason to use this locally: it’ll pop up a little notification telling you that you have an email. You can click on that to be taken to MailHog, and you’ll see here’s the email we sent, so here is the subject and here is the body that we set up as well, and of course you can see all of your messages in MailHog by clicking on your inbox, so this looks like a simplified email inbox. Then we can click on this link here to jump over to Jenkins and see this specific build, so we jump right into Build #7. We could go about looking at the status of things and jumping into more information, perhaps looking at the console output, all from an email.
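The helper I pasted in is along these lines (the recipient address here is a placeholder, and emailext is the step provided by the Email Extension plugin we configured):

```groovy
def notify(status) {
    emailext(
        to: 'me@example.com',                                   // placeholder recipient
        subject: "${status}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
        body: """${status}: ${env.JOB_NAME} #${env.BUILD_NUMBER}

More info at: ${env.BUILD_URL}"""
    )
}

// at the top of the pipeline:
notify('Started')
```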
Now getting an email when something starts is not too helpful. By the way, in MailHog you can click Delete All Messages to wipe out all the messages, which is nice as you’re testing, to make sure there isn’t something carrying over from a previous run. Then we can come back to our Configure page, and how about instead of when things start, well, we can leave that one, it doesn’t hurt, but how about we find out when things are done? You might also be wondering, well, what about when things go wrong, which is really what you want to know about. Join me in the next clip where I show you how to set up some code that can catch exceptions that happen in this Pipeline.

Notifications When a Build Fails

Finding out when there’s a problem is likely the type of email that we’d like to receive. We probably don’t care too much if everything is okay, but we do want to know when things go south, and since this is Groovy, we can use a try/catch block to catch exceptions and do something about them. To get a little help with this, come over to the snippet generator, and instead of looking in the sample steps, click on the step reference here, and inside of this step reference, just look for the word try and then an opening brace. You’ll notice there are a couple of examples here to pick from. We can code up what is basically just a try/catch block and choose how to react however we’d like. So I’m going to copy the catch portion of this, then come back to our Pipeline script; I’d like to find out about any problems that we might have, so how about we start our try block here, and then we’ll want to indent everything. This is getting to be quite a bit of indentation; you could extract a function if you’d like. Then we can add a catch block. So we have our notification that we’ve started, and then if everything succeeds, let’s put that in here as well; we’ll nest that and call it Success instead. Instead of echoing that there’s an error, how about we send a notification, and we could interpolate that error here as well. To actually test this out we need something to fail, so let’s mess up our URL here and save that, and then let’s come back to our Pipeline page and click Build Now. You’ll see we get the Started notification and, boom, we have an error right away. So let’s click on that. Let’s go to the inbox first, and you’ll see that we have Started as the first email and then an error right here; it’s in the job atmosphere pipeline and the build number is 8, so we can click on that, drill in, and see that there’s an error here.
We have an abort exception, and that’s not too helpful, so we’d probably click here to go to the Console Output to get more information. Scrolling through here, we can find the problem: we failed to fetch from this repository, so we could do something about it. Now normally our problem wouldn’t be something related to the repository; instead, it will probably be related to a compilation failure or maybe a test failure. So let’s delete all these messages. So that’s how we can set up a process to send notifications, and of course, if we come back here and fix this, click Apply, and run this again, you’ll see we get Started in the upper right, and then you can see we have our Success notification. Now in most cases you’ll probably only want emails when something goes wrong, but as you can see, you can plug those in at any point in the Pipeline and send yourself an email.
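Put together, the error-handling shape of the script is roughly the following (notify is the email helper from earlier; rethrowing the exception is my addition so Jenkins still marks the build as failed):

```groovy
node {
    notify('Started')
    try {
        git url: 'https://github.com/your-user/your-repo.git'   // placeholder URL
        sh 'mvn clean package'
        step([$class: 'ArtifactArchiver', artifacts: 'target/*.jar'])
        notify('Success')
    } catch (err) {
        notify("Error: ${err}")
        throw err   // rethrow so the build is still marked as failed
    }
}
```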

Duplicating a Job

We’ve got a pretty good process in place now, but it might be nice to have some more reporting. For example, right now, if we want to take a look at test results, we have to drill into the Console Output and look for test information, and if we scroll way down to the bottom, we’ll see the test results here. Well, that’s not too helpful. We might like to know which tests are passing and failing, and we might like to know that on a high level. Now to me, one test is not that exciting, so I have another repository set up with a new project that we can use that has quite a few tests in it, so let’s take a look at that. If you come out to that gist I showed you earlier, you can see a link here for module 3 to this spring-petclinic example under my GitHub user g0t4, so click that and you’ll be taken to the repository that I want to work with next, and of course, you’ll want to grab the URL to work with. Okay, so let’s come back over to Jenkins, and instead of losing everything we’ve got set up here and starting from scratch, I want to come to the top level here and click New Item; I want to show you how to copy another job. So first off, give this a name, let’s just call this petclinic, and then scroll way down to the bottom and you should have a Copy from option. Inside of here we can put in atmosphere, select atmosphere pipeline, and now we’ll make a copy of the atmosphere pipeline called petclinic, so maybe we want to put pipeline in here as well, and then click OK. So that shows you how you can copy a job to speed up the process of setting up a new job. Then we can scroll through here and take a look at some of these settings; we can add in polling as well. We can take a look at the Pipeline script then, and really what we need to change is this last part here. Now in doing this, you’ll notice that a build kicks off because we have the polling set up. Not a big deal.
The only problem is that the first build will actually be of the old repository, the spring-boot example. Now in this particular example, the POM is at the root of the Git repository; there’s no nested project we need to work with, so we can get rid of changing the directory and just have one less level of nesting inside of here. We can continue to notify on Started. We have our checkout step. We’ve got the correct repo now. We can still have the compiling, testing, and packaging here, and then we have our archival of our artifact, though in this case I believe this is a war file, so let’s just switch the extension to use a question mark so the glob picks up either a war or a jar. The rest of this should be okay to send our email. Let’s go ahead and save that. Now this first build is inaccurate, that’s the wrong repository, so let’s click Build Now. Okay, our build completed successfully, so that’s how we can copy a job. Join me in the next video where we wire up some additional test reporting.
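The copied-and-trimmed petclinic script ends up looking roughly like this (notify is the email helper from the previous module, the URL is a placeholder, and note the ?-glob that matches either .war or .jar):

```groovy
node {
    notify('Started')
    try {
        stage 'Checkout'
        git url: 'https://github.com/your-user/spring-petclinic.git'  // placeholder

        stage 'Compile, test and package'
        sh 'mvn clean package'   // POM is at the repository root, no dir block needed

        stage 'Archival'
        step([$class: 'ArtifactArchiver', artifacts: 'target/*.?ar'])

        notify('Success')
    } catch (err) {
        notify("Error: ${err}")
        throw err
    }
}
```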

Visualizing Test Results

If we dig into the Console Output here for the step where we run our testing, we should somewhere in here see some test results and you can see right here we have our test results, but that’s all the information we have right now and we really don’t want to dig in here to look at this information. It’d be nice to have this at a top level and this is where we can configure some additional post-build reporting. So we can come into configure and then like everything before, we can go to our Syntax Reference. We could come back up to the top here and actually come to the Snippet Generator and inside of here, if you click here, you’ll see a bunch of different options. The one we’re looking for is actually under this General Build Step. So it’s a type of post-build action that we could have with a freestyle project as well. So click in here under Build Step and choose Publish JUnit test result report and what we need to do then is point to where the XML files with the JUnit test results are at. If I come over to the command line, I can show you where those files are. Under the target folder, under surefire-reports, you’ll see some test result files; they’re the files at the top whose names start with a capitalized TEST. So let’s come back to the Snippet Generator and paste in that location, which is target/surefire-reports/TEST and then -*.xml. That will give us all of the test files in that directory. Then we can go ahead and click Generate Groovy. This one’s a little more basic, but we can copy this then, come back over to our Pipeline and just plunk this in. Let’s do that right as a part of the archival steps. Once we click Apply here we can then come over and run our build again, except this time we’ll want to run a different build, we’ll want to run the petclinic pipeline, so I’ll kick that off here and then I’ll jump in here to take a look at what’s going on. You can see we’re already running our tests again. Our build is done. 
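The generated snippet for this step is essentially a one-liner; the path assumes Maven Surefire’s default output location:

```groovy
// Publish JUnit-format test results from Maven Surefire's default location
junit 'target/surefire-reports/TEST-*.xml'
```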
I’ll refresh here and then the first thing we can take a look at is clicking on the Build History here and you’ll notice there is an option under Test Result and this specifically tells us that there are no failures at this time. We can drill in here to this test result and you’ll see we have some packages and we can drill into each of these packages to look at the tests inside, but before we do that, take note of the right-hand side, where we have some information about these tests: the number that are passing, as well as the number that are failing and skipped, and we even have some timing information. So we have a nice high-level report here. Oh, we also have 62 tests in total that took 9 seconds to complete. So we have some nice summary information instead of drilling into that Console Output to figure this out, and of course, we can drill into any of these packages and see some of our individual tests. We can go back up to the Test Result summary here and you’ll notice we have a history option, which can give us some history information, though right now we’ve only run this once so there won’t be any history. So let’s come back to the Pipeline for our petclinic and build that again so we have another test run. And I just got my notification in the upper right that that’s done now so let’s refresh this history page and you’ll see we now have some additional information and in this case, our graph here is showing us the time it’s taken; it’s taken less time for the second test run, 6 seconds versus 9, though the total number of tests has not changed overall. 
So we can see here in our history we have two separate builds, so as we build up a build history, we can see the number of tests increasing and decreasing and the number that are failing and get some nice high-level information. By the way, if you now come up to the petclinic pipeline page, you’ll notice a Test Result Trend report shows in the upper right corner here. This happens to be a graph of the number of tests and it’ll show failing versus passing as well, although at this point we only have passing tests. So if we wanted to, we could pop over to the source code and I’ve got the validator test opened up here, and I could just break something, I could break one of these assertions; if we put in asdf here and save that, then I could hop over to the command line, git status here, and you’ll see we’ve got a change. That change is the broken test I’ve now added. I can go ahead and do a git commit, add in that change with a message of break a test, and then I can git push that. I’ll send that change on out. Remember we have a trigger set up. Let me clear the screen here and hop back over. We should have a build start pretty quickly. You can see we’ve got a build already started up. Scroll down here and you can see it in the stage view down below and you can see we’ve got an error now; in the upper right-hand corner, we can see that we’ve got a failed build. Down below in our Stage View we have a pink stage entry. So the stage view was updating, but our test result trend still only has builds #3 and #4 in it. Even if I refresh the page, we don’t get build #5 to show up here and that’s because we had a failure and if you remember, we configured reporting our test results in the archival step. 
So our failure kicked us out of the process before we reported the result, so we might want to change that around a little bit. Let’s hop over to the config for this job; we might want to come down in here, take our archival step, grab this, and actually just have this happen even after a failure above. I’m going to get rid of the success notification because that might be a little bit funny now. Really, that’s a done notification. Let’s add in archival no matter what happens. You might want a separate try/catch block if you wanted to catch any issues in here, but for now this should suffice to make sure that we at least try to run our archivals, and then let’s go ahead and click Save here. And then let’s go ahead and click Build Now, run this again, and this time we should get our test results coming back in the trend. Okay, we get our same error back, except this time when I refresh, you can see. Look at this very thin little red sliver here down at the bottom of our test trend. The red there indicates that we now have a failed test. You can actually enlarge this if you want by clicking on Enlarge and you can see a little easier that we have one failing test. While we are using the JUnit format here, you can use other test formats as well, from other testing frameworks like NUnit for example with .NET, and in that case you’ll want to take a look at an XUnit plugin. This brings us to the end of this continuous integration module. We’ve seen how we can kick off a build automatically when someone pushes a change to a Git repository; that could work with any VCS. We’ve seen how we can get much richer test reporting and we’ve also seen how we can send notifications and in the process we took a look at the new pipeline job type. Join me in the next module where we take a look at plugins that extend and integrate additional components into our pipeline.
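One common way to guarantee that archival and reporting run even when an earlier stage fails is a try/finally around the build stages. This is a sketch under the same assumptions as before (repo URL, stage names, notification helper), not the exact script from the course:

```groovy
node {
    try {
        stage 'Checkout'
        git url: 'https://github.com/g0t4/spring-petclinic.git'  // assumed URL

        stage 'Compile, Test and Package'
        sh 'mvn clean package'
    } finally {
        // Runs whether the build succeeded or failed
        stage 'Archival'
        junit 'target/surefire-reports/TEST-*.xml'
        archiveArtifacts artifacts: '**/target/*.?ar'
        // notifyDone()  // hypothetical "done" (rather than "success") notification
    }
}
```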

Finding and Managing Plugins

The Need for Plugins

Back when we installed Jenkins we chose the option to install the suggested plugins and after this screen we were presented with a screen showing us the progress of downloading and installing each of those plugins. That was a great way to get up and running with some default plugins that we’d probably like to have, but we also could have chosen to select the plugins we’d like to install ourselves. The suggested plugins would still be selected and then we could scroll here, uncheck some, add some, and then continue with the installation process. One of the things that I like about Jenkins 2 is that out of the box it comes with many more suggested plugins, so that for some common scenarios you don’t need to install anything right away; however, at some point you’ll want to install plugins, so that’s what we’ll look at in this module. Now we won’t be coming back to the installer to do that; instead we’ll be taking a look at how we can manage plugins within our existing Jenkins installation.

Integrating Code Coverage

To better understand plugins, let’s take an actual use case. Let’s see what we have right now and what we might like to have, so that we can then go look for a plugin. So let’s take this petclinic project; in the last module, you saw me adding JaCoCo coverage to the spring-boot sample that we worked on. This petclinic project also has JaCoCo plugged in for code coverage, so you could use either project. If we hop over to the command line, right now we’ve been running mvn with clean and package. If instead of package we run mvn clean verify (I’ve already done this; you can run it yourself if you’d like), then inside of the target folder there will be a new site folder. If we open that site folder up, inside of there will be a jacoco folder and inside of there an index.html, and if we go ahead and open up that index.html page, you’ll see we have a code coverage report. This is already being generated if we run Maven through the verify phase. It’s a nice HTML report that we can click through: go into packages, take a look at various different elements and see where we do and don’t have code coverage from our testing. And we could come over to our petclinic pipeline and configure it to run the verify phase instead of the package phase. So we could come down here into our pipeline and add verify, which will additionally run verification, including generating code coverage, but right now, while we would generate that code coverage, it would just exist inside of our workspace. We wouldn’t have any way to capture it unless we added an artifact to grab it, and even that, while it would work, probably isn’t the best way to incorporate this information. 
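In the Pipeline script, that change amounts to swapping the Maven phase in the shell step; a sketch (the stage name is an assumption):

```groovy
stage 'Compile, Test and Verify'
// verify additionally runs the JaCoCo report goal,
// writing the HTML report to target/site/jacoco/index.html
sh 'mvn clean verify'
```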
It’d be nice if we had something like our test result trend that would integrate that code coverage right into our pipeline view here, or something so that we could link into that HTML report really easily, so that’s what we’ll look at first and to do that there are a couple of different plugins that we could use, so let’s take a look at these. Come under Jenkins here and choose Manage Jenkins and then come over to Manage Plugins. You’ll notice there are four tabs across the top here. Hop over to the Available tab. The Available tab shows us plugins that we have not yet installed that we could install and if we just type in JaCoCo in here, you’ll notice that there is a JaCoCo plugin, so that’s one route that we could take. Another route would be to type in HTML here and when you do that you’ll see that there’s an HTML Publisher plugin that comes up. So we have two different choices. We could use the JaCoCo plugin, which really understands JaCoCo, or we could use this generic HTML Publisher because what we have is just an HTML report, so any tool that generates an HTML report we could use this HTML Publisher with. Join me in the next video where I walk through installing these plugins and getting them up and running.

Assessing a Plugin

In the interest of time, I’m only going to install one of these plugins, though you could check out the JaCoCo plugin on your own; I’m going to focus on the HTML Publisher plugin. It’s a little more generic, so it’ll apply to more situations for you. So let’s take a look at that. Now to install this, you simply have to check the box, but before we go through with the installation process, if you click on the link here, and let’s open this in a new tab, you’ll be taken to the Jenkins wiki where you can find additional information about this plugin. So as a workflow for finding plugins, I like to come to my Jenkins install, come to the tab to manage plugins and just look at what’s available, because I get this nice list here of names and usually a description, though in this one case of the HTML Publisher plugin, I don’t have a description. This is a great way to find plugins for a specific use case and then I can dive into the documentation to learn a little bit more. Out of the wiki, you can see here, we have a bunch of information available to us. We have usage information which can tell us how popular a plugin is. We can find out about version information and when it was last released, because if a plugin wasn’t updated recently, chances are it’s been abandoned and you’ll need to look into it to make sure that it’s something you want to use. You can also link out to the source code, if you want to take a look at the source code for the plugin, so we can open that up in a new tab, and you can link out to things like issues, which in this case links to a different place than GitHub. A lot of the time GitHub-based plugins will also use GitHub issues, but not in this case, so you’ll want to refer to the various different links here to go access the source code and look at the activity and the issues involved with the plugin. 
You can find information about the maintainers and then if you scroll down here, you’ll usually find some good documentation for getting up and running. You can see some steps on installing here and how to use the plugin, though in this case, and sometimes this is the case, you will only find instructions for using the plugin with the old freestyle project type, because that’s what’s been pretty popular in the past and is still pretty popular today, though the Pipeline approach is growing in popularity every day. So one of the things you’ll want to know right away is whether this plugin works with Pipeline-type projects or not, and sometimes you can find that here in the documentation. If you can’t find that information here, open a new tab and you can just Google for Pipeline compatible plugins. You’ll be brought to a GitHub page with a nice list of plugins that are compatible with the Pipeline. I’ll go ahead and copy this URL into the Gist that I have with links for this project so you can access the link there as well. If you take a look at this, you’ll be able to see if something is compatible and you can find some additional information, for example, links out to issues that represent work that might be in progress to make a plugin compatible. By the way, the Pipeline project types were actually called workflow before, now they’re called pipelines, so you might see some old documentation that refers to them as workflow. So once you’ve done your research and you decide that you want to use a plugin, then you can come back to Jenkins and carry out the installation process. Join me in the next clip where I walk you through installing this plugin.

Installing the HTML Publisher Plugin

Once you have picked a plugin and you’ve checked the box here to install it, then down below you have some options for how to go through with the installation process. You can choose to either install without a restart or you can download now and install after a restart. Downloading now and installing after a restart is the first approach that was put into Jenkins. Installing without a restart is a newer approach. You’ll just need to be careful that this works with whatever plugin you’re using. I like to be safe, and especially when I’m just testing things out, I don’t mind doing the download now and installing after the restart. So I’ll click here, Download Now and Install After a Restart. This brings me to an Update Center page, which you can get to by just going to Update Center in the browser bar here, and you’ll see a little bit of status information; in this case Jenkins says that the plugin was downloaded successfully and that it will be activated during the next boot. Sometimes you have a long list here of plugins that are downloading. You can click this box here to restart Jenkins when the install is complete, which in this case would be after we’ve downloaded all the plugins that we selected and also when no jobs are running, so check that box and you’ll see right away we click into restart mode here. If you were still downloading plugins, the restart wouldn’t happen until all the plugins were downloaded. You also get that big red message that says Jenkins is going to shut down, and if you refresh the page here, you’ll find that right now Jenkins is down and eventually Jenkins comes back up. Now, how you host Jenkins will dictate whether or not Jenkins comes back up automatically; if you don’t have anything in place that would restart Jenkins, you’ll need to start it yourself. 
You can see here that Jenkins is starting back up. I don’t have Enable Refresh set up, I don’t really like that, so I have to make sure to refresh the page from time to time to see if it’s back up and running. Now you can see here I was brought back to the Update Center page because that was the last page I was at. If I want to take a look at this plugin now, I can come over to manage my plugins, and under the Installed tab, if you look for HTML, you should see that the HTML Publisher plugin is now installed. On the right-hand side is an option you can click to uninstall this plugin, and then at some point in the future it’s possible this plugin will be updated. Right now we’re at version 1.11. You can come to the Updates tab and see which plugins have updates available. For example, you can see right now this durable task plugin that I have: I’ve got version 1.11 installed and version 1.12 is available. Now I would need to go through a process to make sure that version 1.12 is okay and will work with my system. Once I’ve done that, then I can come here and perform the update. So the Manage Plugins interface is broken down into four separate tabs: Updates, which shows installed plugins that can be updated; Available; Installed; and an Advanced tab where you can do things like configure the location you’re getting your updates from, the update site. Sometimes you might change this if you want to enable access to beta plugins. That’s about it as far as installing plugins goes. Now, what a plugin affects, you’re going to have to refer to the documentation to figure that out, and in this case, we know that we’ve installed something that should give us the ability to publish HTML reports. So if I want to verify that the install worked, I should be able to come out to any project type, and let’s actually come first to a freestyle project type. 
I should be able to come to the configure tab and then if I hop down to the build section, and actually this will be under Post-build, when I click the link here, you can see there’s an option to publish HTML reports, and then I get a little form to fill out where I can add reports to archive and make available. But we’re actually going to do this via a Pipeline job, so let’s close this out and come into the petclinic pipeline project and configure that. Join me in the next video where we walk through setting this up in our Pipeline.

Publishing HTML Reports

Okay, so we’re now at our Pipeline for our petclinic project and we’d like to set up our code coverage report to be published and archived so we can access it historically. So just like anything else, we can come out to the Pipeline Syntax Snippet Generator and see if we can get some help, so I’d encourage you to check this out. This will also tell you if a plugin is compatible, because if nothing shows up in here then the plugin is probably not compatible, though you’ll want to refer to the documentation because I suppose it’s possible there’s no help available for a plugin; I haven’t yet found anything that doesn’t have help over here, though. So we can click Publish HTML and we’re presented with a little form. It’s the same three options we saw over on the freestyle project type. We have our directory to archive, we need to point at the index page, and we can also give it an optional title. In this case this is code coverage, so we might call this Code Coverage to be specific in case we have several HTML reports for publishing. You could have static analysis tools as well that generate an HTML report and you might want to name them accordingly. Then the next thing we need is the HTML directory that points to where the files we would like to archive are at; the root of that folder will also include the index.html page. Now I showed you this report in the browser a moment ago. You can see it’s in the target/site/jacoco folder, so let’s just copy that. Let’s hop over to the command line, and if we take a look in target/site/jacoco, you’ll see the index.html up in the upper left of the output there; we’ll want to grab a copy of all the rest of the files and folders in here as well, because they have the actual report contents in them. 
So we’ll grab this whole folder, so we can come back to our Pipeline Snippet Generator and paste in that location, target/site/jacoco, and then we should be able to hit Generate Groovy. You can see it’s a good thing we’re using the Snippet Generator; there are quite a few options here that we might not like to type out. By the way, there are some publishing options you can expand here, and a couple of things you might want to consider. Do you want to keep past HTML reports? I like to check that box, otherwise you’ll only keep the latest copy, and I like to keep a history. And do you want to allow the report to be missing? That is, do you want to fail the build if there’s no report, which is the default behavior here, or do you want to allow it to be missing? In this case I’ll allow it to be missing, because a code coverage report isn’t that important as far as failing a build. I’m more worried about a compilation failure or a test failure, not necessarily code coverage, though that’s up to you entirely. Let’s copy this then and come back to our Pipeline, and now we just need to decide where we should put this. So you tell me, where do you think we should put this in our Pipeline? Where should we be publishing HTML reports for code coverage? I think this goes in the archival step, though remember, you can put this wherever you’d like, so I’m going to add this right as the very first thing that we archive and I’m just going to wrap these options onto separate lines so that you can see all of them. By the way, you can always edit this code using some external editor and just copy and paste it in, or as we’ll see later on, you can check this script into VCS, which means you could just have a file for this in your code repository. So we’ve added that in, even before we publish test results. Let’s click Apply or Save. I’ll click Save so I can build this now, and then let’s go ahead and build our application. 
Okay, so we’ve scheduled a build. That’s kicked off. We have our little Pipeline starting up here. Oh and by the way, I reverted the failing test. You can see I ran build #7; I didn’t show you running that, but I reverted it so our tests should pass now. We should have code coverage here in a second. I will pause the recording until this finishes. And it actually looks like we have some sort of failure here, so let’s check out what’s going on. Let’s take a look at our logs here; that’s not too helpful, so in this case, let’s drill into our project. It does look like we at least got through the first two stages successfully, but the last stage failed, and actually we’ve got a message here, if I would just look at what’s on the screen. You can see here, we cannot publish the report; the target is not specified. So let’s see what I did wrong. I did some digging and it looks like there is a little bug in this case with the Publish HTML Snippet Generator. What’s missing is the target attribute, so we just need to copy that and bring it over. If we come into configure our project, come down to the Pipeline and scroll down here to Publish HTML, we just need to specify the target attribute explicitly when calling Publish HTML. There are some shorthands when writing Groovy; I am not that familiar with all of them, so just watch for this, because occasionally you might stumble, especially if the Snippet Generator generates something that doesn’t actually work. So we can save this now and see if this builds this time. So let’s come out and do Build Now. I’ll pause the recording. Okay, I’m back now, but yay, we have a new problem. It turns out when I paused the recording, somehow I lost my internet connectivity and I couldn’t pull from the Git repository, so this failed. I did want to show you what happens when this fails, though. Sometimes weird things can happen in the Stage View here. You can see we only have two stages now. 
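With the explicit target attribute added, the call might look like this. The option names follow the HTML Publisher plugin’s form fields, but treat the exact shape as a sketch rather than the precise script from the course:

```groovy
publishHTML(target: [
    reportName:            'Code Coverage',       // title shown in Jenkins
    reportDir:             'target/site/jacoco',  // folder to archive
    reportFiles:           'index.html',          // entry page of the report
    keepAll:               true,  // keep past reports, not just the latest
    allowMissing:          true,  // don't fail the build if the report is absent
    alwaysLinkToLastBuild: false
])
```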
We have checkout and archival, so our build process never kicked off. When I dived into the logs here and checked out Git, you can see that we had an issue accessing our Git repository. This could happen from time to time. No big deal. Let’s just go ahead and rerun the build process here. I do want you to be aware, though: if stages don’t run, the Pipeline Stage View will change. Don’t be thrown off by that. You can click Build Now. You’ll notice that right now, our Stage View thinks that there are only two stages because it uses whatever the last run’s stages were, but here we’ve already updated and we’re showing the middle compiling, testing and packaging stage again, and everything looks okay right there. Now instead of pausing the recording, I think it might be fun while this is running to dive into #10 here, going into the build of a Pipeline job as it’s running. There’s actually a Pipeline Steps link here. If you dive into that, you can see the step that we’re currently on, so you can see even more information than just the Stage View. Steps are the decomposition of all the individual function calls that we make inside of the Pipeline. So you can see checkout here, which is setting the stage. You can see Git here, which is cloning our repository. You can see compiling, testing and packaging, which is setting a new stage and then kicking off our Maven shell script. That’s where we’re at right now. So this breakdown gives you a little more information than that high-level overview. Okay, I paused the recording here and let the rest of this finish out. You can see the extra step showed up, including publishing HTML reports, so let’s go back up to the project level and see what’s different here. It doesn’t look like too much is different at the project level, though if you look at the menu on the left side, we do have one difference: here is our Code Coverage report. 
Now at the Pipeline or Project level, this report is going to be the last report that was successfully run. We can click on this then, because this is the only one we have at this time, and we see we have our report embedded right inside of Jenkins. So look at the URL here. We’re inside of Jenkins right now. This report has been archived: all the files underneath the folder that was generated are archived, and we can browse through them as if they’re hosted by Jenkins, much like the file that we found here on disk, but take note that that old location is the file system where I built this manually. So I’ll close this now, we don’t need these tabs open anymore, and we can drill into this report and look at different levels of code coverage and whatnot. The nice thing is, we have a history of this. So if we build this again, we’ll get a second copy of this report and we’ll start to build up history. While that’s building, let’s come into #10, the last one that ran, and take a look at the build-level view. So we have build #10 here and that’s your cue that we are not at the Pipeline, Project, or Job level, and here we have the Code Coverage report on the left side as well. This will point to the code coverage for this build, so if we want to access an old version, we need to come into the Build History and look at an individual build itself. Okay, let’s come back up to the Project level. So that’s how we can add a plugin that allows us to publish HTML reports. Let’s take a look at some other plugins that we might like to have.

Testing Plugins and Plugin Types

There are tons and tons of types of plugins that you might want to download and install. What you’ll want to do is focus on the tools that you use and try to find the plugins that best surface information about those tools to give you the feedback that you need from Jenkins, and then test out those plugins. I can’t stress enough how important it is to not test new plugins in production. It’s so easy to run Jenkins in Docker as we’ve seen in this course, so even if you haven’t used Docker before, I’d encourage you to start learning Docker so that you can spin up test environments to try out plugins before you deploy them to your production environment. Now as you’re going about the process of trying to find plugins, again, I like to go to this Available tab and just search through the options here, and one of the things that you’ll notice if you’re looking at this list is that there are some categories. For example, here are .NET Development plugins. So this very first section focuses on .NET Development. If you’re using .NET to develop your apps, you might want to check those out. If you scroll down, you’ll see additional sections, like Agent Launchers and Controllers, Android Development; there are just tons of plugins that affect different parts of Jenkins. We’ve seen how a plugin can affect the build steps that we have access to in both the Pipeline and the freestyle project type, but plugins can affect many different things, and it’s typical to see plugins grouped by topic; sometimes the topic will help you understand what the plugin will modify. For example, Build tools: that’s very likely to update the steps that we have in our Pipeline. Build triggers, on the other hand: that’s probably going to update the trigger section of a job’s configuration. Whereas Agent launchers and controllers: that’s something to do with managing a cluster of Jenkins resources. 
If you scroll through this list of plugins by topic, you’ll find plugins grouped together that you can drill into. For example, if you’re doing iOS development, you might want to drill into the iOS Development plugins and see what’s available to you. But I also want to stress that it’s really important to keep in mind that a plugin is absolutely not necessary to be able to integrate something with Jenkins, and that’s because, by and large, when we are setting up our jobs, especially with Pipeline jobs, in many cases we’re just shelling out. So whatever you can run from the command line, you can run with Jenkins. I’d also encourage you to find generic plugins like the HTML Publisher plugin that will allow you to generically grab any HTML report, because that could work with many different tools that you have. It may not be as slick as the integration of some of these plugins, like it might not give you a graph, but it will at least grab the reports for you and make them available. So keep in mind there are categories that can help you out, and also keep in mind that plugins can affect many different aspects of Jenkins. For example, let’s look for a plugin called Green Balls. You can install this if you don’t like the color blue and instead want to see a green ball to indicate success. So we can click Install without restart here, and that was successful, so now we can restart Jenkins. It looks like Jenkins is coming back up now. Jenkins is back up, we can log in, and we can come up to the very top level of the site, and now you can see instead of blue, we have green to indicate success. So this plugin just happens to update some of the visual aspects of Jenkins, so plugins can update all different parts of Jenkins. Join me in the next video where I show you how to add a plugin that adds a whole sub-world within Jenkins, a whole embedded site; it’s a really important plugin, so join me next where we take a look at that.
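The shelling-out point is worth a concrete sketch: any command-line tool can be integrated without a dedicated plugin, and a generic publisher can surface its output. The script name and report folder below are hypothetical, purely for illustration:

```groovy
node {
    // Run any CLI tool you already use; no dedicated plugin required
    sh './run-my-custom-linter.sh'   // hypothetical tool

    // Then surface its HTML output with the generic HTML Publisher plugin
    publishHTML(target: [
        reportName:   'Lint Report',
        reportDir:    'lint-report',  // hypothetical output folder
        reportFiles:  'index.html',
        allowMissing: true
    ])
}
```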

BlueOcean UI Plugin

If you come out now to the Manage Plugins page and search for blue, you'll see nothing comes up. There's a plugin I want to show you called Blue Ocean, which is work being done on a brand new Jenkins UI to replace and/or augment the existing UI; however, you can't access it yet because it's a beta plugin at this point. What you have to do is come over to the Advanced tab, come down to the update site, and add in a different URL. If I paste this in, you'll see the difference is that we're now pointing at /experimental/update-center.json. Here is before, and here is after. This gives us access to experimental plugins, so we can submit this. If we then go back to managing our plugins and look at the Available list, you'll see nothing comes up yet; we need to click Check now in the lower right. That takes a second to update the available plugins, and now if we search for blue, you should see a bunch of plugins come back called Blue Ocean. Scroll way down to the bottom and check the Alpha BlueOcean UX plugin. Let's install it without restarting and then restart Jenkins.

You'll notice there are quite a few plugins here, and that's why this page will sit and wait until they're all done if we check the restart box. Now, a couple of these actually fail. I believe it's still okay for a couple of them to fail; you'll still get some of the new UI components. Your mileage will vary depending on when you watch this video. If you watch it far in the future, perhaps the Jenkins UI will have incorporated these changes and you won't even need a plugin anymore, but for now, if you want to take a look at this, you can check out the BlueOcean plugin, so I'll pause recording here while this finishes up. Okay, the plugins finished installing and the restart took place.
I can come back in, and now you see this new Try Blue Ocean UI link at the top of Jenkins. This shows up wherever you are inside the user interface. I can click it and be taken to a completely different UX, a totally different user experience. This is by no means close to complete, but it gives you an idea of the work in progress right now, and the focus of this new UI, at least for now, is the Pipeline job type. This new UI is meant to really optimize your experience working with the new Pipeline job type. You can see I have some jobs down below. We have our health information like before (the weather, based on the last five builds), and we can favorite builds. Perhaps the neatest thing about this new UI: if we click into one of our past builds, like build #9 here, we get a nice status view of all the different stages. And when a build is actually running (let's go back and click Run to see that), we get a new status indicator we can click into, with a nice view of the stages that are upcoming and where we're at. It's a nice visual overhaul for working with pipelines, and a lot more work will be done to improve this interface, so you'll definitely want to check it out. At some point in the future this will be incorporated into, or perhaps even replace, the existing Jenkins UI, though I have a feeling that's quite a ways down the road. So this is yet another type of plugin you can install, and in this case it adds an embedded site with an entirely different UI. Okay, that's enough with plugins. I'd encourage you to go out, check the list of plugin categories and types, and see what you might like to try with Jenkins. For now, let's move on and talk about continuous delivery.

Building Continuous Delivery Pipelines

Continuous Delivery

Boy, do I have a treat in store for you in this module. We're going to extend our Pipeline to perform parallel testing and to deploy software. I think this is really cool. Let's walk through what we're going to build in this module, based on what we built in the previous modules. First, a quick recap. We've been pulling our source code from GitHub, and when we push changes or want to build our application, we allocate a node in our Pipeline, which also allocates a workspace into which we clone the source code. Just to be safe, we clean things up before we do anything; in our example, the clean refers to the mvn clean phase we're calling, so we clean previous builds in case we happen to get the same workspace again, even though that's not guaranteed. We then compile our application and run some tests. In this module we're going to change the type of tests we run: we'll be using PhantomJS to run some browser-based tests, because this module's example is a JavaScript application. Once we're done testing, we'll perform a new type of archival: we're going to stash our source code and the dependencies we pull down. We'll see that in a bit. This initial node and workspace allocation forms what I'll refer to as a CI process; it's everything we want to do to get quick feedback. After this process, which is much like the process we've worked with thus far, we're going to chain on some additional processes. Just like in real life, we might want to perform some integration testing after we know that our application compiles and our fast unit tests pass.
In real life with a JavaScript app, we might want to run some tests in Firefox and some in Chrome to make sure our app works in both of those environments. And why wait for each of these to run serially? How about we run them in parallel? You could imagine running many more browsers; in this module we'll actually include Safari as well. After the integration testing is done, we'll talk about deploying software. Of course, the first part of deploying software is making sure we're approved to do so, so the next thing that happens is we pause the pipeline and wait for a human being to come in and say it's okay before we go ahead and perform a deployment. Once our software is deployed, we could do many different things, for example smoke test our application, though that's the one thing in this diagram that we won't be doing in this module, for time's sake. So this is a high-level picture of the pipeline we're going to build in this module. Let's get started.
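As a rough sketch, the pipeline described above might be shaped like this in a Pipeline script. The repository URL, npm script names, and stash name are illustrative, not taken verbatim from the course:

```groovy
node {                       // CI: one node/workspace allocation for quick feedback
    stage 'CI'
    git url: 'https://github.com/example/solitaire.git', branch: 'jenkins2-course'
    sh 'npm install'
    stash name: 'everything', includes: '**'
    sh 'npm run test-single-run -- --browsers PhantomJS'
}

stage 'Browser Testing'      // integration tests fan out in parallel
parallel chrome: {
    node { unstash 'everything'; sh 'npm run test-single-run -- --browsers Chrome' }
}, firefox: {
    node { unstash 'everything'; sh 'npm run test-single-run -- --browsers Firefox' }
}

stage 'Deploy'
input 'Deploy this build?'   // pause for human approval
node { /* deployment steps, and optionally smoke tests, go here */ }
```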

Backup and Restore

I've got an added bonus for you before we get started with our CD Pipeline. I just got home from vacation, and as much as I love my MacBook, I'd much rather transfer over to my desktop computer, which has a lot more horsepower, so I'm going to walk you through how simple a process this can be. I'm on my laptop right now. Earlier in the course I mentioned the .jenkins folder inside my user folder as the default location for the Jenkins data directory. If I list its contents, it includes all the files for my Jenkins installation. So all I have to do is grab a copy of this folder and bring it over to my desktop to be able to spin up Jenkins with all the files, all the config, all the jobs, and all the builds I've been running on my laptop. Backup and restore is stupid simple with Jenkins.

I've prepared a command here that we can use to create a tar archive: I'm calling tar and passing c for create, z to also gzip the file, and f to specify the name of the file; the file name marks the end-of-module-4 state that I want to move over to my desktop. Then I point it at the .jenkins folder to grab a copy. I'll run that; it takes a second, and now I have a copy of my Jenkins data directory. By the way, you don't have to stop Jenkins to take a backup, but if anything is in progress, if anybody is changing anything or builds are running, it's possible you'll only get a partial snapshot of the latest information, so I'd stop Jenkins just to be safe. Just know that in production it's okay to back this directory up without stopping Jenkins. So that file is in my home directory on my laptop. Let me exit out of my ssh session and come back to my desktop, and then I'll use a secure copy: I'll connect to my laptop, grab the archive I created, and copy it right into my home directory here on my desktop. Once that's done, I have the archive over here.
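The backup step can be sketched like this. The directory and archive names below are stand-ins so the example is self-contained rather than touching a real ~/.jenkins:

```shell
# Stand-in for the Jenkins data directory
mkdir -p /tmp/demo-home/.jenkins/jobs
echo '<config/>' > /tmp/demo-home/.jenkins/config.xml

# c = create, z = gzip, f = archive file name
cd /tmp/demo-home
tar czf jenkins-backup.tar.gz .jenkins
```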
If we want, just to be safe, we can preview the contents of this tar archive: t for listing the contents, z to indicate that it's a gzipped archive, and f for the file, pointing at the file I just copied over. There's a lot in here, so I'll just take the first few lines, and you can see we do indeed have our .jenkins folder. I'll clear the screen, and then, just to make sure we don't have anything right now, I'll list the contents of the .jenkins folder; you can see there's no such directory. I'll run tar this time with x for extract, z for the gzip compression, and f for the file name. When I run that, it extracts and creates the .jenkins folder. Just to make sure it actually exists now, let's list its contents, and you can see we have our files pulled over. Then I need a copy of the Jenkins war file, so why don't I just copy that right off my laptop as well; it was in the Downloads folder, jenkins.war, and I specify to copy it right here. Once that's done, I can clear the screen and run java -jar jenkins.war, and we're off to the races. You can see right at the start that the web root is pointing at the .jenkins folder in my user home. Just to be clear, the Jenkins home directory is pointing at my user directory's .jenkins folder, so that is indeed the location we extracted the files to from the laptop. So the real test: does Jenkins have our data copied over? Let's pull it up in the browser and see what we've got. Alright, it looks like our data is indeed here. We can log in and continue with our discussion of continuous delivery. I really wanted to share that with you because I'm sure you'll want to back up Jenkins and perhaps restore it into some test environments. As you can see, this process is dead simple.
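The restore side, sketched end to end with stand-in paths. On the real desktop you would scp the archive over from the laptop first; here we create it locally so the demo is self-contained:

```shell
# Re-create a backup archive locally (stand-in for the scp'd file)
mkdir -p /tmp/laptop-home/.jenkins/jobs
echo '<config/>' > /tmp/laptop-home/.jenkins/config.xml
tar czf /tmp/jenkins-backup.tar.gz -C /tmp/laptop-home .jenkins

# On the "desktop": preview, then extract into the new home directory
mkdir -p /tmp/desktop-home
cd /tmp/desktop-home
tar tzf /tmp/jenkins-backup.tar.gz | head -3   # t = list contents first
tar xzf /tmp/jenkins-backup.tar.gz             # x = extract .jenkins here
ls .jenkins                                    # data directory restored
```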

Following Along with Windows

Thus far in the course, the biggest difference when using Jenkins on Windows is that you need to use the batch step type instead of the shell step type. Because there are quite a few shell steps in this module's examples, I wanted to give you a heads up on where you can get help as you work through them. I'm basically giving you the ending point for this module: come out to the gist (here is the shortened URL for that), come down to module 5, and click on the Pipeline ending point in the Jenkinsfile. If you search for "on Windows use", you'll see a bunch of comments for what to substitute: instead of shell npm install it's batch npm install, and it's pretty much just replacing shell with batch throughout. However, there are a couple of places where the commands themselves differ: where we use ls you'll want dir instead, and instead of rm -rf you'll want del /S /q followed by an asterisk, which only wipes out files, but that's good enough; in this case it's okay to leave the folders around. So if you scroll through, you'll see all the substitutions you can make for Windows, and that should help you avoid any issues. Also, if you have access to the exercise files for this course, you can load them up using the technique I showed you at the end of module 2: drop them into the jobs folder in the Jenkins home directory, and they have the comments in them as well. And all along the way I'll have some text overlays pointing this out, so keep an eye out for those too.
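The substitutions amount to swapping step types and a few commands. A side-by-side sketch (the steps are illustrative, not the module's exact Jenkinsfile):

```groovy
node {
    // Linux/macOS                  // Windows equivalent
    sh 'npm install'                // bat 'npm install'
    sh 'ls'                         // bat 'dir'
    sh 'rm -rf *'                   // bat 'del /S /q *'  (removes files only)
}
```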

Starting Point and Pipeline Stashing

We're going to work on a different example in this module. I've taken this example from another one of my courses, my SystemJS course. It's a game of solitaire built with HTML, CSS, and JavaScript. So instead of a Java app, we now have an application based on JavaScript; things will be a tiny bit different, and I want to tease out those differences by working through this as an example instead of a compiled application like Java. This is what the app looks like, and at some point in this module we'll actually have deployed it. To access the starting point for this module, come out to the gist (you have the shortened URL to access it). Scroll down to module 5; there are two things here. First, there's a link to the source code. This is the GitHub repository that houses the branch for this particular course; notice I have a branch here, jenkins2-course. Also notice that there's a Pipeline starting point. I didn't want to go through manually recreating the Pipeline steps we're already familiar with; I wanted that included out of the box and explained, instead of building it step by step, so we can spend our time on more advanced concepts in our continuous delivery pipeline. You'll notice that this pipeline, like before, has a reference to a git repository, and this time I've got something a little different: I'm specifying a branch as well. Instead of the default master branch, I'm working on the jenkins2-course branch. After the source code is cloned, we run an npm install. If you're not familiar with Node and JavaScript development, npm install pulls down dependencies; it's a lot like pulling down RubyGems or Maven dependencies. I want to skip over the stash step here; we'll come back to that because it's actually new.
After we run npm install, we execute npm run and pass it test single run. In this particular project, I have a set of tests set up, and when you call npm test single run, you run those tests just one time. Then I pass some arguments to specify the browser I'd like to use, in this case PhantomJS. So if you want to follow along with this example, make sure you have PhantomJS installed on your computer. I'm not going to walk through that; otherwise all the dependencies in this module would probably blow up into a course of their own. The last thing we have here, just like before, is archiving our test results, and it just so happens that we can use JUnit to do that. That's because behind the scenes, when I run the tests, I'm using Karma, and I've configured Karma to output test results in the JUnit format. When you work with different testing frameworks, you can often control the report format, and a lot of test frameworks, regardless of language, know how to report in the JUnit format, which makes publishing results really convenient. And of course, down below, even though we're not using it yet, we have our call to notify, which we can use later in our pipeline. I've taken out some of the notifications we had thus far to simplify this initial Pipeline.

While we're at it, I'll give you something new: this stash block. Inside a stash block, there are a couple of arguments you can pass. First is a name, because you can create multiple stashes. Think of a stash as a folder on disk somewhere that holds some files for you. It's a lot like archiving, but temporary: it only lasts the duration of a pipeline. Once a pipeline is done executing, the stashed files are no longer available.
When we stash files, we have to specify what we want to include, and with two stars I'm specifying that I want to include all the files in the workspace. I also have an exclude set up. Because of where this sits in the pipeline, it isn't strictly necessary, but I wanted to show you what it might look like to exclude some files as well; in this case I'm excluding test results. What I'm trying to do with this stash: after I check out my code and pull third-party dependencies, I stash everything so that if I need it later on, I don't have to pull the code and the third-party dependencies again. That speeds up the pipeline, and it also guarantees that if I need these things later on, I use the exact same versions. So there are two reasons why I'm stashing this information. Okay, now let's go about getting this starting point plugged into a new job in Jenkins; however, I've got a challenge for you. I want you to try this on your own: take this Pipeline script, plug it into a new Pipeline job type, and then join me in the next video where I'll walk through it.
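Put together, the stash described above might look like this (the stash name and exclude pattern are illustrative):

```groovy
// Temporarily save the checked-out code plus npm dependencies for later
// node allocations; a stash lives only for the duration of this pipeline run
stash name: 'everything',
      includes: '**',                // all files in the workspace
      excludes: 'test-results/**'    // no need to carry test reports along
```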

Browsing Workspaces in Pipeline Jobs

Okay, let's get this moved over into a new job. Let's copy the contents of the script and bring it over to Jenkins. I choose New Item, give it a name, solitaire (if I can spell that right), and add Pipeline for consistency with our other jobs. I choose the Pipeline job type and click OK. Then all I have to do is scroll down to the Pipeline section, paste in that sample script, save, and click Build Now to see if this works. One thing to note: this sample job only has one stage, called CI. I just rolled everything we did in the previous module up into a CI stage; you could break that out into however many stages you'd like. Okay, that's done, and it looks like things are successful. If we want, we can drill into the #1 build result, and you can see a couple of things. There are some tests that executed, and those tests were run with PhantomJS. You can see that PhantomJS was part of the package name used by Karma: when Karma runs in a particular browser, it generates a package name with the browser information, so PhantomJS 2.1.1, and even the fact that it's a Mac computer. You can see the different test classes here, and you can drill into individual tests, but overall we have 64 tests that took about a quarter of a second. One of the big differences here is that we're not really packaging anything up for this JavaScript application, nor are we compiling anything. All we're really doing is running the tests to make sure they pass, and stashing the result of pulling third-party dependencies along with the source code so we can use it later on, because we want to build out a pipeline that can help us deploy this application.
If you did want to archive artifacts here, it would probably make sense to archive the source code that runs the application. By the way, if you want to see where the workspace is when working with a Pipeline job, come into the Pipeline Steps view. Now I have a question for you: do you remember where in a Pipeline job we get access to a workspace? This is different from the freestyle project we worked with earlier in the course. You might remember that when we allocate a node, that also gives us access to a workspace, and we have to do that, otherwise we'll have problems trying to check out our code and run scripts and whatnot. Remember the error in the last module when we didn't allocate a node? So if you want to see the workspace, come into one of your allocated nodes; on the left side you have access to the workspace that was allocated for this node, and you can see what was checked out. If you wanted to archive this project's artifacts, you'd probably grab the app folder. The node_modules here are just used for testing purposes, so you would not include those; the app folder is all you'd really need to archive. All the rest is for testing and development purposes. So if you want a fun challenge, add a step to archive that app folder and keep it around. I won't do that, because we don't need it for the purposes of this module.
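For the challenge above, a one-line sketch placed inside the node block after the tests. The archiveArtifacts step ships with the core Pipeline plugins; its parameter names may vary slightly by version:

```groovy
node {
    // ... checkout, npm install, tests as before ...
    archiveArtifacts artifacts: 'app/**'   // keep just the deployable app folder
}
```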

A Second Node Allocation

We're now ready to move into the part of our Pipeline that branches out and performs additional integration testing with different browsers. If you want to follow along, make sure you have the relevant browsers installed, or just skip configuring them here in the Pipeline. Inside the configuration of our Pipeline, we could just copy the line that performs testing, paste it three times down below, change the browser to point at Chrome, Safari, etc., and this would work. The problem is that it would execute serially, when we'd like it to execute in parallel. We can't execute this inside our existing node allocation: this entire block is allocated to a single node, so everything in it runs on one node. To run our tests in parallel, we're going to need additional node blocks after our initial CI process, and inside of those we can paste in our test runs. But before we do that, we need to understand what happens when we allocate another node. I've got a question for you: do you think we're going to have access to our source code here? Because we've allocated a new node, we've also allocated a new workspace. It's very possible that this is executing on a different computer (remember, we can have multiple agents hooked up to our Jenkins cluster), or in a different workspace if multiple things are being executed on a given agent. Each time we allocate a node, there's no guarantee that we get the same workspace we had in previous allocations. It's possible, but not guaranteed, so let's do something to understand what's going on when we allocate a new node here.
Let's just list the contents of the workspace, and right after that I'm going to remove everything inside the workspace to ensure we start from an empty one. Right after that, I can unstash the files I saved. To do that I only need to specify the name of the stash, and since that's the only argument I'm passing to the unstash step, I can specify the name without the parameter name. So we'll unstash everything, and then list what we have now. In other words: we list before, we remove everything, we unstash what we stashed earlier in the previous node allocation from our previous workspace, and then we list everything out. Okay, let's save this and click Build Now to see what happens. We'll see that #2 kicks off; we're in the CI stage right now. By the way, if we hop up to the dashboard level, we'll see on the left-hand side, under Build Executor Status, that part of the solitaire pipeline is executing, and it's build #2; we're using one of the executors on our master to do that. Let's go back into the pipeline, because it looks like it's done now, and look at the Console Output for build #2. If we scroll way down, you'll see a couple of points in time where we have these "Running on master" messages, and they even show the workspace we're inside of; there will be one of these at the top as well. Each time we allocate a node, you should see one of these messages. Now in this case we're running on master and using the same workspace both times, but that's not a guarantee. In the second "Running on master", here's the new output from our second node allocation: first we run our shell with the ls command, and you can see the list of files we have, because we are actually reusing the same workspace. That's not guaranteed, though, so don't assume it.
Then right after that, we go ahead and run another shell, this time removing everything. Then we perform our unstash of everything that was stashed before and now when we list the contents with an ls again, you can see we get all the same files back. Not too exciting because we reused the same workspace here, but it is possible we would have been allocated on a different node and we wouldn’t have had the same workspace to begin with, so that’s why the unstash is necessary. And to really understand that, we need a second agent so join me in the next clip where I show you how to set up a second agent to test this out.
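The second allocation just described can be sketched like this (stash name as used earlier in this example):

```groovy
node {
    sh 'ls'              // may or may not show files from a prior allocation
    sh 'rm -rf *'        // guarantee an empty workspace
    unstash 'everything' // restore code + dependencies from the CI allocation
    sh 'ls'              // same files as before, regardless of which node ran this
}
```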

Adding an Agent Node

The best way to understand how a pipeline works with multiple node allocations is to have an additional agent set up, so let's come up to Jenkins, go to Manage Jenkins, and come under Manage Nodes. We're going to quickly set up another agent (or slave) here, though the way we're doing it is probably not how you'd do it in production. Usually you'd have some type of plugin that flexes a pool of cloud agents, scaling up and down as you need them. For now we'll manually create a new node. Click New Node and give it a name; we'll just call this agent1, for lack of a better name. We'll make it a permanent agent; if you have plugins for other agent types, you might see more options here. Click OK, which creates it, and you're brought to a screen where you can configure the agent. We only need to set up a few things, one of which is the location of the data directory for this agent, so I'll use temp/jenkins-agent1. You want this directory to be unique to each agent; otherwise they'll step on each other's toes. Then there are other options: we can specify the number of executors, which is the number of builds this agent can perform in parallel; let's set this to 2, just like on our master node. We can also give it a label to identify this agent. Labels are a means of pooling agents together: since I'm on a Mac, I might give it a label of mac so I know this agent runs Mac OS, and if I need a build to run on Mac OS, I can target this label; we'll see that in a moment. After that, I'll leave the launch option set to launch the agent via Java Web Start. There are other modes of launching, which you can explore on your own, and as I said earlier, you'll probably use some type of cloud pooling to scale up and down.
So we'll leave the defaults, click Save, and we're brought to a list with our master node and the new agent1 we created. Click into agent1; what we have here is basically the specification of an agent, but the agent is not yet running, so think of this as just a record in a database pointing at the agent. We now need to get the agent up and running. However, it looks like (and you might not have this problem) somehow I disabled the agent port on the master node, so I need to go over to the security config to set that up. Click on that link, and under security, if you have the same problem, check this box and put in port 50000, which is the default. You can also have the port randomly assigned if you want, but I'll use a static port. Save that, come back to Manage Jenkins, go to Manage Nodes, and click into agent1. Now you'll see some options for how to launch this agent, because right now we've only defined it. We could launch it via Java Web Start, or we could download the slave.jar file and execute a command at the command line to start the agent. I prefer the latter approach, so I'll download the jar file, click Keep This, and then copy the command. Once I've got it copied, I come over to the command line; I'm in the Downloads folder, and you can see the slave.jar I downloaded. I paste in the command, and you'll see that it points at the server we have set up and passes a secret that the agent uses to authenticate itself. We'll run this, come back to the web UI, go back to our list of agents, and now you'll see that agent1 is connected: it doesn't have the red X next to it, and you can see some information about the agent, for example that it's a Mac OS agent. That's enough to get a second agent set up.
Notice in the lower left, under Build Executor Status, we now have both master and agent1, and each of them has two slots. These are the executor slots, so each can run two builds simultaneously; we currently have the capability to run four builds in parallel. A "build" here really refers to a node allocation, because a pipeline can have multiple node allocations, and we can even have some running in parallel, which is what we'll set up next. Now let's use our new agent to run that second node allocation. We can force that by going into the configuration for our solitaire pipeline, clicking Configure, and going down to the Pipeline section. Where we declare the node block, we can pass a parameter to the node function: a label we'd like to target. In my case I used mac, so I'll put mac in there; whatever label you'd like to target, you can include. For example, maybe you need a node with Windows on it, so you would have labeled that node windows, or maybe you need a node with Docker, so you'd label a node docker. These labels allow us to specify the kind of node we need for a particular part of our pipeline. Okay, I'm targeting the mac label now, so this won't run on the master; it will run on the new agent instead. Let's save this. I want to split the screen: on the left-hand side I've pulled up the dashboard at the root of the Jenkins site. You can see all the jobs down below, but the most important part is the Build Executor Status. In fact, we can click into that to get a little more information, including our list of agents below. Watch these executors as we kick off our new solitaire pipeline: I'm still in the solitaire pipeline on the right, and I click Build Now.
That kicks off a new build; take a look at what's going on on the left side. You can see master, in slot 2, running part of the solitaire pipeline. That happens to be the part of our pipeline that, in this case, isn't compiling anything, it's just running some tests. Now notice that agent1 is running another part of our pipeline, and now that's done: the first node allocation went to our master node in slot #2, and our second node allocation went to agent1 in slot #2. Let's take a look at the build output: click Console Output and scroll down, looking for those "Running on" messages again; they're your key to seeing when a node allocation occurred. First it's running on master inside of this workspace, and if we look for the next "Running on", you'll see that this time, instead of master, we're running on agent1, in an entirely different workspace. Do you notice the temp directory I specified? I just happen to be on the same computer in this case, because I launched the agent right on the same machine my master is on. Normally that would be different, but we are at least in a different data directory and an entirely different workspace, so there aren't going to be any common files to begin with. You can see that down below: our shell script that lists the contents of the directory finds nothing in it. See how we go right into the next shell script? There's nothing in this directory, and that's why we can't guarantee our files will still be around; that's why we need to stash them. In this case, removing everything from our workspace was unnecessary, but it might not have been. It's nice to start from a clean slate so that, for example, there aren't old test results still sitting around that might get archived a second time.
Okay, so now we perform our unstash and then down below when we list the contents of this directory, if I scroll down a bit, now you see we have everything in the repository restored, so that’s why we need the stash and unstash because when we perform testing, we’re going to need those node module dependencies again and we also need our application code and well, we need test code if we’re going to run our tests again. Okay, join me in the next clip where we set up our testing now that we understand how these different node allocations work.
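As a reference, the stash/unstash dance across two node allocations looks roughly like this (the stash name and paths are illustrative, not the course’s exact script):

```groovy
node {
    // First allocation: fetch code and install dependencies,
    // then stash the workspace so later allocations can reuse it.
    checkout scm
    sh 'npm install'
    stash name: 'everything',
          excludes: 'test-results/**',   // avoid archiving results twice later
          includes: '**'
}

node('mac') {
    // Second allocation: possibly a different machine, and always a
    // different workspace, so restore the stashed files first.
    sh 'ls'                // empty (or stale) before the unstash
    unstash 'everything'
    sh 'ls'                // now the repository contents are back
}
```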

Setup Parallel Integration Testing in a Pipeline

We are now ready to come into our pipeline and set up our parallel integration testing. In the interest of time, I’m going to paste in some code here and just explain it. So first off, we’re going to kick off a new stage in our pipeline, and I’m going to call that browser testing. Next I’m going to kick off a parallel declaration, and I’ll just call parallel to start setting this up. Remember, we have a DSL here; let me get a little more room to work. We have a DSL that describes how we’re going to execute our pipeline, so this is a declarative DSL and we’re just describing a parallel step now. And don’t forget, you can always come down below here and open up the Pipeline Syntax helper if you want a little bit of help. In this case, choose parallel; this happens to be a step that has no config, so it’s just got some help here for how you can use it. You’ll see that this parallel step takes a map with branch names, and then blocks, or closures, that allow us to configure each of the branches. I think this will make more sense when you see it, so let’s come back to the configuration. I’ll just paste in three separate branches, one for Chrome, one for Firefox, and one for Safari, and then I’m specifying a block for each to specify what I want to run in each of these branches. Now I don’t have this part yet, I’ll bring that in next, but basically what I’m doing is saying, hey, I want three branches, and in each of them I want to run tests, but for each of them I want to specify the browser, so Chrome, Firefox, and Safari. That’s what we’ll pass eventually to our npm run command; we’ll specify the browser as a passed argument there. So let’s scroll down here, and what I’ve got is a runTests function that I’m going to define. I’m writing my own function now, just like we had notify way down below. Let’s walk through this function.
First, this function takes a browser parameter as a string argument, so whatever we pass in up here will get passed down below. Then we allocate a node to run all of our script inside of, and the first thing we do is wipe out that workspace, just to be safe. Then we unstash everything, just like we did before, so we make sure we have our source code, our tests, and our node modules. Then we run our tests here, and this looks exactly like the command above, except we inject the browser parameter that we passed, so really what’s happening here is that this Chrome is getting passed into this npm run command. And the last thing we do is archive the artifacts, sweeping up everything inside of that test result folder, which is why it could be a good idea not to stash that in a previous node allocation, because then we might report tests back multiple times. That’s why, way above here, you’ll see that I stash before I run the PhantomJS tests, though with this exclude it would have been safe to do that after as well. Come back down here. All we’re doing is calling this runTests function for each of the branches, which will set up a node for each branch; so inside of each branch we will allocate a node and perform our tests with the relevant browser. Let’s go ahead and save this, and then join me in the next video where we kick this off.
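Putting that together, the browser-testing stage and the runTests helper look roughly like this; the npm script name and result paths are assumptions based on the description above, not the course’s verbatim code:

```groovy
stage 'browser testing'

// Three named branches, each configured by a closure; Jenkins runs
// them concurrently, subject to available executors.
parallel chrome: {
    runTests('Chrome')
}, firefox: {
    runTests('Firefox')
}, safari: {
    runTests('Safari')
}

// Our own helper function, like the notify function further down.
def runTests(String browser) {
    node {
        sh 'rm -rf *'          // wipe the workspace, just to be safe
        unstash 'everything'   // restore source, tests, and node modules
        // Same test command as before, but with the browser injected.
        sh "npm run test-single-run -- --browsers ${browser}"
        // Sweep up everything in the test results folder.
        step([$class: 'JUnitResultArchiver',
              testResults: 'test-results/**/test-*.xml'])
    }
}
```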

Executing and Monitoring Parallel Pipelines

Okay, I’ve gone back to the split-screen approach here. On the left-hand side I have our Build Queue and Executor Status, like in the last demo, and on the right-hand side I have our solitaire pipeline. What I want to do in this demo is actually open up the Blue Ocean UI, because the graphical visualization in there is really helpful for understanding what’s going on. So come in here and click solitaire pipeline. Now I know that this will probably change before it’s released and integrated with Jenkins in the future, but it does help you see what’s going on, so we’ll use it for learning purposes; just don’t rely on it looking the same in the future. Okay, so on the left side we have our executor status. I want you to keep an eye on this, and I have a question for you: how many different node allocations do we have at this point in time? It turns out right now we have five. We have our CI process, then the one where we did the testing to see how a node allocation works, and then our three new ones for the integration testing branches. Now I’ve got another question for you: how do you think our node allocations will play out in terms of the master and agent nodes? How will they be scheduled amongst the executor slots on these two nodes? It turns out there’s really no way to know. Your guess is as good as mine, and even if you think you see a pattern, I wouldn’t rely on it being there forever; it’s possible Jenkins will change how it schedules in the future. There’s no explicit guarantee of the order. The only thing you can do is specify a label if you want to guarantee that a build, or rather a node allocation, runs on a particular agent type. Okay, so let’s kick this bad boy off.
Let’s come over here to the Blue Ocean UI and click Run, or if you don’t have this up, you can click Build Now over in the traditional Jenkins interface, and you’ll see that a build #4 is scheduled here. On the left side you’ll see part of that is starting to run on a master node. That’s the CI part of the pipeline. Now you’ll see we’ve got the three branches running; we’re using two executor slots on the master and one on the agent. And actually, it looks like things completed here. What I want to do is drill into the interface in Blue Ocean. I wanted you to see this view of our pipeline here in Blue Ocean because it’s really neat. It actually shows the branches that we have within a stage, so we have our CI stage and our browser testing stage, and inside of these stages the branches show up as bubbles, including the status of each particular branch. Let’s run this again just to see this visualization in action, so click Run here. Actually, I can run this many times; let’s run two of these at the same time. See all these node allocations kicking off on the left-hand side. Let’s come into one of these, #6 here. We have this nice interactive view of our pipeline, including the branches, and then down below we have the steps. Now for a little play-by-play: over in the executor status, we’re clearly in that part of the pipeline where we were testing out unstash, where we’re not really doing anything meaningful, and now we’re at a point where we are executing the branches. We’ve saturated our nodes, and the only point at which that would happen is when one of these pipelines is into the browser integration testing. As our pipeline continues, you can see down below we get more steps showing up, so we can keep track of what is executing, and if you notice, those are the steps that we defined in our pipeline script. I’m going to reload the page here because sometimes this doesn’t update fast enough for me.
You can see we’re now into the browser testing part of the pipeline, and we’re looking at the steps down below for the Chrome branch. It looks like we’re now done. By the way, you saw a few windows pop up. I must still need to configure something to make sure the browser windows used for testing don’t pop up. Not a big deal, but just be aware of that when you run this. So as you can see, we can run multiple branches in parallel. We can even run multiple pipelines at the same time. Obviously this will help speed things up. Let’s take a moment and look at the output of these pipelines; we ran three of them now, #4, #5, and #6, and this is over in the traditional interface. You can see here we have our traditional stage view with this new browser testing stage, but obviously this isn’t as nice as the Blue Ocean interface because we can’t see the branching. Nonetheless, we can see some information, and if we scroll up here, the test trend is interesting. On builds 1, 2, and 3 we only had the PhantomJS tests, so there are only about 60 of those, but now that we’ve added in the integration testing, we have four times that number, about 240-odd tests. If we drill into one of these, perhaps #6, we can take a look at the test results, and you can see we have Chrome, Firefox, PhantomJS, and Safari. The nice thing here with these additional integration tests is that we could find problems specific to a browser, even specific to a browser on a given OS. We can already start to see that integration testing takes more time than the PhantomJS testing did. Now this isn’t a lot of time here, 0.34 seconds for our integration testing, but I’m sure if you’ve ever done any integration testing, you know that integration tests in a browser can rack up minutes easily.
Running these in parallel is a huge benefit over running them serially, one after the other, or trying to somehow run them in parallel within the confines of a single node allocation. You don’t have to do that, because Jenkins gives you this nice ability in a pipeline to split things out. If you are curious at all about the individual steps that executed, in the traditional interface you can come into Pipeline Steps and see that information here. For each node allocation there will be a separate block, so here is our CI process, here is the testing of unstashing, and then as you scroll down you’ll see we executed some branches in parallel, and each of the branches is listed separately. So you can jump around and get some information. You can even see if a particular step in a branch failed versus the overall pipeline. Also, you can check out the Blue Ocean interface; over here you can click on the different branches, see the steps down below, and see the status of each of those. I prefer this view because I can click around and visually limit what I’m looking at. I can then drill in and see the log for each particular step. Okay, now that we have our parallel testing set up, let’s move on to deployments.

Manual Approval for Deployments

With our integration testing in place, let’s move on to talk about deployments. As I mentioned before, we want to make sure that we have approval before performing a deployment, so let’s start out by adding an input step that will ask, and by ask I mean pause the pipeline until a human comes to approve its continuation. Okay, let’s hop into setting up our deployment, including first a manual verification before we proceed. We can create an approval by simply using the input step type and specifying a string to display, in this case asking the question: is it okay to deploy to staging? Now this means that somebody needs to come over to Jenkins and manually approve this. That’s okay. This might be the expected workflow for deciding whether to deploy to a particular environment, because you might not want to deploy to production automatically; maybe you don’t even want to deploy to a staging environment automatically. Of course, we’ve already seen how things can be automated, so if that’s what you want, you don’t have to include this. Now it might be nice, right before we do this, to notify people, perhaps via email, that there’s a deploy waiting to be pushed out to staging. One thing about the notify: it needs to be wrapped in a node allocation. However, do not wrap the input in a node allocation. This input will pause the pipeline until you come and approve or deny the deployment, and if you use a node allocation around the input step, you will lock up that executor until somebody comes and pushes a button, which means that executor will be frozen and taken out of the pool. That is not a good idea; you want to release that capacity so it can be used for other builds. So make sure your input is outside of a node allocation. Now you might be wondering: if the input is outside of a node allocation, where does it actually run?
Well, it turns out that this runs on what’s known as a flyweight executor. Let’s talk briefly about that. In a Jenkins cluster, we have our master node and then we can also have agent nodes, and we’ve spun up one agent thus far. Throughout this course we’ve talked about executors a few times, and the executors that we were talking about can run on any of the nodes, including the master; these are known as heavyweight executors. You can think of them as the executors that do the bulk of the real work involved in building software, deploying it, and testing it. There’s another type of executor that’s used when we don’t allocate a node in a pipeline, and that’s a flyweight executor. This exists only on the master node and is used to execute code that’s outside of a node allocation. So in the case of input, we keep it outside of a node allocation so we don’t lock up one of our heavyweight executors; instead we just use this flyweight executor to pause the pipeline. These flyweight executors on the master do not eat into the capacity we specify for the heavyweight executors; when we configure a node to have a certain number of executors, that’s referring to the heavyweight executors. The flyweight executors actually run the entire pipeline script aside from the parts that we allocate to a node, and the good thing is that they’re always available; there’s no limit on the number of them, unlike the heavyweight executors. Okay, let’s see how this input process works by saving here, and let’s come out now and click Build Now. You’ll notice that the build kicks off. I’m going to pause this and come back when it gets to that input step. You’ll notice in the upper right we get that email from MailHog. We can click on that if we want to simulate what it might be like to get this on our own computer, and we can click here to come to the Jenkins UI.
We’re brought right into the build, and if I make this full screen, you can see over on the left-hand side there’s this “paused for input”, though I don’t think it stands out well, so I don’t really like to come in via this approach. I’m going to close this and hide MailHog. I actually like to look at things from this stage view overall, and you’ll see right here inside of the browser testing that it says paused for 13 seconds. So whatever stage you have that input pause at will show as paused here, and if you hover over it you’ll see we have the option to deploy to staging: we can either proceed or abort. If we abort, which we’ll do for now, you’ll notice that we get this Aborted message. Okay, now that we have our input in place, next let’s turn to deploying our software.
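The shape of this part of the pipeline, per the rules above (notify inside a node, input outside), is roughly:

```groovy
// The notification needs a node allocation...
node {
    // stand-in for the course's notify() helper (e.g. a mail step)
    echo 'Deploy to staging is awaiting approval'
}

// ...but the input step must stay OUTSIDE any node block: it runs on
// a flyweight executor on the master and pauses the pipeline without
// freezing a heavyweight executor slot while it waits.
input 'Is it okay to deploy to staging?'
```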

Setup Deployment to Staging

Once we have the approval, we can move on to our stage for deploying, which I’ve named deploy, and I’ve also passed an additional parameter here specifying a concurrency value of 1. Let me put some notes in here for you to reference. By default, multiple pipelines can run at the same time; we saw that earlier. If we tried to deploy twice to the same environment, that would probably break the deployment, so by setting a concurrency value of 1, only one pipeline can deploy at a time, and if there were multiple pipelines executing, the most recent pipeline is the only one that will be allowed to deploy, which makes sense. We wouldn’t want to deploy an older version of our app if we have a newer version waiting. So the newest pipeline wins and the rest will be cancelled. This stage has a dual purpose: not only does it name a section of our pipeline, it also allows us to limit concurrency. I’m going to paste in the deployment here. This starts out with a node allocation, and then I’ve got a couple of things in here that I don’t expect you to understand; I don’t expect you to run this on your own computer unless you already know how to use Docker. I did want to use Docker because it makes it very easy for me to show you a deployment process: I can create a container with our application code in it and have it running almost as if it’s a service on my computer, very easily. So if you don’t know Docker, that’s okay, just watch the rest of this clip. If you do, you can try this out yourself. If you’re a Docker person already, you’ll see that there’s a Dockerfile and a docker-compose file in that jenkins2-course branch, so we’re going to take advantage of that. I’m doing a couple of things here to help us understand what version of our application we have deployed. There’s an app/index.html page that is that solitaire game I showed you way at the beginning of this module.
I’m just going to write the build number at the end of that page and surround it with an h1 tag so it looks really big. That way, each time we deploy, we’ll know what version we deployed; it gives us the ability to see changes rolling out. So I write that into that file in the workspace here, before I deploy, so each deploy will have its own build number inside of it, a number that tells us which pipeline did the particular deployment we’re looking at. Next I call docker-compose, which includes building a new container that has our application source code injected into it. I grab an nginx container and embed our site, because this is just a static HTML site. That’s all in the repository if you want to look at it. I basically just spin up that container; you can think of this as starting a service on a computer. I do that by calling up and specifying -d for detached, so it runs in the background and our build process can complete, and I also pass --build to force the container to always rebuild. The last thing I do here is send out a notification that the software was deployed. Maybe if you send this notification out you’d want to include the URL, but I think this gets the point across. Long story short, the important part here is that we just add another node allocation at the end of our pipeline and do the deployment. Whatever that deployment looks like, if you can script it, you can automate it with Jenkins. Join me in the next clip where we kick this off. For now, let’s go ahead and save this.
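A sketch of that deploy stage, with the concurrency limit and the Docker steps as described; the file paths and the compose invocation mirror the narration, so treat the details as approximate:

```groovy
// concurrency: 1 -> only one pipeline may run this stage at a time;
// if several are waiting, only the newest proceeds and the rest are
// cancelled.
stage name: 'deploy', concurrency: 1

node {
    // Stamp the page with the build number so each deploy is
    // visibly identifiable in the browser.
    sh "echo '<h1>${env.BUILD_NUMBER}</h1>' >> app/index.html"
    // Build the nginx-based container with the site baked in and
    // start it in the background, always rebuilding.
    sh 'docker-compose up -d --build'
    // stand-in for the notify() helper announcing the deployment
    echo 'Deployed to staging'
}
```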

Executing a Deployment Pipeline

Alright, let’s fire off our deployment. Right now you can see on localhost port 3000 that there’s nothing hosted. This is the location that our application will respond at, our “staging environment.” So let’s come over to Jenkins, click Build Now, and see if we can get our deployment through. I’m going to pause the recording until we get to our prompt. Okay, I’ve got my email notification here, which means we must be ready. You can see that our browser testing is now paused, so we can come in here and click Proceed this time, and you can see we’ve now hit our deploy stage, so theoretically we should have our application deployed. Let’s find out and, voilà! It worked the very first time I deployed. That’s pretty awesome. So the nuances of using Docker don’t really matter. The important part is that we can compose a pipeline with deployments inside of it, and I hope you’re starting to visualize all the different things that you might want to plug into this pipeline, representing the automation of every single step of developing software through to delivering it to a production environment. We could easily come into our Configure here, go into the pipeline, and I’m sure you could imagine that we could have additional stages. This one could be Deploy to staging; we could copy the exact same code, paste it down below, and add in a Deploy to production, and of course we’d probably want another verification in there as well. Take a minute and think about the process you use right now to deploy software to various environments. Maybe not production, but maybe a staging environment or some type of test environment or demo environment. Think about what that process looks like and map out what the nodes might be. Where is parallelism going to be beneficial? Think about what your process might look like if you were to put it into a Pipeline job.

Jenkinsfile - Checking the Pipeline Script into VCS

Let’s wrap up with one more demo before we end this course, and this one’s really cool. We have this pipeline script and we’ve been working with it inside of the Jenkins UI, and that’s great while you’re testing things out and learning about the process, but in reality you might want to take this script and put it into version control so you can version it like every other file that’s part of a project. One of the big benefits of doing this is that you give control back to the teams that are developing the software to define what the pipeline looks like. You give them the autonomy to do what they need to do to deploy their software, and you don’t have to be responsible for it if you are, perhaps, just responsible for maintaining a Jenkins server. You also get a lot of benefits from versioning this file: you can see who made changes, when they made them, why they made them, etc. You can even correlate the changes with changes in your application code. So let’s copy this and see what this process looks like. By the way, if you want to follow along, you’re going to need to fork this repository into your own repository on GitHub so that you can make changes to it. To do that, hop over to the command line and go ahead and pull down the repository that you forked. Once you pull this down, make sure you check out the branch that we’ve been working with. That branch is jenkins2-course, and from there, let’s go ahead and create a new branch to work with this new approach of checking the script in. So we’ll check out a branch that doesn’t exist yet, using -B to create it. We’ll call this jenkins2, and then -jf for Jenkinsfile; actually, let’s do jenkins2-course-jf. Jenkinsfile is the name of the file you check into version control. List the contents here and you’ll see the files that we’ve been working with. Make sure you see the docker-compose file.
Now all we need to do is open up a file here called Jenkinsfile, right at the root of our repository, and paste in the contents of our script. We will need to make one small change way up at the top: instead of using the git declaration here, we’ll provide a generic checkout scm, because we’re going to configure the git repository outside of this script. Obviously our job has to know where this script is, so it already needs to know where the repository is. So all we need to do is say checkout scm, which knows to check out the appropriate repository that we’ve already configured. We can save our file and close out of our editor, and if we do a git status here, you should see the new file. We can git add the Jenkinsfile, then git commit, adding our Jenkinsfile, and go ahead and push this. Whoops! I actually should push the other branch. There we go; we’ve got our new branch pushed out. Then we need to come back to Jenkins, and we have a couple of options for what we can do here. The very first thing we could do is choose Pipeline script from SCM and specify Git as our repository type, though to be honest, I don’t want to make these changes because I want to leave this job around. So let’s come back up to the level of our dashboard and create a new item. We’ll name it the same, solitaire pipeline, Jenkinsfile this time though, and then we’ll come down here and copy an existing job. We’ll grab our existing pipeline and just make a copy of it. Now we’ll come in here, come down to our Pipeline script, and change this to be from SCM. Now we can specify Git. We need to provide the repository, so I’ll paste that in. If you forked it, make sure you put yours in, not mine. Then we need to come down to the branch specifier and put in */jenkins2-course-jf, matching the branch we checked out on the command line, and that’s it.
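The one change when moving the script into a Jenkinsfile is swapping the explicit git step for a generic checkout. Assuming the original script began with a git step (the URL below is a placeholder), the top of the checked-in file becomes:

```groovy
// Jenkinsfile at the repository root.
node {
    // Before (in the Jenkins UI):
    //   git url: 'https://github.com/your-user/your-repo.git'
    // After: the job configuration already knows the repository,
    // so just check out whatever SCM the job points at.
    checkout scm

    // ...the rest of the pipeline stages, unchanged...
}
```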
We’ll leave the path alone because we used the default Jenkinsfile name. You could change the name if you wanted to. We can save this now. So we’ve configured the job and pointed it at our repository. We can click Build Now and the process kicks off. I’m going to pause the recording until something interesting happens; hopefully we have a deployment. Now that I think about it, we’ll probably have a problem deploying, because we’ve already deployed from our other location, and if you know about Docker, we will have some trouble trying to do a docker-compose up with the same ports. That’s okay. The important point is that, simply by pointing at our source control, we’ve kicked off our pipeline. We already have the CI stage showing up here, which means we’ve indeed read from that file instead of configuring things inside of Jenkins. While this is running, we can come over to the command line, edit that Jenkinsfile again, and do something like change the stage name; let’s just say Deploy to staging, because that’s more specific. Save that. We’ll commit our changes and push them. Clear that out. We can go back to Jenkins now, and I think I was saved by virtue of the fact that we paused for permission to deploy, so let’s just go ahead and abort, and let’s click Build Now again. We don’t have a trigger set up; we could have one, but in this case we don’t, so we have to manually push Build Now, and the process kicks off again. I’ll pause the recording until we get to the deployment. Okay, we’re now at the pause. I can choose to proceed, and the important thing here is that you can see we now have our new stage title, Deploy to staging. That’s pretty cool. So we now have control to change everything from the repository.
Now just keep in mind, the reason this failed here, if you’re following along, is that I’m trying to deploy another container with the exact same port as the one we already have deployed, so that’s not going to work. With that, we are now done with this course. I’m going to wrap up, though, by sharing a few pieces of summary information and some resources.

Summary

Why Pipeline

Before we wrap up, I want to take a minute and talk about the benefits of using the Pipeline approach. I waited until the end of the course because I think the best way to understand this is to have seen it in action; now I can talk about all the benefits, and they should resonate with the demonstrations you’ve seen. One of the first benefits we saw was the ability to break our jobs out into different stages, and we can have whatever stages we’d like to represent the process we use to deploy software. Of course, if anything goes wrong, we can see which stage had the problem; our deployment failed right here, for example. We even have the ability to add in verification before we proceed, and in this case we aborted. We have the ability to run stages in parallel, so we could have multiple tests executing in separate branches very easily. It’s not that we couldn’t do that with the freestyle job type, but we’d have to wire up our own way to do it; now we don’t have to. With a very declarative syntax (let’s look at our other Pipeline job where this is in Jenkins), we can configure all the different steps in our pipeline using a nice DSL, and we can do that programmatically. One of the things we haven’t seen yet is the ability to read files. You might have some different browsers listed in a file somewhere, and you might want to read that file to decide what to run, so you could loop over the contents of the file and generate each of these different branches. That means you can have a very dynamic process, as opposed to a freestyle job, where everything is pretty much set in stone up front and has to be done inside of the Jenkins UI. The other big benefit we saw was the ability to check this script in under source control and delegate the responsibility back to the individual people or teams that are supporting the software, so that they can design the process to match whatever it is they want it to do.
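That file-reading idea could be sketched like this; this is hypothetical, not from the course, assuming a browsers.txt in the repository with one browser name per line and the runTests helper defined elsewhere in the script:

```groovy
// Read the list of browsers from the repo; readFile needs a workspace,
// so it runs inside a node allocation...
def browsers
node {
    checkout scm
    browsers = readFile('browsers.txt').trim().split('\n')
}

// ...then build the parallel branch map dynamically, outside the node.
def branches = [:]
for (String browser : browsers) {
    def b = browser          // capture the loop variable for the closure
    branches[b] = { runTests(b) }
}
parallel branches
```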
You don’t have to standardize and centralize all of this configuration. Now, one thing you’ll need to be careful about, and I mentioned this way at the start of the course: there are some plugins that sound like they’re related to pipelines, but they’re not. These were actually first attempts at trying to make what we have now with the Pipeline job type. These attempts strung freestyle jobs together through a lot of configuration, chaining and linking them and trying to produce interfaces to visualize things at a high level, but it was very brittle, it wasn’t a lot of fun to set up, and it definitely wasn’t as simple as a Pipeline script. So watch out for these, but also know that there’s a rich history of plugins that tried to do many things before we arrived at what we have today with the Pipeline plugin and its Pipeline scripts. Another benefit I didn’t get a chance to show is that these pipelines can persist across restarts, so if something goes wrong and the Jenkins server goes down, you can pick up where you left off with the pipeline. With a freestyle job, if an agent goes down while executing it, you lose your work in progress. Along the same lines, these pipelines can be long running: it might take a long time before you deploy into staging and then deploy into production, and that’s not a problem. The one thing to keep in mind, though, is to try not to block your node allocations; don’t put manual input verifications inside of them. Those are some of the benefits of pipelines. Next, let’s talk about some resources before we wrap up.

Resources

To continue your learning, I want to share some resources. In the future, I’d like to create more courses, but I’m not sure when I’ll get the time to do that, so in the interim, let me point you at some things. First off, there’s documentation out on jenkins.io, and that documentation includes a handbook. I’d encourage you to take a look at this because there are some long chapters that are really helpful, with visualizations. For example, Pipeline as Code, though we’ve covered quite a bit of that in this course already. As you can see, you can come out here and reference this for individual little tutorials as well. There are some chapters that will be good when you’re ready to deploy, like scaling and managing Jenkins, hardware requirements, and security and access control. These are just a few of the topics you might be interested in. There’s also a blog that will tell you about events that are coming up and give you information about new releases, and sometimes I see tutorials in there as well. You can access the wiki for plugins from this site too. We pointed at this earlier. You’ll probably want to come out and look at some of these plugins and see which ones you might want to try out, and again, each of these plugins has its own plugin page that explains usage information, where to get the source code, and a little bit of information for getting started.
There is a use cases section that lists some of the plugins or tutorials you might be interested in for a given scenario, though to be honest, these are not all that comprehensive a picture of how you might want to do things in Jenkins; you’ll definitely want to look online for blog posts covering the particular process you’re trying to automate. Under resources, you can also get to the wiki for Jenkins, which is the same place we looked at for the plugins, but at a high level there’s some more Jenkins information here as well. For example, Meet Jenkins and Use Jenkins have quite a few tutorials. Just be careful: some of this information is a little out of date or refers to v1, so keep an eye out for anything that seems like it might be replaced by the Pipeline script features we looked at. There is a cookbook out on CloudBees, and other documentation out there that might be helpful too. I’ve included these links at the bottom of the Gist, along with a link to the jenkins.io website, so you can refer to them. These are a couple of places you can look to see what other people are doing with Jenkins and get more ideas about what you might want to do. If you want to see what I’m up to, you can come out to my blog, weshigbee.com, and sign up for my newsletter at /newsletter. I hope you’ve enjoyed the course. Until next time, enjoy!

Solution to Challenge in Module 2

Let’s go through the solution of getting this Atmosphere sample built and running. First, I’m just going to click Build Now to get this to execute; we should see our first failure, and that should help us figure out what’s going wrong. So we’ve got #1 over here and it looks like it failed, so let’s click into that. Now, can you tell me what I should do to figure out what went wrong here? Well, hopefully you guessed it: we could take a look at the Console Output. If you look in here, you’ll see we have an error trying to access some remote helper for ttps. Hmmm. If I look above that, I can see what looks to be a URL, and it looks like something is missing: the h is missing from the front of it. So that’s the first issue. We just have the wrong repository URL. Let’s come back up to the project, configure it, scroll down, and add that in. Now let’s save that, click Build Now, and see where we get. Huh. It looks like we still have a failure. Let’s click in, come to the Console Output, and see what happened this time. It looks like this time the problem is that Maven is not available. You might have had that problem too if you tried this on a computer where you have not yet installed Maven. I happened to move computers in the last module of my course, so that’s why it’s not available. I can pull open a terminal and run brew install maven to fix that, at least here on my Mac. Okay, that’s done, so I can come back to Jenkins, build the project again, go back up to the project view, take a look at build #3, and see what we get. This will take a bit, so I’ll pause the recording. Okay, that’s done now and we still have an issue. If we come into the Console Output for that, I wonder what the issue could be.
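For reference, that first failure was a plain typo: git saw a URL starting with `ttps://` and went looking for a remote helper named `ttps`. A minimal sketch of the corrected checkout in Pipeline form, where the exact repository URL is my assumption of the Spring Boot repo used in the course:

```groovy
// The broken job config used 'ttps://...' (missing the leading "h"),
// which git reports as being unable to find a remote helper for 'ttps'.
// Corrected URL below (assumed to be the upstream Spring Boot repo).
node {
    git url: 'https://github.com/spring-projects/spring-boot.git'
}
```

The same fix applies to the freestyle job in the walkthrough; it is just the Source Code Management URL field rather than a `git` step.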
So let’s come down to the bottom here, and it looks like we’ve got an error when we go to archive our artifacts. No artifacts were found matching the file pattern specified, and the output suggests this might be a configuration error, so we should take a look at that. Tell me, what do you think we should do to figure this out? How about we go take a look at the workspace and see if that helps? Let’s go back up to the project level and look at the workspace. We do have spring-boot, then samples, and then spring-boot-sample-atmosphere, which I believe was the path. Just to be safe, let’s pull open the Console Output in a new tab and split the screen. At the bottom of the Console Output we’ve got spring-boot-samples and so on. Let’s copy this and search for it over here. At first it doesn’t seem to match, but it’s there in the title; the spaces were throwing off the search. So we’re on the right path, and we’ve even got this target folder. Inside the target folder, we’re looking for a *.jar, and there is no *.jar. So our path looks okay; we’re just not producing the jar file. Let’s go look at the configuration of this job. Let’s make this big, go down to the Build steps, and can you tell me what looks like it might be a problem here? Well, I see a couple of things. First off, we do have some help: we’re getting a warning that this file doesn’t exist in the workspace, which would have been helpful if we’d seen it when we were setting up the project. But do you see why we’ve got a problem? It turns out we’re calling compile and not package, so we’re not producing the jar, and therefore there’s no jar artifact to archive. One thing to keep in mind is that this post-build archive step will fail the build if it can’t find the artifact.
Now, there is an option to not fail the build in that case; you could check this if you wanted, and then we would not have had an error, but you also might not notice that you didn’t archive an artifact for that build of your application. So if we save that now and click Build Now, let’s see if everything’s okay. Okay, the build has started; I’ll pause until it’s done. And hey! It looks like everything’s okay now. So those were the two problems you should’ve found: the broken repository URL, and calling compile instead of package.
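To sum up that second fix: `mvn compile` stops after compiling classes, while `mvn package` runs through the packaging phase and produces `target/*.jar`, which is what the post-build archive step was looking for. Here is a minimal Pipeline-style sketch of the corrected build and archive steps, with the module path assumed from the workspace layout in the walkthrough:

```groovy
// Sketch of the corrected build + archive steps in Pipeline form.
node {
    // Path assumed from the workspace seen in the walkthrough.
    dir('spring-boot-samples/spring-boot-sample-atmosphere') {
        // 'package' (not 'compile') so the jar is actually produced.
        sh 'mvn package'
        // Same glob pattern the freestyle post-build step used.
        archiveArtifacts artifacts: 'target/*.jar'
    }
}
```

In the freestyle job from the course, the equivalent change is simply editing the Maven goal in the build step from `compile` to `package`.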