Author Archive

Automating Commercial Off The Shelf Software Deployment

I currently work for a large enterprise. Our little corner of this company has been very successful in implementing and rolling out build and deployment automation across all of the applications that we work with. This success is largely due to the fact that most of our applications are fairly simple JVM based apps with an embedded container. The DevOps team that I work with has created a simple pattern for building, packaging and deploying these apps, which we then apply to future projects of the same nature. When we find improvements to the existing method, we work with the teams still on the legacy pattern to apply the change to their process as well. One of the unexpected outcomes of this success is that teams who are currently not doing any automation are now approaching us and asking us to work with them on automating their build, packaging and deployment process.

These requests always put us in a bit of a bind. We have been successful due to the nature of the applications we are working with. JVM based apps are invariably simple to package up and deploy – especially when there is an embedded container and the application is therefore completely self-sufficient. When a new team comes looking for our help, it is overwhelmingly because they are developing on a proprietary platform that does not lend itself to automation as well as a custom solution does. In this first post, I will discuss the strategy we take when we approach these teams and how we know from day one whether we are going to be successful. In the next post I will review one of the successes we have had in working with these teams – automating the packaging and deployment to an IBM DataPower instance.

This post is going to be relatively short and sweet. If you are trying to reliably automate a Commercial Off The Shelf (COTS) product, you need either an API or a command line based solution that allows you to do the following (a rough sketch of the kind of wrapper we build around such an interface follows the list):

  • Stop/Start/Restart/Reload the running processes
  • Remove and clean the currently deployed application
  • Install the new application
  • Update environment specific information
  • Load any application specific data packages or database migrations
  • Verify the status of the application post deployment
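In practice, the automation we build on top of such an interface is just a thin wrapper script that strings those capabilities together. Here is a minimal sketch in Ruby – the cotsctl command line is entirely hypothetical and simply stands in for whatever API or CLI your product actually provides:

#!/usr/bin/ruby
# Sketch of a deployment wrapper around a hypothetical vendor CLI ("cotsctl").
# Each step maps to one of the capabilities listed above; substitute the real
# API or command line calls that your COTS product provides.
abort "usage: deploy.rb <environment> <package>" unless ARGV.length == 2
environment, package = ARGV

def run(cmd)
  puts "==> #{cmd}"
  system(cmd) or abort "[FAILED] #{cmd}"
end

run "cotsctl stop myapp"                                            # stop the running processes
run "cotsctl undeploy myapp"                                        # remove and clean the current deployment
run "cotsctl deploy myapp --package #{package}"                     # install the new application
run "cotsctl configure myapp --env-file #{environment}.properties"  # update environment specific information
run "cotsctl load-data myapp --dir data/#{environment}"             # load data packages / run migrations
run "cotsctl start myapp"                                           # start the processes back up
run "cotsctl status myapp"                                          # verify the application post deployment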

If you don’t have an API or command line that provides this functionality, you are not completely out of luck BUT, in my experience, any solution that relies on one of the following workarounds:

  • Driving the GUI via a tool like Sikuli
  • Capturing and templating HTTP calls (POSTs/PUTs) to a web based admin console
  • Reverse engineering the application deployment in the DB or filesystem of an application

will require a full time team to maintain and fix it, as it will break frequently, will require a lot of re-work every time a new version of the COTS product is deployed, and will not account for edge cases. In other words, it is doable if the business insists, and it will even provide some benefit, but don’t expect that you can ever walk away from it without worrying about it breaking. We only agree to work with teams whose tooling provides an API or command line based interface that fulfills the criteria above. We are not in the business of creating a solution that can’t be maintained by the development team that owns the product, and a solution built on a custom set of workaround tools isn’t supportable by them.

My next post will focus on the work we did automating the IBM DataPower deployment for two of the teams we work with. This is a product that does provide the necessary API driven interface to manage deployments remotely, but it still required a custom framework to be built on top of that interface to support Continuous Delivery for the teams using the platform.

Testing puppet manifests part 1 – Local Compilation

Testing puppet manifests

The pipeline approach we use to move our infrastructure changes from one environment to the next gives us the advantage of having some visibility into what will happen in an upstream environment. Still, it would be nice to get some quick feedback on potential issues with our puppet codebase before we even apply the changes. We have come up with two mechanisms that provide very fast feedback and some assurance that our changes won’t immediately break the first upstream environment. The first, covered in this blog post, is local node compilation.

Node manifest compilation

In the same way that a developer compiles their code locally prior to checking in, the node manifest compilation step runs through each and every node we have defined in our puppet manifests and compiles its catalog. This catches errors such as:

  • Syntax errors
  • Missing resource errors – i.e. a file source is defined but not checked in
  • Missing variable errors for templates

The code to do this is pretty simple:

  1. Configure Puppet with the manifest file location (nodes.pp) and the module directory path
  2. Use the puppet parser to evaluate the manifest file and find all available nodes for compilation
  3. For each node found, create a Puppet node object and then call compile on it
  4. Compile all nodes, fail only at the end of the run if any node failed to compile, and list all failed nodes in the output

require 'rubygems'
require 'puppet'
require 'colored'
require 'rake/clean'

desc "verifies correctness of node syntax"
task :verify_nodes, [:manifest_path, :module_path, :nodename_filter] do |task, args|
  fail "manifest_path must be specified" unless args[:manifest_path]
  fail "module_path must be specified" unless args[:module_path]

  setup_puppet args[:manifest_path], args[:module_path]
  nodes = collect_puppet_nodes args[:nodename_filter]
  failed_nodes = {}
  puts "Found: #{nodes.length} nodes to evaluate".cyan
  nodes.each do |nodename|
    print "Verifying node #{nodename}: ".cyan
    begin
      compile_catalog(nodename)
      puts "[ok]".green
    rescue => error
      puts "[FAILED] - #{error.message}".red
      failed_nodes[nodename] = error.message
    end
  end
  puts "The following nodes failed to compile => #{print_hash failed_nodes}".red unless failed_nodes.empty?
  raise "[Compilation Failure] at least one node failed to compile" unless failed_nodes.empty?
end

def print_hash nodes
  nodes.inject("\n") { |printed_hash, (key,value)| printed_hash << "\t #{key} => #{value} \n" }
end

def compile_catalog(nodename)
  node = Puppet::Node.new(nodename)
  node.merge('architecture' => 'x86_64',
             'ipaddress' => '127.0.0.1',
             'hostname' => nodename,
             'fqdn' => "#{nodename}.localdomain",
             'operatingsystem' => 'redhat',
             'local_run' => 'true',
             'disable_asserts' => 'true')
  Puppet::Parser::Compiler.compile(node)
end

def collect_puppet_nodes(filter = ".*")
  parser = Puppet::Parser::Parser.new("environment")
  nodes = parser.environment.known_resource_types.nodes.keys
  nodes.select { |node| node =~ /#{filter}/ }
end

def setup_puppet manifest_path, module_path
  Puppet.settings.handlearg("--config", ".")
  Puppet.settings.handlearg("--manifest", manifest_path)
  Puppet.settings.handlearg("--modulepath", module_path)
  Puppet.parse_config
end
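
Assuming the task above lives in a Rakefile next to your puppet code, running it looks something like this (the paths here are placeholders for wherever your nodes.pp and modules actually live):

rake verify_nodes[path/to/manifests/nodes.pp,path/to/modules]

The optional third argument is a regex filter on node names, which is handy for quickly re-verifying a single node while you are working on it.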

Code available here: https://github.com/oldNoakes/puppetTesting

Note that in our production code, we break the nodes up into subsets and then fork a process to compile each subset. Currently we run 20 parallel processes for over 400 nodes – this typically takes about 45 seconds on a fast machine (i.e. our build server) and up to 120 seconds on a slower one (i.e. the worst developer workstation that we have).
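
The parallelisation itself is nothing fancy – a simplified sketch is below (our production version also collects the failure details from each child rather than just the exit statuses):

# Simplified sketch: split the node list into slices and fork a child process
# to compile each slice, failing at the end if any child process failed.
def verify_nodes_in_parallel(nodes, processes = 20)
  slices = nodes.each_slice((nodes.length / processes.to_f).ceil).to_a
  pids = slices.map do |slice|
    fork do
      # compile_catalog is the same helper used by the serial version above;
      # an unhandled compilation error makes the child exit non-zero
      slice.each { |nodename| compile_catalog(nodename) }
    end
  end
  statuses = pids.map { |pid| Process.waitpid2(pid).last }
  raise "[Compilation Failure] at least one subset failed to compile" unless statuses.all?(&:success?)
end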

Exposing facter facts via mcollective YAML plugin

At my current client, we use MCollective to support the deployment of code, configuration and test data amongst a large number of potential nodes. In order to ensure that we target the correct machines to run these tasks, we rely on the following:

  • A set of values in the /etc/mcollective/facts.yaml file that are application and node specific (i.e. deployment environment)
  • An additional set of custom facts that are deployed into the /var/lib/puppet/facts directory
  • The default set of facts made available by facter

To expose these to our mcollective server, we started off using the FactsFacter plugin along with a custom fact that read the contents of the /etc/mcollective/facts.yaml file. This worked, but we noticed that the time taken to instantiate the facts on a given node can be quite lengthy. This can impact our configuration deployment (which happens via an mcollective puppet agent) because the time required to get the facts established (along with other issues) causes the agent call to time out.
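
For reference, that custom fact was essentially just a loop over the YAML file – a simplified sketch (not our exact code) looks like this:

# Custom fact, dropped into the facter load path, that turns every key in
# /etc/mcollective/facts.yaml into an individual facter fact.
require 'yaml'

facts_file = "/etc/mcollective/facts.yaml"
if File.exist?(facts_file)
  YAML.load_file(facts_file).each do |name, value|
    Facter.add(name) do
      setcode { value }
    end
  end
end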

In order to improve the speed of facts collection by mcollective, we decided to return to using the YAML plugin – we just had to find a way to get all of the facts that we previously relied upon into a YAML file and then make that file available alongside the /etc/mcollective/facts.yaml that already existed.

Our solution is a cron job that reads all the facts from facter as well as our custom facts and writes them into a secondary yaml file in the /etc/mcollective directory.

Here is the script (facter_to_yaml.rb) that generates the yaml files on each of the nodes:

#!/usr/bin/ruby
require 'facter'
require 'yaml'

# the ssh key facts are large and never used for targeting, so drop them
rejected_facts = ["sshdsakey", "sshrsakey"]
custom_facts_location = "/var/lib/puppet/facts"
outputfile = "/etc/mcollective/facter_generated.yaml"

# pick up our custom facts alongside the default facter ones, then dump
# everything (minus the rejected facts) into the generated yaml file
Facter.search(custom_facts_location)
facts = Facter.to_hash.reject { |k,v| rejected_facts.include? k }
File.open(outputfile, "w") { |fh| fh.write(facts.to_yaml) }

We then deploy this script and use it in a cron job configured via puppet:

  file { "/usr/local/bin/facter_to_yaml.rb":
    source  => "puppet://puppet/modules/mcollective/usr/local/bin/facter_to_yaml.rb",
    owner   => root,
    group   => root,
    mode    => 0700,
  }

  cron { "factertoyaml":
    command => "/usr/local/bin/facter_to_yaml.rb",
    user    => root,
    minute  => [13, 43],
    require => File["/usr/local/bin/facter_to_yaml.rb"],
  }

Finally, we configure our mcollective server.cfg to use the newly generated file (snippet only below):

  # facts
  factsource = yaml
  plugin.yaml = /etc/mcollective/facter_generated.yaml:/etc/mcollective/facts.yaml

ISSUES

  • Something to note about the order of the YAML files listed in the plugin.yaml config option – the order matters. Values in the second YAML file take precedence over values in the first – therefore, if you are overriding any of the default facts (or any of your custom facts) in the facts.yaml file, it must be listed second. Not an issue in our case, but something to keep in mind.
  • The other issue with this approach is staleness: if we create a new custom fact, update an existing one, or something changes on a node that affects one of the default fact values, mcollective will not see the change until the next time the cron job runs. This could cause us issues in the future, so it is likely that we will also create an mcollective agent that can call the ‘facter_to_yaml.rb’ script outside of the regular cron times, giving us the option of regenerating the file on an as-needed basis (a rough sketch of such an agent follows below).
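
A minimal sketch of what such an agent might look like (the agent name, metadata values and file location are ours to choose, and the accompanying DDL file is omitted):

# mcollective/agent/factsrefresh.rb (under the mcollective libdir)
# Regenerates facter_generated.yaml on demand by shelling out to the
# facter_to_yaml.rb script deployed above.
module MCollective
  module Agent
    class Factsrefresh < RPC::Agent
      metadata :name        => "factsrefresh",
               :description => "Regenerates facter_generated.yaml on demand",
               :author      => "devops team",
               :license     => "internal",
               :version     => "0.1",
               :url         => "",
               :timeout     => 60

      action "refresh" do
        reply[:output] = %x{/usr/local/bin/facter_to_yaml.rb 2>&1}
        reply.fail! "facter_to_yaml.rb exited with #{$?.exitstatus}" unless $?.success?
      end
    end
  end
end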

ALTERNATIVES
If you are simply looking to expose certain facter facts to mcollective then you should consider the approach detailed on the mcollective-plugin wiki: FactsFacterYAML

Environment based DevOps Deployment using Puppet and Mcollective

One of the challenges that we ran into at my current project was how to treat the deployment of our puppet configuration in the same way as we treat the deployment of our applications – i.e. push it to ‘test’ environments to verify the changes prior to pushing them to the production environment. We needed a way to validate that changes in the puppet code would produce the expected results when applied to the production environment without actually pushing them there.

Our solution was to set up five different puppet environments representing each of the different environments into which code gets deployed. We then used a combination of puppet, mcollective and mercurial to promote changes between environments. With appropriate tests in each environment, we were able to validate that the infrastructure changes we had made were ready to be promoted up the ladder.

Technical Setup

We configured our machines into separate collectives that represent the deployment environment in which they live. Each of these collectives has a corresponding puppet environment, so that when the machines in a collective execute a puppet run they pull their infrastructure code from that environment’s codebase. A successful application of the infrastructure code in the previous environment triggers our continuous deployment server to update the next environment’s codebase to the same mercurial revision.

Our puppetmaster config (in /etc/puppet/puppet.conf) looks as follows:

manifest = /usr/share/puppet-recipes/$environment/puppet/manifests/site.pp
modulepath = /usr/share/puppet-recipes/$environment/puppet/modules

The puppet run on each node is triggered via an mcollective agent running the following command:

/usr/sbin/puppetd --environment=${collective} --onetime --no-daemonize --verbose

Execution Setup

The puppet environments we have configured are:

  1. NOOP
  2. CI
  3. DevTest
  4. UAT
  5. Production

Each of these environments corresponds to a different stage in our continuous deployment server. The first stage is the most interesting as it has the majority of the tests in place to catch issues with our puppet manifests. The NOOP run does the following:

  1. Pulls the latest checkin into the NOOP puppet environment codebase
  2. Compiles the catalogs for each of our nodes using the NOOP codebase – this catches the majority of typo errors, missing dependencies, forgotten variables for templates and missing files.
  3. Runs a puppet NO-OP run against all nodes – this catches most of the remaining logical and cyclical dependency errors that can be introduced by a puppet module change.
  4. The puppet NO-OP run also produces an output report that provides us with the visibility to understand what changes are going to be applied to each environment with the latest codebase – this is very useful for auditing and tracing purposes
  5. If the NO-OP run completes without any errors, the mercurial revision of the last checkin is exposed via our continuous deployment server

The following four stages all do the same thing (a rough sketch of one such stage follows the list):

  1. Grab the mercurial revision exposed by the last successful run of the previous stage and update the appropriate environment codebase to that revision
  2. Trigger a puppet apply run for all the machines in that collective – capture and analyse the output to verify no warnings or errors
  3. If the run completes without any errors, expose the mercurial revision that was just applied out via the continuous deployment server
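
Stitched together, each of these promotion stages boils down to a small script along the following lines. This is only a sketch – the paths, the revision hand-off and the mco_trigger_puppet_run command are simplified stand-ins for our actual pipeline plumbing, and the puppet run each node executes is the puppetd invocation shown in the Technical Setup section above:

#!/usr/bin/ruby
# Sketch of a promotion stage: update the target puppet environment codebase to
# the revision blessed by the previous stage, trigger the puppet run for that
# collective, then scan the captured output for warnings or errors.
environment, revision = ARGV
abort "usage: promote.rb <environment> <hg revision>" unless environment && revision

recipes_dir = "/usr/share/puppet-recipes/#{environment}"

def run(cmd)
  output = %x{#{cmd} 2>&1}
  abort "[FAILED] #{cmd}\n#{output}" unless $?.success?
  output
end

run "hg pull -R #{recipes_dir}"
run "hg update -R #{recipes_dir} -r #{revision}"

# hypothetical wrapper around our mcollective agent that triggers the puppet
# run on every node in the collective and returns the combined output
output = run "mco_trigger_puppet_run #{environment}"

abort "[FAILED] puppet run reported warnings or errors" if output =~ /^(err|warning):/i
# the continuous deployment server can now expose this revision for the next stage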

Because each of our deployment environments is set up similarly to the environments above it, this gives us the opportunity to verify that changes applied to a server will also work in the upstream environments. The primary difference between a CI environment and a production environment in our case is that production has more servers (of the same type) and may offload some work to a dedicated server instead of hosting it on the same box as the application (e.g. a db server running alongside the application server in CI and DevTest vs. an independent db server in UAT and Production).

This setup isn’t perfect – in particular, running the puppet NOOP stage while another stage is running can cause issues, as puppet will fail if it detects another puppet run in progress – but it provides us with a reasonable amount of certainty that the changes we have made are correct and will not break any of the systems in later environments.

Useful shell commands

Like most devs on a linux project, I seem to spend a great deal of time figuring out how to do things on the UNIX command line that I know will come in handy again in the future. As such, I will contribute yet another command line post…

SSH TUNNEL:

ssh -f user@remote.machine -L localport:remote.machine:remoteport -N

Sets up a background tunnel so that anything sent to localhost:localport is ‘tunnelled’ over to remote.machine:remoteport.

Delete files older than X:

find /path/to/dir -type f -name "*foo*" -mtime +3 | xargs rm -f

Finds all files in /path/to/dir whose names match *foo* (using wildcards) and that are older than 3 days, then deletes them

Search contents of files for string:

find /path/to/dir -name "*foo*" | xargs grep "string to search"

Finds everything in /path/to/dir whose name matches *foo* (using wildcards) and then greps each result for the specified string

Find files that match multiple different names:

find . -type f \( -iname "*.erb" -or -iname "*.rb" -or -iname "*.pp" \)

Finds all files under the current directory that end with .erb, .rb or .pp