Automate Multi-Source Code Manager during Puppet Enterprise Install

I’ve been a fan of Puppet Enterprise’s Code Manager since it shipped in PE 2015.3.  When we originally converted our deployments from zack/r10k to CM, we had just a single control repo with all our code environments as branches.  Later, we realized we could streamline our workflow by separating hieradata from the control code into its own repo, with different security and its own branches, each consumed by different PE deployments.

In preparing to migrate our PE deployments to Azure, I’ve been developing automation and scripting around the operations involved to avoid misconfigurations as much as possible.  One of the areas I’ve been wanting to automate is Code Manager setup.  Puppet’s documentation on the subject is great, but the current (2016.4.2) PE installer doesn’t allow for CM configurations that involve multiple source repositories or proxy servers.  Complex configurations have to be set up after install.  I figured out how to do this in a streamlined fashion by creating a temporary, local hiera data directory, referenced with a relative path at the bottom of the hierarchy, and injecting Code Manager configuration data into its common.yaml.

Note that because Code Manager only deploys per-branch environments to the PE Master’s $codedir, the example I have here deploys our hieradata repository’s branches to $codedir as well, where they will appear as environments named “hiera_<branchname>”.  They can be safely ignored in the PE Console, and should not be assigned as an environment to any node group.


After Puppet Enterprise setup, place an SSH private key with read-only access to your source repos in the location recommended in the PE documentation.
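If you don’t already have a deploy key for your repos, one can be generated along these lines (a sketch; the output path and key comment are placeholders, and the public half goes into your Git server’s read-only deploy key mechanism):

```shell
# Generate a passphrase-less RSA key pair for Code Manager (example path)
ssh-keygen -t rsa -b 4096 -N '' -C 'code-manager-deploy' -f /var/tmp/id-control_repo.rsa

# Add /var/tmp/id-control_repo.rsa.pub as a read-only deploy key on your
# Git server, then move the private half into place per the PE docs.
```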

On the PE master, create /var/tmp/codemgr_key.pp (insert your own private key’s contents):

# /var/tmp/codemgr_key.pp
$codemgr_private_key = '-----BEGIN RSA PRIVATE KEY-----
<private key contents>
-----END RSA PRIVATE KEY-----
'

file { '/etc/puppetlabs/puppetserver/ssh':
  ensure => 'directory',
  group  => 'root',
  owner  => 'root',
  mode   => '0755',
}

file { '/etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa':
  ensure  => 'file',
  group   => 'pe-puppet',
  owner   => 'pe-puppet',
  mode    => '0400',
  content => $codemgr_private_key,
}

Create the Code Manager key by running (on the PE master):

sudo puppet apply /var/tmp/codemgr_key.pp

Make sure to delete /var/tmp/codemgr_key.pp when done if you don’t want to leave your key contents in that directory.
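Since the manifest contains the private key, a sketch for overwriting the file before unlinking it rather than just deleting it (shred is GNU coreutils; note it is not guaranteed effective on all filesystems, e.g. copy-on-write ones):

```shell
# Overwrite, then remove, the temporary manifest holding the private key
shred -u /var/tmp/codemgr_key.pp
```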

Run this as root on the PE master to set up a Code Manager deployment user and retrieve an authentication token for it:

# Edit as appropriate for your deployment
# Environment variables
CERT="$(puppet agent --configprint hostcert)"
KEY="$(puppet agent --configprint hostprivkey)"
CACERT="$(puppet agent --configprint localcacert)"
SERVER="$(puppet agent --configprint server)"

# Use Puppet's curl
alias curl='/opt/puppetlabs/puppet/bin/curl'

# Install jq
puppet resource package jq ensure=installed

# Make a root .puppetlabs directory for token
mkdir /root/.puppetlabs

# Create deployment user
curl -k -X POST https://localhost:4433/rbac-api/v1/users \
  --cert $CERT --key $KEY --cacert $CACERT \
  -H "Content-Type: application/json" \
  -d '{"login":"deployment", "email":"", "display_name":"Code Manager Service Account", "role_ids": [4], "password":"puppetlabs"}'

# Request an authentication token and store in /root/.puppetlabs/token
curl -k -X POST https://localhost:4433/rbac-api/v1/auth/token \
  --cert $CERT --key $KEY --cacert $CACERT \
  -H "Content-Type: application/json" \
  -d '{"login":"deployment", "password":"puppetlabs", "lifetime":"10y", "label":"PE Master token"}' | \
  jq -r '.token' > /root/.puppetlabs/token
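The jq -r '.token' filter at the end simply pulls the bare token string out of the JSON body that the RBAC API returns, so the file contains only the token; for example, with a made-up response:

```shell
# Hypothetical RBAC response, piped through the same filter used above
echo '{"token":"0Wtkexampletoken","remember_me":false}' | jq -r '.token'
# prints 0Wtkexampletoken
```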

Inject Local Hieradata

# Remove proxy setting if not required; PROXY should already be set in the
# environment (e.g. PROXY="http://proxy.example.com:3128").
mkdir /etc/puppetlabs/code/hieradata
cat > /etc/puppetlabs/code/hieradata/common.yaml << COMMONYAML
puppet_enterprise::master::code_manager::git_settings:
  private-key: '/etc/puppetlabs/puppetserver/ssh/id-control_repo.rsa'
puppet_enterprise::master::code_manager::proxy: '$PROXY'
puppet_enterprise::master::code_manager::sources:
  puppet:
    remote: "<clone URL for control repo>"
    prefix: false
  hiera:
    remote: "<clone URL for hieradata repo>"
    prefix: true
puppet_enterprise::profile::master::code_manager_auto_configure: true
puppet_enterprise::profile::master::file_sync_enabled: true
COMMONYAML
chown pe-puppet:pe-puppet /etc/puppetlabs/code/hieradata/common.yaml
mv /etc/puppetlabs/puppet/hiera.yaml /etc/puppetlabs/puppet/hiera.yaml.old
cat > /etc/puppetlabs/puppet/hiera.yaml << HIERAYAML
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - common
  - "../../../hieradata/common"
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /etc/puppetlabs/code/environments/%{environment}/hieradata on *nix
  # - %CommonAppData%\PuppetLabs\code\environments\%{environment}\hieradata on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
HIERAYAML

# Restart pe-puppetserver to pick up the new hiera configuration
puppet resource service pe-puppetserver ensure=stopped
puppet resource service pe-puppetserver ensure=running
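The relative "../../../hieradata/common" hierarchy entry works because hiera resolves it against its datadir; with the default datadir left in place, it lands exactly on the injected common.yaml. A quick way to convince yourself (realpath -m is GNU coreutils and doesn’t require the paths to exist):

```shell
# Resolve the relative hierarchy entry against hiera's default datadir
datadir="/etc/puppetlabs/code/environments/production/hieradata"
realpath -m "$datadir/../../../hieradata/common.yaml"
# prints /etc/puppetlabs/code/hieradata/common.yaml
```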

Configure Code Manager

Perform a Puppet Agent run as root to configure Code Manager:

sudo puppet agent -t

You should see output indicating that File Sync and Code Manager are being set up:

Notice: /Stage[main]/Puppet_enterprise::Master::Puppetserver/Pe_hocon_setting[jruby-puppet.environment-class-cache-enabled]/ensure: created
Info: /Stage[main]/Puppet_enterprise::Master::Puppetserver/Pe_hocon_setting[jruby-puppet.environment-class-cache-enabled]: Scheduling refresh of Service[pe-puppetserver]
Notice: /Stage[main]/Puppet_enterprise::Master/Pe_ini_setting[puppetconf environment_timeout setting]/value: value changed '0' to 'unlimited'
Info: /Stage[main]/Puppet_enterprise::Master/Pe_ini_setting[puppetconf environment_timeout setting]: Scheduling refresh of Service[pe-puppetserver]
Notice: /Stage[main]/Pe_r10k::Config/File[r10k.yaml]/ensure: defined content as '{md5}1a51daddd57c58615646202f69847914'
Notice: /Stage[main]/Puppet_enterprise::Master::Code_manager/File[/opt/puppetlabs/server/data/code-manager/]/mode: mode changed '0700' to '0750'
Notice: /Stage[main]/Puppet_enterprise::Master::Code_manager/File[/etc/puppetlabs/puppetserver/conf.d/code-manager.conf]/ensure: created
Notice: /Stage[main]/Puppet_enterprise::Master::Code_manager/Pe_hocon_setting[webserver.code-manager.client-auth]/ensure: created
Info: Computing checksum on file /etc/puppetlabs/puppetserver/bootstrap.cfg
Info: /Stage[main]/Puppet_enterprise::Profile::Master/Puppet_enterprise::Trapperkeeper::Bootstrap_cfg[certificate-authority-service]/Pe_concat[/etc/puppetlabs/puppetserver/bootstrap.cfg]/File[/etc/puppetlabs/puppetserver/bootstrap.cfg]: Filebucketed /etc/puppetlabs/puppetserver/bootstrap.cfg to puppet with sum 49aa06818b9c496279e2543e57bfe6ab
Notice: /Stage[main]/Puppet_enterprise::Profile::Master/Puppet_enterprise::Trapperkeeper::Bootstrap_cfg[certificate-authority-service]/Pe_concat[/etc/puppetlabs/puppetserver/bootstrap.cfg]/File[/etc/puppetlabs/puppetserver/bootstrap.cfg]/content: content changed '{md5}49aa06818b9c496279e2543e57bfe6ab' to '{md5}7144f0a9d7e805620f6a90c8e1a9d55f'
Info: Pe_concat[/etc/puppetlabs/puppetserver/bootstrap.cfg]: Scheduling refresh of Service[pe-puppetserver]
Info: Puppet_enterprise::Trapperkeeper::Bootstrap_cfg[certificate-authority-service]: Scheduling refresh of Service[pe-puppetserver]
Notice: /Stage[main]/Puppet_enterprise::Master::Puppetserver/Service[pe-puppetserver]: Triggered 'refresh' from 44 events
Notice: Applied catalog in 43.61 seconds

At this point, you should be able to perform your first deployment with Code Manager:

puppet code deploy --all --wait

Follow the Code Manager documentation for further instructions on setting up webhooks, etc.

Dell R710 vFlash/Firmware Update Issue

I found an issue with the Dell PowerEdge R710 iDRAC 6 Enterprise/vFlash platform that may affect other Dell 11th-generation servers.  I did some pretty extensive testing before contacting Dell support, and they've confirmed that the issue is not isolated to the one server I'm experiencing it on.  I've confirmed that I don't see the issue on the newer PowerEdge R620.

My use case: I want to use the vFlash platform (an SD memory card managed by the out-of-band iDRAC) to stage firmware update packages for remote systems running VMware ESXi.  The goal is to send and stage those packages in advance of a maintenance event, so that I'm not streaming the updates over the network during the event itself.  To accomplish this, you can use Dell's Repository Manager to create an ISO image for one or more server platforms that contains all the required Dell Update Packages (DUPs).  One limitation of the vFlash platform is that mounted ISO images must be 2GB or less, so I created an ISO for just the targeted server model.

There are two options for the ISO image: one that is meant to be used by the Lifecycle Controller (LC; a bootable management environment) that contains the Windows DUPs, or a "Deployment" image that is bootable and contains the Linux DUPs.  I chose to use the LC method and created the appropriate image from Repository Manager.  Once uploaded to the vFlash, I restarted the server and hit F10 to boot to the LC.  I used its "Platform Update" feature to browse to the mounted ISO image and it discovered the updates, but complained that the DUPs on the media weren't Dell-branded updates.  When I found a reference to the second firmware update method, I created the Deployment ISO image for the R710 and booted to it.  That method only updated one item, "Dell 32-bit Diagnostics", leaving everything else untouched.  I tried both methods multiple times before engaging my Dell account team, who asked me to open a support case.  I demonstrated the issue to them in the course of a 1.5-hour web conference session and they followed up with me to let me know they're also seeing this issue in their labs.  I'll update with a resolution if/when I get it.

#Dell  #PowerEdge #R710 #wp  
