Puppet was designed as a configuration management tool for Unix-like systems. It runs on Linux, Solaris, FreeBSD, OpenBSD, AIX, and Mac OS X and, since version 2.7.6, also on Windows.
Over the years, however, it became clear that automation in a datacenter must also involve other families of devices: network equipment, storage devices, and virtualization solutions.
The interest of companies such as Cisco and VMware, which are investors and technology partners of Puppet Labs, could only ease Puppet's steps into these territories. We are already seeing the results of these partnerships, and the vision of a software-defined datacenter is also taking shape from a Puppet-driven perspective.
In this chapter, we will review the current status of the projects that allow us to use Puppet with these categories of devices and technologies.
The automation of network equipment configuration is a common need; when we provision a new system, besides its own settings we often need to manage switch ports to assign it to the correct VLAN, firewalls to open the relevant ports, and load balancers to add the server to a balanced pool.
The possibility to define the configuration of the whole infrastructure, network included, in a single place is obviously a powerful and welcome capability.
There are two main kinds of challenges that Puppet faces when it has to deal with network devices: technical ones and cultural ones.
For the technical challenge, there is some good news: alternative approaches have been taken to manage network equipment of different natures, and from different vendors, with Puppet, either by using a proxy host that applies configurations to the devices remotely or by running Puppet natively on the device itself.
Besides the technical challenges, for which there are some solutions but still much to do, there are cultural and operational issues to deal with.
In many places, network and system administrators are of different breeds; they operate in different groups, are responsible for their own infrastructures, and use their own instrumentation.
Puppet's programmatic approach to configuration, which is likely to be pushed by sysadmins, might not be well accepted by the network people, who are probably less obsessed with automation and more used to dealing with static configurations.
This is where the DevOps culture may make a difference. There is the need to automate and there are the tools; the solution is collaboration, shared responsibilities, and good common sense.
Puppet users typically need just basic management of network devices, not their whole configuration; most of the time, it is a matter of setting parameters and VLANs on switch interfaces.
Many products provide authorization profiles, which can limit users' permissions, so a sane compromise can be to allow automatic management only of simple port settings and to prevent changes to more global and risky core configurations.
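As an illustration of such a compromise, the following is a hypothetical Cisco IOS fragment (the privilege level, username, and the set of allowed commands are arbitrary examples, not a recommended profile): it confines a dedicated puppet user to a limited privilege level that can enter interface configuration and change only the access VLAN and the port description, while the rest of the configuration remains off-limits.

```
! Hypothetical authorization profile: a 'puppet' user limited to
! privilege level 5, allowed only a few interface-level commands
username puppet privilege 5 secret PuppetPass
privilege exec level 5 configure terminal
privilege configure level 5 interface
privilege interface level 5 switchport access vlan
privilege interface level 5 description
```

With a profile along these lines, an automated run can reassign server ports to their VLANs without ever being able to touch routing, ACLs, or other core settings.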
Many Puppet features originate from community contributions. One of the most versatile and long-standing contributors is definitely Brice Figureau. When there was still nothing around on the topic, he proposed an approach to network device management, which has been the foundation for the proxy-based approach we mentioned earlier.
In his blog post at http://puppetlabs.com/blog/puppet-network-device-management, he introduced the puppet device application, added in Puppet 2.7 to manage external devices where Puppet cannot run natively.
This command uses /etc/puppetlabs/puppet/device.conf as its default configuration file. Here, we can place the hostnames of the equipment to manage, their type, and the method used to connect to them. A sample entry may look like the following code:
[switch01.example42.lan]
type cisco
url ssh://puppet:[email protected]/

[router01.example42.lan]
type cisco
url telnet://puppet:[email protected]/?enable=enablepassword
With such a file in place, we can use the puppet device command on the host we want to act as a proxy for the configuration of the remote devices.
The first time this command is executed, it creates certificates for all the devices defined in device.conf. These certificates have to be signed by the Puppet Master, just like normal nodes' certificates.
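In practice, the signing flow mirrors the one for regular agents. The following is a sketch of the commands involved (the hostnames are taken from the sample device.conf shown earlier, and the puppet cert subcommand assumes a Puppet version of that era):

```shell
# On the proxy host: generate certificate requests for the configured
# devices by running the device application for the first time
puppet device --verbose

# On the Puppet Master: list the pending requests and sign them
puppet cert list
puppet cert sign switch01.example42.lan
puppet cert sign router01.example42.lan
```

Subsequent puppet device runs on the proxy will then authenticate with the signed certificates, exactly as a normal agent would.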
The implementation provides two core native types, interface and vlan, with a provider to manage Cisco IOS-based devices. We can execute puppet describe interface or puppet describe vlan for details on their attributes.
To manage switch interfaces (speed, duplex, VLAN, port mode (access or trunk), description, and so on), we can write resources like this:
interface { 'FastEthernet 0/1':
  description => "Server ${server_name}",
  mode        => access,
  native_vlan => 1000,
  duplex      => auto,
  speed       => auto,
}
To manage router interfaces, we can use the following code:
interface { 'Vlan12':
  ipaddress => [
    '192.168.14.14/24',
    '2001:2674:8C23::1/64',
  ],
}
To manage VLANs (whose ID is the resource title), a resource like the following is enough:
vlan { '105':
  description => 'DMZ',
}
These resources can be declared in node definitions that match the device names specified in device.conf. When puppet device is executed, it behaves like puppet apply: it retrieves facts from the network device, retrieves a catalog from the Puppet Master for the locally configured devices, and applies it, providing a normal transaction report. The notable difference from a normal Puppet run is that the providers that implement the preceding types perform their configurations on remote network devices.
In Puppet's core source, there is currently just the provider for Cisco devices, and the only supported transport methods are telnet and ssh, but we can find modules that use the same approach and implement it for different devices.
For example, Puppet Labs' F5 module (https://forge.puppetlabs.com/puppetlabs/f5; Puppet Enterprise is required for installation) introduces several F5-specific resource types, but it is based on the network device application and has similar usage patterns. A sample entry in device.conf might look like the following code:
[f5.example42.lan]
type f5
url https://username:[email protected]/
Note that, in this case, the network device type is f5 and the access is done via https.
A further demonstration of Puppet's extensibility is a module available at https://github.com/uniak/puppet-networkdevice, written by two community members, Markus Burger and David Schmitt, which provides wider support for Cisco devices and implements, on top of the Puppet device application, a new device type (cisco_ios). A sample entry in device.conf looks like the following:
[switch01.example42.lan]
type cisco_ios
url sshios://user:[email protected]:22/?$flags
The module features a more complete set of resource types to manage different elements of a Cisco IOS configuration (access lists, SNMP configuration, interfaces, VLANs, users, and so on).
Something to consider is that the Puppet agent that normally runs as a service on a node does not perform any device activity. To manage the configured devices on a regular basis, we need to place a cron job on the proxy host that executes puppet device.
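For example, a crontab entry on the proxy host could look like the following (the 30-minute interval and the log path are arbitrary choices):

```
# Run puppet device every 30 minutes and keep a log of the transactions
*/30 * * * * /usr/bin/puppet device >> /var/log/puppet-device.log 2>&1
```

Each scheduled run then retrieves and applies a fresh catalog for every device configured in device.conf.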
A proxy-based approach, built on puppet device, has the benefit of letting us manage virtually any device that somehow allows programmatic remote configuration, but it has some cons related to scale, authentication management, and the fact that it behaves differently from any other Puppet command.
We can go a step further when Puppet runs natively on the device to be managed and can apply configurations directly. This is an emerging field where we are already seeing some implementations, and it will probably grow along with the concept of the software-defined datacenter.
In 2013, Cisco released onePK, a software development kit that consists of a set of API libraries allowing the monitoring and management of different families of Cisco devices and operating systems (IOS/XE, NX-OS, and IOS XR), exposing an abstracted interface that may be used by libraries in different languages.
The Nexus 9000 family of enterprise switches hosts, in a Linux VM container running inside NX-OS, a native Puppet agent, which allows the usage of dedicated resource types such as cisco_device, cisco_interface, and cisco_vlan in a normal agent/Master setup. We can place the code in a device's node definition as follows:
node 'switch01.example42.lan' {
  # Definition of the device, needed for each device
  cisco_device { 'switch01.example42.lan':
    ensure => present,
  }
  # Configuration of a VLAN on an access interface
  cisco_interface { 'Ethernet1/5':
    switchport  => access,
    access_vlan => 1000,
  }
  # Configuration of a VLAN
  cisco_vlan { '1000':
    ensure    => present,
    vlan_name => 'DMZ',
    state     => active,
  }
}
Directly from the device CLI, we can issue commands such as onep application puppet v0.8 puppet_agent to run Puppet from the local device, which has its normal certificate and communicates with the Puppet Master like any other node.
When applied on the Linux container where Puppet runs, the previous resources don't operate directly on the switch's configuration; rather, they use the onePK presentation API to interface with the onePK API infrastructure running on the device.
For more information about Cisco onePK and Puppet refer to this presentation at http://puppetlabs.com/presentations/managing-cisco-devices-using-puppet.
Juniper Networks also boasts a deep approach to Puppet integration. It provides native jpuppet packages for its Junos OS, supported on all releases after 12.3R2. They install Ruby, the required gems, and Puppet on Juniper devices; Puppet then runs locally and behaves exactly like any other client, with its own certificate and node definition.
Juniper has also developed two modules: netdev_stdlib (now under Puppet Labs' control; it can be found at https://github.com/puppetlabs/puppet-netdev-stdlib), which contains the netdev_* resource types, and netdev_stdlib_junos, which provides the Junos OS providers for those types. The Puppet code for a switch node looks like the following:
node 'switch02.example42.lan' {
  # A single netdev_device resource must be present
  netdev_device { $hostname: }
  # Sample configuration of an interface
  netdev_interface { 'ge-0/0/0':
    admin => down,
    mtu   => 2000,
  }
  # Sample configuration of a VLAN
  netdev_vlan { 'vlan102':
    vlan_id     => '102',
    description => 'Public network',
  }
  # Configuration of an access port without VLAN tag
  netdev_l2_interface { 'ge-0/0/0':
    untagged_vlan => Red,
  }
  # Configuration of a trunk port with multiple VLAN tags,
  # where untagged packets go to the 'native' VLAN
  netdev_l2_interface { 'xe-0/0/2':
    tagged_vlans  => [ Red, Green, Blue ],
    untagged_vlan => Yellow,
  }
  # Configuration of link aggregation ports (bonding)
  netdev_lag { 'ae0':
    links         => [ 'ge-0/0/0', 'ge-1/0/0', 'ge-0/0/2', 'ge-1/0/2' ],
    lacp          => active,
    minimum_links => 2,
  }
}
The authors' idea is that netdev_stdlib might become a standard interface for network device configurations, with different modules providing support for different vendors.
This approach looks decidedly more vendor neutral than Cisco's onePK-based one, and it has implementations from Arista Networks (https://github.com/arista-eosplus/puppet-netdev) and Mellanox (https://github.com/Mellanox/mellanox-netdev-stdlib-mlnxos).
This means that the same preceding code with netdev_* resource types can be used on network devices from different vendors: the power of Puppet's resource abstraction model and the great work of a wonderful community.