Many of the modules found on the public Forge are of high quality and have good documentation. The modules we will cover in this section are well documented; rather than repeat that documentation, we will use concrete examples to show how these modules solve real-world problems. I have covered only the modules I have personally found useful, but there are many other excellent modules on the Forge, and I encourage you to have a look at them before starting to write your own.
The modules that we will cover are as follows:
concat
inifile
firewall
lvm
stdlib
These modules extend Puppet with custom types and, therefore, require that pluginsync be enabled on our nodes. pluginsync copies Ruby libraries from the modules to /opt/puppetlabs/puppet/cache/lib/puppet and /opt/puppetlabs/puppet/cache/lib/facter.
When we distribute files with Puppet, we either send the whole file as is, or we send over a template that has references to variables. The concat module offers a third option: build up a file from fragments and have it reassembled on the node. Using concat, we can have files that live locally on the node incorporated into the final file as sections. More importantly, in a complex system, more than one module can add sections to the same file. As a simple example, we can have four modules, all of which operate on /etc/issue. The modules are as follows:
issue: This is the base module that puts a header on /etc/issue
issue_confidential: This module adds a confidential warning to /etc/issue
issue_secret: This module adds a secret level warning to /etc/issue
issue_topsecret: This module adds a top secret level warning to /etc/issue
Using either the file or the template method to distribute the file won't work here because all four modules are modifying the same file. What makes this harder still is that we will have machines in our organization that require one, two, three, or all four of the modules to be applied. The concat module allows us to solve this problem in an organized fashion (not a haphazard series of execs with awk and sed). To use concat, you first define the container, which is the file that will be populated with the fragments; concat calls the sections of the file fragments. The fragments are assembled based on their order value. Order values should all have the same number of digits: if you have 100 fragments, your first fragment should have an order of 001, not 1. Our first module, issue, will have the following init.pp manifest file:
class issue {
  concat { 'issue':
    path => '/etc/issue',
  }
  concat::fragment { 'issue_top':
    target  => 'issue',
    content => "Example.com\n",
    order   => '01',
  }
}
This defines /etc/issue as a concat container and also creates a fragment to be placed at the top of the file (order 01). When applied to a node, the /etc/issue container will simply contain Example.com.
Our next module is issue_confidential. This includes the issue module to ensure that the container for /etc/issue is defined and we have our header. We then define a new fragment to contain the confidential warning, as shown in the following code:
class issue_confidential {
  include issue
  concat::fragment { 'issue_confidential':
    target  => 'issue',
    content => "Unauthorised access to this machine is strictly prohibited. Use of this system is limited to authorised parties only.\n",
    order   => '05',
  }
}
This fragment has an order of 05, so it will always appear after the header. The next two modules are issue_secret and issue_topsecret. They both perform the same function as issue_confidential but with different messages and order values, as you can see in the following code:
class issue_secret {
  include issue
  concat::fragment { 'issue_secret':
    target  => 'issue',
    content => "All information contained on this system is protected, no information may be removed from the system unless authorised.\n",
    order   => '10',
  }
}

class issue_topsecret {
  include issue
  concat::fragment { 'issue_topsecret':
    target  => 'issue',
    content => "You should forget you even know about this system.\n",
    order   => '15',
  }
}
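Conceptually, what concat does on the node can be sketched with a few shell commands: each fragment is written to a file whose name starts with its order value, and the target is rebuilt by concatenating the fragments in sorted order. This is only an illustration of the model, not the module's actual implementation, and the scratch path is made up:

```shell
# Hypothetical scratch directory standing in for concat's fragment store.
dir=/tmp/concat_demo
mkdir -p "$dir"
printf 'Example.com\n' > "$dir/01_issue_top"
printf 'Unauthorised access to this machine is strictly prohibited.\n' > "$dir/05_issue_confidential"
printf 'All information contained on this system is protected.\n' > "$dir/10_issue_secret"

# A shell glob expands in sorted order, mimicking assembly by order value.
cat "$dir"/[0-9][0-9]_* > "$dir/issue"
cat "$dir/issue"
```

Adding or removing a fragment file and rerunning the concatenation yields the updated target, which is why several independent modules can each contribute a section.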
We'll now add all these modules to the control repository in the dist directory. We also update the Puppetfile to include the location of the concat module, as shown here:
mod 'puppetlabs/concat'
We next need to update our environment.conf file to include the dist directory, as shown here:
modulepath = modules:$basemodulepath:dist
Using our Hiera configuration from the previous chapter, we will modify the client.yaml file to contain the issue_confidential class, as shown here:
---
welcome: 'Sample Developer made this change'
classes:
- issue_confidential
This configuration will cause the /etc/issue file to contain the header and the confidential warning. To have these changes applied to our /etc/puppetlabs/code/environments directory by r10k, we'll need to add all the files to the Git repository and push the changes, as shown here:
[samdev@stand control]$ git status
# On branch production
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	modified:   Puppetfile
#	new file:   dist/issue/manifests/init.pp
#	new file:   dist/issue_confidential/manifests/init.pp
#	new file:   dist/issue_secret/manifests/init.pp
#	new file:   dist/issue_topsecret/manifests/init.pp
#	modified:   environment.conf
#	modified:   hieradata/hosts/client.yaml
#
[samdev@stand control]$ git commit -m "concat example"
[production 6b3e7ae] concat example
 7 files changed, 39 insertions(+), 1 deletion(-)
 create mode 100644 dist/issue/manifests/init.pp
 create mode 100644 dist/issue_confidential/manifests/init.pp
 create mode 100644 dist/issue_secret/manifests/init.pp
 create mode 100644 dist/issue_topsecret/manifests/init.pp
[samdev@stand control]$ git push origin production
Counting objects: 27, done.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (20/20), 2.16 KiB | 0 bytes/s, done.
Total 20 (delta 2), reused 0 (delta 0)
remote: production
remote: production change for samdev
To /var/lib/git/control.git/
   bab33bd..6b3e7ae  production -> production
Since concat was defined in our Puppetfile, we will now see the concat module in /etc/puppetlabs/code/environments/production/modules, as shown here:
[samdev@stand control]$ ls /etc/puppetlabs/code/environments/production/modules/
concat  puppetdb  stdlib
We are now ready to run the Puppet agent on client. After a successful agent run, we see the following while attempting to log in to the system:
Example.com
Unauthorised access to this machine is strictly prohibited.
Use of this system is limited to authorised parties only.

client login:
Now, we will go back to our client.yaml file and add issue_secret, as shown in the following snippet:
---
welcome: 'Sample Developer Made this change'
classes:
- issue_confidential
- issue_secret
After a successful Puppet run, the login looks like the following:
Example.com
Unauthorised access to this machine is strictly prohibited.
Use of this system is limited to authorised parties only.
All information contained on this system is protected, no information may be removed from the system unless authorised.

client login:
Adding the issue_topsecret module is left as an exercise, but we can see the utility of being able to have several modules modify a file. We can also have a fragment defined from a file on the node. We'll create another module called issue_local and add a local fragment. To specify a local file resource, we will use the source attribute of concat::fragment, as shown in the following code:
class issue_local {
  include issue
  concat::fragment { 'issue_local':
    target => 'issue',
    source => '/etc/issue.local',
    order  => '99',
  }
}
Now, we add issue_local to client.yaml, but before we can run the Puppet agent on client, we have to create /etc/issue.local, or the catalog will fail. This is a shortcoming of the concat module: if you specify a local path, it has to exist. You can overcome this by defining a file resource that creates an empty file if the local path doesn't exist, as shown in the following snippet:
file { 'issue_local':
  path   => '/etc/issue.local',
  ensure => 'file',
}
Then, modify the concat::fragment to require the file resource, as shown in the following snippet:
concat::fragment { 'issue_local':
  target  => 'issue',
  source  => '/etc/issue.local',
  order   => '99',
  require => File['issue_local'],
}
Now, we can run the Puppet agent on node1; nothing will happen, but the catalog will compile. Next, add some content to /etc/issue.local, as shown here:
node1# echo "This is an example node, avoid storing protected material here" >/etc/issue.local
Now after running Puppet, our login prompt will look like this:
Example.com
Unauthorised access to this machine is strictly prohibited.
Use of this system is limited to authorised parties only.
All information contained on this system is protected, no information may be removed from the system unless authorised.
This is an example node, avoid storing protected material here

client login:
There are many situations in which you would like multiple modules to modify a file. When the structure of the file isn't easily determined, concat is the only viable solution. If the file is highly structured, then other mechanisms, such as augeas, can be used. When the file uses ini-style syntax, there is a module made specifically for inifiles.
The inifile module modifies ini-style configuration files, such as those used by Samba, the System Security Services Daemon (SSSD), yum, tuned, and many others, including Puppet itself. The module uses the ini_setting type to modify settings based on their section, name, and value. We'll add inifile to our Puppetfile and push the change to our production branch to ensure that the inifile module is pulled down to our client node on the next Puppet agent run. Begin by adding inifile to the Puppetfile, as shown here:
mod 'puppetlabs/inifile'
With the module in the Puppetfile and pushed to the repository, r10k will download the module, as we can see from the listing of the production/modules directory:
[samdev@stand control]$ ls /etc/puppetlabs/code/environments/production/modules/
concat  inifile  puppetdb  stdlib
To get started with inifile, we'll look at an example in the yum.conf configuration file (which uses ini syntax). Consider the gpgcheck setting in the following /etc/yum.conf file:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
As an example, we will modify that setting using puppet resource, as shown here:
[root@client ~]# puppet resource ini_setting dummy_name path=/etc/yum.conf section=main setting=gpgcheck value=0
Notice: /Ini_setting[dummy_name]/value: value changed '1' to '0'
ini_setting { 'dummy_name':
  ensure => 'present',
  value  => '0',
}
When we look at the file, we will see that the value was indeed changed:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=0
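For comparison, here is what section-aware handling looks like without the module: a small awk sketch that tracks which [section] it is currently inside and only matches the setting there. It only reads the value; editing a file safely and idempotently is exactly the part that ini_setting handles for us. The sample file path is made up for this illustration:

```shell
# Create a small sample ini file (hypothetical path for the demo).
cat > /tmp/sample_yum.conf <<'EOF'
[main]
gpgcheck=1
[other]
gpgcheck=0
EOF

# Print gpgcheck from [main] only: remember the current section header,
# then match the setting name while inside that section.
awk -F= '/^\[/{s=$0} s=="[main]" && $1=="gpgcheck"{print $2}' /tmp/sample_yum.conf
# prints: 1
```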
The power of this module is the ability to change only part of a file without clobbering the work of another module. To show how this can work, we'll modify the SSSD configuration file. SSSD manages access to remote directories and authentication systems. It supports talking to multiple sources; we can exploit this to create modules that each define only their own section of the configuration file. In this example, we'll assume there are production and development authentication LDAP directories, called prod and devel. We'll create modules called sssd_prod and sssd_devel to modify the configuration file. Starting with sssd, we'll create a module that creates the /etc/sssd directory:
class sssd {
  file { '/etc/sssd':
    ensure => 'directory',
    mode   => '0755',
  }
}
Next, we'll create sssd_prod and add a [domain/PROD] section to the file, as shown in the following snippet:
class sssd_prod {
  include sssd
  Ini_setting {
    require => File['/etc/sssd'],
  }
  ini_setting { 'krb5_realm_prod':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/PROD',
    setting => 'krb5_realm',
    value   => 'PROD',
  }
  ini_setting { 'ldap_search_base_prod':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/PROD',
    setting => 'ldap_search_base',
    value   => 'ou=prod,dc=example,dc=com',
  }
  ini_setting { 'ldap_uri_prod':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/PROD',
    setting => 'ldap_uri',
    value   => 'ldaps://ldap.prod.example.com',
  }
  ini_setting { 'krb5_kpasswd_prod':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/PROD',
    setting => 'krb5_kpasswd',
    value   => 'secret!',
  }
  ini_setting { 'krb5_server_prod':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/PROD',
    setting => 'krb5_server',
    value   => 'kdc.prod.example.com',
  }
These ini_setting resources will create five lines within the [domain/PROD] section of the configuration file. We also need to add PROD to the list of domains; for this, we'll use ini_subsetting, as shown in the following snippet. The ini_subsetting type allows us to add subsettings to a single setting:
  ini_subsetting { 'domains_prod':
    path       => '/etc/sssd/sssd.conf',
    section    => 'sssd',
    setting    => 'domains',
    subsetting => 'PROD',
  }
}
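The behavior ini_subsetting gives us here, appending a value to a space-separated list only if it is not already present, can be sketched in shell. This is an illustration of the semantics only, not the module's code; the function name is made up:

```shell
# Start from the existing value of the 'domains' setting.
domains="PROD"

# Hypothetical helper: append $1 unless it is already in the list.
add_domain() {
  case " $domains " in
    *" $1 "*) ;;                       # already present: do nothing
    *) domains="$domains $1" ;;        # otherwise append
  esac
}

add_domain DEVEL
add_domain DEVEL    # second call is a no-op (idempotent)
echo "$domains"     # prints: PROD DEVEL
```

Idempotence is the point: the resource can be applied on every agent run without the list growing.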
Now, we'll add sssd_prod to our client.yaml file and run puppet agent on client to see the changes, as shown here:
[root@client ~]# puppet agent -t
…
Info: Applying configuration version '1443519502'
Notice: /Stage[main]/Sssd_prod/File[/etc/sssd]/ensure: created
…
Notice: /Stage[main]/Sssd_prod/Ini_setting[krb5_server_prod]/ensure: created
Notice: /Stage[main]/Sssd_prod/Ini_subsetting[domains_prod]/ensure: created
Notice: Applied catalog in 1.07 seconds
Now, when we look at /etc/sssd/sssd.conf, we will see that the [sssd] and [domain/PROD] sections have been created (they are incomplete for this example; you will need many more settings to make SSSD work properly), as shown in the following snippet:
[sssd]
domains = PROD

[domain/PROD]
krb5_server = kdc.prod.example.com
krb5_kpasswd = secret!
ldap_search_base = ou=prod,dc=example,dc=com
ldap_uri = ldaps://ldap.prod.example.com
krb5_realm = PROD
Now, we can create our sssd_devel module and add the same settings as we did for PROD, changing their values for DEVEL, as shown in the following code:
class sssd_devel {
  include sssd
  Ini_setting {
    require => File['/etc/sssd'],
  }
  ini_setting { 'krb5_realm_devel':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/DEVEL',
    setting => 'krb5_realm',
    value   => 'DEVEL',
  }
  ini_setting { 'ldap_search_base_devel':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/DEVEL',
    setting => 'ldap_search_base',
    value   => 'ou=devel,dc=example,dc=com',
  }
  ini_setting { 'ldap_uri_devel':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/DEVEL',
    setting => 'ldap_uri',
    value   => 'ldaps://ldap.devel.example.com',
  }
  ini_setting { 'krb5_kpasswd_devel':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/DEVEL',
    setting => 'krb5_kpasswd',
    value   => 'DevelopersDevelopersDevelopers',
  }
  ini_setting { 'krb5_server_devel':
    path    => '/etc/sssd/sssd.conf',
    section => 'domain/DEVEL',
    setting => 'krb5_server',
    value   => 'dev1.devel.example.com',
  }
Again, we will add DEVEL to the list of domains using ini_subsetting, as shown in the following code:
  ini_subsetting { 'domains_devel':
    path       => '/etc/sssd/sssd.conf',
    section    => 'sssd',
    setting    => 'domains',
    subsetting => 'DEVEL',
  }
}
Now, after adding sssd_devel to client.yaml, we run the Puppet agent on client and examine the /etc/sssd/sssd.conf file afterward, which is shown in the following snippet:
[sssd]
domains = PROD DEVEL

[domain/PROD]
krb5_server = kdc.prod.example.com
krb5_kpasswd = secret!
ldap_search_base = ou=prod,dc=example,dc=com
ldap_uri = ldaps://ldap.prod.example.com
krb5_realm = PROD

[domain/DEVEL]
krb5_realm = DEVEL
ldap_uri = ldaps://ldap.devel.example.com
ldap_search_base = ou=devel,dc=example,dc=com
krb5_server = dev1.devel.example.com
krb5_kpasswd = DevelopersDevelopersDevelopers
As we can see, both realms have been added to the domains setting, and each realm has had its own configuration section created. To complete this example, we would need to enhance the sssd module that each of these modules calls with include sssd. In that module, we would define the SSSD service and have our changes send a notify signal to the service. I would place the notify in the domain's ini_subsetting resource.
Having multiple modules work on the same files simultaneously can make your Puppet implementation a lot simpler. It's counterintuitive, but having the modules coexist means you don't need as many exceptions in your code. For example, the Samba configuration file can be managed by a Samba module, while shares are added by other modules using inifile without interfering with the main Samba module.
If your organization uses host-based firewalls (filters that run on each node, filtering network traffic), then the firewall module will soon become a friend. On Enterprise Linux systems, the firewall module can be used to configure iptables automatically. Effective use of this module requires having all your iptables rules in Puppet.
The firewall module has some limitations: if your systems require large rulesets, your agent runs may take some time to complete. Also, EL7 systems use firewalld to manage iptables, and firewalld is not supported by the firewall module. Currently, this causes execution of the following code to produce errors on EL7 systems, although the iptables rules are still modified as expected.
The default configuration can be a little confusing; there are ordering issues that have to be dealt with when working with firewall rules. The idea here is to ensure that there are no unmanaged rules at the start. This is achieved by purging the firewall resources, as shown in the following code:
resources { "firewall":
  purge => true,
}
Next, we need to make sure that any firewall rules we define are inserted after our initial configuration rules and before our final deny rule. To ensure this, we use a resource default definition. Resource defaults are made by capitalizing the resource type: in our example, firewall becomes Firewall, and we define the before and require attributes such that they point to the classes that will hold our setup rules (pre) and our final deny statement (post), as shown in the following snippet:
Firewall {
  before  => Class['example_fw::post'],
  require => Class['example_fw::pre'],
}
Because we are referencing example_fw::pre and example_fw::post, we'll need to include them at this point. The module also defines a firewall class that we should include. Rolling all that together, we have our example_fw class, as follows:
class example_fw {
  include example_fw::post
  include example_fw::pre
  include firewall
  resources { "firewall":
    purge => true,
  }
  Firewall {
    before  => Class['example_fw::post'],
    require => Class['example_fw::pre'],
  }
}
Now, we need to define our default rules in example_fw::pre. We will allow all ICMP traffic, all established and related TCP traffic, and all SSH traffic. Since we are defining example_fw::pre, we need to override the require attribute we set earlier, at the beginning of this class, as shown in the following code:
class example_fw::pre {
  Firewall {
    require => undef,
  }
Then, we can add our rules using the firewall type provided by the module. When we define the firewall resources, it is important to start the title of each resource with a number, as shown in the following snippet; the numbers are used for ordering by the firewall module:
  firewall { '000 accept all icmp':
    proto  => 'icmp',
    action => 'accept',
  }
  firewall { '001 accept all to lo':
    proto   => 'all',
    iniface => 'lo',
    action  => 'accept',
  }
  firewall { '002 accept related established':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }
  firewall { '022 accept ssh':
    proto  => 'tcp',
    dport  => '22',
    action => 'accept',
  }
}
If we finished at this point, our rules would be a series of allow statements. Without a final deny statement, everything would be allowed. We need to define a drop statement in our post class. Again, since this is example_fw::post, we need to override the before setting from earlier, as shown in the following code:
class example_fw::post {
  firewall { '999 drop all':
    proto  => 'all',
    action => 'drop',
    before => undef,
  }
}
Now, we can apply this class in our node1.yaml file and run Puppet to see the firewall rules being rewritten by our module. The first thing we will see is the current firewall rules being purged.
Next, our pre section will apply our initial allow rules:
Notice: /Stage[main]/Example_fw::Pre/Firewall[002 accept related established]/ensure: created
Notice: /Stage[main]/Example_fw::Pre/Firewall[000 accept all icmp]/ensure: created
Notice: /Stage[main]/Example_fw::Pre/Firewall[022 accept ssh]/ensure: created
Notice: /Stage[main]/Example_fw::Pre/Firewall[001 accept all to lo]/ensure: created
Finally, our post section adds a drop statement to the end of the rules, as shown here:
Notice: /Stage[main]/Example_fw::Post/Firewall[999 drop all]/ensure: created
Notice: Finished catalog run in 5.90 seconds
Earlier versions of this module did not save the rules; you would need to execute iptables-save after the post section. The module now takes care of this, so when we examine /etc/sysconfig/iptables, we see our current rules saved, as shown in the following snippet:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1:180]
-A INPUT -p icmp -m comment --comment "000 accept all icmp" -j ACCEPT
-A INPUT -i lo -m comment --comment "001 accept all to lo" -j ACCEPT
-A INPUT -m comment --comment "002 accept related established" -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22 -m comment --comment "022 accept ssh" -j ACCEPT
-A INPUT -m comment --comment "999 drop all" -j DROP
COMMIT
Now that we have our firewall controlled by Puppet, when we apply our web module to a node, we can have it open port 80 on the node as well, as shown in the following code. Our earlier web module need only include example_fw and define a firewall resource:
class web {
  package { 'httpd':
    ensure => 'installed',
  }
  service { 'httpd':
    ensure  => true,
    enable  => true,
    require => Package['httpd'],
  }
  include example_fw
  firewall { '080 web server':
    proto  => 'tcp',
    port   => '80',
    action => 'accept',
  }
}
Now, when we apply this class to an EL6 node, el6, we will see that the port 80 rule is applied after our SSH rule and before our deny rule, as expected:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1:164]
-A INPUT -p icmp -m comment --comment "000 accept all icmp" -j ACCEPT
-A INPUT -i lo -m comment --comment "001 accept all to lo" -j ACCEPT
-A INPUT -m comment --comment "002 accept related established" -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m multiport --dports 22 -m comment --comment "022 accept ssh" -j ACCEPT
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "080 web server" -j ACCEPT
-A INPUT -m comment --comment "999 drop all" -j DROP
COMMIT
Using this module, it's possible to have very tight host-based firewalls on your systems that are flexible and easy to manage.
The lvm module allows you to create volume groups, logical volumes, and filesystems with Puppet, using the Logical Volume Manager (LVM) tools in Linux. If you are not comfortable with LVM, then I suggest you do not start with this module. It can be of great help if you have products that require their own filesystems, or auditing requirements that demand application logs live on separate filesystems. The only caveat is that you need to know where your physical volumes reside, that is, which device contains the physical volumes on your nodes. If you are lucky and have the same disk layout on all nodes, then creating a new filesystem for your audit logs in /var/log/audit is very simple. Assuming that we have an empty disk at /dev/sdb, we can create a new volume group for audit items and a logical volume to contain our filesystem. The module takes care of all the steps that have to be performed: it creates the physical volume, creates the volume group using the physical volume, then creates the logical volume and a filesystem on that logical volume.
To show the lvm module in action, we'll create an lvm node that has a boot device and a second drive. On my system, the first device is /dev/sda and the second drive is /dev/sdb. We can see the disk layout using lsblk, as shown in the following screenshot:
We can see that /dev/sdb is available on the system, but nothing is installed on it. We'll create a new module called lvm_web, which will create a 4 GB logical volume and format it with an ext4 filesystem, as shown in the following code:
class lvm_web {
  lvm::volume { "lv_var_www":
    ensure => present,
    vg     => "vg_web",
    pv     => "/dev/sdb",
    fstype => "ext4",
    size   => "4G",
  }
}
Now, we'll create an lvm.yaml file at hieradata/hosts/lvm.yaml:
---
welcome: 'lvm node'
classes:
- lvm_web
Now, when we run the Puppet agent on lvm, we will see that the vg_web volume group is created, followed by the lv_var_www logical volume, and then the filesystem:
Notice: /Stage[main]/Lvm_web/Lvm::Volume[lv_var_www]/Physical_volume[/dev/sdb]/ensure: created
Notice: /Stage[main]/Lvm_web/Lvm::Volume[lv_var_www]/Volume_group[vg_web]/ensure: created
Notice: /Stage[main]/Lvm_web/Lvm::Volume[lv_var_www]/Logical_volume[lv_var_www]/ensure: created
Notice: /Stage[main]/Lvm_web/Lvm::Volume[lv_var_www]/Filesystem[/dev/vg_web/lv_var_www]/ensure: created
Now, when we run lsblk again, we will see that the filesystem has been created:
Note that the filesystem is not mounted yet, only created. To make this a fully functional class, we would need to add the mount location for the filesystem and ensure that the mount point exists, as shown in the following code:
file { '/var/www/html':
  ensure => 'directory',
  owner  => '48',
  group  => '48',
  mode   => '0755',
}
mount { 'lvm_web_var_www':
  name    => '/var/www/html',
  ensure  => 'mounted',
  device  => "/dev/vg_web/lv_var_www",
  dump    => '1',
  fstype  => "ext4",
  options => "defaults",
  pass    => '2',
  target  => '/etc/fstab',
  require => [Lvm::Volume["lv_var_www"], File["/var/www/html"]],
}
Now when we run Puppet again, we can see that the directories are created and the filesystem is mounted:
[root@lvm ~]# puppet agent -t
…
Info: Applying configuration version '1443524661'
Notice: /Stage[main]/Lvm_web/File[/var/www/html]/ensure: created
Notice: /Stage[main]/Lvm_web/Mount[lvm_web_var_www]/ensure: defined 'ensure' as 'mounted'
Info: /Stage[main]/Lvm_web/Mount[lvm_web_var_www]: Scheduling refresh of Mount[lvm_web_var_www]
Info: Mount[lvm_web_var_www](provider=parsed): Remounting
Notice: /Stage[main]/Lvm_web/Mount[lvm_web_var_www]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Lvm_web/Mount[lvm_web_var_www]: Scheduling refresh of Mount[lvm_web_var_www]
Notice: Finished catalog run in 1.53 seconds
Now, when we run lsblk, we see that the filesystem is mounted, as shown in the following screenshot:
This module can save you a lot of time. The steps required to set up a new volume group, add a logical volume, format the filesystem correctly, and then mount the filesystem can all be reduced to including a single class on a node.
The standard library (stdlib) is a collection of useful facts, functions, types, and providers not included with the base language. Even if you do not use the items in stdlib directly, reading about how they are defined is useful when figuring out how to write your own modules.
Several functions are provided by stdlib; these can be found at https://forge.puppetlabs.com/puppetlabs/stdlib. There are string-handling functions such as capitalize, chomp, and strip, functions for array manipulation, and some arithmetic operations such as absolute value (abs) and minimum (min). When you start building complex modules, the functions provided by stdlib can occasionally reduce your code's complexity.
Many parts of stdlib have been merged into Facter and Puppet. One useful capability originally provided by stdlib is the ability to define custom facts based on text files or scripts on the node. This allows processes that run on nodes to supply facts to Puppet to alter the behavior of the agent. To enable this feature, we have to create a directory called /etc/facter/facts.d (Puppet Enterprise uses /etc/puppetlabs/facter/facts.d), as shown here:
[root@client ~]# facter -p myfact
[root@client ~]# mkdir -p /etc/facter/facts.d
[root@client ~]# echo "myfact=myvalue" >/etc/facter/facts.d/myfact.txt
[root@client ~]# facter -p myfact
myvalue
The facter_dot_d mechanism can use text, YAML, or JSON files, based on the extension: .txt, .yaml, or .json. If you create an executable file instead, it will be executed and its output parsed for fact values as though it were a .txt file (fact=value pairs).
If you are using a Facter version earlier than 1.7, then you will need the facter.d mechanism provided by stdlib. This was removed in stdlib version 3 and higher; the latest stable stdlib version that provides facter.d is 2.6.0. You will also need pluginsync enabled on your nodes (the default setting in Puppet 2.7 and higher).
To illustrate the usefulness of this mechanism, we will create a fact that returns the gems installed on the system. I'll run this on a host with a few gems installed to illustrate the point. Place the following script in /etc/facter/facts.d/gems.sh:
#!/bin/bash
gems=$(/usr/bin/gem list --no-versions | /bin/grep -v "^$" | /usr/bin/paste -sd ",")
echo "gems=$gems"
Now, make the script executable (chmod 755 /etc/facter/facts.d/gems.sh) and run Facter to see the output from the fact:
[root@client ~]# facter -p gems
bigdecimal,commander,highline,io-console,json,json_pure,psych,puppet-lint,rdoc
We can now use this gems fact in our manifests to ensure that the gems we require are available. Another use of this mechanism is to obtain the version of an installed application that doesn't use normal package-management methods: we can create a script that queries the application for its installed version and returns it as a fact. We will cover this in more detail when we build our own custom facts in a later chapter.