Any resource can be exported, including defined types and your own custom types. Tags may be used to limit the set of exported resources collected by a collector, and tags may include local variables, facts, and custom facts. Using exported resources, defined types, and custom facts together, it is possible to have Puppet build up complex inter-node configurations automatically, without manual intervention.
As an abstract example, think of any clustered service where members of a cluster need to know about the other members of the cluster. You could define a custom fact, clustername, that defines the name of the cluster based on information either on the node or in a central Configuration Management Database (CMDB).
You would then create a cluster module, which would export firewall rules to allow access from each node. The nodes in the cluster would collect all the exported rules matching tag == 'clustername'. Without any intervention, a complex set of firewall rules would be built up between cluster members. If a new member is added to the cluster, its rules are exported and, with the next Puppet run, the node is permitted access to the other cluster members.
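This pattern can be sketched in Puppet. The following is a hedged illustration, not code from the text: the firewall resource type (from the puppetlabs-firewall module), the rule title, and the clustername fact are assumptions made for the example.

```puppet
# Hypothetical sketch: each cluster member exports a rule allowing traffic
# from its own address, tagged with the clustername fact.
@@firewall { "100 allow cluster traffic from $::fqdn":
  proto  => 'tcp',
  source => $::ipaddress,
  action => 'accept',
  tag    => $::clustername,
}
# Every member then collects all rules carrying its own cluster's tag,
# including rules exported by nodes it has never been told about.
Firewall <<| tag == $::clustername |>>
```

Because both the export and the collection key off the same fact, adding a node to the cluster requires no manifest changes at all.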
Another useful scenario is where multiple slave nodes need to be accessed by a master node, as with backup software or a software distribution system. The master node needs the slave nodes to grant it access, and the slave nodes need to know which node is the master. In this relationship, you would define a master and a slave module and apply them accordingly. Each slave node would export its host configuration information; the master would export both its firewall access rule and its master configuration information. The master would collect all the slave configuration resources, and the slaves would each collect the firewall and configuration information from the master. The great thing about this sort of configuration is that you can easily migrate the master service to a new node: as slaves check into Puppet, they receive the new master configuration and begin pointing at the new master node.
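The two-way exchange can be sketched as follows. Again, this is a hedged illustration: the example::backup module name, the tags, the file path, and the fragment contents are assumptions, not from the text.

```puppet
# Hypothetical slave class: export this node's details for the master to
# collect, and collect whatever the master exported (its address, firewall
# rule, and so on) into a local config file.
class example::backup::slave {
  @@concat::fragment { "backup client $::fqdn":
    content => "client { address = $::ipaddress }\n",
    tag     => 'backup-slave',
  }
  # Modify on collect supplies the target, so the master's export does not
  # need to know where slaves keep their configuration.
  Concat::Fragment <<| tag == 'backup-master' |>> {
    target => '/etc/backup/client.conf',
  }
}
```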
To illustrate this concept, we will go through a DNS configuration example. We will configure a DNS server with the example::dns::server class and clients with an example::dns::client class. DNS servers are configured with zone files, which come in two forms: forward zones map hostnames to IP addresses, and reverse zones map IP addresses to hostnames. To make a fully functioning DNS implementation, our clients will export concat::fragment resources, which will be collected on the DNS server and used to build both the forward and reverse DNS zone files.
The following diagram outlines the process, in which two nodes export concat::fragment resources that are assembled, together with a header, into a zone file on the DNS server node:
To start, we will define two custom facts that produce the reverse of the IP address suitable for use in a DNS reverse zone, and the network in Classless Inter-Domain Routing (CIDR) notation used to define the reverse zone file, as follows:
```ruby
# reverse.rb
# Set a fact for the reverse lookup of the network
require 'ipaddr'
require 'puppet/util/ipcidr'

# define 2 facts for each interface passed in
def reverse(dev)
  # network of device
  ip = IPAddr.new(Facter.value("network_#{dev}"))
  # network in cidr notation (uuu.vvv.www.xxx/yy)
  nm = Puppet::Util::IPCidr.new(Facter.value("network_#{dev}")).mask(Facter.value("netmask_#{dev}"))
  cidr = nm.cidr
  # set fact for network in reverse (vvv.www.uuu.in-addr.arpa)
  Facter.add("reverse_#{dev}") do
    setcode do
      ip.reverse.to_s[2..-1]
    end
  end
  # set fact for network in cidr notation
  Facter.add("network_cidr_#{dev}") do
    setcode do
      cidr
    end
  end
end
```
We put these two fact definitions into a Ruby function so that we can loop through the interfaces on the machine and define the facts for each interface as follows:
```ruby
# loop through the interfaces, defining the two facts for each
interfaces = Facter.value('interfaces').split(',')
interfaces.each do |eth|
  reverse(eth)
end
```
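To see what the fact logic computes, the reverse and CIDR steps can be checked outside Facter with plain Ruby. This standalone snippet is an illustration, not part of the module; it replaces Puppet::Util::IPCidr with a simple bit count over the netmask.

```ruby
require 'ipaddr'

# Sample network/netmask pair (matching the dns1 node used later).
network = '192.168.1.0'
netmask = '255.255.255.0'

# IPAddr#reverse yields "0.1.168.192.in-addr.arpa"; dropping the leading
# "0." (the [2..-1] slice in the fact) leaves the /24 reverse zone name.
reverse_zone = IPAddr.new(network).reverse[2..-1]

# Recreate the CIDR notation without Puppet::Util::IPCidr by counting
# the set bits in the netmask.
prefix = IPAddr.new(netmask).to_i.to_s(2).count('1')
cidr = "#{network}/#{prefix}"

puts reverse_zone  # 1.168.192.in-addr.arpa
puts cidr          # 192.168.1.0/24
```

The two printed values match the facter output shown below for the enp0s8 interface.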
Save this definition in example/lib/facter/reverse.rb and then run Puppet to synchronize the fact definition down to the nodes. After the fact definition has been transferred, we can see its output for dns1 (IP address 192.168.1.54) as follows:
```
[root@dns1 ~]# facter -p interfaces
enp0s3,enp0s8,lo
[root@dns1 ~]# facter -p ipaddress_enp0s8
192.168.1.54
[root@dns1 ~]# facter -p reverse_enp0s8 network_cidr_enp0s8
network_cidr_enp0s8 => 192.168.1.0/24
reverse_enp0s8 => 1.168.192.in-addr.arpa
```
In our earlier custom fact example, we built a custom fact for the zone based on the IP address. We could use that fact here to generate zone-specific DNS zone files, but to keep this example simple, we will skip this step. With our fact in place, we can export our client's DNS information in the form of concat::fragment resources that will be picked up by the server later. To define the clients, we'll create an example::dns::client class as follows:
```puppet
class example::dns::client (
  String $domain = 'example.com',
  String $search = 'prod.example.com example.com',
) {
```
We start by defining the search and domain settings and providing defaults. If we need to override the settings, we can do so from Hiera. These two settings would be defined as the following in a Hiera YAML file:
```yaml
---
example::dns::client::domain: 'subdomain.example.com'
example::dns::client::search: 'sub.example.com prod.example.com'
```
We then define a concat container for /etc/resolv.conf as follows:
```puppet
  concat {'/etc/resolv.conf':
    mode => '0644',
  }
  # search and domain settings
  concat::fragment {'resolv.conf search/domain':
    target  => '/etc/resolv.conf',
    content => "search ${search}\ndomain ${domain}\n",
    order   => '07',
  }
```
The concat::fragment will be used to populate the /etc/resolv.conf file on the client machines. We then move on to collect the nameserver entries, which we will later export in our example::dns::server class using the tag 'resolv.conf'. We use the tag to make sure we only receive fragments related to resolv.conf, as follows:
```puppet
  Concat::Fragment <<| tag == 'resolv.conf' |>> {
    target => '/etc/resolv.conf',
  }
```
We use a piece of syntax we haven't used for exported resources yet, called modify on collect. With modify on collect, we override settings in the exported resource when we collect it. In this case, we use modify on collect to add a target to the exported concat::fragment. When we define the exported resource, we leave the target off so that we do not need to define a concat container on the server. We'll use this same trick when we export our DNS entries to the server.
Next, we export our zone file entries as concat::fragment resources and close the class definition as follows:
```puppet
  @@concat::fragment {"zone example $::hostname":
    content => "$::hostname A $::ipaddress\n",
    order   => '10',
    tag     => 'zone.example.com',
  }
  $lastoctet = regsubst($::ipaddress_enp0s8,
    '^([0-9]+)[.]([0-9]+)[.]([0-9]+)[.]([0-9]+)$', '\4')
  @@concat::fragment {"zone reverse $::reverse_enp0s8 $::hostname":
    content => "$lastoctet PTR $::fqdn.\n",
    order   => '10',
    tag     => "reverse.$::reverse_enp0s8",
  }
}
```
In the previous code, we used the regsubst function to grab the last octet from the node's IP address. We could have made another custom fact for this, but the regsubst function is sufficient for this usage.
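Puppet's regsubst behaves like Ruby's String#sub here. This standalone snippet, included only as an illustration, applies the same pattern and a '\4' backreference to keep just the fourth captured octet:

```ruby
# Same pattern as the regsubst call above: capture the four octets of an
# IPv4 address and replace the whole match with the fourth capture group.
ip = '192.168.1.54'
lastoctet = ip.sub(/^([0-9]+)[.]([0-9]+)[.]([0-9]+)[.]([0-9]+)$/, '\4')
puts lastoctet  # 54
```

The resulting "54" is exactly what the PTR record needs, since the reverse zone's $ORIGIN supplies the rest of the address.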
Now we move on to the DNS server, where we install and configure BIND's named daemon; we need to configure the named.conf file and the zone files. We'll define the named.conf file from a template first, as follows:
```puppet
class example::dns::server {
  # setup bind
  package {'bind': }
  service {'named':
    require => Package['bind'],
  }
  # configure bind
  file {'/etc/named.conf':
    content => template('example/dns/named.conf.erb'),
    owner   => 0,
    group   => 'named',
    require => Package['bind'],
    notify  => Service['named'],
  }
```
Next, we'll define an exec that reloads named whenever the zone files are altered, as follows:
```puppet
  exec {'named reload':
    refreshonly => true,
    command     => 'systemctl reload named',
    path        => '/bin:/sbin',
    require     => Package['bind'],
  }
```
At this point, we'll export an entry from the server, defining it as a nameserver, as follows (we already defined the collection of this resource in the client class):
```puppet
  @@concat::fragment {"resolv.conf nameserver $::hostname":
    content => "nameserver $::ipaddress\n",
    order   => '10',
    tag     => 'resolv.conf',
  }
```
Now for the zone files: we'll define concat containers for the forward and reverse zone files and then header fragments for each, as follows:
```puppet
  concat {'/var/named/zone.example.com':
    mode   => '0644',
    notify => Exec['named reload'],
  }
  concat {'/var/named/reverse.1.168.192.in-addr.arpa':
    mode   => '0644',
    notify => Exec['named reload'],
  }
  concat::fragment {'zone.example header':
    target  => '/var/named/zone.example.com',
    content => template('example/dns/zone.example.com.erb'),
    order   => '01',
  }
  concat::fragment {'reverse.1.168.192.in-addr.arpa header':
    target  => '/var/named/reverse.1.168.192.in-addr.arpa',
    content => template('example/dns/reverse.1.168.192.in-addr.arpa.erb'),
    order   => '01',
  }
```
Our clients exported concat::fragment resources for each of the previous zone files. We collect them here, using the same modify on collect syntax as we did for the client, as follows:
```puppet
  Concat::Fragment <<| tag == 'zone.example.com' |>> {
    target => '/var/named/zone.example.com',
  }
  Concat::Fragment <<| tag == 'reverse.1.168.192.in-addr.arpa' |>> {
    target => '/var/named/reverse.1.168.192.in-addr.arpa',
  }
}
```
The server class is now defined. We only need to create the template and header files to complete our module. The named.conf.erb template makes use of our custom facts as well, as shown in the following code:
```erb
options {
  listen-on port 53 { 127.0.0.1; <%= @ipaddress_enp0s8 -%>; };
  listen-on-v6 port 53 { ::1; };
  directory "/var/named";
  dump-file "/var/named/data/cache_dump.db";
  statistics-file "/var/named/data/named_stats.txt";
  memstatistics-file "/var/named/data/named_mem_stats.txt";
  allow-query { localhost;
    <%- @interfaces.split(',').each do |eth|
          if has_variable?("network_cidr_#{eth}") then -%>
    <%= scope.lookupvar("network_cidr_#{eth}") -%>;
    <%- end
        end -%>
  };
  recursion yes;

  dnssec-enable yes;
  dnssec-validation yes;
  dnssec-lookaside auto;

  /* Path to ISC DLV key */
  bindkeys-file "/etc/named.iscdlv.key";

  managed-keys-directory "/var/named/dynamic";
};
```
This is a fairly typical DNS configuration file. The allow-query setting uses the network_cidr_enp0s8 fact to allow hosts in the same subnet as the server to query it.
The named.conf file then includes definitions for the various zones handled by the server, as shown in the following code:
```erb
zone "." IN {
  type hint;
  file "named.ca";
};

zone "example.com" IN {
  type master;
  file "zone.example.com";
  allow-update { none; };
};

zone "<%= @reverse_enp0s8 -%>" {
  type master;
  file "reverse.<%= @reverse_enp0s8 -%>";
};
```
The zone file headers are defined from templates that use the current UTC time to update the zone serial number. The zone for example.com is as follows:
```erb
$ORIGIN example.com.
$TTL 1D
@ IN SOA root hostmaster (
  <%= Time.now.gmtime.strftime("%Y%m%d%H") %> ; serial
  8H ; refresh
  4H ; retry
  4W ; expire
  1D ) ; minimum
  NS ns1
  MX 10 ns1
;
; just in case someone asks for localhost.example.com
localhost A 127.0.0.1
ns1 A 192.168.1.54
; exported resources below this point
```
The reverse zone file template contains a similar SOA record and is defined as follows:
```erb
$ORIGIN 1.168.192.in-addr.arpa.
$TTL 1D
@ IN SOA dns.example. hostmaster.example. (
  <%= Time.now.gmtime.strftime("%Y%m%d%H") %> ; serial
  28800   ; refresh (8 hours)
  14400   ; retry (4 hours)
  2419200 ; expire (4 weeks)
  86400 ) ; minimum (1 day)
  NS ns.example.
; exported resources below this point
```
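Both header templates derive the serial from the same strftime call. This small Ruby snippet, included only as an illustration, shows the shape of the value: ten digits (YYYYMMDDHH) that change once per hour, so repeated catalog runs within the same hour render the same serial and do not force secondaries to re-transfer the zone.

```ruby
# The serial used by both zone templates: UTC year, month, day, hour.
serial = Time.now.gmtime.strftime('%Y%m%d%H')
puts serial         # e.g. "2024103116"
puts serial.length  # 10
```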
With all this in place, we only need to apply the example::dns::server class to a machine to turn it into a DNS server for example.com. As more and more nodes are given the example::dns::client class, the DNS server receives their exported resources and builds up its zone files. Eventually, when all the nodes have the example::dns::client class applied, the DNS server knows about all the nodes under Puppet control within the enterprise. As shown in the following output, the DNS server is reporting our stand node's address:
```
[root@stand ~]# nslookup dns1.example.com 192.168.1.54
Server:   192.168.1.54
Address:  192.168.1.54#53

Name:     dns1.example.com
Address:  192.168.1.54

[root@stand ~]# nslookup stand.example.com 192.168.1.54
Server:   192.168.1.54
Address:  192.168.1.54#53

Name:     stand.example.com
Address:  192.168.1.1
```
Although this is a simplified example, the usefulness of this technique is obvious; it is applicable to many situations.