Replication Rings and Directory Partitions

The knowledge consistency checker (KCC) running on each domain controller is responsible for generating the intrasite replication topology, and the KCC running on the intersite topology generator (ISTG) generates the intersite replication topology. Whenever possible, the KCC configures the replication topology so that each domain controller in a site has at least two incoming connections, as already discussed. The KCC also configures intrasite replication so that each domain controller is no more than three hops from any other domain controller. Because each hop introduces a change-notification delay of 15 seconds by default, the maximum replication latency, which is the delay in replicating a change across an entire site, is approximately 45 seconds for normal replication (three hops at 15 seconds each).

When a site has two domain controllers, each domain controller is the replication partner of the other. When there are between three and seven domain controllers in the site, the KCC arranges them in a ring in which each domain controller has two incoming connections and two replication partners. Figure 35-6 shows the replication topology for City Power & Light's Sacramento campus. Here the network is spread over two buildings that are connected with high-speed interconnects. Because the buildings are connected over redundant high-speed links, the organization uses a single site with three domain controllers in each building. The replication topology for the six domain controllers as shown ensures that no domain controller is more than three hops from any other domain controller, a property you can verify with the sketch that follows the figure.

Figure 35-6. Campus replication with two buildings and three domain controllers in each building.
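
This three-hop guarantee is easy to check. The following minimal sketch, which models the topology rather than the KCC's actual algorithm, builds a bidirectional ring of domain controllers and uses a breadth-first search to find the worst-case hop count between any two of them. (The extra_edges parameter comes into play in the next sketch.)

```python
from collections import deque

def max_hops(n, extra_edges=()):
    """Worst-case hop count between any two DCs in a bidirectional
    ring of n DCs, optionally augmented with extra connections."""
    neighbors = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for a, b in extra_edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    worst = 0
    for start in range(n):              # breadth-first search from each DC
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for peer in neighbors[node]:
                if peer not in dist:
                    dist[peer] = dist[node] + 1
                    queue.append(peer)
        worst = max(worst, max(dist.values()))
    return worst

print(max_hops(6))  # 3 -- the six-DC ring in Figure 35-6 stays within three hops
print(max_hops(7))  # 3 -- a plain ring suffices for up to seven DCs
```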

When the number of domain controllers in a site increases beyond seven, additional connection objects are added to ensure that no domain controller is more than three hops from any other domain controller in the replication topology. To see an example of this, consider Figure 35-7. Here, City Power & Light has built a third building that connects its original two buildings to form a U-shaped office complex, and the administrators have placed two new domain controllers in building 3. As a result of the additional domain controllers, some domain controllers now have three replication partners.

Figure 35-7. Campus replication with three buildings and eight domain controllers.
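
Extending the earlier sketch shows why the eighth domain controller forces the change: a plain ring of eight has a worst-case distance of four hops, and it takes additional cross-ring connections to bring every pair back within three. The chords chosen below are illustrative only; the KCC selects its own optimizing connections. Note that the domain controllers at the ends of each added connection now have three replication partners, matching the figure.

```python
# Continues the max_hops sketch shown after Figure 35-6.
print(max_hops(8))                                # 4 -- a plain ring no longer suffices
print(max_hops(8, extra_edges=[(0, 4)]))          # 4 -- one extra connection is still not enough
print(max_hops(8, extra_edges=[(0, 4), (2, 6)]))  # 3 -- two extra connections restore the limit
```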

At this point, you may be wondering what role, if any, directory partitions play in the replication topology. After all, from previous discussions, you know that Active Directory has multiple directory partitions and that those partitions are replicated in the following ways:

  • On a forest-wide basis for the configuration and schema directory partitions

  • On a domain-wide basis for the domain directory partition

  • On a selective basis for the global catalog partition and application directory partitions, which include special application partitions as well as the ForestDnsZones and DomainDnsZones application partitions used by DNS

In previous discussions, I didn't want to complicate things unnecessarily by adding a discussion of partition replication. From a logical perspective, partitions do play an important role in replication. Replication rings, the logical implementation of replication, are based on the types of directory partitions that are available. The KCC generates a replication ring for each kind of directory partition.

Table 35-2 details the replication partners for each kind of directory partition. Replication rings are implemented on a per-directory partition basis: there is one replication ring per directory partition type, and depending on the partition, a ring can include all the domain controllers in the forest, all the domain controllers in a domain, or only the domain controllers that host a particular application partition.

Table 35-2. Per-Directory Partition Replication Rings

Directory Partition                     Replication Partners
Configuration directory partition       All the domain controllers in the forest
Schema directory partition              All the domain controllers in the forest
Domain directory partition              All the domain controllers in the domain
Global catalog partition                All the domain controllers in the forest that host global catalogs
Application directory partition         All the domain controllers using the application partition, on a forest-wide, domain-wide, or selective basis, depending on the configuration of the application partition
ForestDnsZones directory partition      All the domain controllers in the forest that host DNS
DomainDnsZones directory partition      All the domain controllers that host DNS for that domain
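
To make the table concrete, here is a minimal sketch of how ring membership could be derived from what each domain controller hosts. The data model and names (forest_dcs, the example domains, and so on) are hypothetical illustrations, not an Active Directory API.

```python
# Hypothetical inventory of domain controllers: which domain each belongs
# to, and whether it hosts a global catalog or DNS.
forest_dcs = {
    "DC1": {"domain": "east.example.com", "gc": True,  "dns": True},
    "DC2": {"domain": "east.example.com", "gc": False, "dns": False},
    "DC3": {"domain": "west.example.com", "gc": True,  "dns": True},
    "DC4": {"domain": "west.example.com", "gc": False, "dns": True},
}

def ring_members(partition):
    """Return the set of DCs participating in a partition's replication ring."""
    if partition in ("configuration", "schema"):
        return set(forest_dcs)                                  # forest-wide
    if partition == "global catalog":
        return {dc for dc, p in forest_dcs.items() if p["gc"]}
    if partition == "ForestDnsZones":
        return {dc for dc, p in forest_dcs.items() if p["dns"]}
    if partition.startswith("domain:"):                         # domain-wide
        dom = partition.split(":", 1)[1]
        return {dc for dc, p in forest_dcs.items() if p["domain"] == dom}
    if partition.startswith("DomainDnsZones:"):
        dom = partition.split(":", 1)[1]
        return {dc for dc, p in forest_dcs.items()
                if p["domain"] == dom and p["dns"]}
    raise ValueError(f"unknown partition: {partition}")

print(sorted(ring_members("configuration")))            # ['DC1', 'DC2', 'DC3', 'DC4']
print(sorted(ring_members("domain:east.example.com")))  # ['DC1', 'DC2']
print(sorted(ring_members("global catalog")))           # ['DC1', 'DC3']
print(sorted(ring_members("DomainDnsZones:west.example.com")))  # ['DC3', 'DC4']
```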

When a replication ring is contained within a site, the KCC on each domain controller is responsible for generating the replication topology and keeping it consistent. When a replication ring crosses site boundaries, the ISTG is responsible for generating the replication topology and keeping it consistent. Replication rings are merely a logical representation of replication; their actual implementation is expressed in the replication topology through connection objects. Regardless of whether you are talking about intrasite or intersite replication, there is one connection object for each incoming connection. The KCC and ISTG do not create additional connection objects for each replication ring. Instead, they reuse connection objects for as many replication rings as possible.
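
The reuse is straightforward to picture. In the sketch below, three rings happen to traverse the same pair of domain controllers, yet only one connection object exists per incoming connection; the object simply carries every partition whose ring uses it. The ring data here is hypothetical.

```python
# Each ring is a list of directed (source -> destination) replication edges.
rings = {
    "configuration": [("DC1", "DC2"), ("DC2", "DC1")],
    "schema":        [("DC1", "DC2"), ("DC2", "DC1")],
    "domain":        [("DC1", "DC2"), ("DC2", "DC1")],
}

connection_objects = {}  # (source, dest) -> set of partitions carried
for partition, edges in rings.items():
    for source, dest in edges:
        connection_objects.setdefault((source, dest), set()).add(partition)

for (source, dest), partitions in connection_objects.items():
    print(f"{source} -> {dest}: carries {sorted(partitions)}")
# Three rings, but only two connection objects -- one per incoming connection.
```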

Connection-object reuse also explains why a site might have multiple designated bridgehead servers for intersite replication. Typically, each site has a designated bridgehead server for replicating the domain, schema, and configuration directory partitions. Other types of directory partitions are replicated between sites by the domain controllers that host those partitions. For example, if two sites each have multiple domain controllers but only a few host a particular application partition, a separate connection object may be created for the intersite replication of that application partition.

Figure 35-8 shows an example of how multiple bridgehead servers might be used. Here, the domain, schema, and configuration partitions are replicated from Site 1 to Site 2 and vice versa using the connection objects between DC3 and DC5. A special application partition is replicated from Site 1 to Site 2 and vice versa using the connection objects between DC2 and DC6.

Figure 35-8. Replication between sites using multiple bridgehead servers.

The global catalog partition is a special case. The global catalog is built from all the domain databases in a forest, so each designated global catalog server must obtain global catalog information from domain controllers in every domain of the forest. This means that a global catalog server must connect to a domain controller in each domain, and there must be an associated connection object for each of those connections. Because of this, global catalog servers are another reason for having more than one designated bridgehead server per site.
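
A sketch of the requirement, using hypothetical domain names: a global catalog server needs a replication source, and therefore a connection object, for every domain in the forest other than its own.

```python
# Hypothetical forest with three domains; not an actual AD API.
forest_domains = {"root.example.com", "east.example.com", "west.example.com"}

def gc_source_domains(own_domain):
    """Domains a global catalog server must source partial replicas from."""
    return forest_domains - {own_domain}

print(sorted(gc_source_domains("east.example.com")))
# ['root.example.com', 'west.example.com'] -- each requires a connection
# object to some domain controller in that domain.
```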

Figure 35-9 provides an example of how replication might work for a more complex environment that includes domain, configuration, and schema partitions as well as DNS and global catalog partitions. Here, the domain, schema, and configuration partitions are replicated from Site 1 to Site 2 and vice versa using the connection objects between DC3 and DC5. The connection objects between DC1 and DC4 are used to replicate the global catalog partition from Site 1 to Site 2 and vice versa. In addition, the connection objects between DC2 and DC6 are used to replicate the DNS partitions from Site 1 to Site 2 and vice versa.

Figure 35-9. Replication in a complex environment.
