Edge Server Placement

Edge Server placement is critical to optimizing media paths in a deployment. The SIP signaling used for presence and IM tolerates slight delays, but web conferencing and A/V traffic are sensitive to latency, so Edge Server placement must be planned carefully.


Tip

As a rule of thumb, Edge Servers are generally deployed in any location with a Front End pool that supports remote conferencing or A/V features. This isn’t a mandate, and there are many other factors at play, but Edge Servers typically exist near a Front End pool.


For example, consider a small deployment for Company ABC, as shown in Figure 31.1, which has a single Front End Server in San Francisco. In this deployment, only a single Edge Server is necessary to support all the remote features, and media paths are all local to San Francisco.


Figure 31.1. Single Edge Server with multiple sites.

Imagine that Company ABC expands with a new office in London connected to San Francisco by a WAN link, and adds a new Front End pool for the London users. When a London user signs in remotely and sends an IM to a colleague in the London office, the traffic goes to the Edge Server in San Francisco, then to the Front End Server in San Francisco, which proxies the SIP traffic to the London Front End pool, and finally reaches the user in the London office. Although this seems like a lot of hops, it will not cause a problem for the remote London user. If a presence update or an IM arrives half a second late, the user will not notice.

Now consider a London user who goes home for the day and tries to place an audio call to another London user still in the office. The audio stream flows from the remote user to the San Francisco Edge Server and then straight to the user in the London office. The media stream skips the San Francisco and London Front End hops, but the result is probably still a poor-sounding audio call. Even though the two London users are physically close to each other, the call is “hairpinning” through the San Francisco Edge Server, potentially adding significant latency, or delay, to the call.
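To get a rough feel for why hairpinning hurts, the sketch below sums illustrative one-way delays for each leg of the hairpinned media path. All of the per-leg numbers are assumptions chosen only to make the point; real values depend on the ISPs, WAN links, and access networks involved.

```python
# Back-of-the-envelope estimate of one-way media latency when a remote
# London user's audio hairpins through the San Francisco Edge Server.
# Per-leg delays are illustrative assumptions, not measured values.

hairpinned_path_ms = {
    "remote London user -> San Francisco Edge (Internet)": 80,
    "San Francisco Edge -> London office user (Internet/WAN)": 80,
}

one_way_ms = sum(hairpinned_path_ms.values())
print(f"Estimated one-way media latency: {one_way_ms} ms")

# A commonly cited guideline for acceptable voice quality is roughly
# 150 ms of one-way delay; the hairpinned path exceeds it.
GUIDELINE_MS = 150
print("Within guideline" if one_way_ms <= GUIDELINE_MS else "Exceeds guideline")
```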

The solution in this scenario is to also deploy an Edge Server in London, which gives the London office user a more direct path to the remote London user. If Company ABC deploys an Edge Server in London, the traffic flow shown in Figure 31.1 changes to the traffic flow shown in Figure 31.2. In this case, remote users can exchange media traffic with London users directly across the Internet with far less latency.


Figure 31.2. Multiple Edge Servers.
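Continuing the same back-of-the-envelope estimate, with an Edge Server in London the remote user's media takes only a short Internet hop to the local Edge Server and then the office network. As before, the per-leg numbers are purely illustrative assumptions.

```python
# Estimated one-way media latency once a London Edge Server is deployed.
# Per-leg delays are illustrative assumptions, not measured values.

local_path_ms = {
    "remote London user -> London Edge (local Internet)": 15,
    "London Edge -> London office user (office LAN)": 5,
}

one_way_ms = sum(local_path_ms.values())
print(f"Estimated one-way media latency: {one_way_ms} ms")  # well under ~150 ms
```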


Tip

It isn’t necessary to deploy Edge Servers in every location with a Front End pool, but doing so generally improves the experience for end users. Many deployments distribute Edge Servers along distinct geographic boundaries, such as opposite coasts or separate continents, to limit traffic crossing long WAN links. For example, deploying separate Edge Servers per continent in North America, Europe, and Asia, or on each side of a continent, is a common deployment model.

