2.1. The Evolution of Distributed, Multitier Applications

Applications in the networked economy tend to be multitier, server-based applications, supporting interaction among a variety of systems. These applications are distributed—that is, they run on several different devices, including mainframes for data access on the backend, servers for Web support and transaction monitoring in the middle tier, and various client devices to give users access to applications. Clients can include thick clients—stand-alone applications on the desktop—and thin clients, such as applications running in a browser on the desktop, applications running on personal digital assistants, and even cell phones and other personal communications devices. For business-to-business applications, distributed computing involves peer-to-peer connections among dispersed server systems.

The proliferation of systems and devices and the extension of the services provided by the server have increased the complexity of designing, developing, and deploying distributed applications. Distributed applications are increasingly called on to integrate existing infrastructure, including database management systems, enterprise information systems, and legacy applications and data, and to project these resources into an evolving environment of diverse clients in diverse locations.

To help you understand the issues involved in developing these applications, here's a look at some typical multitier application scenarios.

The earliest distributed applications were client-server applications running on time-sharing computing systems (see Figure 2.1). A mainframe computer containing data and data management software was connected to a number of terminals, which could be distributed as widely as the technology allowed. The networks were slow, and the client systems were called dumb terminals for good reason. But these client-server systems were easy to develop and maintain, because all the application logic lived on the mainframe.

Figure 2.1. Pure Client-Server Application Architecture


With the arrival of high-speed networks and smart PC-based clients with elaborate graphical user interfaces, applications moved from the mainframe to the desktop. This meant more processing power for each user, but less control for IT departments. The application-development process was simplified with a variety of visual tools and other programming aids, but application deployment in this multitiered environment became a problem with so many desktop machines and configurations (see Figure 2.2).

Figure 2.2. PC-Based Client-Server Application Architecture


Browser-based applications on the Internet or intranets are a variation on this model. A browser running on a desktop PC provides access to the server. Applications run on Web servers, providing all the business logic and state maintenance. Using this configuration, applications can provide everything from simple page lookup and navigation to more complex processes that perform custom operations and maintain state information. The technologies supporting this application architecture include plug-ins and applets on the client side, and Common Gateway Interface (CGI) scripts and other mechanisms on the server side. The problem with adding functionality in this environment is that there is no single standard for clients or servers, and the applications assembled in this way are hard to develop and maintain.
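
To make the server-side mechanics concrete, the sketch below shows a CGI-style program in Java. CGI programs of this era were more often written in Perl or C, and the class name and parameter handling here are invented for illustration; what matters is the contract the paragraph describes: the Web server passes request data to the program through environment variables, and the program writes an HTTP response to standard output.

    // A minimal sketch of the CGI contract, assuming a Web server that sets
    // QUERY_STRING and relays the program's standard output to the browser.
    public class HelloCgi {
        public static void main(String[] args) {
            // The Web server passes the query string as an environment variable.
            String query = System.getenv("QUERY_STRING");
            // Naive parsing of "name=..." (URL decoding omitted for brevity).
            String name = (query != null && query.startsWith("name="))
                    ? query.substring("name=".length())
                    : "world";
            // A CGI program writes HTTP headers, a blank line, then the body.
            System.out.print("Content-Type: text/html\r\n\r\n");
            System.out.println("<html><body>Hello, " + name + "!</body></html>");
        }
    }

Every request launches a new process like this one, which is part of why CGI-based applications were hard to scale and maintain.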

As the architecture of multitier applications has evolved, new capabilities have been added to the mix. A pure client-server architecture is viable for a tightly controlled environment, with one type of client and one backend server providing some business logic and access to data. But the real world soon became more complicated. Eventually, organizations wanted to connect multiple backend systems—for example, to connect a warehouse inventory system to a customer billing system. Another example is companies that merge and need to integrate the computing capabilities they inherit.

These requirements led to the evolution of the middle tier in enterprise computing in the nineties. In this configuration, the business logic of an application moves onto a centralized, more tightly controlled system. Transaction monitors in the middle tier are capable of integrating disparate data sources with a single transaction mechanism. With this technology, traditionally disconnected systems could become connected (see Figure 2.3).

Figure 2.3. Multitier Application Architecture with Distributed Transactions
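
The sketch below illustrates what a single transaction mechanism spanning disparate data sources means in practice, using the JTA API that the J2EE platform later standardized for this role. The JNDI names, table layout, and class name are invented for illustration.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    // A sketch of one distributed transaction spanning two databases, assuming
    // a container that provides JTA and the (hypothetical) JNDI names below.
    public class ShipAndBill {
        public void shipAndBill(String sku, String customerId) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction tx =
                    (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            DataSource inventory = (DataSource) ctx.lookup("jdbc/InventoryDB");
            DataSource billing = (DataSource) ctx.lookup("jdbc/BillingDB");

            tx.begin();
            try (Connection inv = inventory.getConnection();
                 Connection bill = billing.getConnection()) {
                // Decrement warehouse stock in one database...
                try (PreparedStatement s = inv.prepareStatement(
                        "UPDATE stock SET qty = qty - 1 WHERE sku = ?")) {
                    s.setString(1, sku);
                    s.executeUpdate();
                }
                // ...and record the charge in another.
                try (PreparedStatement s = bill.prepareStatement(
                        "INSERT INTO charges (customer_id, sku) VALUES (?, ?)")) {
                    s.setString(1, customerId);
                    s.setString(2, sku);
                    s.executeUpdate();
                }
                tx.commit();  // two-phase commit covers both databases
            } catch (Exception e) {
                tx.rollback();  // neither system sees a partial update
                throw e;
            }
        }
    }

Without a coordinating transaction mechanism, a failure between the two updates would leave the inventory and billing systems permanently inconsistent.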


In addition to the need to have multiple databases communicating, the need to have multiple applications interacting soon became an issue. With millions of lines of code, and a corresponding investment in development and debugging time, tied up in legacy applications, organizations wanted ways to reuse the capabilities of existing applications and to get time-proven systems communicating in new ways. Among the solutions proposed, the CORBA standard achieved success by allowing modules in various programs to communicate with one another. This helped support a new era in distributed computing (see Figure 2.4).

Figure 2.4. Multitier Application Architecture with Multiple Servers and CORBA Interoperability
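
From the Java side, CORBA interoperability looked roughly like the sketch below. The Billing interface, its charge operation, and the name "BillingService" are invented for this example; the Billing and BillingHelper classes would be generated from an IDL definition by a compiler such as idlj, so the sketch compiles only once those stubs exist.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NamingContextExt;
    import org.omg.CosNaming.NamingContextExtHelper;

    // A sketch of a CORBA client, assuming stubs generated from IDL such as:
    //     interface Billing { void charge(in string customerId, in long cents); };
    public class BillingClient {
        public static void main(String[] args) throws Exception {
            // The ORB handles marshaling and network transport between modules,
            // whatever language or platform the remote object is written in.
            ORB orb = ORB.init(args, null);
            // Locate remote objects through the CORBA naming service.
            NamingContextExt naming = NamingContextExtHelper.narrow(
                    orb.resolve_initial_references("NameService"));
            // Billing and BillingHelper are the assumed IDL-generated stubs.
            Billing billing = BillingHelper.narrow(
                    naming.resolve_str("BillingService"));
            billing.charge("cust-42", 1000);
        }
    }

Because the interface is defined in language-neutral IDL, the same BillingService could be implemented in C++ on a mainframe while this client runs in Java—which is what let time-proven systems communicate in new ways.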


All these configurations have proven useful in the enterprise computing world. However, each has its drawbacks. Primarily, the lack of widely accepted standards means no unified programming model—diverse skills (and often deep skills, at that) are required to do the complex programming required to make applications work together. And in many cases, each vendor of the technologies involved—Web servers, transaction processors, database management systems—provides its own proprietary programming models and APIs.
