Chapter 9. Lifecycle Management

by Dietmar Aust

Every application changes over time. Keeping track of different software versions in multiple environments (development, test, and production) quickly becomes a nontrivial task which every developer has to confront.

In this chapter we will detail best practices acquired from developing and maintaining multiple real-world projects for customers over the years. These procedures have proven to be very effective and reliable, significantly improving the quality of the delivered applications. They originated, for the most part, from the experiences of our project team at the German Telecom Shops, where we developed and maintained an APEX application over the last four years.

To successfully deliver high-quality software on time, we have learned that in a shared environment, you should not focus solely on source code–related issues. You also need to answer questions related to project management and documentation to support your daily activities and stay on track. To establish procedures that work reliably even under pressure, we have learned to focus on the following:

  • Simplicity: Implement only a few easy-to-follow rules (the KISS principle).

  • Transparency: Establish clear and concise naming conventions that implicitly document what you are doing.

  • Safety: Implement a strict system of templates, checklists, and rules to follow so that you will make fewer mistakes.

We have developed and continue to evolve procedures for requirements management, a proven file-system layout, automated DDL extraction, a completely script-based deployment approach, and integration with Subversion. All of these support the long-term maintainability of the software.

Interestingly enough, we couldn't find any established best practices for source code management in the Oracle ecosystem; in the Java community, however, most people seem to have agreed on Apache Maven (http://maven.apache.org). Apache Maven is a software tool for project management and build automation which has established conventions on how to structure the file system for a project, naming conventions for files, and how to do things in a project. It was born from the very practical desire to make several projects at Apache work in the same way. The intent of the Maven project was to enable developers to freely move between projects; by understanding how one of them worked, they would understand how they all worked. If a developer spends time understanding how one project is built, they should not have to go through the same process again on the next project. The same idea extends to testing, generating documentation, generating metrics and reports, and deploying.

Challenges

We will start by looking at the typical challenges faced by a team of developers working on an APEX development project. You will have to face some of these challenges even when you work on the source code all by yourself and not within a team.

Working Concurrently on the Same Source Code

A team of developers working on the same piece of software has to take care of concurrency issues. The Oracle database itself does not have any version control software built in, so it is easy to overwrite another team member's changes. For example, two developers load the source of the same database package (Package X) into their editors. Developer A adds a function to the package and compiles it. Now the change by Developer A is stored in the database and active. At the same time, Developer B adds a different function to the same package but compiles it a bit later than Developer A. When the change made by Developer B is stored in the database, the change from Developer A is lost. You have to deal with this kind of problem when working concurrently on procedures, functions, packages, triggers, views, object types, and the like.
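A simple, if partial, safeguard is to check the data dictionary before you compile: if the package (here assumed to be named X, as in the example above) was modified after you loaded it into your editor, someone else has been working on it. This is only a sketch of the idea, not a replacement for the explicit locking discussed later in this chapter.

-- check when the package was last modified before you overwrite it;
-- USER_OBJECTS is the standard Oracle data dictionary view
SELECT object_name, object_type, last_ddl_time
  FROM user_objects
 WHERE object_name = 'X'
   AND object_type IN ('PACKAGE', 'PACKAGE BODY');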

APEX does not include a version control system, either. Nevertheless, when you modify an APEX application, you can at least set explicit locks on one or more pages you are working on. After you have acquired a lock on these pages, other developers cannot change them until you unlock them again.

Even if you don't set explicit locks on the pages you modify, you will still be protected from overwriting other developers' changes. APEX uses a mechanism called optimistic locking to prevent that. Both Developer A and Developer B will be able to begin changing a page. When Developer A saves the change, it will go through. When Developer B tries to save her changes as well, she will receive an error message stating that the database state has been modified in the meantime. This holds true for all parts of the application, such as pages, regions, computations, branches, lists, and breadcrumbs.

Propagating All Required Changes for a New Release

Once you are done implementing all features for the next release of your software, you need to install the updated software on the test or production environment. You have to make sure to propagate all the required changes. They can include:

  • DDL statements for the creation or modification of database objects (users, tablespaces, grants, tables, views, etc.)

  • DML statements for the manipulation of data (insert, update, delete)

  • The APEX applications

  • Files uploaded to the workspace

  • Files stored on the application server in the file system

  • Changes in the configuration of the application server (virtual directories, compression, URL rewrites, etc.)

To deploy a release, you need to make sure that all altered objects (tables, packages, etc.) together with the relevant data changes (insert, update, delete) are propagated from the development environment to the test environment. You don't want to do it manually, since manual operations are error prone.

This is why we favor an approach that is almost completely based on scripts to deploy a new release of our software. In large corporations this is quite often the only way to deliver a new software release. Because the developers don't have direct access to the production systems, administrators or technical support personnel will perform the installation for them.
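As a starting point for assembling such a patch script, you can ask the data dictionary which objects in the development schema have changed since the last release was cut. A minimal sketch (the cutoff date is illustrative):

-- list all objects changed since the last release was cut
SELECT object_type, object_name, last_ddl_time
  FROM user_objects
 WHERE last_ddl_time >= TO_DATE('2011-01-15', 'YYYY-MM-DD')
 ORDER BY last_ddl_time;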

Parallel Development on Different Application Versions

Sometimes it is necessary to work on different versions of an application at the same time. Typically this is required when some developers are fixing urgent issues (whether in production or during the test phase) while the rest of the team is already working on the next release.

How can you make sure that they don't interfere with each other? Regarding the objects in the application schema, you could just create another schema as a duplicate. You could modify the objects in the two schemas independently of each other using the respective DDL statements (e.g., alter table x, add column (y number)) and store them in the file system. With a version control system, you can then use standard techniques such as branching and merging to manage the different versions of the source code.

For an APEX application, this standard approach is a lot more difficult. In Listing 9-1 you can see the definition of a list of values in APEX from an application export file.

Listing 9-1. Definition of a List of Values from an APEX Application Export File

wwv_flow_api.create_list_of_values (
  p_id        => 5017504088986621 + wwv_flow_api.g_id_offset,
  p_flow_id   => wwv_flow.g_flow_id,
  p_lov_name  => '1_0',
  p_lov_query => '.'||to_char(5017504088986621 + wwv_flow_api.g_id_offset)||'.');

APEX uses a metadata approach which is heavily dependent on numerical IDs for referencing related objects. The most obvious solution would be to make a copy of the application and modify both independently of each other. Since APEX will generate the object IDs for new objects automatically, you will quickly have to deal with conflicting IDs (the same ID for different objects in the two applications), which will make it impossible to merge the applications later.

What Is the Current Status?

When you maintain an application for many months or years, you will often have to figure out the current status of the different servers, database schemas, APEX applications, and so forth. You need versioning information in all relevant elements to make sure you can immediately determine which versions are installed where. This includes the installation scripts for the database objects and all relevant APEX files (application export, JavaScript, and CSS files).

In addition, you need to record all scripts that have been executed in the database schema (either DML or DDL) as well as the currently installed version number of the database back end.
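One way to record this information is a small bookkeeping table in the application schema that every patch script writes to. The following is a minimal sketch; the table and column names are our own illustrative convention, not anything prescribed by APEX:

-- hypothetical bookkeeping table for installed versions and executed scripts
CREATE TABLE fm_versions (
  vsn_id           NUMBER        NOT NULL,
  vsn_version      VARCHAR2(20)  NOT NULL,   -- e.g., '1.3.1.0'
  vsn_script       VARCHAR2(200) NOT NULL,   -- the script that was executed
  vsn_installed_on DATE          DEFAULT SYSDATE NOT NULL,
  CONSTRAINT fm_versions_pk PRIMARY KEY (vsn_id)
);

Each patch script then inserts a row into this table, so the question of which scripts ran where can be answered with a simple query.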

It is really helpful to understand when the application or the data has been changed, especially when you discover a problem in the production system. For example, you see an erroneous row in a table which is a result of a known problem that has been fixed in a subsequent release. If you know exactly when you installed the fix in the production system you can determine whether the current problem is related to that old problem or if the fix didn't work as expected.

Which Requirements Were Implemented in This Release?

No developer likes to write documentation, but for each new release you have to compile a list of changes (either newly implemented functionality or bug fixes) for your client and users. You especially need to communicate the changes when you collaborate with a test team to thoroughly test the new release. This is true not only for the final shipped release but also for each incremental revision provided to the test team beforehand. Thus we want to make sure that the list can be generated easily and will be included in the release notes for the current iteration or the final release.

The Sample Application

In the following sections we will discuss our approach based on a simple sample application. It is a facility management application in which you can schedule and book resources (computers, rooms, cars, etc.) at different locations.

The tables are stored in the Oracle database in the schema FM. All tables are prefixed with the application prefix FM_ and written in plural (FM_LOCATIONS instead of FM_LOCATION). You will find a short description of the data model in Table 9-1 and the physical data model in Figure 9-1.

Within APEX we will use the application "Facility Management," having the application id 100.

Table 9-1. Tables of the Sample Application

Table                Description
FM_BOOKINGS          The actual bookings of resources at the different locations
FM_COUNTRIES         Countries in which the company operates
FM_LOCATIONS         Locations at which the company operates
FM_RESOURCE_TYPES    Resource types, e.g., computer, vehicle, room
FM_RESOURCES         Actual resources (computers, cars, rooms) that can be booked
FM_USERS             The user accounts which are administered in the application

Figure 9-1. Data model of the sample application

The Approach

Which elements did we consider for this approach? In order to be truly effective, we felt that we needed a broader scope than just looking at purely source code–related issues. Thus, we developed solutions for managing the source code (files, folders, database objects, APEX application components) as well as the documentation and requirements.

Version Numbering Scheme and the Overall Delivery Process

As applications change over time, you have to implement and deliver new versions of the software to your clients.

To understand the functional as well as the technical differences of each release you need a means of communicating these differences. To accomplish that, use a specific numbering scheme to keep track of different versions of the software. Many different approaches are possible; a good overview can be found at http://en.wikipedia.org/wiki/Software_versioning.

Use the following convention to assign a version number to your software:

<Major Release>.<Minor Release>.<Patch>.<Revision>
  • Major Release: This digit is increased when there are significant jumps in functionality that fundamentally change application concepts or interfaces.

  • Minor Release: Whenever you only have a functional change to the application, increase the minor release digit of the version number.

  • Patch: The patch digit is increased whenever you ship only bug fixes (error corrections) or really minimal functional changes.

  • Revision: This is the build counter for each build of the software. In our context this is a number indicating the internal revision with the test team.

To illustrate the actual usage of such numbers let's take a look at one of our current projects. For this customer there are four regular releases of the software each year. Therefore you increment the major release digit once per year and ship four releases all having different minor release digits (e.g., 1.0.0, 1.1.0, 1.2.0, 1.3.0 in 2010 and 2.0.0, 2.1.0, 2.2.0, and 2.3.0 in 2011).

The revision number is typically not communicated to the end users of the application, but is used mainly to support the incremental and iterative collaboration with the test team (see Figure 9-2).

Figure 9-2. Releases and internal revisions

Whenever you install a new internal revision on the test environment (e.g., 1.0.0.1, 1.0.0.2, or 1.1.0.3), it is tested by the test team. The internal revision numbers help you to communicate issues in each delivered internal revision so that defects can be identified and retested in the following internal revision. This process should be supported by the use of a bug or issue tracking software like Atlassian Jira, Bugzilla, or others.

Storing Files on the Application Server

We strongly recommend not storing any files for your application in the virtual directory /i/. It is reserved for APEX internal files and images. Once you upgrade your APEX version to a new release, this will become problematic in two areas.

Warning

Don't store any files in the virtual directory /i/!

You might experience naming conflicts if your files are also used by APEX itself in a later release. For example, APEX 3.0 was shipped with 18 themes included (theme_1 ... theme_18). You could have created the directory images/themes/theme_20 for your local theme. In APEX 4.0 this would cause a naming conflict since the directory theme_20 is now used by an internal APEX theme.

Even if you don't run into any naming conflicts, this approach is opaque and difficult to maintain. It is hard to tell your files apart from the files provided by APEX itself. You need to make sure to copy all of your files to the new directory tree once you upgrade the APEX release. Being a manual process, this approach is inherently error prone. Also, you will have to remember that this is a required step during an APEX upgrade; you cannot simply follow the upgrade instructions shipped with APEX.

It is better to create a separate virtual directory to store application-specific files, for example, /apex_custom/.

There you have subdirectories for the themes and the application-specific files. Make sure to include all files that are required for the application. We even recommend storing a possibly shared library like jQuery multiple times in the application-specific directories; this adds transparency and a higher level of control to the overall process. The main intention is to make it easy to move an application from one server to another. Otherwise, it is usually not easy to figure out which files and libraries you need to propagate.

Requirements for Designing the Server Environments

Before you can design and implement a version control strategy for team development in an APEX project, you have to consider some important technical restrictions.

How many different server environments should you set up to support the regular development process? You should have at least three different APEX environments for development, test, and production. They should be installed in separate database instances (see Figure 9-3).

Figure 9-3. The regular development instance setup

Here you can see that you propagate the changes from the development environment first to the test environment and from there to the production environment. You do this by creating a patch script based on the objects in the development instance (e.g., scripts for DDL, DML, and APEX application export files). Figure 9-3 represents the flow of events; you test the application before you use it in production. Nevertheless, for the production install you use the exact same scripts from the development environment that you used for the test installation.

You can consolidate the environments, but at a bare minimum, the production system should be physically separated from the development and test environments.

Tip

Keep the workspace name, workspace id, application id, and parsing schema identical in all the environments.

You typically use an identical setup in all three environments: all environments use the exact same workspace name, workspace id, and application id, as well as the parsing schema. This approach was mandatory to support a script-based deployment in all releases prior to APEX 4.0.

The reason for that can be seen in an APEX application export file; the relevant commands are outlined in Listing 9-2. In order to install the application into another APEX instance you can either go through the APEX Application Builder wizard or run the application export file from SQL*Plus on the command line. For the latter approach you have to connect to the database as the owner of the application (parsing schema) or as the owner of APEX itself (e.g., APEX_030200). As you can see, the workspace id and the application id as well as the parsing schema are hard-coded in the application file.

Listing 9-2. APEX Application Export File

-- Import:
--   Using application builder
--   or
--   Using SQL*Plus as the Oracle user APEX_030200 or as the owner (parsing schema)
--   of the application.

prompt  Set Credentials...
wwv_flow_api.set_security_group_id(p_security_group_id => 1635506190835543);

-- SET APPLICATION ID
wwv_flow.g_flow_id := 100;
wwv_flow_api.g_id_offset := 0;

-- Remove Application
wwv_flow_api.remove_flow(100);

--application/create_application
wwv_flow_api.create_flow(
  p_id         => 100,
  p_display_id => 100,
  p_owner      => 'FM',
  p_name       => 'Application Lifecycle Management Sample (v2.0.0.0)',
  p_alias      => 'F104116_1',
  ...
);

There are situations where you would want to change some of these parameters. Your setup might be different in the different environments (development, test, production) or you might want to set up an identical training environment for multiple users. Starting with APEX 4.0 the new package APEX_APPLICATION_INSTALL has been added for that purpose.

Thus you could use the code in Listing 9-3 to install the facility management application, version 2.0.0.0, in a different system, using alternative values for these parameters.

Listing 9-3. Override Values in the Application Export Using APEX_APPLICATION_INSTALL

declare
    l_workspace_id number;
begin
    -- determine the workspace id for the workspace FM in the target system
    select workspace_id
      into l_workspace_id
      from apex_workspaces
     where workspace = 'FM';

    -- set the context for the target workspace
    apex_application_install.set_workspace_id( l_workspace_id );

    -- override the original application id 100 with a different value
    apex_application_install.set_application_id( 200 );

    -- generate a new offset to ensure that the metadata for the Application
    -- Express application definition does not collide with other metadata on
    -- the instance
    apex_application_install.generate_offset;

    -- override the original parsing schema FM with a different value
    apex_application_install.set_schema( 'FM_TEST' );

    -- set a different application alias, it should be unique within an APEX instance
    apex_application_install.set_application_alias( 'FM_TEST' );
end;
/
-- install the original application export, now the values for the workspace id,
-- application id, parsing schema and application alias will be overridden
@f100_facility_management_v2.0.0.0.sql

Warning

Whenever you change the application id, the privately stored interactive reports are lost!

One issue you might not be aware of can really annoy end users. The problem in a nutshell is that the users will lose their privately stored interactive reports once the application id is changed. This is due to the way these user preferences are stored within the APEX metadata. (A detailed discussion of this issue can be found on the Internet at http://dpeake.blogspot.com/2009/01/preserving-user-saved-interactive.html and http://joelkallman.blogspot.com/2010/07/where-did-my-saved-interactive-reports.html.)

Although you could work around this issue using the new package APEX_APPLICATION_INSTALL, it is preferable to establish procedures that let you forget about it and require no fix.

This issue has largely determined our deployment strategy. In an effort to make our end users happy, we make sure they don't lose their privately stored reports. The practical restriction is that the application id must remain identical in the development and production environments.

Hotfixes

While working on the next release of the application (e.g., 2.0.0), you might detect a problem in the current production release (e.g., 1.3.0.5). This problem has to be corrected, resulting in another patch release, 1.3.1.

Unfortunately, the development environment is currently blocked because the team is already working on release 2.0.0 in the regular development environment. You cannot develop the patch release (also known as a hotfix) there, but need another development environment for your production issues.

Since you typically want to separate the development environment from the test and production environments, you will need another database instance running the hotfix test environment, as shown in Figure 9-4.

Figure 9-4. Adding a hotfix environment to the setup

This way you can make sure that the hotfix development environment also has the same application id as the production system, e.g., 100. This will preserve the privately stored interactive reports once you install the hotfix in the production environment.

Consolidating Environments

Using five different environments to cleanly support the development process is not always a feasible option, especially for smaller development projects.

What is the minimum number of environments required to support this approach effectively? You could get by with only three different APEX installations if you followed the suggested structure shown in Figure 9-5.

Figure 9-5. Consolidating the environments

  • Regular Development/Test: On this machine both the regular development environment and the test environment are installed. It is important that the application id (e.g., 100) of the development version matches the application id in production. The parsing schema for the development environment (e.g., FM) is also identical to the production environment. The application will be made available to the test team on the development server, only in a different database schema (e.g., FM_TEST) using the application id 200. Once tested, the development version is installed in production.

  • Hotfix Development/Test: This environment works analogously to the regular development/test environment, except that it is used for hotfix development and for testing production issues.

  • Production: This is the regular production environment.

If you implemented the setup as suggested, you might still have to find a way to deal with different configuration settings on the application server side for the development and test environments. For example, suppose you are using CSS, image, and JavaScript files to support the application, and the JavaScript file script1.js needs to be modified. You need to be able to modify this file while the same file remains unchanged for the version in the test environment (application id 200).

The easiest and most flexible approach is to store these files within the APEX workspace. You can upload the files to the workspace (to be referenced in the application as #WORKSPACE_IMAGES#script1.js) or to each application in the workspace individually (to be referenced in the application as #APP_IMAGES#script1.js). The latter approach allows you to have two different versions of script1.js within the same workspace.

Nevertheless, there are at least two good reasons to store the files on the HTTP server instead. First, if you have many concurrent users accessing the application, performance will suffer when every image or CSS file is fetched from the database; HTTP servers typically provide features such as file caching and compression to improve performance when serving static files.

Second, if your static files reference each other (e.g., the instruction { background-image: url(smallPic.jpg); } in a CSS file) or you need to organize the files in a file system structure, you cannot upload them to the workspace but have to serve them from the HTTP server instead.

If you store the file script1.js on the HTTP server, how could you support two different versions for http://<server name>:<port>/apex_custom/script1.js in the development and test environment through the same HTTP server?

In order to support separate mappings of the virtual directory /apex_custom/ when running application 100 vs. running application 200, you can use virtual hosting directives, which are a feature of the HTTP server. Virtual hosting allows you to use different configurations based on either the hostname or the port. While your physical server IP address might be 192.168.0.8, you could have two entries in your DNS service resolving both dev.opal-consulting.de and test.opal-consulting.de to that same physical address. Thus all of the following URL requests will be served by the same HTTP server: http://192.168.0.8, http://dev.opal-consulting.de, and http://test.opal-consulting.de. You can then distinguish between different versions on the application server by using different URLs to call the script (http://dev.opal-consulting.de/apex_custom/script1.js or http://test.opal-consulting.de/apex_custom/script1.js), as seen in Listing 9-4 (the configuration example is specific to the Apache HTTP Server).

Listing 9-4. Virtual Hosting (Name Based) with Apache

<VirtualHost *>
    ServerName  dev.opal-consulting.de

    Alias /apex_custom/ "/ag/apex_custom_dev/"

</VirtualHost>

<VirtualHost *>
    ServerName test.opal-consulting.de

    Alias /apex_custom/ "/ag/apex_custom_test/"

</VirtualHost>

An alternative configuration is shown in Listing 9-5. It uses port-based virtual hosting in order to provide the URLs http://dev.opal-consulting.de:80/apex_custom/script1.js and http://dev.opal-consulting.de:81/apex_custom/script1.js.

Listing 9-5. Virtual Hosting (Port Based) with Apache

<VirtualHost *:80>
    ServerName  dev.opal-consulting.de

    Alias /apex_custom/ "/ag/apex_custom_dev/"

</VirtualHost>
<VirtualHost *:81>
    ServerName dev.opal-consulting.de

    Alias /apex_custom/ "/ag/apex_custom_test/"

</VirtualHost>

Project Management Related Aspects

Although this chapter isn't about project management, we want to briefly cover some project management aspects to highlight their importance for successfully delivering APEX applications.

We have developed a lightweight project management application to support our development process. And certainly, we used APEX to build the application. It is not a large application and is intended to be a practical tool for developers to support their daily activities.

We have learned to value the following key features:

  • We use it to manage the scope of the application; this way we can easily answer the following questions:

    • Which requirements have to be implemented in this release?

    • Which developer is working on which requirement?

    • Are there any open questions regarding a specific requirement? This information is recorded together with the requirement and a complete list can easily be exported and taken to the next meeting with the clients.

    • Which features have been rolled out in this release or internal revision?

  • We even record new or changed requirements during the meeting with the clients.

  • Budget, estimates, and timesheets are integrated. This way we can easily see whether we are about to overrun the budget.

It doesn't really matter which tool you use; just make sure that you use one.

Version Control

As a developer, you typically use a version control system to manage and track changes to an entire directory tree of source code and documents related to your application. It is most relevant when a team of developers is working together, but is also highly valuable when you are working on the source code all by yourself.

In this section we will discuss the particularities of applying traditional source code management techniques to an APEX application.

Shared Instance vs. Local Sandbox

Version control is typically applied to software projects by using the concept of a local sandbox. The development team uses a central version control repository to exchange updated versions of the application. Every developer has their own working copy of the application source code or documents locally installed on their machine. All changes are done locally to that set of files and in isolation from the other team members, hence the name sandbox.

Once the developer is done with the modifications, the changes are checked into the shared version control system. If any other developer has changed parts of the modified sources, they are typically merged by the version control system. If the version control system cannot merge the changes automatically, the developer will have to resolve the conflicts manually. This process is usually supported by editing tools to display the differences and merge them semi-automatically.

This typical approach doesn't work well in a database development project. The source code part is easy, because you can completely replace a version of a package, trigger, view, or procedure with the modified version using a simple copy command.

It gets really difficult with tables, though. When you modify a table you cannot just drop and re-create the table. You might have referencing constraints and, most of all, you don't want to lose the stored data.

In order to sync the local databases, each developer needs to supply a synchronizing script for all changed objects in her local change set so that the other developers can synchronize their local database with the changes. This is really tedious. Also, some change propagation might not work. For example, you might have added a new row to the table FM_COUNTRIES and supplied a synchronizing DML (data manipulation language) script:

Insert into FM_COUNTRIES (CON_TITLE, CON_COUNTRY_CODE) Values ('France', '33');

What happens if you had already dropped the column CON_COUNTRY_CODE from this table in the meantime? The statement would fail, so not only would the country code for France be lost, the row for France would be missing from your local FM_COUNTRIES table entirely! Very quickly, the different local databases would be out of sync. Another serious problem arises when you have to handle large amounts of data, which is common for database developers; keeping a full copy of the data on every developer's machine is rarely practical.
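You could try to make such scripts defensive, for example by checking the data dictionary before running the statement. The following sketch illustrates the idea, and also why it does not scale: every synchronizing script would need this kind of guard for every possible schema difference.

-- defensive variant of the synchronizing DML (sketch only)
DECLARE
   l_count PLS_INTEGER;
BEGIN
   SELECT COUNT(*)
     INTO l_count
     FROM user_tab_columns
    WHERE table_name  = 'FM_COUNTRIES'
      AND column_name = 'CON_COUNTRY_CODE';

   IF l_count = 1 THEN
      -- dynamic SQL, because the column may not exist at compile time
      EXECUTE IMMEDIATE
         q'[INSERT INTO fm_countries (con_title, con_country_code)
            VALUES ('France', '33')]';
   ELSE
      INSERT INTO fm_countries (con_title) VALUES ('France');
   END IF;
END;
/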

This is why we advocate against the typical approach of using a local sandbox which includes a copy of the database on the local machine. Instead, the suggested technique is to use a local sandbox only for all files that are part of the project. For all database and APEX related development, use a shared APEX instance, as shown in Figure 9-6.

Figure 9-6. Shared APEX instance

Subversion and TortoiseSVN

We have selected the open source software Subversion (http://subversion.apache.org/, http://en.wikipedia.org/wiki/Apache_Subversion) since this is the most widely used option available. Founded in 2000 by CollabNet, Inc., the Subversion project and software have seen incredible success over the past decade. Subversion has enjoyed and continues to enjoy widespread adoption in both the open source arena and the corporate world. It is available for free and is widely supported by many development tools, including Oracle SQL Developer, Quest Toad, Oracle JDeveloper, Eclipse, and others.

Subversion itself is considered server software (like a database) providing access to its repository. Thus you need some sort of Subversion client software to communicate with the server and use its services. There are many clients freely available, and many different IDEs support Subversion natively.

TortoiseSVN (http://tortoisesvn.net) is a Subversion client, implemented as a Microsoft Windows shell extension. It is free software released under the GNU General Public License. As a shell extension it provides tight integration with Windows Explorer (e.g., icon overlays to display the status of any file or folder), as shown in Figure 9-7:

  • Not modified (green checkmark)

  • Modified (red exclamation mark)

  • Unversioned (blue question mark)

Additionally, you can execute all available commands by right-clicking on a file or folder.

Figure 9-7. TortoiseSVN icon overlays and right-click context menu

Version Control of Database Objects

In order to successfully manage the changes of database objects using a version control system you have to deal with the inherent duality of the physical object in the database and the corresponding file to create it.

Version control strategies can only be applied to files in the filesystem. The creation or modification of any database object can be represented by a DDL (data definition language) statement and thus be placed in a file (CREATE OR REPLACE PACKAGE FM_BOOKING ... or ALTER TABLE FM_COUNTRIES ADD (CON_COUNTRY_CODE VARCHAR2 (5))).

These files can be put under source code control. In order to modify any database object you would have to perform two steps: the modification of the DDL statement in the file as well as running the file against the database.

However, when you modify the file you are isolated from other developers' changes. This is, once again, the concept of using local working copies (the sandbox). Since you have to use a shared database instance, as explained above, you don't have that level of isolation there. As a result, you have to deal with the problem that two developers can concurrently work on the same package and overwrite each other's changes.

The only solution is to set an explicit lock on the object that you want to modify. While you proceed with the changes, no other developer should be capable of modifying the object at the same time.

We have chosen Quest Toad for Oracle as our IDE (integrated development environment) for working with the database. So far, Quest Toad seems to be the only development tool to natively support this approach.

As soon as you have installed Toad's server-side objects, the team coding feature can be enabled. Once team coding support is enabled, the modification of database objects is no longer allowed through the GUI (graphical user interface). The relevant buttons (e.g., compile view or package) and menu items are disabled. Once you explicitly check out a database object (e.g., table, view, package, procedure, function, etc.), the buttons and menu items are enabled again and you can modify all objects you have explicitly checked out.

At this time no other developer (even one using Quest Toad) is capable of modifying the database object directly. Once you are done with the required changes, you should release the lock on the checked-out objects so that your fellow team members can work on them, too. When you close the application, Quest Toad will prompt you to release your checked-out objects; this is usually how we do it.

It is a simple check-in/check-out mechanism based on a single table recording which object is locked by whom. The important part is that the GUI disallows modification unless you hold an explicit lock on the object.
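Conceptually, the mechanism boils down to something like the following table; this is merely an illustration of the idea, not Toad's actual server-side implementation:

-- conceptual sketch of a check-out registry (not Toad's actual schema)
CREATE TABLE code_checkouts (
  object_owner    VARCHAR2(30) NOT NULL,
  object_name     VARCHAR2(30) NOT NULL,
  object_type     VARCHAR2(30) NOT NULL,
  checked_out_by  VARCHAR2(30) NOT NULL,
  checked_out_on  DATE         DEFAULT SYSDATE NOT NULL,
  CONSTRAINT code_checkouts_pk
     PRIMARY KEY (object_owner, object_name, object_type)
);

Checking out an object inserts a row; checking it in deletes the row. The IDE simply refuses to compile any object for which another user holds a row in this table.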

The only downside to this approach is that you are required to install objects on the server side using DBA privileges to support the team coding feature. Furthermore, all developers in the team have to use Quest Toad for Oracle.

Different approaches are possible to implement version control when working with an Oracle database. One could be to use a plain file system approach. It would require you to set explicit locks on the files representing the package. This approach is heavily dependent on all developers following the same convention of first setting an explicit lock in the file system before running the script against the database.

We feel that this approach is often problematic because it requires discipline. We would rather choose an approach which leaves no room for human errors. This is why we have implemented our approach to version control using the simple checkin/checkout mechanism in Toad. The most important aspect here is that the GUI is disabled to modify database objects unless they are explicitly checked out for modification.

Parallel Development (Branching and Merging)

Creating a hotfix for a current problem in the production environment is one typical case where parallel development is required. You would implement a correction of the current production version while developing the next release in the regular development environment. This typically requires a small change to the production version.

The suggested approach for hotfixes has already been detailed. It involves making changes to a copy of the production system on a separate APEX instance. Once the hotfix is tested and installed on the production system, these small changes need to be manually integrated into the current stream of development.

Another typical use case occurs at the end of a development cycle. Usually only some of the team members do the bug fixing and the iterations with the test team, and the rest of the development team will start to work on the next release.

How can that parallel development be supported? We recommend manual branching and merging procedures. For example, as seen in Figure 9-8, the application version in the current development cycle has the application ID 100 (application version 1.0.0.0). If you want to branch the development to start the development of new features in version 2.0.0.0 you simply create a copy and use the application ID 200 for the branched version. Perhaps you might even create a separate database schema FM_BRANCH in which you would install a copy of the current 1.0.0.0 release.

Then you make changes to version 1.0.0.0 and iterate to internal revisions 1.0.0.1, 1.0.0.2, and 1.0.0.3 for application ID 100. At the same time the development of version 2.0.0.0 already started using application ID 200.

Figure 9-8. Branching and merging

Typically, you would add new functionality in the next release, so it is fair to say that (for the most part) you will add new pages to the application. Once version 1.0.0 has reached the final iteration (1.0.0.2) and is installed in production, you can merge the branch with the main development again, resulting in version 2.0.0.2. Merging the sources for the database packages is supported by Subversion. It provides tools for displaying differences as well as semi-automatically merging them into a consolidated file. The new pages from the APEX branch (2.0.0.1) will have to be copied over manually to the main version 1.0.0.2.

You have to take care of one important aspect. Once you branch off your regular development trunk, you have to make sure that you diligently reserve separate page id ranges for each branch. This is really important! Otherwise the manual merge procedure will be quite tedious, because you could have page id conflicts to resolve manually.
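The APEX dictionary views make it easy to verify which page ids are already in use before you reserve ranges for a branch; a sketch:

-- page ids in use per application (apex_application_pages is a standard
-- APEX dictionary view)
SELECT application_id,
       MIN(page_id) AS min_page_id,
       MAX(page_id) AS max_page_id,
       COUNT(*)     AS page_count
  FROM apex_application_pages
 WHERE application_id IN (100, 200)
 GROUP BY application_id;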

Naming Conventions

We have learned that the consistent use of clear and simple rules for naming any kind of element has a huge impact on the quality of the software and the productivity of the developers. Next, we will cover a suggested naming scheme for naming the database objects as well as the files and folders.

Naming Conventions for Database Objects

There are many legitimate approaches for naming the database objects. You don't have to follow our suggestion, just make sure that you document your conventions and follow them consistently.

Oracle has a 30-character limit on object names, so it quickly becomes a challenging task to use expressive table and column names. This makes the consistent use of concise rules even more crucial.

We will explain our preferred naming conventions using the sample application described before.

The overall naming scheme for any database object follows this generic structure:

<application prefix>_<object name>_<functional area>_<object type>
  • Application Prefix: The application prefix describes the application, e.g., FM for our facility management application. You could use the application prefix also for including external or reusable software packages. For example, we use the prefix WF for our workflow engine and XLIB for a collection of packages we tend to reuse in our projects.

  • Object Name: This is the object name describing the entity or technical relationship, e.g., LOCATIONS.

  • Functional Area: This is the optional functional area, e.g., staging area (STA). This is used to differentiate between the regular table FM_LOCATIONS and the staging area FM_LOCATIONS_STA to load the data from an external source.

  • Object Type: The object type is appended at the end to differentiate between the different object types. In Table 9-2 you can see a list of objects and their suffixes.

Table 9-2. Object Type Suffixes

Object Type              Suffix                  Example
Constraint (Check)       _CHK                    BOOK_PERIOD_POSITIVE_CHK (checks that book_booked_until > book_booked_from)
Constraint (Unique)      _UC                     USR_LOGIN_NAME_UC (unique key on the login name)
Function                 <no suffix>             GET_USR_ID(p_login_name) (looks up the user id for a given login name)
Index                    _IDX                    BOOK_BOOKED_FROM_IDX (regular index on the column BOOK_BOOKED_FROM in the table FM_BOOKINGS)
Index (Function Based)   _FIDX                   CON_TITLE_FIDX (function-based index on upper(con_title) in the table FM_COUNTRIES)
Materialized View        _MV                     FM_BOOKINGS_MV (materialized view on the bookings to aggregate some results)
Package                  <no suffix>             FM_BOOKING (interface to create and update bookings)
Primary Key              _PK                     FM_USERS_PK
Procedure                <no suffix>             XLOG (custom procedure to write logging information)
Sequence                 _SEQ                    FM_ID_SEQ, FM_LOG_ID_SEQ
Synonym                  <no suffix>             Synonyms don't use a type suffix; their names are identical to the objects they refer to.
Table                    <no suffix>             FM_BOOKINGS (the bookings table; tables use no explicit suffix and their names are written in plural)
Trigger                  _<Trigger Details>_TRG  FM_USERS_BIU_TRG (before insert or update), FM_USERS_AIUD_TRG or USR_AIUD_TRG (after insert, update, delete)
Type                     _T                      XLIB_VC2_ARRAY_T (new type, varchar2 array)
View                     _V                      FM_BOOKINGS_V (view on the bookings)

For true API interfaces (functions, procedures, packages) we don't use a suffix.

All database objects typically have their respective object type appended as a suffix with the exception of tables and true API interfaces such as packages, procedures, functions, and synonyms.

The package names are always in singular form, for example, table FM_BOOKINGS and package FM_BOOKING. This is a personal preference and makes the code a little bit more readable and elegant, as you can see in Listing 9-7 as opposed to Listing 9-6.

Listing 9-6. Strict Enforcement of Naming Conventions

DECLARE
   l_res_id   NUMBER := 100;
BEGIN
   ----------------------------------------------------------
   -- cancel all bookings for the selected resource
   ----------------------------------------------------------
   FOR cur IN (SELECT book_id
                 FROM fm_booking_tab
                WHERE book_res_id = l_res_id
                    AND book_booked_from > SYSDATE)
   LOOP
      fm_booking_pck.cancel_booking_prc (p_book_id => cur.book_id);
   END LOOP;
END;

Listing 9-7. Naming Conventions with Exceptions

DECLARE
   l_res_id   NUMBER := 100;
BEGIN
   ----------------------------------------------------------
   -- cancel all bookings for the selected resource
   ----------------------------------------------------------
   FOR cur IN (SELECT book_id
                 FROM fm_bookings
                WHERE book_res_id = l_res_id
                    AND book_booked_from > SYSDATE)
   LOOP
      fm_booking.cancel_booking (p_book_id => cur.book_id);
   END LOOP;
END;

In order to achieve a high degree of consistency, we have learned to create standards not only for purely syntactical concerns but also for the semantics. If you look at all the projects you have previously worked on, was your use of procedure and function names consistent at all times?

For creating a new object of any sort, you will find procedures or functions with names like add, create, insert, or new. The same is true for the removal (remove, delete) or the modification (modify, update, change, set). Thus we have standardized the naming of operations as well as the naming of table columns in order to enforce a consistent usage.

The CRUD convention (Create, Retrieve, Update, Delete) is well known in the database community for specifying privileges in order to access data. We chose to use GET instead of RETRIEVE, as it appeared more natural to us since it is commonly used in many application programming languages. Thus for naming procedures and functions we have agreed on the so called CGUD convention:

  • C: create

  • G: get

  • U: update

  • D: delete

One example for implementing this convention is shown in Figure 9-9.

Figure 9-9. Naming procedures and functions
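A sketch of what the specification of such a back-end package might look like under this convention follows; the parameter lists are illustrative only:

-- sketch of a package specification following the CGUD convention
CREATE OR REPLACE PACKAGE fm_booking
AS
   -- C: create a new booking and return the new booking id
   FUNCTION create_booking (p_res_id       IN NUMBER,
                            p_usr_id       IN NUMBER,
                            p_booked_from  IN DATE,
                            p_booked_until IN DATE)
      RETURN NUMBER;

   -- G: read a single attribute
   FUNCTION get_booked_from (p_book_id IN NUMBER)
      RETURN DATE;

   -- U: update an existing booking
   PROCEDURE update_booking (p_book_id      IN NUMBER,
                             p_booked_from  IN DATE,
                             p_booked_until IN DATE);

   -- D: delete a booking
   PROCEDURE delete_booking (p_book_id IN NUMBER);

   -- domain-specific operation used in Listing 9-7
   PROCEDURE cancel_booking (p_book_id IN NUMBER);
END fm_booking;
/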

We also tend to separate the operations on the back-end data model from the front-end application technology. In the example in Figure 9-9 the package FM_BOOKING implements the operations on the data model which are not related to any specific front-end application technology. These operations can be used by any programming environment, such as PL/SQL, Oracle Forms, Java, C++, and .NET.

Aside from that you will always need operations that are specific to the front-end technology, which might even need the specific application context to work properly (e.g., current session context in a web environment). In our APEX applications, we implement certain validations (which use session variables like v('APP_USER') or return a generated error message for the display in the page) in the user interface (UI) package, for example, FM_BOOKING_UI. Operations on APEX collections, the generation of navigational links to another page, or any other generated HTML snippets will be implemented in the UI package as well.
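As an illustration, a UI-layer validation might look like the following sketch; the fm_user.is_authorized call is a hypothetical back-end function, not part of APEX:

CREATE OR REPLACE PACKAGE fm_booking_ui
AS
   FUNCTION validate_booking (p_book_id IN NUMBER)
      RETURN VARCHAR2;
END fm_booking_ui;
/

CREATE OR REPLACE PACKAGE BODY fm_booking_ui
AS
   -- returns NULL on success, or an error message for display in the page
   FUNCTION validate_booking (p_book_id IN NUMBER)
      RETURN VARCHAR2
   IS
   BEGIN
      -- v('APP_USER') ties this code to the APEX session context
      IF NOT fm_user.is_authorized (p_login_name => v ('APP_USER'))
      THEN
         RETURN 'You are not authorized to change this booking.';
      END IF;

      RETURN NULL;
   END validate_booking;
END fm_booking_ui;
/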

This separation makes it easier to use automated testing tools on the pure back-end packages since creating a web application context for a specific testing session is not directly supported by APEX. Also, moving to a different front-end technology in later years will become easier to do. Let's face it, the applications come and go, but the data model in the back end is likely to stay for a long time. This approach helps us to focus on the maintainability of the code.

In order to make the purpose of the columns as transparent as possible (and to stay flexible with regard to their specific implementation), we no longer use the data type as part of the column name. It is quite common to see columns like CREATE_DATE, VALID_DATE, APPROVAL_BOOL, etc. We prefer a more natural way of naming columns, where you can guess the data type by reading the column name. Consider the following examples:

  • DATE: CREATED_ON (when was this record created), UPDATED_ON (when was this record updated), VALID_FROM (the record is valid from), VALID_UNTIL (the record is valid until)

  • BOOLEAN: IS_VALID (is this a valid record), IS_APPROVED (has this record been approved), IS_DELETED (has the record been deleted)

  • VARCHAR: CREATED_BY (who created this record), UPDATED_BY (who modified this record)

  • NUMBER: PRS_ID, ITEM_NO (item number), STREET_NO (street number)

Once we started focusing on clarity by eliminating ambiguity, we had to question many of our previous habits for naming columns. We even started to remove columns called NAME. Such columns are highly ambiguous, because they might refer either to an (internal and unchangeable) code or to a (possibly changing) title. Whether you agree with this specific example or not, our advice is to resolve ambiguity as much as possible.

Prefix Notation for Column Names

Although we don't want to enforce any specific naming conventions, please consider the following suggestion; it has increased the transparency of our code manifold.

In order to troubleshoot any user interface dialog, you will need to understand where the data comes from and where the data is written to. You will have to dig through possibly multiple layers of views and materialized views to figure out which field in the user interface dialog or report maps to which field in the database.

Let's consider the following data model to highlight some of the challenges that typically arise. This data model is implemented using a classical approach for naming the database objects in contrast to our suggested approach (see Figure 9-10).

Figure 9-10. Classical naming convention

In the classical approach, all table names are in singular form and the table alias is not used as a prefix for all columns. The data model is fine; it just lacks some of the advantages of a prefix notation.

Based on this data model you want to create an APEX page displaying all details for a specific booking. For that purpose, first create a view to join the relevant tables and display all relevant details, as shown in Listing 9-8.

Listing 9-8. View ODT_BOOKING_V

CREATE OR REPLACE FORCE VIEW ODT_BOOKING_V
AS
SELECT BOOKING.ID,
       BOOKING.RESOURCE_ID,
       BOOKING.LOCATION_ID,
       RES.RESOURCE_NAME,
       RESTYPE.DESCRIPTION,
       LOC.CITY,
       COUNTRY.COUNTRY,
       COUNTRY.ID COUNTRY_ID,
       USR.LAST_NAME,
       BOOKING.PERIOD_START,
       BOOKING.PERIOD_END,
       BOOKING.MODIFIED_BY,
       BOOKING.MODIFIED_DATE
  FROM ODT_BOOKING BOOKING
       INNER JOIN ODT_RESOURCE RES
          ON (RES.ID = BOOKING.RESOURCE_ID)
       INNER JOIN ODT_LOCATION LOC
          ON (LOC.ID = BOOKING.LOCATION_ID)
       INNER JOIN ODT_RESOURCE_TYPE RESTYPE
          ON (RESTYPE.ID = RES.RESOURCE_TYPE_ID)
       INNER JOIN ODT_COUNTRY COUNTRY
          ON (COUNTRY.ID = LOC.COUNTRY_ID)
       INNER JOIN ODT_USER USR
          ON (USR.ID = BOOKING.USER_ID);

When you describe this view in your development environment you will see a definition like Figure 9-11 (unless you look at the code behind the view).

Figure 9-11. View definition (classical approach)

Using the APEX wizards to generate a page on this view will result in the following items being created automatically (assuming the new page number is 8): P8_ID, P8_RESOURCE_ID, P8_LOCATION_ID, P8_RESOURCE_NAME, P8_DESCRIPTION, P8_CITY, P8_COUNTRY, P8_COUNTRY_ID, P8_LAST_NAME, P8_PERIOD_START, P8_PERIOD_END, P8_MODIFIED_BY, and P8_MODIFIED_DATE.

The columns P8_ID, P8_PERIOD_START, P8_PERIOD_END, P8_MODIFIED_BY, and P8_MODIFIED_DATE are especially ambiguous, since they can originate from one or more source tables. You can only be sure if you look at the code of the view ODT_BOOKING_V.

Now, implement the same features using the suggested naming convention. The view FM_BOOKINGS_V will look like Listing 9-9.

Listing 9-9. View FM_BOOKINGS_V

CREATE OR REPLACE FORCE VIEW FM_BOOKINGS_V
AS
   SELECT BOOK.BOOK_ID,
          BOOK.BOOK_RES_ID,
          BOOK.BOOK_LOC_ID,
          RES.RES_TITLE,
          REST.REST_TITLE,
          LOC.LOC_CITY,
          CON.CON_TITLE,
          CON.CON_ID,
          USR.USR_LAST_NAME,
          BOOK.BOOK_BOOKED_FROM,
          BOOK.BOOK_BOOKED_UNTIL,
          BOOK.BOOK_UPDATED_BY,
          BOOK.BOOK_UPDATED_ON
     FROM FM_BOOKINGS BOOK
          INNER JOIN FM_RESOURCES RES
             ON (RES.RES_ID = BOOK.BOOK_RES_ID)
          INNER JOIN FM_LOCATIONS LOC
             ON (LOC.LOC_ID = BOOK.BOOK_LOC_ID)
          INNER JOIN FM_RESOURCE_TYPES REST
             ON (REST.REST_ID = RES.RES_REST_ID)
          INNER JOIN FM_COUNTRIES CON
             ON (CON.CON_ID = LOC.LOC_CON_ID)
          INNER JOIN FM_USERS USR
             ON (USR.USR_ID = BOOK.BOOK_USR_ID);

When you describe this view in your development environment you will see a definition like Figure 9-12 (unless you look at the code behind the view):

Figure 9-12. View definition (data model 2)

It is immediately clear where each column originates, since each column name is unique across all tables of the data model.

Using the APEX wizards to generate a page on this view will result in the following items being created automatically (assuming the new page number is 8): P8_BOOK_ID, P8_BOOK_RES_ID, P8_BOOK_LOC_ID, P8_RES_TITLE, P8_REST_TITLE, P8_LOC_CITY, P8_CON_TITLE, P8_CON_ID, P8_USR_LAST_NAME, P8_BOOK_BOOKED_FROM, P8_BOOK_BOOKED_UNTIL, P8_BOOK_UPDATED_BY, P8_BOOK_UPDATED_ON.

This time it is immediately obvious that P8_BOOK_ID refers to a row in the table FM_BOOKINGS. Also, P8_BOOK_UPDATED_BY and P8_BOOK_UPDATED_ON refer to auditing information within the same table.

Using the same column name consistently throughout the data model has another advantage. If you change the contents of a specific column, you can easily find all occurrences of this column in your data model and in your APEX application.
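For example, the Oracle data dictionary and the APEX dictionary views can both be searched by column or item name; a sketch:

-- all tables and views containing the column in the data model
SELECT table_name, column_name
  FROM user_tab_columns
 WHERE column_name = 'BOOK_BOOKED_FROM';

-- all page items referencing it in the APEX applications
-- (apex_application_page_items is a standard APEX dictionary view)
SELECT application_id, page_id, item_name
  FROM apex_application_page_items
 WHERE item_name LIKE '%BOOK_BOOKED_FROM%';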

Naming Conventions for the File System

To organize the file system, first we start with a few general rules. They were established for a single purpose: we didn't want to think about how to name and structure things over and over again, but wanted to follow simple rules once agreed upon. This is just one way to do it, but the gain in transparency and productivity is significant.

  • Folder names in singular form: Most of the time the folders in your file system will be in a mix of singular form and plural form. We have agreed to write all folder names in singular form. This is neither good nor bad, just consistent and easy to follow.

  • Organization by source type, then by module: In the source tree we always start with the source type (sql, java, apex, etc.). Within each type we organize the code by module, e.g., hierarchically organized Java packages. For the SQL source part (which also includes PL/SQL) we organize the files by their respective schema. This makes it easier for the development tools to work with this layout.

The different elements of the suggested file system layout are described in more detail in Table 9-3.

Table 9-3. File System Layout Structure

Directory

Description

src

This is the top level of the source tree.

src/apex

This folder contains all APEX related files which will be uploaded into the workspace.

src/apex/app_image

In this folder we store all files (*.css, *.js, *.jpg, *.png, etc.) which will be uploaded into the workspace (dependent on a specific application). They will be referenced in your application using the #APP_IMAGES#<filename> syntax.

src/apex/plugin

If our application uses any plug-ins, we will store them here. Once you install a plug-in into your application, it will become part of the application. You don't have to deploy it on the target system because it will be included in the application export automatically.

src/apex/workspace_image

In this folder we store all files (*.css, *.js, *.jpg, *.png, etc.) which will be uploaded into the workspace (independent of a specific application). They will be referenced in your application using the #WORKSPACE_IMAGES#<filename> syntax.

src/apex_custom

This folder maps to a virtual directory on the application server; in the web server you will create a virtual directory for it: http://<server>:<port>/apex_custom/

src/apex_custom/<application name>

All files that are referenced by the application are stored here (*.css, *.js, *.jpg, *.png, etc.).

src/apex_custom/<application name>/lib

We store all explicitly referenced libraries here, even if this means that we will have multiple copies on the application server. The rationale behind this decision is to see all dependencies of the application at a glance. This way an application can be moved easily from one server to another, and you will know which files are relevant and which are not.

src/apex_custom/<application name>/<theme name>

Here we store the files that are relevant for our custom theme. If you share the theme among all applications, you should store it at src/apex_custom/<theme name> instead.

Nevertheless, we store the theme locally with the application, even in a redundant fashion. It gives you more control, and you are isolated from other developers' modifications to the shared theme; somebody might otherwise break your application when the shared theme is modified.

src/java

All Java source code files, if required.

src/jasper

Any Jasper Report definition files, if required.

src/excel

Any relevant Excel files or templates, if required.

src/sql

Here we store all files that create or modify objects in the database. Typically, this is PL/SQL code as well as SQL scripts.

src/sql/_utils

The scripts to extract the current sources of the database objects are stored here. They can be downloaded from http://www.opal-consulting.de/tools.

src/sql/<db schema 1>

All DDL scripts that we don't want to have automatically generated, or which cannot be automatically generated, will be placed here. All other statements will be automatically extracted by the scripts (located in src/sql/_utils) and copied to the directory src/sql/<db schema 1>_generated.

src/sql/<db schema 1>_generated

The scripts to automatically extract all sources from the database (located in src/sql/_utils) will copy the generated sources into this directory.

src/...

Any other source files in any technology, if required. This approach is extensible.

test

We are not yet using automated unit and regression testing on our applications, but typically we would place all files relevant for testing in this directory and possible subdirectories.

patch

This is the top-level directory for all patches. A patch is typically an incremental change to a specific version. The first patch indicates the base release.

patch/<version>

Patch for a specific version, e.g. 1.0.0.4.

patch/<version>/apex
patch/<version>/apex/workspace_image
patch/<version>/apex/app_image
patch/<version>/apex/plugin
patch/<version>/apex_custom
patch/<version>/sql
patch/<version>/sql/<db schema 1>
patch/<version>/sql/<db schema 2>

All subdirectories for a patch are structured in the same way as the corresponding parts of the source tree.

Having explained the overall structure of the file system, let's take a more detailed look at the SQL scripts in their respective directories.

In the directory patch/<version>/sql we will find a patch install script for every database schema we install a patch in (e.g., _patch_<db schema 1>.sql or, in terms of our sample application, _patch_fm.sql).

These scripts can be run in sqlplus. The sql files in the directory patch/<version>/sql/<db schema 1> have to adhere to a naming scheme as well:

  • All files in lower case: All file names have to be in lower case; upper case or mixed case is not allowed. The main reason for this is safety. Developers often work in Microsoft Windows environments. Since the Windows file system is not case sensitive, you can call another sql script using the wrong case; for example, if the file name is Script1.sql, then invoking this file as @script1.sql or @ScRiPt1.sql from another sqlplus script works there. When you then run these scripts on a Linux or UNIX platform, they will fail because those file systems are case sensitive. (A quick check for offending file names is shown after this list.)

  • Separate files for each object and for DML statements: The DDL statements for any database object are placed in a separate file, and the same holds true for DML statements. Table 9-4 details the different file types, listed in the order in which they are run in the _patch_<db schema>.sql script.

  • Consolidation: We deviate from this principle for only a few exceptions; for example, all grants and all synonyms are each placed into a single file.
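As a quick safeguard on Linux or UNIX, you can list any offending file names before the installation; the patch path below is just our sample directory:

find patch/1.0.0.3/sql -name "*[A-Z]*"

Any file reported by this command violates the lower-case convention.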

Table 9-4. File Types and Names

Category

Convention

Description

Sequences

<sequence name>.sql

For example, the file fm_id_seq.sql will contain the statement

CREATE SEQUENCE FM_ID_SEQ;

Synonyms (consuming)

_synonyms_consuming.sql

These are the synonyms that our application uses to address objects in a different database schema.

Types

<type name>.sql

Here we only create the type specification; the body implementation follows later.

Table DDL statements

<table name>.sql

Examples:

fm_bookings.sql
fm_countries.sql

This file contains all relevant DDL statements for the table except referential constraints, grants, and triggers:

  • create table

  • create index

  • create constraint (primary key, unique key, check constraint)

Table referential constraints

<table name>_ref.sql

Examples:

fm_bookings_ref.sql
fm_countries_ref.sql

Once all tables have been installed, we can safely install the foreign key constraints between the relevant tables.

Views

<view name>.sql

Examples:

fm_bookings_v.sql
 

Procedures

<procedure name>.sql

 

Functions

<function name>.sql

 

Package headers

<package name>.pks

The package specification and the package body are separated; this way we get fewer compile-time errors due to dependency issues.

Package bodies

<package name>.pkb

The package specification and the package body are separated; this way we get fewer compile-time errors due to dependency issues.

Type bodies

<type name>.tb

The type specifications and the type bodies are separated; this way we get fewer compile-time errors due to dependency issues.

Trigger

<table name>_trg.sql

Examples:

fm_bookings_trg.sql
fm_countries_trg.sql

The triggers might reference other functions or tables but cannot be referenced themselves. Thus we will get fewer compile-time errors when we install the triggers at the end of the script. Furthermore, the triggers can be automatically extracted and copied into the patch directory.

Data

<table name>_data.sql

Examples:

fm_bookings_data.sql
fm_countries_data.sql

Here we store the relevant DML scripts to insert, update, or delete data from the table.

Scripts

script_<name>.sql

Examples:

script_copy_bookings.sql
script_install_workflow.sql

All relevant custom scripts. We try to name them in the same way we would name a function or procedure.

Grants

_grants.sql

All grants (giving other users access to our data) will be placed in a single file.

This way it is a lot easier to understand the security implications of a new software version.

Synonyms (providing)

_synonyms_providing.sql

These are the private or public synonyms that our application provides.

Drop scripts

<object name>_drop.sql

For the sake of transparency, we separate the drop scripts from the creation scripts of all objects.

The patch script _patch_<db schema>.sql is the script where all other scripts are registered and run in the correct order. For example, in the patch script in Listing 9-10 we add a column to the table FM_BOOKINGS, add the same column to the view FM_BOOKINGS_V, and finally update the source code of the package FM_BOOKING.

Listing 9-10. Patch Script _patch_fm.sql

/*=========================================================================
  $Id: _patch.sql 701 2011-03-06 11:09:48Z aust.dietmar $

  Purpose  : Patch script to install a release for the application
             facility management

  $LastChangedDate: 2011-03-06 12:09:48 +0100 (Su, 06 Mar 2011) $
  $LastChangedBy: aust.dietmar $

  Date        Author          Comment
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  06.03.2011  D. Aust         Initial creation

=========================================================================*/
set define '^'
set timing on
set pagesize 50000
set linesize 80
set serveroutput on size unlimited

--#########################
define VERSION=1.0.0.3
--#########################
spool _patch_fm_v^VERSION..log
@@lib/_require_user FM
@@lib/_patch_start

-----------------------------------------------------------------------------
set define off

prompt **********************************************************************
prompt ** Sequences
prompt **********************************************************************
prompt **********************************************************************
prompt ** Synonyms (consuming)
prompt **********************************************************************

--prompt *** Synonyms (consuming)
--@@fm/_synonyms_consuming.sql
--@@lib/_pause

prompt **********************************************************************
prompt ** Types
prompt **********************************************************************

prompt **********************************************************************
prompt ** Tables
prompt **********************************************************************

prompt *** FM_BOOKINGS
@@fm/fm_bookings.sql
@@lib/_pause

prompt **********************************************************************
prompt ** Foreign Keys
prompt **********************************************************************

prompt **********************************************************************
prompt ** Views
prompt **********************************************************************

prompt *** FM_BOOKINGS_V
@@fm/fm_bookings_v.sql
@@lib/_pause

prompt **********************************************************************
prompt ** Procedures
prompt **********************************************************************

prompt **********************************************************************
prompt ** Functions
prompt **********************************************************************

prompt **********************************************************************
prompt ** Package Headers
prompt **********************************************************************

prompt *** FM_BOOKING
@@fm/fm_booking.pks
@@lib/_pause

prompt **********************************************************************
prompt ** Package Bodies
prompt **********************************************************************
prompt *** FM_BOOKING
@@fm/fm_booking.pkb
@@lib/_pause

prompt **********************************************************************
prompt ** Type Bodies
prompt **********************************************************************

prompt **********************************************************************
prompt ** Trigger
prompt **********************************************************************

prompt **********************************************************************
prompt ** Data
prompt **********************************************************************

prompt **********************************************************************
prompt ** Scripts
prompt **********************************************************************

prompt **********************************************************************
prompt ** Grants
prompt **********************************************************************

--prompt *** GRANTS
--@@fm/_grants.sql
--@@lib/_pause

prompt **********************************************************************
prompt ** Synonyms (providing)
prompt **********************************************************************

--prompt *** Synonyms (providing)
--@@fm/_synonyms_providing.sql
--@@lib/_pause

prompt **********************************************************************
prompt ** Drop scripts for all objects
prompt **********************************************************************

prompt **********************************************************************
prompt ** Important notes after the installation (post install instructions)
prompt ** (if required)
prompt **********************************************************************

----------------------------------------------------------------------------

@@lib/_patch_end

set define '^'
host find "ORA-" _patch_fm_v^VERSION..log
host find "SP2-" _patch_fm_v^VERSION..log
host grep "ORA-" _patch_fm_v^VERSION..log
host grep "SP2-" _patch_fm_v^VERSION..log

spool off
exit

To make the installation foolproof, we make sure the script is executed in the database as the intended user. We ensure that by calling the script @@lib/_require_user <database user> (in our case, @@lib/_require_user FM).

PROMPT *** This script must be run as ^1.  This script will exit
PROMPT *** below if run as any other user.
set verify off;
whenever sqlerror exit;
select 'User is ^1' check_user from dual
where 1 = decode(USER, '^1', 1, 'NOT');
whenever sqlerror continue;
set verify on;

This script errors out when it is run by the wrong user: for any other user, DECODE returns the string 'NOT', the implicit conversion to a number raises ORA-01722, and WHENEVER SQLERROR EXIT aborts the installation.

In the scripts @@lib/_patch_start and @@lib/_patch_end we register in the database the application version as well as information about when the script was run. Here we also stop (and later re-enable) database jobs or Advanced Queuing processes, if required. Once again, these are safety measures; we must not forget them when installing in production. This is also where we take the application offline, so that it is no longer available to end users.
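To illustrate, a minimal sketch of such a start script might look as follows. The table XLIB_PATCHES and the configuration table XLIB_CONF_ENTRIES are assumptions on our part, not the actual scripts from the toolset:

-- lib/_patch_start (sketch): register the patch and take the application
-- offline by switching it into the restricted mode described later on.
PROMPT *** Starting installation of version ^VERSION

INSERT INTO xlib_patches (patch_version, started_on, started_by)
VALUES ('^VERSION', SYSDATE, USER);

UPDATE xlib_conf_entries
   SET conf_value = '1'
 WHERE conf_name = 'RESTRICTED_MODE';

COMMIT;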

Rollout Procedures

Putting all the different pieces together, we have established procedures for how to make changes to the application. They consist of the following elements (see Figure 9-13):

  • Multiple Internal Revisions: Here you iterate multiple times with the customer and the test team; once you pass the acceptance test, you can move the new release to production. Each revision consists of the following steps:

    • Initialize Patch: First you set up everything to work on the new patch: create the new directory structure in the file system and set the new version number in the patch script as well as in the APEX application.

    • Make Changes: All relevant changes to database objects (DDL or DML statements) must be registered in the patch script.

    • Finalize the Patch: Spool the sources for all database objects into the file system again and copy the files modified since the last patch to the patch directory. We use Subversion and TortoiseSVN to highlight the changes and to make sure our patch script is complete. Export the current APEX application file and copy this and all other relevant files to the patch directory.

    • Install in TEST: Install the new version in the test environment, as many times as needed until all problems with the scripts are fixed. Use Oracle Flashback to revert to the state of the production version before running the corrected patch script again.

  • Update Documentation: For the current release, you have already specified and documented the new or changed features. Now you need to update the operation guide as well as the complete documentation for the whole application.

  • Install the Release in the Production Environment: This covers the installation process itself.

Figure 9-13. Rollout procedures overview

Initialize Patch

The whole process starts with the initialization of the patch. You create a new directory for this patch version below the directory patch, e.g., patch/1.0.0.3. To quickly create the required subdirectories, use a preconfigured directory tree stored in a zip file (template.zip), as shown in Figure 9-14. You could also use an ANT script (http://ant.apache.org/) to create all required directories on the fly.

Figure 9-14. Directory structure for the new patch

Now you can modify the patch script. First of all, set the new patch version in the script, for example:

define VERSION=1.0.0.3

Next, increase the version number in the APEX application (installed in the development environment). One of our basic principles dictates that everything needs some sort of versioning information. For that purpose, we write the version information into the APEX application in two locations, as Figure 9-15 shows.

Figure 9-15. Application Properties – Enter version information

You enter the version information as part of the application name as well as directly in the version field. This way you can see the version information immediately when you log on to the application builder in the workspace (see Figure 9-16).

Figure 9-16. Version information visible in the application builder

To use the VERSION field in the application properties you first need to modify the page template to reference the placeholder #APP_VERSION#, which can only be used in the section FOOTER (see Figure 9-17).

Figure 9-17. Referencing the placeholder #APP_VERSION# in the page template footer

After this change, the version information is now visible on every page (except on the login page), as shown in Figure 9-18.

Figure 9-18. Version information displayed on every page

Now everything is set up to work on any change. You can immediately see that this is a new version.

Making Changes

The next step is to make actual changes to the application. The key is the _patch_<db schema>.sql file. All script files need to be registered there.

We differentiate between two types of modifications:

  • Replacement Change: A replacement change completely replaces a previous version. For example, you can install a new version of a package, procedure, or view. All you need is the current, complete version of the object.

  • Incremental Change: An incremental change requires writing a modification statement that is specific to a certain version. For example, suppose you want to add a column to an existing table in your schema. You cannot simply replace the previous version of the table with the new one, because this would imply dropping and recreating the table, and you would lose the existing data.

Propagating replacement changes is quite easy. You only need to make a note of all objects that have been modified in the current patch and install those objects in the target system. Create an empty script for each package, view, trigger, etc., and register it in the _patch_<db schema>.sql script. When the patch is finalized, all sources in the database schema are spooled into the file system and can be included in the patch.

To illustrate the concept, we'll revisit the patch script from Listing 9-10. For version 1.0.0.3, you modified the package FM_BOOKING. The package consists of a package header as well as a package body. Thus, you create two empty files with the names fm_booking.pks and fm_booking.pkb and store them in the directory patch/1.0.0.3/sql/fm.

Next, register the two scripts in the corresponding sections of your patch script patch/1.0.0.3/sql/_patch_fm.sql:

prompt **********************************************************************
prompt ** Package Headers
prompt **********************************************************************

prompt *** FM_BOOKING
@@fm/fm_booking.pks
@@lib/_pause

prompt **********************************************************************
prompt ** Package Bodies
prompt **********************************************************************

prompt *** FM_BOOKING
@@fm/fm_booking.pkb
@@lib/_pause

When you finalize the patch later, these files will be generated from the database and copied here. At this point you only need to register the package as being part of the patch. You can keep modifying it until you are done; you don't have to store all intermediate versions here.

The incremental changes are more difficult. You need a script that implements the incremental change. You can either write it manually or have it generated by the development environment. For example, when you add the column BOOK_CAN_BE_CANCELED_UNTIL to the table FM_BOOKINGS, you can modify the table (after exclusively locking it) directly using Quest Toad (or many other IDEs) and then have the corresponding statement generated by pressing the Show SQL button:

ALTER TABLE FM_BOOKINGS
 ADD (BOOK_CAN_BE_CANCELED_UNTIL  DATE)
/

Now you can save that statement in the file patch/1.0.0.3/sql/fm/fm_bookings.sql. Next, register the script in the corresponding section of your patch script patch/1.0.0.3/sql/_patch_fm.sql:

prompt **********************************************************************
prompt ** Tables
prompt **********************************************************************

prompt *** FM_BOOKINGS
@@fm/fm_bookings.sql
@@lib/_pause

Note

All script files are registered manually in the _patch_<db schema>.sql script.

Finalizing the Patch

The finalization of the patch is for the most part just the extraction of the current sources from the database in order to include them in the patch script and make sure that you have all changes. You can extract the sources manually using development environments like Quest Toad, Oracle SQL Developer, or the PL/SQL Developer from Allround Automations. They enable you to spool all sources from a certain database schema into the file system.

Another option is to extract the sources using custom scripts based on the Oracle standard package DBMS_METADATA. This is the approach we have taken.

First, extract the DDL statements for all objects in the specified schema into the table XLIB_DDL_STATEMENTS. Listing 9-11 shows parts of the script; the full script can be downloaded at http://www.opal-consulting.de/tools.

Listing 9-11. Script to Extract the DDL Statements for All Database Objects

TRUNCATE TABLE xlib_ddl_statements;

col CURR_USER new_value USR
select user CURR_USER from dual;

SET timing off

SET LINESIZE 132 PAGESIZE 0 FEEDBACK off VERIFY off TRIMSPOOL on LONG 1000000
set sqlblanklines on
COLUMN ddl_string FORMAT A100 WORD_WRAP
COLUMN row_order FORMAT 999 NOPRINT

EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'PRETTY',true);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',true);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'REF_CONSTRAINTS',false);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'CONSTRAINTS_AS_ALTER',true);

alter session set NLS_NUMERIC_CHARACTERS='.,';

------------------------------------------------------------------------------
PROMPT -- PROFILES
------------------------------------------------------------------------------
INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT PROFILE, 'PROFILE', 'PROFILE',
          DBMS_METADATA.get_ddl ('PROFILE', profile) ddl_string
     FROM (SELECT PROFILE
             FROM dba_users
            WHERE username = '&&USR' AND profile != 'DEFAULT') pr
/
------------------------------------------------------------------------------
PROMPT -- CREATE USER
------------------------------------------------------------------------------
INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT 'USER', 'USER', 'USER', DBMS_METADATA.get_ddl ('USER', '&&USR') DDL
     FROM dual;

...

------------------------------------------------------------------------------
PROMPT -- FUNCTIONS
------------------------------------------------------------------------------
INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT NAME, 'FUNCTION', NAME,
          DBMS_METADATA.get_ddl ('FUNCTION', NAME, '&&USR') ddl_string
     FROM (SELECT DISTINCT NAME
                      FROM user_source
                     WHERE TYPE = 'FUNCTION');

------------------------------------------------------------------------------
PROMPT -- TRIGGERS
------------------------------------------------------------------------------
INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT trigger_name, 'TRIGGER', table_name,
          DBMS_METADATA.get_ddl ('TRIGGER', trigger_name, '&&USR') ddl_string
     FROM (SELECT trigger_name, table_name
             FROM user_triggers
            WHERE trigger_name NOT LIKE 'BIN$%');

------------------------------------------------------------------------------
PROMPT -- PACKAGES
------------------------------------------------------------------------------
INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT NAME, 'PACKAGE_SPEC', NAME,
          DBMS_METADATA.get_ddl ('PACKAGE_SPEC', NAME, '&&USR') ddl_string
     FROM (SELECT DISTINCT NAME
                      FROM user_source
                     WHERE TYPE = 'PACKAGE');

INSERT INTO xlib_ddl_statements
            (ddl_object_name, ddl_object_type, ddl_base_object, ddl_text)
   SELECT NAME, 'PACKAGE_BODY', NAME,
          DBMS_METADATA.get_ddl ('PACKAGE_BODY', NAME, '&&USR') ddl_string
     FROM (SELECT DISTINCT NAME
                      FROM user_source
                     WHERE TYPE = 'PACKAGE BODY');

...

Once the sources are extracted into the table XLIB_DDL_STATEMENTS, you can remove all irrelevant objects by deleting the respective rows from the table. Next, you spool all relevant objects into the file system with another set of scripts.
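As an illustration, spooling a single object from this table could look like the following sketch; the actual toolset scripts are more generic, and the target file name is just an example:

-- Spool one package body from XLIB_DDL_STATEMENTS into a file.
set long 1000000 pagesize 0 linesize 1000 trimspool on feedback off heading off

spool fm_generated/fm_booking.pkb
SELECT ddl_text
  FROM xlib_ddl_statements
 WHERE ddl_object_type = 'PACKAGE_BODY'
   AND ddl_object_name = 'FM_BOOKING';
spool off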

Using this script-based approach, you have a lot of flexibility in how to structure the file system as well as which statements are grouped together into which file. Also, you have much better control over the naming of the files. This is a clear advantage over the built-in methods that the development environments provide out of the box. Furthermore, you don't have to make any manual adjustments to the export, which is always error prone.

Once the sources are extracted from the database and spooled into the file system, you may have new or changed objects. Subversion will tell you exactly which database objects have been modified or newly created, as shown in Figure 9-19.

Figure 9-19. Changes highlighted by TortoiseSVN/Subversion

All the changed files can then be copied into the current patch directory. This way you have all the scripts for the relevant database objects included in your patch.

After that, you still need to export the current APEX application into the directory patch/<version>/apex. The file name has the structure f<application id>_<application name>_v<version>.sql, for example, f20100629_alm_demo_v1.0.0.3.sql. You will need all relevant CSS, JavaScript, and other files as well.
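If you prefer to script this step as well, the APEXExport utility that ships in the utilities directory of the APEX distribution can export the application from the command line. The connection data, password, classpath, and application id below are placeholders:

java -cp .:ojdbc6.jar oracle.apex.APEXExport -db localhost:1521:orcl -user fm -password <password> -applicationid 20100629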

Installation in the Test Environment

To make the application available to the test team and the business users you have to install it on the test system. You want to make the overall approach as safe as possible by minimizing room for errors. For this reason, we advocate an almost completely script-driven approach.

How can you make sure your scripts run flawlessly in production? You have to resolve all issues when installing on the test system. The problem is, once you run any script for the first time, the state of the database has changed. Even if you fix the issues in the script, you can never be sure that it will also run flawlessly in production.

Once the scripts have been fixed and all issues resolved, you need one final test on a clone of the production system. This test must run flawlessly. To accomplish that, you have to reset the state of the database and undo the changes caused by running the patch script. You could restore a previous backup of the database, but that can be quite problematic and simply take a lot of time. Often you don't even have the privileges to restore a backup on the test system and have to wait for the DBAs to do it.

The most convenient way is to use the Flashback feature of the Oracle Database. Once configured, you can easily set a restore point:

create restore point BEFORE_REL_2_3_0_1;

You can list all restore points by running the command:

SELECT name FROM v$restore_point;

After running the patch script and fixing all issues, you can easily reset the database state by issuing the following commands as the user SYS:

shutdown immediate;
startup mount;
flashback database to restore point BEFORE_REL_2_3_0_1;
alter database open resetlogs;
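If flashback logging is not enabled on the test database, a guaranteed restore point achieves the same effect, provided the database runs in archivelog mode. Note that a guaranteed restore point retains its flashback logs until you drop it:

create restore point BEFORE_REL_2_3_0_1 guarantee flashback database;

-- after the final successful test:
drop restore point BEFORE_REL_2_3_0_1;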

Repeat this cycle of running the scripts, fixing them, and resetting the database until the scripts install without any problems.

The last step in this procedure is to commit the modified files into Subversion, so that the changes are visible in the next patch.

Update the Documentation

At this point you can update the documentation. Up until now, all documentation for this specific release described only the differences (added, removed, or changed functionality) with respect to the previous release. In this step you incorporate this delta documentation into the overall documentation. This is the only way to keep the overall system documentation current, with a complete description of use cases, function points, data model, system design, and the like.

The release history is recorded in the operation guide. There, you enter the exact version and the date the version was installed in the production system. We have included an example (see Table 9-5); you can see that on 11.11.2010 we started to deploy a second application (Application Y) in production that is directly dependent on version 3.2.0 of Application X.

Table 9-5. Release Version Overview

Application X      Application Y      Installed in Production
-------------      -------------      -----------------------
3.2.0              1.0.0              11.11.2010
3.1.0                                 28.06.2010
3.0.0                                 19.04.2010
2.3.1                                 19.01.2010
2.3.0                                 13.11.2009
2.2.0                                 28.10.2009
2.1.0                                 31.08.2009
2.0.0                                 12.06.2009
1.1.0.4                               11.06.2008
1.0.0                                 16.08.2007
0.6                                   14.06.2007

This simple table gives you enough information to understand the dependencies and to see which features were rolled out when. The major new features of each release are listed in a separate section.

From our point of view, the technical documentation should be generated completely: the data model from the data dictionary (the comments on tables, views, columns, etc. should be used), the APEX application from the APEX dictionary, and the database packages perhaps using pl/doc or a similar tool.
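As a simple example of generating documentation from the data dictionary, the following query (a sketch; it assumes the column comments are actually maintained) produces the raw material for a data model reference:

-- List every table column together with its comment.
SELECT table_name, column_name, comments
  FROM user_col_comments
 ORDER BY table_name, column_name;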

Installation in the Production Environment

In the final step, the new release is installed on the production system. First of all, you need a backup of the production system in case anything goes wrong.

There are multiple ways to perform a database backup. You should discuss the best approach with the responsible database administrator (DBA) for the production system. It will vary depending on the size of the application, dependency on other schemas, and other factors.

When you run the _patch_<db schema>.sql install script, the first step is to make the application unavailable to end users. We don't use the built-in mechanisms of APEX—we have rolled our own. This gives us a great deal of flexibility.

Making the application unavailable is no longer a manual process but is done automatically in the _patch_<db schema>.sql install script by changing a configuration parameter in the application schema.

If you used the normal availability options in the application preferences, you would have to remember to set the newly installed application to unavailable, too. In many cases you want to use the maintenance window to install and test the application before it is released to end users. Also, the developers who installed the application in the production workspace should be allowed to run the application at all times. In addition to the developers, you may want to enable specific end users to perform a final test on the production system.

To set this up, use a configuration table and a few parameters regarding this restricted mode as shown in Table 9-6.

Table 9-6. Configuration Parameters for the Restricted Mode

Parameter            Value                          Description
------------------   ----------------------------   -----------------------------------------
RESTRICTED_MODE      1                              0 = application is not in restricted mode;
                                                    1 = application is in restricted mode
RESTRICTED_USERS     192.168.0.10,192.168.0.20      List of IP addresses that are allowed to
                                                    access the application while in
                                                    restricted mode
RESTRICTED_MESSAGE   The application is currently   Message to be displayed when an
                     unavailable due to             unauthorized user calls the application
                     maintenance activities.

In your APEX application you have to create an application process (process point Before Header) to check the availability of the application at all times, as shown in Listing 9-12.

Listing 9-12. Before Header Application Process to Check the Application Availability

BEGIN
   IF XLIB_CONF.GET_VALUE ('RESTRICTED_MODE') = '1'
   THEN
      IF IS_RUN_IN_BUILDER = FALSE
         AND INSTR (NVL (XLIB_CONF.GET_VALUE ('RESTRICTED_USERS'), '#'),
                    OWA_UTIL.GET_CGI_ENV ('REMOTE_ADDR')) = 0
      THEN
         HTP.P (XLIB_CONF.GET_VALUE ('RESTRICTED_MESSAGE'));
         HTP.P (
               '<br /><span style="color:white;">'
            || OWA_UTIL.GET_CGI_ENV ('REMOTE_ADDR')
            || '</span>');

         APEX_APPLICATION.G_UNRECOVERABLE_ERROR := TRUE;
      END IF;
   END IF;
END;

The function xlib_conf.get_value returns the current value of the given configuration parameter (e.g., RESTRICTED_MODE) from the configuration table.
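The chapter does not show the implementation of xlib_conf; a minimal sketch could look like the following. The table and column names are assumptions on our part; note that the table cannot also be called xlib_conf, since tables and packages share a namespace in Oracle:

CREATE TABLE xlib_conf_entries (
   conf_name    VARCHAR2(100) PRIMARY KEY,
   conf_value   VARCHAR2(4000)
);

CREATE OR REPLACE PACKAGE xlib_conf
IS
   FUNCTION get_value (p_conf_name IN VARCHAR2)
      RETURN VARCHAR2;
END xlib_conf;
/

CREATE OR REPLACE PACKAGE BODY xlib_conf
IS
   FUNCTION get_value (p_conf_name IN VARCHAR2)
      RETURN VARCHAR2
   IS
      l_value   xlib_conf_entries.conf_value%TYPE;
   BEGIN
      SELECT conf_value
        INTO l_value
        FROM xlib_conf_entries
       WHERE conf_name = p_conf_name;

      RETURN l_value;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         RETURN NULL;   -- unknown parameters are treated as not set
   END get_value;
END xlib_conf;
/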

Using the function is_run_in_builder (see Listing 9-13), you can determine whether the application is being run from within the application builder, i.e., by someone with developer privileges in the current workspace.

Listing 9-13. Function is_run_in_builder

FUNCTION is_run_in_builder
   RETURN BOOLEAN
IS
BEGIN
   RETURN APEX_APPLICATION.g_edit_cookie_session_id IS NOT NULL;
END;

You can determine the caller's IP address by calling the function OWA_UTIL.GET_CGI_ENV ('REMOTE_ADDR') and check it against the list of privileged IP addresses while in restricted mode. After the final successful test, you can release the application by simply setting the configuration parameter RESTRICTED_MODE to 0.

Summary

In this chapter we detailed our best practices for configuration and lifecycle management of an APEX application. You have seen the components that make up our proposed solution.

As you have seen, the typical approach to version control of your sources using a local sandbox approach doesn't work well in a database environment. Although our approach does have a downside—you cannot tell exactly which developer made which changes to the source code, for example—in a typical in-house software development project this is not the biggest concern. But we certainly do have problems when working in an Oracle instance with a team of developers. We might overwrite each other's changes when working on the same packages concurrently. Furthermore, we have to make sure to propagate all relevant changes, from the development environment to the test and production environment. Since we might have to propagate 100 or more changes within a single patch set, we need an approach which is consistent and safe. We were clearly focusing on these real-world problems when we designed this approach.

Our solution still contains some manual procedures (like registering the scripts and creating the files manually), but we continue to work on automating some of these currently manual procedures. You can find the ongoing development here: http://www.opal-consulting.de/tools.

Still, we have learned to value this approach: it has proven its usefulness in multiple real-world projects and has definitely improved the quality and maintainability of our applications.
