CHAPTER 9


Lifecycle Management

by Nick Buytaert

The practice of software development these days includes much more than just writing lines of code. To successfully deliver high-quality applications on time and within budget, modern development teams rely on practices from the field of application lifecycle management (ALM). ALM encompasses the coordination of a software product from its initial planning through retirement and includes all sorts of practices and techniques, such as the following:

  • Project and requirements management
  • Software development
  • Test management and quality assurance
  • Build automation and deployment
  • Release management
  • Operations

ALM is thus a broad term that spans the full range of activities occurring throughout a project’s life. The idea for a new application forms the starting point in the ALM process, while the lifecycle ends once the application loses its business value and is no longer used. ALM is therefore a continuous process, which can be divided into three distinct aspects. You typically maximize the business value of your software by performing well in all three aspects.

  • Governance: Make decisions about the project and ensure maximum business value realization.
  • Development: Create the software product and maintain it further.
  • Operations: Run, manage, and monitor the application.

I will not cover the entire ALM stack because that would take us too far afield. This chapter will mainly focus on technical topics with the intention of facilitating the overall APEX development and deployment process. I will introduce, for example, several build automation techniques that can significantly improve the quality and efficiency of your day-to-day development activities. The main part of this chapter guides you through the steps of incorporating a set of powerful tools and practices that take APEX development to the next level.

Challenges

Oracle Application Express is an easy-to-use development framework intended to rapidly create database-centric web applications on top of the Oracle Database. Developing applications with APEX is not rocket science, but that does not mean there are no challenges associated with it. The following subsections describe the typical challenges faced by a team of developers working on an APEX project. It might be interesting to note that some of these challenges even apply when working on a single-person project.

Deploying Database Changes

Oracle APEX development is tightly coupled to database development since you typically work with the concept of a thick database. The thick database approach is considered a general best practice and refers to putting as much code as possible in the database layer. You should treat APEX as the front-end or presentation layer, meaning that it should not contain any form of business logic.

As a result of the thick database approach, many different database objects are created and manipulated during the development phase: tables, views, packages, triggers, sequences, indexes, and so on. The difficulty, however, arises when a new version of the application has to be deployed. This inevitable and critical moment requires you to collect all database object changes since the previous release.

The task of collecting these changes is often laborious and error-prone, especially when you did not explicitly track any of your database changes during development. This sort of manual operation as part of the deployment process should be avoided at all times. Fortunately, there are ways to drastically minimize the risk of deploying database changes, which you will read about later in this chapter.

Collaborative Development

It is common for a development team to run into concurrency issues when working in a shared development environment, which happens to be the most widely used strategy in APEX projects. In a shared setup, developers work simultaneously on the same application and source code. A classic example of concurrency problems in APEX occurs when one of the developers unintentionally invalidates a widely used PL/SQL package. As a result of this action, all database objects and APEX pages that depend on that package will be invalidated too. It is therefore likely that fellow team members will suffer the consequences of this situation, resulting in unnecessary frustration and lost time. Another example is accidentally overwriting someone else’s changes when working on the same piece of code in the database.

The use of local development environments overcomes the shortcomings of the shared development strategy. In a local development setup, each developer has his or her own separate work area, making it impossible to run into concurrency issues. This is a common practice in most mainstream development technologies. Oracle APEX, however, is a different story since it is designed to behave better in a shared setup than in a local one. The main reason is that APEX application export files cannot easily be merged in a version control system.

As you probably know, an application export file is nothing more than a series of black-box PL/SQL calls that rebuild all pieces of the exported application in a target workspace. In a local development setup, developers would have to include the large application export file with almost every commit to source control in order to share their individual changes to the APEX application with the rest of the team. With this approach, you will quickly find yourself having to manually merge changes from other developers in the export file. The task of resolving these so-called merge conflicts is time-consuming and error-prone, simply because APEX export files are not supposed to be edited manually. Given this restriction, most development teams have settled on using a shared setup, accepting the downsides that come with it.

Another argument in favor of using a shared development environment is the APEX application builder, which includes several helpful features that facilitate team development. You can, for example, lock pages to restrict other developers from modifying the pages you are currently working on. The application builder also applies optimistic locking to prevent developers from overwriting each other’s changes. Figure 9-1 shows an example error message generated by the application builder when running into concurrency issues.


Figure 9-1. Optimistic locking error message

Parallel Development

In the early stages of a project, the development team typically follows a linear progression in which each new version of the application is based on the prior one. Parallel development occurs as soon as you branch off the main development line for an extra development path. Such an extra path might be necessary, for example, when having to fix critical bugs in the production environment while the rest of the team has already started working on the next release.

Fixing these bugs directly in production is definitely not a good idea because there is a chance you will make things even worse, with a direct impact on the end users. The far more appropriate solution consists of checking out the tagged production version from source control, fixing the bugs in a separate development environment, and finally propagating the changes to production.

Unfortunately, you will run into trouble when trying to merge the changes made on your branch back into the master branch. In other words, something is preventing you from easily incorporating the production bug fixes into your main line of development. Again, as you might have guessed, it is the application export file that spoils the merge process between the two branches. APEX projects therefore typically follow a linear approach, in which parallel development is avoided as much as possible.

An underutilized feature in the area of parallel development is build options, which are part of your application’s shared components. Build options allow you to enable or disable various application components at runtime or during application export. The APEX development team itself uses build options extensively, especially when building early-adopter releases where certain components may not be ready for release yet. Build options have two possible values: include or exclude. If you specify a component as being included, then the APEX engine considers it part of the application definition. Conversely, if you specify a component as being excluded, then the APEX engine treats it as if it did not exist.

The drawback of build options is the level of granularity. Not all components, let alone attributes, in the application builder can have a build option assigned. This shortcoming is probably a reason why build options are not used more often. However, build options can come in handy when dealing with new, separate features that you want to exclude from an upcoming release.

Enterprise APEX Development

Creating applications with APEX is rarely associated with enterprise web development. Oracle itself positions Application Express as a so-called Rapid Application Development (RAD) tool but recommends other technologies as soon as project complexity and size increase. There are limitations, of course, but I do believe that the RAD concept can be taken a step further so that APEX becomes capable of dealing with more complex projects.

However, enterprise-level development with APEX requires you to use a more sophisticated development approach and tool set. This chapter covers a solution that has proven to be effective in real-world APEX projects. The solution combines APEX with a set of powerful third-party tools, resulting in a robust and agile development technique. The main part of this chapter will focus on how one can take advantage of these tools to turn the APEX development and delivery process into a well-oiled machine.

The Demo Project

I have set up a demo project on GitHub, which I will use for reference in the upcoming sections. It is publicly available at the following URL:

https://github.com/nbuytaert1/orclapex-maven-demo

The demo project includes a simple APEX 5.0 application built on top of two tables you should be familiar with: EMP and DEPT. The project has been configured with several third-party tools, which make it possible to deploy all the application’s components simply by executing a single command.

If you want to try the techniques from the demo project yourself, make sure you have access to an Oracle Database running APEX 5.0 or higher. It is perfectly possible, however, to apply the same techniques to older versions of Application Express. Just keep in mind that you cannot import an APEX 5.0 application export into an APEX 4.x instance. Figure 9-2 shows the demo project’s data model.


Figure 9-2. The demo application’s data model

Note  Please be aware that it is not possible to use Oracle’s apex.oracle.com evaluation service in combination with the techniques described in this chapter.

Extending the APEX Development and Deployment Process

The objective of this section of the chapter is to set up and configure an automated build system for APEX applications that empowers both the development and deployment processes. This approach will allow you to receive instant feedback on the stability of the project’s code base, and it makes application deployment a breeze. However, before getting to that point, it is important to start with the basics and think of what a standard APEX application consists of. The best way to identify the main building blocks is by going through the APEX packaging process.

APEX Application Packaging

Packaging an application in APEX is mainly taken care of in the Supporting Objects section. This utility makes it possible to easily define the steps required to successfully deploy an application. Prior to APEX 5.0, the Supporting Objects section could be used for the deployment of two different types of objects.

  • Database scripts (DDL and DML)
  • Static files (CSS, images, JavaScript, and so on)

Starting from APEX 5.0, the Supporting Objects section has been limited to database scripts only. Static files are now automatically part of the application export file, something that was not the case in previous versions of Application Express. During the application export wizard, there is an option that determines whether you want to export the Supporting Objects as well. Figure 9-3 shows this setting on the Export Application page.


Figure 9-3. The preference that determines whether to export supporting object definitions with your application

Setting this option to Yes or Yes and Install on Import Automatically results in a single export file containing all required application components and database code for deployment. The Supporting Objects can then be imported during the application import wizard or at a later time. This deployment technique, where everything is packaged into a single SQL file, is what APEX defines as a custom packaged application.

Database Scripts

Deploying an APEX application nearly always involves the execution of database scripts to manage the schema objects on which the APEX application is built. Data Definition Language (DDL) statements allow you to define the structure of database objects. Data Manipulation Language (DML) statements, on the other hand, control the data within objects.

APEX 5.0 introduces an interesting new feature to simplify DDL script management. When creating an installation script in Supporting Objects, you have the ability to associate a script with one or more database objects, as shown in Figure 9-4.


Figure 9-4. The Create from Database Object installation script method

The install script will then automatically contain the DDL statements for all associated database objects. Furthermore, the script remembers the database objects to which it has been linked, making it possible to synchronize the script’s content at a later time. This can be achieved by checking the appropriate scripts in the Installation Scripts overview, followed by clicking the Refresh Checked button, as shown in Figure 9-5. Please note that other script types cannot be refreshed this way.


Figure 9-5. Refresh checked scripts

This feature has certainly improved the ease of use and maintainability of the database scripts in Supporting Objects. Dealing with the project’s database changes through the Supporting Objects mechanism is therefore a valid solution, on the condition that the number of database objects is not too large. As soon as you start working with the thick database approach, I recommend you manage the database changes through a database migration tool. Liquibase (www.liquibase.org) is my personal favorite in this area and will be discussed later in the chapter.

Static Files

Prior to APEX 5.0, the Supporting Objects section was also capable of including static files in the form of installation scripts. Static files are defined in the Files section under Shared Components. The tricky thing about these files was that they were not automatically part of the application export file, not even if you explicitly associated the file with the application you were exporting. Only the static files included in Supporting Objects were part of the export file.

You also had to keep in mind that there was no actual relation between the file in Shared Components and its associated script in Supporting Objects. Any changes made to the files were not reflected in the scripts. Forgetting to update the scripts in Supporting Objects, after making changes to the files in Shared Components, was a common mistake during the deployment process. This was especially the case when you had to deal with a fair number of files.

The decision to disconnect static files from Supporting Objects and make them automatically part of the application export has positively influenced application deployment. You no longer have to worry about whether all installation scripts for static files have been updated correctly before deployment. A number of improvements have also been made to the Files section under Shared Components in APEX 5.0. The zip upload and download feature and the ability to organize files into a directory hierarchy have been valuable additions, making the Files section only more interesting to use.

As shown in Figure 9-6, I have set up a directory hierarchy containing five different folders: css, images, images/contribute, images/oracle, and js.


Figure 9-6. An example directory hierarchy in the Files section

One pain point in the past for static files in Shared Components was browser caching. Prior to APEX 5.0, any static file you referenced in your APEX application had to be downloaded to the client’s browser by calling the get_file database procedure. This approach made browser caching unreliable, leading to slower-loading pages and unnecessary extra load on the database. It was therefore a common practice to store the application’s static files on the web server, making them easier to cache. In APEX 5.0, however, web browsers can seamlessly cache files uploaded into Shared Components.

As you can see in Listing 9-1, there is a clear difference in file referencing between APEX 4.2 and APEX 5.0. Version 4.2 uses the get_file procedure, while 5.0 takes advantage of relative file URLs.
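To give a rough idea of the difference, the following sketch contrasts how a reference to an uploaded app.css file could end up in the rendered page. The markup is illustrative only; the exact URLs and parameters vary per instance and version and are shortened here.

  <!-- APEX 4.2: the file reference renders as a call to the get_file procedure -->
  <link rel="stylesheet" href="wwv_flow_file_mgr.get_file?p_fname=app.css&..." type="text/css">

  <!-- APEX 5.0: the same reference renders as a relative file URL that browsers can cache -->
  <link rel="stylesheet" href=".../css/app.css" type="text/css">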

Applicability

I have found that the packaged application concept works well for simple and small-sized applications, but it can become difficult for mid- to large-scale applications that require many different scripts to be managed. Also, keep in mind that small applications can quickly evolve into larger ones over time. Taking into account these considerations, I prefer to leave aside the Supporting Objects section whenever I can.

Not using the Supporting Objects section requires you to look for an alternative approach that simplifies and, at the same time, enhances application deployment. You need a solution that is able to manage all parts of a packaged APEX application. After going through the Supporting Objects mechanism, you can conclude that a packaged application consists of the following three main parts:

  • The APEX application itself (in the form of a SQL export file)
  • Database scripts
  • Static files (part of the application export since APEX 5.0)

The alternative approach that I will introduce in the next sections not only focuses on the deployment part but also improves the overall development strategy in an aim to tackle some of the challenges you saw at the beginning of the chapter. To accomplish this, you will be using a collection of third-party tools in combination with Application Express. All tools covered in this chapter are open source and completely free to download. The first one to discuss is Apache Maven because it plays a central role in the tool set.

Apache Maven

Maven is a software project management and comprehension tool designed to automate a project’s build process. It is important to fully understand what the term build process is all about. In general, it refers to the steps required to construct something that has an observable and tangible result. A process is typically characterized by its ability to be automated. In other words, when I talk about a build process in terms of software development, I am talking about the—ideally automated—tasks required to put together the parts of a software product. This process is not only about code compilation and deployment, as you might think now. It also includes tasks such as running tests, generating documentation, and reporting. All these different tasks can be managed through Maven from one central piece of information.

Maven’s primary goal is to allow a developer to comprehend the complete state of a development effort in the shortest period of time. To attain this goal, Maven attempts to deal with several areas of concern.

  • Making the build process easy
  • Providing a uniform build system
  • Providing quality project information
  • Providing guidelines for best-practices development

The best way to think of the build process in the context of an APEX application is by going through the parts of a packaged application, as you did in the previous section. Thus, the basic build process will consist of importing the APEX application export file into a target workspace, executing all appropriate database scripts, and optionally uploading static files to the web server in case you are not using the Files section under Shared Components.

Maven is primarily used in the Java world, but nothing stops you from using it in APEX projects. You will use only a subset of Maven’s features because some of them are applicable only to Java-based projects. The primary task of Maven in this case is the automation of the APEX application build process. First, let’s take a look at the Maven installation procedure.

Installation

Maven requires the Java Development Kit (JDK) to be installed on your machine.

  1. Download the latest JDK release from the Oracle web site.
  2. Run the JDK installer and follow the instructions.
  3. Create the JAVA_HOME environment variable by pointing to the JDK folder.
    /usr/jdk/jdk1.7.0
  4. Update the PATH environment variable by pointing to the JDK’s bin folder.
    $JAVA_HOME/bin:$PATH
  5. Validate the JDK installation.
    $ java -version
    java version "1.7.0_07"
    Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
    Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)

Now install the latest version of Apache Maven.

  1. Download the latest version of Maven at http://maven.apache.org.
  2. Extract the downloaded archive file into the directory where you want to install Maven. I have put mine in the /Users/Nick folder.
  3. Set the MAVEN_HOME environment variable.
    /Users/Nick/apache-maven-3.1.1
  4. Update the PATH environment variable.
    $MAVEN_HOME/bin:$PATH
  5. Validate the Maven installation.
    $ mvn --version
    Apache Maven 3.1.1 (0728685237757ffbf44136acec0402957f723d9a; 2013-09-17 17:22:22+0200)
    Maven home: /Users/Nick/apache-maven-3.1.1
    Java version: 1.7.0_07, vendor: Oracle Corporation
    Java home: /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "mac os x", version: "10.10.1", arch: "x86_64", family: "mac"

Project Directory Layout

One of the ideas behind Maven is to provide guidelines for best-practices development in Java-based projects. It is not possible for APEX projects to comply with these conventions, simply because there are practically no similarities between Java and APEX development. Luckily, Maven leaves enough room for deviations from these conventions.

Maven allows you to define your own best practices on how to manage an APEX project. Having these rules and applying them uniformly allows developers to move freely between different projects that use Maven and follow the same or similar best practices. By understanding how one project works, they will understand how all of them work. This approach can save developers a lot of time when switching between projects.

An important aspect in Maven’s guidelines for best-practices development is the project directory layout. It determines how the project-related files should be organized on the file system. This is actually how you will get to the point of creating the project’s code repository. All files required to build the project must be present in the code repository. This is an absolute necessity since you want Maven to be able to build all parts of the application to a target environment. Another important aspect of having a code repository is the ability to apply version control on it. The use of a version control system is a topic that I will discuss later in this chapter.

Listing 9-2 shows an overview of the most important top-level directories that I am using in the demo project’s code repository. I will discuss each one of the directories in more detail later. You can of course implement your own preferred project directory layout.
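As a rough sketch, the top level of such a repository can be laid out as follows. The application folder name is a placeholder of my own for the directory that holds the APEX export files; the database directory is the one used throughout this chapter.

  orclapex-maven-demo
    pom.xml                -- the Maven project object model file
    src
      main
        application        -- APEX application export files (placeholder name)
        database           -- database scripts and Liquibase changeLogs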

Note  The src and main directories are part of Maven’s common directory layout. The src directory contains all source code for building the project. Its subdirectory main includes the source code for building the main build artifact. The src/test directory, for example, would contain only test sources.

The Project Object Model (POM) File

The POM file is the fundamental unit of work in Maven. It is a configuration file in XML format that contains the majority of information required to build the project. In other words, this is the place where the project’s build process is being automated. The standard name for the POM file is pom.xml, and it is always located under the project’s root directory. Listing 9-3 shows how the demo project’s POM file is organized.
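As a minimal sketch, a POM along these lines could start out as follows; the coordinate values are arbitrary, and the plugin definitions are added in the upcoming sections.

  <?xml version="1.0" encoding="UTF-8"?>
  <project xmlns="http://maven.apache.org/POM/4.0.0"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                               http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>

      <groupId>com.example.apex</groupId>
      <artifactId>orclapex-maven-demo</artifactId>
      <version>1.0</version>

      <build>
          <plugins>
              <!-- the Liquibase and Oracle APEX Maven plugin elements go here -->
          </plugins>
      </build>
  </project>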

The project element is the root node in the POM file and declares several attributes to which the document can be validated. The modelVersion value is 4.0.0, which is currently the only supported POM version for both Maven 2 and 3.

A Maven project is uniquely identified by its coordinate elements: groupId, artifactId, and version. All three items are required fields. These coordinate elements, however, have a useful meaning only in the context of Java-based projects. For you, it does not really matter what you fill in here.

The plugins element, under the build element, contains the tasks that must be executed to successfully build the project. Defining these tasks is achieved by adding plugin elements, which contain configuration details to fulfill a certain build task. In the next section, you will take a closer look into the plugin element definition for each task that is required to successfully deploy an APEX application. The first task or plugin you will execute during the build process is Liquibase.

Liquibase

The thick database approach that I talked about earlier, in combination with the code repository concept, introduces an extra challenge: all database scripts that are part of the APEX application must be stored in the code repository in the form of SQL files. Storing these files in the code repository makes it possible for Maven’s automated build to execute all appropriate database scripts to successfully migrate a database schema to a specific version. This stage of the build process is where most things can go wrong because of the inherent complexity of deploying database changes. That is why you will take advantage of Liquibase, a powerful database migration tool that will effectively help you in organizing and executing database scripts.

Organizing Database Scripts

You are completely free in how you organize the directory structure for your project’s database scripts. However, I recommend you to come up with a well-thought-out and unambiguous approach, which leaves little or no room for interpretation. Make sure you can immediately identify the appropriate folder to locate a specific database script. Having a logical and organized directory structure will save you a lot of time in the long run, and it makes life just a little easier.

The directory structure that I typically use is the one I have applied in the demo project. It is based on the different types of database objects that exist in the Oracle Database. Listing 9-4 shows you what the directory structure looks like under the src/main/database directory.
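The sketch below summarizes that structure, based on the folders discussed in this section; the subfolder names under install and latest simply follow the object types used in the demo project.

  src/main/database
    changelog        -- Liquibase changeLog files (master.xml plus a subfolder per script type)
    data             -- DML scripts
    install          -- nonreplaceable DDL scripts (for example, install/table)
    latest           -- replaceable DDL scripts (for example, latest/trigger and latest/view)
    model            -- the entity-relationship model
    post-build       -- scripts executed at the end of the migration, such as compile_schema.sql
    technical-docs   -- generated technical documentation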

As illustrated in Figure 9-7, I first differentiate between DML and DDL scripts: DML scripts always go into the data folder, while DDL scripts get a place in either the install folder or the latest folder. In the case of DDL, you need to ask yourself the question of whether the script contains a replaceable or nonreplaceable database object. The following is an explanation of these two database object categories:

  • A replaceable object is characterized by its ability to override an already existing object definition with a new one. Objects that can be compiled using the or replace clause fall in this category. Executing a replaceable script constitutes a change that completely replaces the previous version of the object, if it already existed. These sorts of scripts should go into one of the subdirectories of the latest folder. The name of the latest folder simply refers to the fact that the scripts in the subdirectories always contain the latest version of a database object. Examples are packages, triggers, views, and so on.
  • A nonreplaceable object cannot be created with the or replace clause. Modifying the definition of a nonreplaceable object requires you to write an alter statement. These sorts of scripts, also known as incremental scripts, should go into one of the subdirectories of the install folder. It is important to realize that scripts in the install folder can be executed only once, whereas replaceable scripts can be executed over and over again. An example of a nonreplaceable object is a table. Suppose you want to add an extra column to an already existing table. It would not be possible to replace the previous version of the table with a new one because this would imply dropping and re-creating the table, resulting in the loss of data.


Figure 9-7. Database script file organization

The actual content of the files includes nothing more than plain SQL or PL/SQL code. There are three different kinds of database statements that can be executed through these files.

  • DML statements in the data folder
  • DDL statements in the install and latest folders
  • PL/SQL anonymous blocks in all folders

You should think of each SQL file as an atomic change to the database schema. Atomicity is part of the ACID model (Atomicity, Consistency, Isolation, and Durability) and states that database transactions must follow an all-or-nothing rule, meaning that all statements within a transaction are either executed successfully or not executed at all. Liquibase treats each SQL file in the code repository as a single transaction.

That last sentence is actually not entirely true since transaction control is not defined at the file level. Liquibase executes the SQL files as part of so-called changeSets, which are part of changeLog files. Each changeSet gets executed in a separate transaction that is committed at the end, or rolled back if an error occurred. Figure 9-8 shows a visual representation of where transaction control is applied in Liquibase.


Figure 9-8. The relation between changeLogs, changeSets, and SQL files

It is possible to reference multiple files in one changeSet, which implies that multiple SQL files can be executed within the same transaction. The concept of changeSets and changeLogs will be explained in more detail in the next two subsections. But for now, just keep in mind that every SQL file will be executed in a separate transaction.

It is important to fully understand the transaction control behavior in Liquibase because it helps to answer the question of whether it is advisable to group multiple statements in a single file. The answer depends on the sort of statements you are dealing with. DML statements in the data folder, for example, are typically grouped in one file to guarantee the atomicity of the transaction. As an illustration, the data script insert_dept_demo_data.sql in Listing 9-5 contains insert statements for the DEPT table.
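Such a data script might look as follows, assuming the classic DEPT columns; note that there is deliberately no commit statement, for the reason explained next.

  -- src/main/database/data/insert_dept_demo_data.sql (illustrative content)
  insert into dept (deptno, dname, loc) values (10, 'ACCOUNTING', 'NEW YORK');
  insert into dept (deptno, dname, loc) values (20, 'RESEARCH', 'DALLAS');
  insert into dept (deptno, dname, loc) values (30, 'SALES', 'CHICAGO');
  insert into dept (deptno, dname, loc) values (40, 'OPERATIONS', 'BOSTON');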

There is no need to include a commit statement at the end of the file because Liquibase will automatically commit the database transaction after the corresponding changeSet terminates successfully. A rollback statement will be performed if the script produces an error during its execution, leaving the database in a consistent state.

DDL statements in the install and latest folders are preferably limited to one per file. Imagine a situation wherein you include multiple CREATE TABLE statements in one file and let it execute through Liquibase. If for some reason the second statement fails, Liquibase would not be able to roll back the first statement because the Oracle Database automatically commits DDL statements. Running Liquibase a second time will now undoubtedly fail because the first statement in the file will be executed again, resulting in the following error message: ORA-00955: name is already used by an existing object. The only way to get past this error is to manually drop the table from the first statement and fix the problem with the second statement. Manually dropping one table is not a big deal, but what if you try to execute a SQL file that includes ten DDL statements and the last one fails?

Therefore, always remember that DDL statements get automatically committed and that it is better to prevent transaction mismatches between Liquibase and the Oracle Database. Restricting DDL statements to one per file also greatly improves the maintainability of the code repository. The script create_dept.sql in the src/main/database/install/table folder, for example, contains the nonreplaceable DDL code to create the table DEPT. Listing 9-6 shows the content of the create_dept.sql file.
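The exact column definitions depend on the demo project’s data model, but in essence such a nonreplaceable script boils down to a plain CREATE TABLE statement:

  -- src/main/database/install/table/create_dept.sql (illustrative content)
  -- nonreplaceable DDL: this script may be executed only once per schema
  create table dept (
      deptno  number(2, 0) constraint dept_pk primary key,
      dname   varchar2(14),
      loc     varchar2(13)
  );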

On a side note, it might be interesting to outline the steps needed to alter the DEPT table from the previous code fragment. A common mistake when starting with Liquibase in a situation like this is to modify the previous CREATE TABLE statement in order to alter the structure of the DEPT table. However, this is not how Liquibase works. You are not allowed to modify the content of the create_dept.sql script if the file has already been executed through Liquibase. The following actions should be taken to correctly alter the DEPT table:

  1. Create a new SQL file, alter_dept_add_col.sql for example, and save it to the following location: src/main/database/install/table.
  2. Add the ALTER TABLE statement to the SQL file from step 1.
  3. Open the src/main/database/changelog/install/1.0.xml changeLog file and append a changeSet that references the alter_dept_add_col.sql file.

Another example script is the br_iu_dept.sql file in the src/main/database/latest/trigger directory. As you can see in Listing 9-7, the file contains the replaceable DDL code to create or alter the BR_IU_DEPT trigger.
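What the trigger actually does is not important here; the sketch below simply assumes it assigns the primary key from a sequence and illustrates the typical shape of a replaceable script.

  -- src/main/database/latest/trigger/br_iu_dept.sql (illustrative content)
  -- replaceable DDL: the or replace clause allows the script to run repeatedly
  -- no trailing slash; the statement is sent as a whole over JDBC
  create or replace trigger br_iu_dept
      before insert or update on dept
      for each row
  begin
      if inserting and :new.deptno is null then
          :new.deptno := dept_seq.nextval;  -- dept_seq is a hypothetical sequence
      end if;
  end;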

Making a change to a replaceable database object is much easier since you can simply overwrite the existing DDL code in the SQL file. Liquibase will pick up the changes made to the content of the file and will reexecute the script on the next Liquibase run. More in-depth information on this behavior will be given in the following subsection.

The following are other folders under the src/main/database directory I have not yet discussed:

  • changelog
  • model
  • post-build
  • technical-docs

The changelog folder contains the Liquibase changeLog XML files. These files describe what, when, and how database scripts should be executed during the database migration process. More information on changeLogs and changeSets will be given in the following subsection.

The model folder simply contains the entity-relationship model (ERD) for the database schema. Oracle SQL Developer Data Modeler has been used to create the demo project’s ERD.

The post-build directory includes scripts that will be executed at the end of the database migration. The compile_schema.sql script is unarguably the most essential post-build script. It is the last script to be executed by Liquibase, and its task is to scan the schema for invalid database objects. The script throws an error if one or more invalid objects are found. The names of all invalid objects are printed out in the build log, and the build itself terminates with a “build failed” message. The compile_schema.sql script, shown in Listing 9-8, plays an important role during the build process because it gives the project team invaluable feedback on the stability of the database code.
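A minimal sketch of such a post-build script, assuming it behaves as described here, could be the following anonymous block:

  -- src/main/database/post-build/compile_schema.sql (illustrative content)
  declare
      l_invalid_list varchar2(4000);
  begin
      -- recompile everything that is invalid in the current schema
      dbms_utility.compile_schema(schema => user, compile_all => false);

      -- collect the names of any objects that remain invalid
      select listagg(object_name, ', ') within group (order by object_name)
        into l_invalid_list
        from user_objects
       where status = 'INVALID';

      -- fail the build and print the offending object names
      if l_invalid_list is not null then
          raise_application_error(-20000, 'Invalid objects found: ' || l_invalid_list);
      end if;
  end;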

The last folder under the src/main/database directory is technical-docs. In here, you will automatically generate technical documentation based on comments in your SQL scripts.

ChangeLog Files

Liquibase uses changeLog files to describe what database scripts should be executed to successfully migrate a target database schema to a desired version. ChangeLog files also define the order of script execution and include other relevant meta-information. Let’s first take a look at how the master changeLog file looks because it defines the main order of execution for your different types of scripts. Listing 9-9 shows the content of the master.xml file.
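A sketch along the lines of the demo project could look like this; the relative paths assume the changelog folder layout described later in this section.

  <?xml version="1.0" encoding="UTF-8"?>
  <databaseChangeLog
          xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                              http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

      <include file="install/1.0.xml" relativeToChangelogFile="true"/>
      <include file="latest/1.0.xml" relativeToChangelogFile="true"/>
      <include file="data/1.0.xml" relativeToChangelogFile="true"/>
      <include file="post-build/1.0.xml" relativeToChangelogFile="true"/>
  </databaseChangeLog>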

The master changeLog file is merely a collection of other changeLog files that in their turn include so-called changeSets. Before explaining what changeSets are, let me clarify the order of execution. It is actually all common sense.

  1. The first scripts to be executed through Liquibase are those in the install folder. That is because other types of scripts typically depend on objects in the install folder. As an illustration, you cannot run a data script that inserts rows in the DEPT table before the DEPT table actually exists.
  2. Scripts in the latest folder are executed next. These replaceable objects also depend on objects in the install folder, but that is not the main reason why they run at this point. Unlike with nonreplaceable objects, Liquibase does not halt the build when a replaceable object compiles with errors. This is caused by the behavior of the underlying OJDBC driver, which Liquibase uses to execute the database scripts against the Oracle Database. The real reason is that data scripts may depend on objects in the latest folder. Take, for instance, a series of insert statements that depend on a before row insert trigger to assign a sequence-generated primary key value.
  3. After compiling all database objects, run the DML scripts in the data folder.
  4. Finally, run any post-build scripts. This is where the compile_schema.sql script is being executed to validate whether the schema contains invalid objects. If it does, halt the build and print out the names of all invalid objects.

The most common way to organize your changeLogs is by major release and database script type. As you can see in the demo project, the src/main/database/changelog folder includes a subfolder for every type of database script: install, latest, data, and post-build. Each of the four subfolders contains a 1.0.xml changeLog file in which the database scripts of the corresponding type are grouped together. Please note that you are completely free in how you organize the changeLog files for your project. You can, for example, use a single changeLog for all the different types of database scripts. This, however, quickly results in a large and unorganized changeLog file, especially when your project counts many database objects. Listing 9-10 shows the contents of the changelog folder.

ChangeLog files are composed of changeSet elements, which represent isolated transactions through which database statements can be executed. In the demo project, you will quickly notice that each changeSet points to a SQL script in one of subdirectories of the src/main/database directory. Listing 9-11 shows the content of the changeLog file for nonreplaceable objects.
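A sketch of such a changeLog, limited to a single changeSet for the create_dept.sql script and with illustrative id and author values, could read:

  <?xml version="1.0" encoding="UTF-8"?>
  <databaseChangeLog
          xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                              http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

      <changeSet id="create_dept" author="nick">
          <comment>Create the DEPT table</comment>
          <sqlFile path="../../install/table/create_dept.sql"
                   relativeToChangelogFile="true"/>
      </changeSet>

      <!-- further changeSets for the other nonreplaceable objects -->

  </databaseChangeLog>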

As you can clearly see, a changeLog file is yet another XML file filled with changeSet elements. Liquibase sequentially reads out the changeLog files, starting from the master changeLog, and determines what changeSets must be executed to successfully migrate a target database schema to the desired version. The decision of whether to execute a changeSet is based on two aspects.

First, you should be aware that Liquibase keeps track of all previously executed changeSets within a given schema. On its first execution in a schema, Liquibase creates the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables. The latter is a simple single-row table that prevents multiple Liquibase executions from running at the same time. The former contains all changeSets that have already been run against the given database schema. Liquibase even stores an MD5 checksum for each entry in the DATABASECHANGELOG table, based on the content of the changeSet’s SQL script or scripts. This checksum allows Liquibase to detect differences between the changeSets you are trying to execute and what was actually run against the database. This stateful concept is essential when deciding whether a script should or should not be executed during a database migration in a target schema.

Second, changeSet attributes are used to define script execution behavior. These attributes allow you to easily differentiate between replaceable and nonreplaceable scripts. Table 9-1 gives an overview of the most important changeSet attributes.

Table 9-1. Most Frequently Used changeSet Attributes

  id            A unique name that identifies the changeSet within the changeLog (required).
  author        The author of the changeSet (required).
  runAlways     Executes the changeSet on every Liquibase run, even when it has been run before.
  runOnChange   Reexecutes the changeSet when its content has changed since its last execution.
  context       Restricts execution of the changeSet to the contexts passed to Liquibase.

The runAlways and runOnChange attributes accept a Boolean value and determine whether a script should be executed once or repeatedly. Omitting these two attributes results in false values, indicating that a changeSet may be executed only once per schema. Thus, both changeSet definitions in Listing 9-12 are identical.

This changeSet definition is typically used for nonreplaceable objects in the install folder and for data scripts in the data folder. Setting both attribute values to true causes a changeSet to always execute when Liquibase runs. This execution behavior is ideal for replaceable objects in the latest folder. The src/main/database/changelog/latest/1.0.xml file contains the changeSets for the replaceable objects. Listing 9-13 includes an example of a changeSet definition that compiles the BR_IU_DEPT trigger.
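Sketched with illustrative id and author values, such a changeSet could be defined as follows:

  <changeSet id="br_iu_dept" author="nick" runAlways="true" runOnChange="true">
      <sqlFile path="../../latest/trigger/br_iu_dept.sql"
               relativeToChangelogFile="true"
               splitStatements="false"/>
  </changeSet>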

It is also possible to only set the runOnChange attribute to true, resulting in a changeSet that executes only when the content of the referenced script was changed since its last execution. Also notice the splitStatements attribute on the sqlFile element, which has been given the Boolean value false. The default value for this attribute is true, meaning that Liquibase splits multiple statements on the semicolon character. This is the desired behavior for most nonreplaceable and data scripts. Replaceable scripts, on the other hand, should not interpret a semicolon as a statement separator.

The changeSet for a database package typically includes two sqlFile elements. As you might have guessed, you need one for the package specification and one for the package body. Remember that a changeSet demarks one database transaction in Liquibase. The changeSet for the database package will be executed in two transactions, though. The reason is that both the package specification and body scripts contain DDL, which gets automatically committed by the Oracle Database. Listing 9-14 shows an example changeSet that compiles the DAO_DEPT package.
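Sketched, with file names for the specification and body that are placeholders of my own, such a changeSet could look like this:

  <changeSet id="dao_dept" author="nick" runAlways="true" runOnChange="true">
      <sqlFile path="../../latest/package/dao_dept_spec.sql"
               relativeToChangelogFile="true"
               splitStatements="false"/>
      <sqlFile path="../../latest/package/dao_dept_body.sql"
               relativeToChangelogFile="true"
               splitStatements="false"/>
  </changeSet>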

The Liquibase Maven Plugin

It is possible to use Liquibase in a stand-alone fashion, but what you will actually do is execute it as part of your Maven build. Integrating Liquibase with Maven is easily accomplished by adding the Liquibase Maven plugin to the pom.xml file. Simply append the plugin element from Listing 9-15 as a child to the plugins element.
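The sketch below outlines what that plugin element can look like; the plugin version and connection details are illustrative and should be adapted to your own environment.

  <plugin>
      <groupId>org.liquibase</groupId>
      <artifactId>liquibase-maven-plugin</artifactId>
      <version>3.3.2</version>
      <dependencies>
          <dependency>
              <groupId>com.oracle</groupId>
              <artifactId>ojdbc7</artifactId>
              <version>12.1.0.2.0</version>
          </dependency>
      </dependencies>
      <executions>
          <execution>
              <id>liquibase-update</id>
              <phase>compile</phase>
              <goals>
                  <goal>update</goal>
              </goals>
              <configuration>
                  <driver>oracle.jdbc.driver.OracleDriver</driver>
                  <url>jdbc:oracle:thin:@localhost:1521:orcl</url>
                  <username>demo_schema</username>
                  <password>demo_password</password>
                  <changeLogFile>src/main/database/changelog/master.xml</changeLogFile>
                  <promptOnNonLocalDatabase>false</promptOnNonLocalDatabase>
                  <verbose>true</verbose>
              </configuration>
          </execution>
      </executions>
  </plugin>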

The plugin element contains all configuration details for the Liquibase Maven plugin. The groupId, artifactId, and version tags uniquely identify the Maven plugin that should be executed as part of the build. It is also required to define the Oracle JDBC driver as a plugin dependency because Liquibase depends on it to execute the SQL scripts.

The executions element configures the execution details of a plugin’s goal. Every execution gets a unique ID assigned and is linked to a phase in Maven’s build lifecycle. Without getting into too much detail here, think of the Maven build lifecycle as a series of stages to which you can link a plugin’s execution. In the demo project, all plugin executions are linked to the compile phase. The goal element contains the name of the goal you want the plugin to execute. Every plugin defines its own goals. In the case of the Liquibase Maven plugin, you will execute the update goal, whose task is to apply a changeLog file to a target database schema.

The configuration element allows you to specify goal-specific properties. The update goal requires you to provide several database connection parameters to successfully log on as the parsing schema of the APEX application you are deploying. You must also reference the path to the master changeLog file so that Liquibase can determine which sub-changeLog files must be read. The following is an overview of the specified properties:

  • driver: The fully qualified name of the driver class that is used when connecting to the database. The value for this property will always be oracle.jdbc.driver.OracleDriver in combination with an Oracle Database.
  • url: The JDBC connection URL of the target database where Liquibase will be executed.
  • username: The parsing schema of the APEX application.
  • password: The database password of the parsing schema.
  • changeLogFile: The path to the changeLog file that must be applied to the database schema.
  • promptOnNonLocalDatabase: Controls whether the user is prompted to confirm running the changes on a database that is not local to the machine on which the plugin is currently being executed.
  • verbose: Controls the verbosity of the output.

A great thing about the Liquibase Maven plugin is that it is available from a public Maven repository. Maven itself will download and install the required library files into the local Maven repository to get Liquibase up and running. There is just one catch, and that is the Oracle JDBC driver, which you will have to manually download and install because of Oracle license restrictions. Follow these steps:

  1. Download the appropriate OJDBC driver from the Oracle web site. Make sure to select the driver that is compatible with your specific Oracle Database version.
  2. Open a command-line window and change the directory to the location where you downloaded the OJDBC driver.
  3. Run the Maven install command.
    mvn install:install-file -Dfile=ojdbc7.jar -DgroupId=com.oracle -DartifactId=ojdbc7 -Dversion=12.1.0.2.0 -Dpackaging=jar

Note  The version numbers in the previous install command may vary depending on the version you have downloaded.

After adding the Liquibase plugin to the pom.xml file and installing the appropriate OJDBC driver, you are ready to run Maven for the first time. Simply open a command-line window and change the directory to the root of the project. Execute the mvn compile command to start the build. Maven will automatically look for the pom.xml file in the current directory and will try to run all plugin executions that are part of the compile phase. Listing 9-16 shows the build log produced by Maven.

I have removed some of the lines in this build log to make the output not too overwhelming. The “build success” message at the end of the log indicates that all build actions have been executed without errors. In this case, it means that the update goal of the Liquibase plugin successfully executed all appropriate SQL scripts in the target database schema.

Running Maven a second time with the same mvn compile command will start the build again but produce a different build log. That is perfectly normal since Liquibase will reexecute only the changeSets that have been assigned the value true for the runAlways attribute. The nonreplaceable objects from the first Liquibase run will not be executed anymore.

The last changeSet executed by Liquibase runs the compile_schema.sql script. Its job is to look for invalid objects within the currently connected database schema. The script executed successfully in the previous example, which means that all database objects are valid. This approach greatly benefits the overall stability of the project because it feeds the development team with constant feedback on the health of the source code.

Let’s take a look at how the build reacts when you introduce an invalid object in the code repository. As an example, I have deliberately invalidated the bl_user_registration package body. Firing up the Maven build with the same mvn compile command will now result in an unsuccessful execution, as shown in Listing 9-17.

This time, the compile_schema.sql script execution fails, resulting in the “build failure” message. A list of invalid objects in the schema is printed out in the build log. This makes it easy for the development team to quickly identify and fix the database objects that cause the build to fail. Note that the changeSet for the bl_user_registration package executed successfully, even though the package body includes a compilation error. The reason for this behavior is the underlying OJDBC driver, which does not return an error when it compiles an invalid replaceable object. Nonreplaceable scripts that return an error during their execution will immediately halt the build. Executing the ALTER TABLE script from Listing 9-18 as part of the alter_emp_add_birthdate changeSet, for example, will result in an output as in Listing 9-19.

The alter_emp_add_birthdate changeSet fails with an Oracle error code and automatically terminates the build. As a result of the error in the ALTER TABLE script execution, the changeSet will not be written to the DATABASECHANGELOG table and will therefore be executed again on the next Liquibase run.

Other Features

With the adoption of Liquibase in your project’s workflow, a continuous connection is made between database development and deployment. Each change to the database should be captured immediately as a Liquibase changeSet to define and control its execution through the build process. Database changes can also be tested more often and in smaller parts, resulting in faster problem detection and resolution.

So far in this chapter, I have given a detailed explanation on the core features of Liquibase. However, it is certainly worth mentioning that there are still many features left undiscussed. Next up is a listing of some of these features, without getting into too much detail.

ChangeSets can accept rollback elements to undo database changes. A rollback element typically performs the opposite action of the changeSet it is attached to. A changeSet that creates the EMP table, for example, will drop that same table in its rollback element, as shown in Listing 9-20.
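Sketched, a changeSet of that kind could be structured along these lines:

  <changeSet id="create_emp" author="nick">
      <sqlFile path="../../install/table/create_emp.sql"
               relativeToChangelogFile="true"/>
      <rollback>
          drop table emp;
      </rollback>
  </changeSet>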

You can specify what changes to roll back in three ways.

  • Tag: A changeSet has the ability to tag the database. Such a tag can then be specified to roll back all changeSets that were executed after the given tag was applied.
  • Number of changeSets: Specify the total number of changeSets to roll back.
  • Date: Choose a specific date to roll back to. The following is an example that undoes all changeSets that were executed after January 1, 2015.
    mvn liquibase:rollback -Dliquibase.rollbackDate="JAN 1, 2015"

Another powerful Liquibase feature is the concept of preconditions, which can be used to conditionally execute a changeLog or changeSet based on the state of the database. Suppose you want to insert demo data in the EMP table only when the table is empty. The changeSet definition in this case would look like the one in Listing 9-21.
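A sketch of such a precondition, using sqlCheck and marking the changeSet as ran when the table already contains rows, could look as follows:

  <changeSet id="insert_emp_demo_data" author="nick">
      <preConditions onFail="MARK_RAN">
          <!-- only run the data script when the EMP table is still empty -->
          <sqlCheck expectedResult="0">select count(*) from emp</sqlCheck>
      </preConditions>
      <sqlFile path="../../data/insert_emp_demo_data.sql"
               relativeToChangelogFile="true"/>
  </changeSet>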

Yet another technique to conditionally execute changeSets is the context mechanism. It is possible to associate a changeSet with one or more contexts, while the update goal of the Liquibase Maven plugin lets you set one or more contexts as part of the execution. Only changeSets marked with the passed contexts will be executed. This is extremely helpful when you have to run scripts that are environment dependent. It is highly unlikely, for example, that you would want to write demo data into production tables. Thus, only the development and test environments should execute the insert_emp_demo_data changeSet.

The insert_emp_demo_data changeSet from Listing 9-22 will be executed only when the context DEV or TST is included in the Liquibase Maven plugin configuration section. If the PRD context is passed, for example, the changeSet will be skipped. Listing 9-23 shows how to include the contexts parameter.
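Sketched, the context-aware changeSet and the matching plugin configuration property could look as follows; the context names simply mirror the DEV, TST, and PRD environments mentioned above.

  <changeSet id="insert_emp_demo_data" author="nick" context="DEV, TST">
      <sqlFile path="../../data/insert_emp_demo_data.sql"
               relativeToChangelogFile="true"/>
  </changeSet>

  <!-- in the configuration element of the Liquibase Maven plugin in pom.xml -->
  <contexts>DEV</contexts>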

You can find more information on these and other Liquibase features at Liquibase’s official documentation guide:

www.liquibase.org/documentation/

The Oracle APEX Maven Plugin

Plugins are essential when it comes to defining a project’s build process in Maven. A plugin is merely a collection of goals with a general common purpose. The purpose of the Oracle APEX Maven plugin is to facilitate the APEX development and deployment process. Each plugin goal is responsible for performing a specific action. The Liquibase Maven plugin, for example, included the update goal, which allowed you to apply a changeLog to a target database schema.

Unlike the Liquibase plugin, the Oracle APEX Maven plugin is not publicly available in the Maven repository. This means you will have to manually download and install the plugin yourself.

  1. Go to https://github.com/nbuytaert1/orclapex-maven-plugin and download the latest stable release.
  2. Unzip the downloaded archive file.
  3. Open a terminal window and change directory to the unzipped orclapex-maven-plugin folder.
  4. Install the plugin in your local Maven repository with the following command:
    mvn install:install-file -Dfile=orclapex-maven-plugin-1.0.jar -DpomFile=orclapex-maven-plugin-1.0-pom.xml

Note  The version numbers in the previous install command may vary depending on the version you have downloaded.

The import Goal

The most essential goal of the Oracle APEX Maven plugin is the import goal, whose task is fairly simple: import the SQL application export files into a target APEX workspace. The technical implementation and complexity of a plugin’s goal is hidden from the developer in the form of a Maven plain old Java object (MOJO). A MOJO is a Java class that specifies metadata about its goal, such as the goal name and the parameters it is expecting.

The import goal requires one extra piece of software on your machine to successfully execute its task: SQL*Plus. That is because an APEX application export file is filled with SQL*Plus commands. This means that the import goal has no other option than executing the export file with SQL*Plus. OJDBC would have been an alternative if the export files were just plain SQL. But since that is not the case, make sure you have SQL*Plus installed on your machine.

Adding the Oracle APEX Maven plugin and the import goal to the POM file is pretty straightforward. Simply append the plugin element from Listing 9-24 to the plugins tag, after the Liquibase Maven plugin. That way, database code gets compiled first, followed by the APEX application import.
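The plugin element could be laid out roughly as follows. The groupId value is a placeholder of my own (take the real coordinates from the plugin’s POM file), and the connection values are illustrative.

  <plugin>
      <!-- placeholder groupId; use the coordinates from the plugin's POM file -->
      <groupId>com.orclapex</groupId>
      <artifactId>orclapex-maven-plugin</artifactId>
      <version>1.0</version>
      <executions>
          <execution>
              <id>apex-import</id>
              <phase>compile</phase>
              <goals>
                  <goal>import</goal>
              </goals>
              <configuration>
                  <connectionString>localhost:1521/orcl</connectionString>
                  <username>demo_schema</username>
                  <password>demo_password</password>
                  <sqlplusCmd>sqlplus</sqlplusCmd>
                  <appExportLocation>src/main/application</appExportLocation>
                  <workspaceName>TST_APEX_MAVEN_DEMO</workspaceName>
                  <appId>1010</appId>
              </configuration>
          </execution>
      </executions>
  </plugin>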

In this example, the import goal is tied to the compile phase, just like the update goal of the Liquibase Maven plugin. Goals tied to the same phase in the POM are executed in order of occurrence. The configuration element contains the import goal parameters. These parameters allow you to specify how SQL*Plus should execute the SQL export file into a target workspace. Here is a short explanation of the parameters used:

  • connectionString: The database connection string used in the SQL*Plus login argument.
  • username: The database username used to log in with SQL*Plus.
  • password: The database user’s password.
  • sqlplusCmd: The command to start the SQL*Plus executable. The default value is sqlplus if omitted.
  • appExportLocation: The relative path to the folder containing the application export files.
  • workspaceName: The APEX workspace in which you want to import the application.
  • appId: The ID for the application to be imported. Omit this parameter to import the application with its original ID.

The import goal allows you to set the same application attributes as the APEX_APPLICATION_INSTALL API package. The appId parameter, for example, uses the set_application_id procedure in the background to set the ID of the application to be imported. The following application attributes can be set through the import goal parameters:

  • appId: The application ID
  • appAlias: The application alias
  • appName: The application name
  • appParsingSchema: The application parsing schema
  • appImagePrefix: The application image prefix
  • appProxy: The proxy server attributes
  • appOffset: The offset value for the application import
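Behind the scenes, these parameters translate into calls similar to the following SQL*Plus snippet; this is a hedged sketch of what the import goal effectively runs before executing the export file, not the plugin’s literal implementation.

  -- illustrative only: prepare the install attributes, then run the export file
  begin
      apex_application_install.set_workspace_id(
          apex_util.find_security_group_id(p_workspace => 'TST_APEX_MAVEN_DEMO'));
      apex_application_install.set_application_id(1010);
      apex_application_install.generate_offset;
      apex_application_install.set_schema('DEMO_SCHEMA');  -- illustrative parsing schema
  end;
  /
  @f100.sql  -- the application export file (file name is illustrative)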

Note  More information on these application attributes is available at the official documentation page of the APEX_APPLICATION_INSTALL API package.

After adding the Oracle APEX Maven plugin to the pom.xml file, you should be able to import the application export as part of the Maven build. The command to run the Maven build is still mvn compile. Maven will first run Liquibase, followed by the application import. If something goes wrong with the update goal of Liquibase, the application import will not take place. This is the desired behavior since you do not want to import an application on top of an outdated version of the database schema. Listing 9-25 shows the Maven build log after executing the mvn compile command.

The SQL*Plus prompt commands from the application export file are printed in the Maven build log. The output also informs you of the configuration parameters that have been set through the plugin’s configuration properties. As you can see in the previous example, the application has been imported into the TST_APEX_MAVEN_DEMO workspace with application ID 1010.

The run-natural-docs Goal

It is also worth mentioning that the Oracle APEX Maven plugin includes the run-natural-docs goal. Natural Docs is an open source technical documentation generator. It scans the source code for Javadoc-like comments and builds high-quality HTML documentation from them. All package specifications in the demo project have been documented with these comments. Take, for example, the bl_user_registration package specification, shown in Listing 9-26, which contains the validate_password_strength function.

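Listing 9-26 is not reproduced here, but a Natural Docs comment block in a package specification looks roughly like the following sketch; the parameter name and return type of validate_password_strength are assumptions.

  create or replace package bl_user_registration
  as
    /*
      Function: validate_password_strength
        Checks whether a given password meets the password strength rules.

      Parameters:
        p_password - The password to validate.

      Returns:
        true when the password is strong enough, false otherwise.
    */
    function validate_password_strength(p_password in varchar2) return boolean;
  end bl_user_registration;
  /
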
Adding these comments to the source code is sufficient for Natural Docs to start generating technical documentation. All you have to do is add another execution element to the Oracle APEX Maven plugin in the pom.xml file, as shown in Listing 9-27. Please refer to the demo project on GitHub to see the entire content of the pom.xml file.

The following is a short explanation of the parameters used in the configuration element; a sketch of the execution element follows the list:

  • naturalDocsHome: The path to the folder containing the Natural Docs executable.
  • inputSourceDirectories: Natural Docs builds the documentation from the files in these directories and all their subdirectories. It is possible to specify multiple directories.
  • outputFormat: The output format. The supported formats are HTML and FramedHTML.
  • outputDirectory: The folder in which Natural Docs will generate the technical documentation.
  • projectDirectory: The folder in which Natural Docs stores its project-specific configuration and data files.

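
Listing 9-27 is not shown here, but an execution element along these lines would bind the run-natural-docs goal to the build; the nested element names, paths, and execution id are illustrative, so consult the demo project's pom.xml for the exact structure.

  <execution>
    <id>generate-docs</id>
    <phase>compile</phase>
    <goals>
      <goal>run-natural-docs</goal>
    </goals>
    <configuration>
      <naturalDocsHome>/opt/naturaldocs</naturalDocsHome>
      <inputSourceDirectories>
        <inputSourceDirectory>src/main/database</inputSourceDirectory>
      </inputSourceDirectories>
      <outputFormat>FramedHTML</outputFormat>
      <outputDirectory>docs/naturaldocs</outputDirectory>
      <projectDirectory>naturaldocs-project</projectDirectory>
    </configuration>
  </execution>
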
Image Note  You need Perl 5.8 or newer installed on your machine to run Natural Docs. Mac and Linux users have Perl installed by default. If you are using Windows and have not installed it yet, you can get ActiveState’s ActivePerl for free at www.activestate.com/activeperl.

Running the mvn compile command will now generate technical documentation. Listing 9-28 shows the part of the Maven build log generated by the run-natural-docs goal.

The resulting HTML files are generated in the outputDirectory specified in the plugin’s configuration section. Figure 9-9 shows the technical documentation for the BL_USER_REGISTRATION package.

9781484204856_Fig09-09.jpg

Figure 9-9. The technical documentation for the BL_USER_REGISTRATION package

Deploying Static Files

By now, the Maven build successfully covers two of the three parts for the deployment of a basic APEX application. Liquibase tackles the task of migrating database changes, and the APEX application import is taken care of by the import goal of the Oracle APEX Maven plugin. The last part you have to deal with is the deployment of static files. As already mentioned, APEX 5.0 includes several improvements that simplify static file deployment. The APEX development team could not have made it any easier for you: static files are now automatically exported as part of the application export.

You can safely skip this section if you decide to take advantage of the enhanced Files section under Shared Components. In that case, the Maven build as described in the previous sections is capable of deploying a basic APEX application with just a single command. For those who prefer to upload their static files to the server’s file system, there is just one more task left.

Image Note  Storing static files on the server’s file system makes it possible to automate repetitive tasks related to static files. A CSS preprocessor, for example, can be used to enhance the writing of CSS. You can also add utilities to your workflow that validate and minify CSS and JavaScript files.

The steps required to copy the static files to the web server are in essence fairly simple (a command-line sketch follows the list):

  1. Access the server’s file system with a privileged OS user.
  2. Remove any existing static files.
  3. Copy the static files from the code repository to the web server.
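
Translated into plain OS commands, and assuming the SSH user and the target directory used later in this section, the manual equivalent looks roughly like this:

  # steps 1 and 2: connect as a privileged OS user and clean up the existing static files
  ssh oracle@webserver "rm -rf /opt/apex/images/apex_custom/tst_demo; mkdir /opt/apex/images/apex_custom/tst_demo"
  # step 3: copy the static files from the code repository to the web server
  scp -r src/main/web-files/* oracle@webserver:/opt/apex/images/apex_custom/tst_demo/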

The AntRun Plugin

The AntRun plugin provides the ability to execute Ant tasks from within a Maven build. Apache Ant is also a build automation tool and is considered a predecessor of Maven. I will spare you any further details on this matter, but it turns out that Ant is better suited for the task of copying files to a server’s file system. Simply append the code fragment from Listing 9-29 to the plugins element in the pom.xml file.

The AntRun plugin requires the ant-jsch dependency because it provides the sshexec and scp tasks, which you need to successfully copy the static files. You do not need to manually install the Maven AntRun plugin or its dependencies because they are publicly available from the central Maven repository.

The target element in the configuration section includes two tasks that will be executed by Ant. The sshexec task connects to a remote host with a specified user and executes two OS commands to clean up the existing static files directory. This way, files that have been removed from the code repository also disappear from the server’s file system after the copy.

The scp task is responsible for the actual copy of the src/main/web-files folder content to the server’s file system. The target location in the previous example is /opt/apex/images/apex_custom/tst_demo. Running the Maven build with the mvn compile command will now perform the file copy. Listing 9-30 shows the build log generated by the AntRun plugin.
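
Listing 9-29 is not reproduced here, but an AntRun configuration along the following lines would perform the cleanup and the copy; the host, credentials, and plugin versions are illustrative.

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>1.8</version>
    <dependencies>
      <dependency>
        <groupId>org.apache.ant</groupId>
        <artifactId>ant-jsch</artifactId>
        <version>1.9.4</version>
      </dependency>
    </dependencies>
    <executions>
      <execution>
        <phase>compile</phase>
        <goals>
          <goal>run</goal>
        </goals>
        <configuration>
          <target>
            <!-- clean up the existing static files directory (two OS commands) -->
            <sshexec host="webserver" username="oracle" password="secret" trust="true"
                     command="rm -rf /opt/apex/images/apex_custom/tst_demo; mkdir /opt/apex/images/apex_custom/tst_demo"/>
            <!-- copy the static files from the code repository to the web server -->
            <scp todir="oracle:secret@webserver:/opt/apex/images/apex_custom/tst_demo" trust="true">
              <fileset dir="src/main/web-files"/>
            </scp>
          </target>
        </configuration>
      </execution>
    </executions>
  </plugin>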

Image Note  Secure Copy (SCP) is a protocol that securely transfers computer files between two hosts. It is based on the Secure Shell (SSH) protocol. Other protocols, such as FTP, are also supported by the Maven AntRun plugin. Please refer to the plugin documentation for more information on other protocols.

Multi-environment Setup

So far in this chapter, you have constantly been building to the same environment when executing the mvn compile command. That is because the pom.xml file includes the build and configuration details only for the test environment. However, a project setup should include at least three separate environments to properly support an application’s development and deployment lifecycle. Figure 9-10 shows these three environments.

9781484204856_Fig09-10.jpg

Figure 9-10. A basic build environment setup

  • The development environment is where the actual development takes place. This environment always contains the latest version of the application because it is the place where new features and improvements are being implemented by the development team.
  • In the test environment, the complete code base is merged together into a single product. This allows the development team to verify whether deployment has been successful and whether the application works as expected.
  • The production environment contains the actual live application. It is recommended to install the runtime-only configuration of APEX in this environment. This will prohibit developers from accessing the application builder and SQL workshop, making it impossible to directly modify the live application.

More environments can be added to the previous setup, depending on the demands of the project. An acceptance environment, for example, is often introduced when business users are required to test the application before it gets deployed to production. In this way, business users are more involved because they can verify whether newly developed features or improvements meet the proposed business requirements. This approach separates user acceptance testing from the test environment, which is typically the subject of more technical tests. Figure 9-11 shows the environments within a DTAP (development, test, acceptance, production) street.

9781484204856_Fig09-11.jpg

Figure 9-11. A DTAP cycle or street

The concept of an environment in terms of APEX development usually consists of three parts.

  • An APEX workspace in which one or more applications will be imported
  • A database schema to hold the database objects on which the APEX applications depend
  • Optionally, a location on the server’s file system to store the application’s static files

As you have probably noticed, a clear mapping can be made between the Maven build tasks and these three parts of the environment you have to deploy to.

  • The import goal of the Oracle APEX Maven plugin is capable of importing an application export file into a target APEX workspace.
  • The Liquibase Maven plugin takes care of database script execution.
  • The AntRun plugin is able to copy local files to a location on the server’s file system. You do not need this task during the build process if you are using the Files section under Shared Components.

Building with Maven to Multiple Environments

The introduction of the multi-environment setup requires you to rethink the POM file so that it is capable of building to different environments in an organized way. Luckily, Maven includes several helpful features that allow you to come up with an elegant solution for this requirement. The two main elements on which the solution is built are inheritance and build profiles.

Inheritance

POM files in Maven have the ability to inherit from a parent POM, which defines global-level elements that can be shared across its child POMs. When associating a child with its parent, all parent elements automatically become part of the child. The demo project applies inheritance and includes a parent and child POM in the project’s root folder: parent_pom.xml and multi_env_pom.xml. Creating a parent POM is easily achieved by assigning the value pom to the packaging coordinate element, as shown in Listing 9-31.

The child POM multi_env_pom.xml inherits from parent_pom.xml by referencing the file in the parent element. All three coordinate elements must have the same values as defined in the parent POM. Listing 9-32 shows an example parent element; a combined sketch of both files follows.
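
Listings 9-31 and 9-32 are not reproduced here, but the relevant fragments of both files look roughly like the following sketch; the coordinate values are illustrative, so check the demo project for the actual ones.

  <!-- parent_pom.xml: the packaging coordinate is set to pom -->
  <groupId>com.example.apexdemo</groupId>
  <artifactId>orclapex-maven-demo-parent</artifactId>
  <version>1.0</version>
  <packaging>pom</packaging>

  <!-- multi_env_pom.xml: the parent element references the parent POM by file name -->
  <parent>
    <groupId>com.example.apexdemo</groupId>
    <artifactId>orclapex-maven-demo-parent</artifactId>
    <version>1.0</version>
    <relativePath>parent_pom.xml</relativePath>
  </parent>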

The goal of inheritance here is to standardize the build configuration across multiple environments. All Maven plugin definitions are moved to a central place in the parent POM under the pluginManagement element. The child POM will then be able to determine what and how plugins should be executed for each environment. This approach prevents the duplication of plugin definitions and improves reusability and maintainability. Take, for example, the Liquibase plugin definition, shown in Listing 9-33, which includes the same configuration properties as in the pom.xml file. The only difference is that most property values now contain variable substitution strings.

The ${property.name} syntax is a reference to a property defined in the child POM. These properties make it possible to dynamically reference plugins with different configuration settings. The technique of replacing substitution strings at runtime is called interpolation in Maven terms.
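
Listing 9-33 is not shown here, but the interpolated plugin definition in the parent POM boils down to something like this sketch; the liquibase.* property names are assumptions, not necessarily the names used in the demo project.

  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-maven-plugin</artifactId>
        <version>3.3.2</version>  <!-- version is illustrative -->
        <configuration>
          <url>${liquibase.url}</url>
          <username>${liquibase.username}</username>
          <password>${liquibase.password}</password>
          <changeLogFile>${liquibase.changeLogFile}</changeLogFile>
        </configuration>
      </plugin>
      <!-- the other plugin definitions follow the same pattern -->
    </plugins>
  </pluginManagement>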

Build Profiles

The child POM includes a build profile for each environment in the project setup you have to build to. Build profiles divide a POM in distinct parts, making it possible to specify environment-specific properties and references to plugins defined in the parent POM. The technique of build profiles allows you to build the project to different environments from a single POM file. Listing 9-34 shows the definition of the dev build profile.

Every build profile gets an ID assigned, which is typically named after the environment it represents. You will use these IDs in combination with the -P command-line option. As an example, consider the following Maven command:

mvn -f multi_env_pom.xml -P tst compile

This command activates the tst build profile in the multi_env_pom.xml POM file and executes all plugin goals that fall under the compile phase. You can activate multiple build profiles by entering a comma-delimited list of profile IDs. Setting the activeByDefault element to true within a build profile automatically activates that profile when the -P option is not provided.
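
Assuming an acc profile has been defined for the acceptance environment, activating two profiles at once would look like this:

  mvn -f multi_env_pom.xml -P tst,acc compile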

The properties section defines the substitution strings that you have already seen in the parent POM’s pluginManagement section. The plugin configuration properties in the parent POM will be replaced at runtime by the property values defined in the activated build profile.

The build element references the plugins from the parent POM based on groupId and artifactId. These references avoid duplication and keep plugin definitions centralized in one place. Build profiles do not have to execute the same plugins. Table 9-2 shows which build tasks are executed per environment (a sketch of one such profile follows the table).

Table 9-2. An Overview of What Build Tasks Are Being Executed per Environment

Tab9-2
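
As an illustration, a build profile for the test environment could look roughly like the following sketch; the property names, plugin coordinates, and the exact set of referenced plugins are assumptions and depend on Table 9-2.

  <profile>
    <id>tst</id>
    <properties>
      <!-- these values feed the ${...} placeholders in the parent POM -->
      <liquibase.url>jdbc:oracle:thin:@tst-server:1521/orcl</liquibase.url>
      <liquibase.username>tst_apex_maven_demo</liquibase.username>
      <liquibase.password>secret</liquibase.password>
      <liquibase.changeLogFile>src/main/database/changelog.xml</liquibase.changeLogFile>
      <apex.workspaceName>TST_APEX_MAVEN_DEMO</apex.workspaceName>
      <apex.appId>1010</apex.appId>
    </properties>
    <build>
      <plugins>
        <!-- references to plugins defined in the parent POM; version and
             configuration are inherited from pluginManagement -->
        <plugin>
          <groupId>com.example.apex</groupId>
          <artifactId>orclapex-maven-plugin</artifactId>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
        </plugin>
      </plugins>
    </build>
  </profile>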

The Liquibase Maven plugin is not referenced in any of the build profiles in the multi_env_pom.xml file. Instead, it is referenced in the build element of the child POM itself. Plugins defined at this level are executed before the plugins from the build profiles that belong to the same phase. Thus, Liquibase will always be executed first, followed by any of the plugins defined in the activated build profile. Listing 9-35 shows how to reference the parent POM’s Liquibase Maven plugin.
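
A minimal sketch of such a reference, assuming the Liquibase plugin definition lives in the parent POM's pluginManagement section:

  <build>
    <plugins>
      <plugin>
        <!-- runs for every profile, before any profile-specific plugins -->
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>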

One Code Base to Rule Them All

All environments are being provisioned from a central code repository. It is the single source of truth for the development team and contains, at a minimum, all the artifacts required to completely build and deploy the software product. The code repository concept enables you to take advantage of version control, something that should be part of every software development project, even if you are working all by yourself. Source control products include several essential features that are of great help for any developer.

  • Tracking file changes and history
  • Reverting to an earlier version
  • Merging files
  • Branching and tagging versions of the software product
  • Backing up and restoring the code base
  • Sharing the project’s source code and configuration with other developers

Even though the APEX application export file is not source control friendly, it is still worth keeping your projects under version control for the other types of files. The thick database approach, for example, inevitably leads to a significant number of SQL files, which you will want to keep track of. It is also a good idea to put your static files under source control, especially when they are stored on the web server.

The most popular version control system these days is Git. It is a free and open source product designed to handle everything from small to large projects. The basic usage of Git is pretty straightforward, and online services such as GitHub or Bitbucket make it easily accessible and understandable. The demo project, for example, is hosted on GitHub, a web-based Git repository hosting service that makes it really simple to get started with Git thanks to a user-friendly graphical interface. Figure 9-12 shows the desktop version of GitHub for Mac.

9781484204856_Fig09-12.jpg

Figure 9-12. The desktop version of GitHub for Mac
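
If you prefer the command line over a GUI client, the day-to-day Git workflow boils down to a handful of commands. The following minimal sketch assumes you have push access to your own copy of the repository; the file path and commit message are illustrative.

  git clone https://github.com/nbuytaert1/orclapex-maven-demo
  cd orclapex-maven-demo
  # edit files, then record and share the changes
  git add src/main/database/packages/bl_user_registration.sql
  git commit -m "Add password strength validation"
  git push origin master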

It is also a good idea to work directly on the files in the code repository. Komodo Edit, Atom, and Sublime Text are examples of advanced text editors that can boost your development productivity.

Jenkins

Jenkins is an open source continuous integration (CI) tool in which you can define and monitor repeated jobs, such as building a software product. Continuous integration refers to the practice of frequently pushing small amounts of development effort to the main development line, resulting in a fully automated and self-testing software build. This technique helps the development team to quickly identify problems with the code base because every commit to version control triggers a build that verifies that nothing has been broken. CI is broadly adopted in most mainstream development technologies, but it has never been fully explored in Oracle APEX development.

Before starting with Jenkins, you need to create an isolated environment where you can build the project as often as needed without any restrictions. That is why you will introduce the CI environment to the project’s environment setup. The multi_env_pom.xml file includes the ci build profile, shown in Listing 9-36, which is activated when building to the CI environment.

The build profile does not include a build element because you intend to execute only Liquibase, which is defined in the main build element of the POM itself. Other Maven plugins are unnecessary in this environment since they cannot give you any important feedback. The import of the APEX application export file, for example, either works or does not work; only a configuration problem can cause it to fail. The purpose of the CI build is to frequently integrate small pieces of development work with the aim of giving the project team constant feedback on the state of the code base.
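
Listing 9-36 is not reproduced here, but the ci profile boils down to something like the following sketch; the property names and values are illustrative.

  <profile>
    <id>ci</id>
    <properties>
      <!-- only the Liquibase properties are needed in this environment -->
      <liquibase.url>jdbc:oracle:thin:@ci-server:1521/orcl</liquibase.url>
      <liquibase.username>ci_apex_maven_demo</liquibase.username>
      <liquibase.password>secret</liquibase.password>
      <liquibase.changeLogFile>src/main/database/changelog.xml</liquibase.changeLogFile>
    </properties>
    <!-- no build element: only the Liquibase plugin from the main build element runs -->
  </profile>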

Installation

The installation process for Jenkins is relatively simple. Simply navigate to http://jenkins-ci.org/ and download the latest release for your specific operating system. Then follow the appropriate installation guidelines to get Jenkins up and running. Jenkins is a Java web application that runs on port 8080 by default. The home page is accessible at http://servername:8080 once Jenkins is successfully running.

Image Note  It is possible to change Jenkins’ default port number. Please review Jenkins’ documentation for any further details on this matter.
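
If you run the WAR file directly instead of using a native installer package, starting Jenkins is a one-liner; the alternative port number below is just an example.

  java -jar jenkins.war                  # starts Jenkins on the default port 8080
  java -jar jenkins.war --httpPort=9090  # starts Jenkins on an alternative port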

If you are using Git as your version control system, you will need to install the Git Plugin for Jenkins. Follow these steps to do so:

  1. Open the Manage Jenkins section via the Jenkins home page.
  2. Select Manage Plugins.
  3. Click on the Available tab.
  4. Enter the search term git plugin in the Filter field.
  5. Install the Git Plugin.

Creating a Build

In Jenkins, you will create a build for every environment in your project’s setup. The configuration of such a Jenkins build is just a matter of correctly setting a series of properties. To get a better view of what sort of properties you have to set, it is important to understand what happens behind the scenes of a Jenkins build.

  1. The first step is to locate the project’s source code. It is possible to point to a directory on the local file system if you are not using any version control system. For the demo project, however, you will connect each Jenkins build to the public GitHub repository.
  2. The Jenkins build will then locally check out a specific branch from the version control system.
  3. Jenkins executes Maven to start the build and lets you specify the appropriate POM file and options.

The following steps outline how to create the Jenkins build for the CI environment:

  1. Click the New Item link on the Jenkins home page.
  2. Enter a name for the build in the Item Name input field.
  3. Select Maven project as project type.
  4. Click the OK button, which will take you to the build’s configuration page.
  5. Select and configure the appropriate version control system under the Source Code Management section for your project. Select None if you have no source control applied. Here is an overview of the configuration properties for the demo project, which uses Git as VCS:
    1. Repository URL: https://github.com/nbuytaert1/orclapex-maven-demo
    2. Credentials: nickbuytaert/******
    3. Branch Specifier: */master
  6. Under the Build Triggers section, select the Poll SCM option and enter */2 * * * * in the Schedule item to check for changes every two minutes.
  7. Under the Build section, enter the Root POM by pointing to the multi_env_pom.xml file, and specify the following Goals and Options: -P ci compile.
  8. Optionally, enable E-mail Notification under Build Settings.

The SCM polling setting causes the CI build to run automatically whenever new commits have been pushed to the version control system. The build executes Maven and activates the ci build profile in the multi_env_pom.xml file. This results in Liquibase being executed to verify that nothing has broken the build. Figure 9-13 shows the Jenkins home page after creating a build for every environment.

9781484204856_Fig09-13.jpg

Figure 9-13. The Jenkins home page with an overview of all builds

The build configuration for the other environments is similar to the CI build configuration. There are just three differences you have to keep in mind.

  • The Branch Specifier under Source Code Management for the production build will not be the master branch. You typically specify a branch or tag that has been declared production-ready.
  • Do not define a build trigger since all other builds will be executed manually in Jenkins.
  • Change the build profile in the Goals and Options setting according to the environment you want to build.

Summary

Lifecycle management in Oracle Application Express is definitely a challenging exercise. This chapter has introduced a solution that combines a number of open source products with the aim of gaining more control over the development and deployment lifecycle. You have to take some limitations into account, but I do believe that the described solution can boost the quality and efficiency of APEX projects.

Keep in mind that this chapter has covered the build automation tasks for only a basic APEX application. Additional build tasks can quickly make their way into a project’s build flow. Including automated tests, for example, is an excellent way to catch bugs faster and more efficiently. Luckily, you can take advantage of the central Maven repository to easily integrate new tasks in the build process.
