This chapter is a bit of a catch-all. It covers some of the common issues you might encounter when running your applications in a production environment. You'll find a number of techniques and features that can make managing your applications a bit easier.
One very commonly recurring question regarding APEX is how you can provide a "nice" URL for your end users to access your application. In other words, rather than the user needing to type the following to access the home page (page 1) of application 101:
http://yourdomain.com:7777/pls/apex/f?p=101:1
you can give them a URL like this:
http://yourdomain.com/accounts.html
or perhaps the following:
http://accounts.yourdomain.com
As you should expect by now, there are actually a few different ways to achieve this goal:
Use a location redirect
Use frames
Use Apache mod_rewrite
Proxy requests
The following sections discuss each of these techniques.
You can create an HTML file that contains a redirect to your APEX application. For example, suppose we create a file called buglist.html with the following content:
<META HTTP-EQUIV="Refresh" CONTENT="0; URL=/pls/htmldb/f?p=101:1">
This code causes the browser to redirect to the specified URL (our APEX application) after the specified period, which in this case is 0 seconds, or immediately.
One thing to keep in mind when using the redirection method is that if users press the browser's Back button immediately after being redirected, they will return to the page containing the redirect, which will promptly send them back to the APEX application again. We will review some more elegant solutions to URL management shortly.
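If you do want to stay with a simple redirect page, one workaround for the Back-button problem (a sketch of ours, not a feature of APEX itself) is to perform the redirect with JavaScript's location.replace(), which replaces the redirect page's entry in the browser history so the Back button skips over it:

```html
<html>
<head>
<title>Buglist Application</title>
<script type="text/javascript">
// replace() swaps the current history entry, so the Back button
// skips this page instead of bouncing the user forward again
location.replace("/pls/htmldb/f?p=101:1");
</script>
<!-- fallback for browsers with JavaScript disabled -->
<meta http-equiv="Refresh" content="0; URL=/pls/htmldb/f?p=101:1">
</head>
<body></body>
</html>
```

The meta tag remains as a fallback, so users without JavaScript still reach the application.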
You need to place the buglist.html file somewhere that is accessible to the web server. You can place it anywhere you like, but we're going to put it in the root location of the web server, which you can find by looking for the DocumentRoot directive in your Apache configuration (the httpd.conf file), as shown in Listing 15-1.
Listing 15-1. Searching for the DocumentRoot Directive
[jes@pb ~]$ cd $ORACLE_HOME
[jes@pb ohs]$ cd Apache/Apache/conf
[jes@pb conf]$ grep DocumentRoot httpd.conf
DocumentRoot "/Users/oracle/service/u01/app/oracle/product/OHS/Apache/Apache/htdocs"
Here, we changed the current working directory to the value of ORACLE_HOME. Remember that this is the ORACLE_HOME assigned to your OHS or Internet Application Server (IAS) installation, not the ORACLE_HOME for your database. Then we changed to the configuration directory for the Apache installation and used the grep command to search for the DocumentRoot directive in the httpd.conf file. In our case, the document root is $ORACLE_HOME/Apache/Apache/htdocs; your installation might differ.
After you copy the buglist.html file into this directory, you can use the following URL to access the file from your browser:
http://127.0.0.1:7780/buglist.html
As soon as you do this, the URL in your browser will be changed to http://127.0.0.1:7780/pls/htmldb/f?p=101:1 as a result of the redirect.
Thus, we have provided the end user with a nicer shortcut to use for the URL to access the application; however, the typical APEX-style URL will still be displayed in the browser's address bar.
In HTML, a frame is an element that allows you to include content from another HTML document inside the main HTML document, effectively letting you break up the main document into separate areas (frames) that receive their source from different HTML documents.
So, how can you use frames to give the end users a nicer URL? Let's look at an example. Suppose you modified the buglist.html file with the code shown in Listing 15-2.
Listing 15-2. Contents of the buglist.html File
<html>
<head>
<title>Buglist Application</title>
</head>
<frameset rows="100%,*" border="0">
  <frame src="http://192.168.56.101:8080/pls/apex/f?p=108:1" frameborder="0" />
  <frame frameborder="0" noresize />
</frameset>
</html>
If you now access the same URL, you should see the application as though you had used an APEX-style URL to access it, as shown in Figure 15-1.
Notice how the address bar in Figure 15-1 no longer shows the APEX-style URL. Even if you navigate to different pages using the tabs and links, the URL shown in the browser will not change. Depending on your exact requirements, you might find this a slight disadvantage of using frames, since even if you follow a link in your application to another web site, the address bar will still display the same URL—there is no indication in the address bar that the user has moved away from that URL.
In this example, we have Apache 2.2.14 running on our local machine on port 80. Browsers do not display the port number when it is the default for the scheme (80 for HTTP, 443 for HTTPS). The URL shown in our browser, http://localhost/buglist.html, tells you only that Apache is running on our local machine on port 80 and that we're accessing a file called buglist.html; the Buglist application itself could be anywhere. When you use another port, such as 7777, the browser will show the port number.
You can also make the URL a bit more traditional. Rather than have the buglist.html file in the document root, you can create a subdirectory called buglist, move the file into that subdirectory, and rename it to index.html, as shown in Listing 15-3.
Listing 15-3. Creating a Subdirectory in htdocs
[jes@pb htdocs]$ mkdir buglist
[jes@pb htdocs]$ mv buglist.html buglist/index.html
Now you can access the application by using the following URL:
http://127.0.0.1/buglist/
You can use this URL because the web server automatically uses the index.html file in that directory as the default file to serve if no file is specified. In other words, this URL is equivalent to the following:
http://127.0.0.1/buglist/index.html
However, you might find the previous URL a bit more pleasing to the eye than one that has to spell out the index.html file.
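Incidentally, the file name Apache treats as a directory's default is governed by the DirectoryIndex directive. The line below is a sketch of what to look for (or add) in httpd.conf if your installation uses a different default:

```apache
# serve index.html (or index.htm as a fallback) when only a directory is requested
DirectoryIndex index.html index.htm
```

Any file name listed there would work as the default document in the buglist subdirectory.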
Using Apache mod_rewrite is definitely a bit more complex than the previous two methods, but it also gives you a lot more control and flexibility. This method relies on the mod_rewrite module for Apache to rewrite the incoming URL and modify it dynamically to point to your application.
To use this method, you must first ensure that the mod_rewrite module is included in your Apache configuration and is also enabled. Check that you have a line similar to the following in your main Apache configuration file (typically $ORACLE_HOME/Apache/Apache/conf/httpd.conf):
LoadModule rewrite_module libexec/mod_rewrite.so
By default, the mod_rewrite module is shipped with the standard OHS distribution, so you should be able to simply add this line to your configuration. If you are using Oracle Database 11g or Oracle XE, and you're using the embedded PL/SQL gateway rather than an external HTTP server, you can still use this method by using an external HTTP server and proxying requests to the gateway (as discussed in the next section, "Proxying Requests").
You can enable the mod_rewrite module by including the following line in the same configuration file:
RewriteEngine On
Now you can add some extra directives to the configuration file to perform the actual rewrite logic, as in this example:
RewriteRule ^/buglist$ http://127.0.0.1:7780/pls/apex/f?p=108:1 [R=301]
You will need to restart the web server before this rule will take effect. After you restart the web server, you will be able to use the following URL to access the application:
http://127.0.0.1:7780/buglist
Note that the rewrite rule will not work if you specify a trailing slash (as in http://127.0.0.1:7780/buglist/) because the rewrite rule uses a regular expression to match the requested URL: ^/buglist$. This pattern matches only a URL path consisting of exactly a forward slash followed by the text buglist (the ^ and $ anchor the start and end of the path). If you wanted to make the trailing slash mandatory, you would modify the rewrite rule so that it reads as follows:
RewriteRule ^/buglist/$ http://127.0.0.1:7780/pls/apex/f?p=108:1 [R=301]
The rewrite rule has the following format:
RewriteRule url-pattern new-url [flag,...]
The first parameter after the RewriteRule directive is a regular expression to match against incoming requested URLs. The second parameter is the new URL that should be used if the url-pattern parameter matches the requested URL. You can also pass a number of flags to the rule; in this example, we are using R=301. The purpose of the R=301 flag is to make the web server return the HTTP-301 response code to the browser, which is the code used for a permanent redirect, causing the browser to redirect to the new URL. Using this permanent redirect code should (in theory) enable robots and web spiders to learn that the new URL should be your APEX URL, rather than the shortened URL (however, our experience has been that this is not always as straightforward as it should be).
Rather than using a permanent redirect, you might choose to cause a temporary redirect using the HTTP-302 response code, by changing the [R=301] to [R=302]. You would do this if you were temporarily changing the page that the url-pattern should redirect to, rather than making a permanent change. This is a good way to temporarily take your applications offline for routine maintenance, for example.
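For example, a maintenance setup might temporarily point the rule at a static holding page. This is a sketch; the maintenance.html file is a hypothetical page of your own, placed in the document root:

```apache
# temporary (302) redirect while the application is offline;
# restore the original f?p=... target when maintenance is done
RewriteRule ^/buglist$ http://127.0.0.1:7780/maintenance.html [R=302]
```

Because the redirect is temporary, browsers and search engines should continue to treat /buglist as the canonical address.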
An extension to this method is modifying the rewrite rule so that you can handle pointing a different domain name at your application. For example, rather than using http://127.0.0.1:7780, you might want to use http://buglist.yourdomain.com. In this case, you first must make sure that the Domain Name System (DNS) records for the domain (buglist.yourdomain.com) are correctly set up so that the domain points to the machine that is running your web server. In other words, when the user enters http://buglist.yourdomain.com into a browser, the user's machine will perform a DNS query to see to which web server (that is, which IP address) the request needs to be sent.
In most medium-sized companies, someone will have the responsibility of configuring the DNS settings for your domain. If you are using this technique for a domain that you have registered on the Internet, your name registrar will usually provide a web control panel where you can configure the DNS settings for the domain.
As an example, we have made some changes to our DNS configuration so that the buglist.localdomain address points to the IP address 192.168.1.7, which is on our local test network. We have used the Unix dig command to verify that the domain name is resolvable via DNS and also that it points to the correct IP address, as shown in Listing 15-4.
Listing 15-4. Using dig to Verify the DNS Configuration
[jes@pb ~]$ dig buglist.localdomain

; <<>> DiG 9.3.4 <<>> buglist.localdomain
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1122
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;buglist.localdomain.           IN      A

;; ANSWER SECTION:
buglist.localdomain.    86400   IN      A       192.168.1.7

;; Query time: 51 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; MSG SIZE  rcvd: 49
Next, we need to modify the httpd.conf file to enable a virtual host for that domain name. A virtual host is a way of serving multiple domain names (and corresponding web sites) from a single Apache configuration. There are different ways to configure virtual hosts, depending on whether you have a single IP address or multiple IP addresses available. For this example, we are using a single IP address and will have multiple virtual hosts listening on the same IP address. Listing 15-5 shows the VirtualHost entry in our Apache configuration for this domain.
Listing 15-5. Using a VirtualHost Entry
<VirtualHost *>
    ServerName buglist.localdomain
    ServerAlias buglist.localdomain
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
</VirtualHost>
We have removed the previous RewriteEngine On directive from the main body of the httpd.conf file, since we just need to enable it locally within this new VirtualHost section. We also moved the RewriteRule directive so that it is now contained within the VirtualHost section.
If a request is now made to the web server where the requested domain name is buglist.localdomain, the directives within this VirtualHost section will be used. If the domain name does not match the settings in this VirtualHost section, they will not be used.
If we now enter http://buglist.localdomain into a browser's address bar, the request will be sent to the web server (since the DNS resolves this domain name to the IP address of the web server). The Apache web server will then recognize that the domain name matches this VirtualHost section and will apply the RewriteRule directive, which states that if the URL matches ^/$ (nothing other than the trailing slash on the end of the domain name), the web server should issue an HTTP-301 redirect to the browser to point to our APEX application.
One of the benefits of this method is that you can include many other directives in the VirtualHost section to process different domains separately. For example, if you want to place static files (images, CSS, and so on) on the web server file system, rather than storing them in the default location specified by the DocumentRoot directive you saw earlier, you can assign a new DocumentRoot for each VirtualHost section, like the following:
<VirtualHost *>
    ServerName buglist.localdomain
    ServerAlias buglist.localdomain
    DocumentRoot /www/buglist
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
</VirtualHost>
Here, we have specified that the root directory for the buglist.localdomain virtual host is the /www/buglist directory on the web server file system (or possibly on networked storage). This means we could create a subdirectory in the /www/buglist directory called images and upload a file called logo.gif. We would then be able to refer to the logo.gif file via a relative URL within the application, as follows:
<img src="/images/logo.gif" />
Using this approach makes it easy to maintain static files and resources on a domain basis. For example, if you are in the process of testing a whole redesign of your web site with new images (and perhaps JavaScript and CSS), rather than having to copy all those new files over the old ones, you could create a new virtual host for your beta test, modifying the DocumentRoot directive to point to the directory containing all your new files, like the following:
<VirtualHost *>
    ServerName beta.buglist.localdomain
    ServerAlias beta.buglist.localdomain
    DocumentRoot /www/buglist/beta
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
</VirtualHost>
This would enable you to easily test your application with the new files without needing to modify the old version of the application.
There are many more Apache directives that you might find useful to include in your VirtualHost sections. We encourage you to look through the Apache documentation.
In the previous section, we discussed how you can use the mod_rewrite module to rewrite the URL. However, you can do much more than just rewriting the URL. You can actually request the content from another web server, and then pass that content back to the user's browser as though it had come from the original web server. This could be useful in a number of scenarios, such as the following:
Your end users cannot directly access your OHS/IAS for networking reasons.
You are using the built-in embedded PL/SQL gateway in Oracle XE or Oracle Database 11g, but require more advanced features than the gateway provides.
You wish to make your application available on the Internet, but you do not want to move your OHS/IAS so that it is visible from the Internet.
You are using the embedded PL/SQL gateway, but your security policy does not allow you to open a firewall route from the outside to the database.
In these cases, you can install a stand-alone Apache server and make it the server that receives the end users' requests. This Apache server will then proxy requests to the OHS/IAS or embedded PL/SQL gateway by using some special Apache directives.
As an example, consider the scenario shown in Figure 15-2. Here, the end users are accessing the application over the Internet, but we wish to hide the database and OHS from direct access, so they are behind the firewall. We have situated the proxy Apache server outside the firewall, but have configured the firewall to allow the proxy Apache server to talk to OHS. (The details of how you would secure a configuration like this are up to your network administrator, DBA, and everyone else concerned.) We are going to allow requests to be made via the proxying server to the OHS and have the responses sent back again.
This technique relies on the mod_proxy module, so you will need to include that module in your Apache configuration, in the same way that you include the mod_rewrite module. You can either compile the module into your Apache binary file or include it as a module. In this example, we use the module method, so we add the following line to the httpd.conf file:
LoadModule proxy_module libexec/mod_proxy.so
Now we can create a VirtualHost section, as we did in the previous section, as shown in Listing 15-6.
Listing 15-6. VirtualHost Entry with Proxying Directives
<VirtualHost *>
    ServerName buglist.foo.com
    ServerAlias buglist.foo.com
    DocumentRoot /www/buglist
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
    ProxyPass /pls/apex http://ohs:7777/pls/apex
    ProxyPassReverse /pls/apex http://ohs:7777/pls/apex
    ProxyPass /i http://ohs:7777/i
    ProxyPassReverse /i http://ohs:7777/i
</VirtualHost>
First, we have the directives that define which domain name and URL the VirtualHost section applies to, and also the DocumentRoot for this virtual host.
ServerName buglist.foo.com
ServerAlias buglist.foo.com
DocumentRoot /www/buglist
Next, we have the main rewrite rules, as before.
RewriteEngine On
RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
Any requests made to http://buglist.foo.com will be redirected to http://buglist.foo.com/pls/apex/f?p=108:1. This redirect will still be handled by the proxying web server. This is within the same domain, so the DNS will still resolve to the IP address for the proxying Apache server.
Finally, we have the proxying directives:
ProxyPass /pls/apex http://ohs:7777/pls/apex
ProxyPassReverse /pls/apex http://ohs:7777/pls/apex
ProxyPass /i http://ohs:7777/i
ProxyPassReverse /i http://ohs:7777/i
You can see that there are actually two different directives: ProxyPass and ProxyPassReverse. As the names suggest, these directives are responsible for the proxying in different directions.
The first directive checks to see if the URL contains the /pls/apex location. If it does, the request is passed to http://ohs:7777/pls/apex, a URL that corresponds to the OHS web server. In other words, if we make a request to http://buglist.foo.com/pls/apex/f?p=108:1, then the proxying server makes this same request to the OHS web server and passes the content returned from the OHS back to the user's browser.
The second directive, ProxyPassReverse, is specified in a similar way. However, the purpose of this directive is to hide from the user's browser the fact that a proxy was used. If we did not perform this step, the content might contain references to URLs from the OHS rather than the proxying Apache server. For example, returned URLs must not contain the OHS hostname, since that hostname is not resolvable from the user's machine (because it is on our internal network). Also, the port number used for the OHS (port 7777) needs to be hidden; otherwise, the user's browser would try to connect to the proxying web server on port 7777. In other words, these directives allow the proxying server to act as a gateway or conduit between the user's browser and the OHS. However, you can also use directives such as DocumentRoot so you can store static files on the proxying web server rather than the OHS.
One thing you need to watch out for when you perform proxying like this is that as far as the OHS/IAS/embedded gateway is concerned, all of the web requests are coming from the proxying Apache server, rather than from the user's browser. So, if you need to know which hostname was used in the web request, rather than the value that was specified in the ProxyPass directive, you may need to add the following line to your configuration:
ProxyPreserveHost On
This will enable the OHS/IAS to evaluate the value used in the HTTP Host header correctly. In other words, it will be able to determine that the request was made for buglist.foo.com, rather than ohs, which was specified in the ProxyPass directive.
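Putting that together, the VirtualHost section would gain one extra line. This is a sketch using the hostnames and ports assumed earlier in the chapter:

```apache
<VirtualHost *>
    ServerName buglist.foo.com
    ServerAlias buglist.foo.com
    # pass the original Host: header (buglist.foo.com) through to OHS
    ProxyPreserveHost On
    ProxyPass /pls/apex http://ohs:7777/pls/apex
    ProxyPassReverse /pls/apex http://ohs:7777/pls/apex
</VirtualHost>
```

With this in place, code on the OHS side that inspects the Host header (an APEX application reading the request hostname, for example) sees the public name rather than the internal one.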
Although Oracle is moving toward using the embedded PL/SQL gateway as the method of accessing APEX, we certainly do not see the benefits of using an external proxying server decreasing—in fact, quite the opposite. Until the PL/SQL gateway supports all (or the majority) of the features that are present in Apache, such as HTTP compression, there will always be a reason to use a proxying web server in this way. This is also a good way to conceal the fact that your application is running in APEX.
Another common issue is ensuring that applications are backed up, so they can be restored in case of disaster. We are talking specifically about the application here, not the schema objects or data that the application uses. For data and database backups, you should already have a well-defined business policy (if you don't, then you need to establish one!).
Of course, there are a number of different ways in which you can perform application backups, including manual exports, automatic exports, and backups through the database.
You can export your application from the Application Builder interface. However, since exporting is a manual task, if you have a large number of applications, using this as a backup method is rapidly going to become laborious. Other drawbacks are that it is easy to forget to make a backup and that you need to come up with a manageable workflow to enable you to archive versions of application exports.
We do not advise that you rely on manual exports as your backup policy for your application. However, they can be extremely useful for making a backup just before you do something that you consider risky.
One of the best features of APEX is that it runs entirely inside the database. This has the obvious side effect that if you back up your database, you have also backed up your applications, since each application is stored as metadata within a schema in the database. As long as you have a recent backup of the database, your applications will also be contained within that backup (assuming that it is either a full database backup or you also backed up the tablespace/schemas associated with APEX).
If you are using RMAN to manage your database backups, it can be extremely easy to restore the tablespace/schema in which APEX is installed, thereby reverting all of the applications back to a prior state (although if you just wanted to restore a single application, this method would not be suitable).
The Oracle database also has another great feature known as Flashback, which enables you to go back to a prior point without needing to perform a full restore from backups. To demonstrate how you can use this feature to recover from a problem with your APEX application, imagine that a developer has gone into SQL Workshop and accidentally executed the following command:
delete from apex_application_files
This command deletes files from the underlying table (apex_application_files is a view) that are associated with the current workspace. The developer only meant to delete a single file but forgot to include a where clause restriction. Even worse, the SQL Workshop environment has automatically committed the transaction, so the developer cannot perform a rollback. So, what now?
If you had backups of those files, you could upload them back into the APEX environment. But if there were hundreds or thousands of those files, or you didn't have external backups of some of the files, you would be in trouble.
A solution is to take advantage of the Oracle Flashback technology to return the database to the state it was in before the files were deleted. You can use this technique for any situation where you need to get the database back to a prior point in time without reaching for the backups. For example, if you make the mistake of deleting the wrong application and do not have a recent backup of that application, you can use this Flashback method to take the database back to the point before you deleted the application.
As an example, we're going to simulate that disastrous delete command (in fact, we're not just going to simulate it, we're going to actually do it!). We'll use SQL*Plus for this example. First, we connect, via SQL*Plus, to the schema associated with our workspace, as follows:
apexdemo@DBTEST> select count(*) from apex_application_files;

COUNT(*)
--------
       0
By default, we cannot see the files in that view, since we are not running the query inside the APEX environment (that is, our workspace is not defined in SQL*Plus). We can fix that as follows:
apexdemo@DBTEST> select
  2  wwv_flow_api.get_security_group_id
  3  from dual;

GET_SECURITY_GROUP_ID
---------------------
                    0

apexdemo@DBTEST> exec wwv_flow_api.set_security_group_id;

PL/SQL procedure successfully completed.

apexdemo@DBTEST> select
  2  wwv_flow_api.get_security_group_id
  3  from dual;

GET_SECURITY_GROUP_ID
---------------------
           1.1830E+15
Note that you need to execute only wwv_flow_api.set_security_group_id. In this example, we used wwv_flow_api.get_security_group_id to show that before the wwv_flow_api.set_security_group_id call, the SGID was set to 0, and afterward it was set to the correct value.
Now we can re-query the apex_application_files view, as follows:
apexdemo@DBTEST> select count(*) from apex_application_files;

COUNT(*)
--------
       8
We can see that eight files have been uploaded for this workspace. Now we can simulate what the user did by deleting the files. However, before we do that, let's check the current SCN in another SQL*Plus session, since we'll need that later.
SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
                 3334372
This shows the current SCN is 3334372.
We can now delete the records, as follows:
apexdemo@DBTEST> delete from apex_application_files;

8 rows deleted.

apexdemo@DBTEST> commit;

Commit complete.

apexdemo@DBTEST> select count(*) from apex_application_files;

COUNT(*)
--------
       0
They're gone—deleted. How do we get them back? We could restore from backups, of course, but there are a couple of much easier ways. First, since we deleted records from only one table, we can execute a flashback query that retrieves the records we just deleted from the undo data retained by the database. Using the SCN from above, the flashback query would look like the following:
insert into apex_application_files
select * from apex_application_files as of scn 3334372
/
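If you did not happen to capture the SCN beforehand, a flashback query can also be expressed against a timestamp. This variant is a sketch; the 10-minute window is illustrative, and how far back you can flash depends on your undo retention settings:

```sql
-- re-insert the rows as they existed 10 minutes ago
insert into apex_application_files
select * from apex_application_files
  as of timestamp (systimestamp - interval '10' minute)
/
```

The AS OF TIMESTAMP form is often more convenient in practice, since few of us note down SCNs before making a mistake.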
This query re-populates the apex_application_files table using the undo data retained by the database. If we had done more damage to the database, we could flash the whole database back to a point in time before the damage was done. To flash the database back, we must first shut down the database and then mount it, as follows:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  452984832 bytes
Fixed Size                   779648 bytes
Variable Size             133175936 bytes
Database Buffers          318767104 bytes
Redo Buffers                 262144 bytes
Database mounted.
Now we use Flashback to go back to the state we were in before we deleted the files (SCN 3334372).
SQL> flashback database to scn 3334372;

Flashback complete.
So now we can open the database, but we also need to remember to reset the logs, since we have used Flashback (the current logs are no longer needed/valid).
SQL> alter database open resetlogs;

Database altered.
Now we can re-query the apex_application_files view, as follows:
apexdemo@DBTEST> exec wwv_flow_api.set_security_group_id;

PL/SQL procedure successfully completed.

apexdemo@DBTEST> select count(*) from apex_application_files;

COUNT(*)
--------
       8
And we see that the data is back!
You could be saying at this point, "So what? I could do that if I was using Java instead of APEX. All you've done is use Flashback technology to flash back the database."
To see what's so great about this, imagine for a moment that instead of performing a delete on apex_application_files, you are in the process of a massive, enterprise-wide change management task. Suppose you have just run 394 SQL scripts to upgrade your production application from version 3.41 to 3.42, and then you update your APEX application. Now imagine that something terrible happens, and it doesn't work in production (every developer has this experience at least once, right?). How easily can you regress your production environment back to the working environment you had before? With APEX, you can do this extremely easily by using the Flashback feature of the database. Not only will you go back to the working versions of your schema (the old versions of objects such as tables and their data), but because your APEX applications also live inside the database, you will automatically go back to the version of the application as it was at that point (that is, the working version).
Compare this with a traditional client/server solution or a middle-tier solution, where not only would you need to regress the database, but you would also need to regress the middle-tier applications back to the previous versions. You would also need to remember to revert back to the old versions of any configuration files that the applications might need. In our experience, the fewer things you need to do, the less chance there is of forgetting something.
Note that the usual caveats apply here. Don't test the sample code on your production system. The best advice we can give for critical tasks is this: read the documentation, read the documentation, read the documentation.
Also, note that we're not advocating that using the database Flashback feature should be your first course of action. You should consider using some of the other Flashback technologies, such as Flashback Table, where appropriate, since flashing back the entire database is desirable only in certain circumstances. However, if you need this feature, you've seen how easy it is to use.
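As a sketch of that lighter-weight option, Flashback Table can rewind a single ordinary table without touching the rest of the database. The my_files table here is a hypothetical table of your own (this approach is not suitable for the APEX metadata views themselves), and note that row movement must be enabled first:

```sql
-- row movement must be enabled before a table can be flashed back
alter table my_files enable row movement;

-- rewind just this table to the SCN captured earlier
flashback table my_files to scn 3334372;
```

You can equally flash a table back TO TIMESTAMP; either way, only that table's data is rewound, which is usually far less disruptive than a full database flashback.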
You may not be aware that tucked away in the APEX installation package that you downloaded are a couple of utilities (written in Java) that you can use to export your applications. In fact, the utilities can do much more than that. You can use them to export all the applications within a workspace or even all the applications within the instance.
If you have already deleted the APEX installation files after successfully installing APEX, you'll need to download APEX again, since the utility files are not available individually.
The files can be found in the utilities subdirectory of the root installation directory. We're using a Unix machine for the examples here; however, due to the portability of Java applications, they will also work quite happily on other operating systems—assuming that you followed the directions in the readme.txt file to set up your Java environment. If you already have Java installed, you should just need to set up your Java CLASSPATH environment variable, like the following:
[jes@pb ~]$ export CLASSPATH=.:${ORACLE_HOME}/jdbc/lib/ojdbc5.jar
But do read the readme.txt file in this directory. There have been some subtle changes in the use of APEXExport in that the JDBC drivers have changed. You now have the choice of either ojdbc5.jar or ojdbc6.jar. The difference between these drivers is that ojdbc5 is for use with JDK 1.5, whereas ojdbc6 is for use with JDK 1.6. The 11g database no longer supports JDK 1.4, so the ojdbc14.jar is no longer delivered with the database.
Many Linux distributions have Java 1.4 installed by default in the /usr/bin directory. The /usr/bin directory is also in the system path by default, so the 1.4 version of Java will be used when you run the java command. To determine which version of Java you have, use the following command:
[jes@pb ~]$ java -version
java version "1.6.0_25"
Java(TM) SE Runtime Environment (build 1.6.0_25-b06)
Java HotSpot(TM) Client VM (build 20.0-b11, mixed mode, sharing)
As you can see, we are running Java version 1.6.0_25. Since the JDBC drivers for 11g support only Java versions 1.5 and 1.6, APEXExport will work on our machine. If your machine is running Java version 1.4 or earlier, you will have to upgrade.
In Listing 15-7, we have set the current directory to the directory where we downloaded the APEX distribution, and then we perform a directory listing (using the ls -al command). We have highlighted the utilities directory in bold.
Example 15-7. Contents of the APEX Distribution Directory
[jes@pb apex]$ pwd
/home/oracle/downloads/apex
[jes@pb apex]$ ls -al
total 1076
drwxr-xr-x  8 oracle dba   4096 Mar  2 01:53 .
drwxr-xr-x  4 oracle dba   4096 Mar  2 01:17 ..
-rw-r--r--  1 oracle dba    857 Mar  2 01:53 afiedt.buf
-r--r--r--  1 oracle dba  16068 Apr 16  2010 apex_epg_config_core.sql
-r--r--r--  1 oracle dba    690 Sep 10  2008 apex_epg_config.sql
-r--r--r--  1 oracle dba   6831 Aug  4  2010 apexins.sql
-r--r--r--  1 oracle dba  14642 Apr  5  2010 apexvalidate.sql
-r--r--r--  1 oracle dba   1628 Apr  5  2010 apxchpwd.sql
-r--r--r--  1 oracle dba   2909 Apr  5  2010 apxconf.sql
-r--r--r--  1 oracle dba   7733 Nov  4 07:17 apxdbmig.sql
-r--r--r--  1 oracle dba  10582 Nov  4 07:17 apxdevrm.sql
-r--r--r--  1 oracle dba   3160 Apr  5  2010 apxdvins.sql
-r--r--r--  1 oracle dba   1028 Apr  5  2010 apxe101.sql
-r--r--r--  1 oracle dba    538 Mar  7  2008 apxe102.sql
-r--r--r--  1 oracle dba   2917 Apr  5  2010 apxe111.sql
-r--r--r--  1 oracle dba   8255 Apr 16  2010 apxldimg.sql
-r--r--r--  1 oracle dba   1478 Apr  5  2010 apxrelod.sql
-r--r--r--  1 oracle dba   4404 Apr  5  2010 apxremov.sql
-r--r--r--  1 oracle dba   4389 Aug  4  2010 apxrtins.sql
-r--r--r--  1 oracle dba     44 Apr 26  2007 apxsqler.sql
-r--r--r--  1 oracle dba   8939 Apr  5  2010 apxxemig.sql
-r--r--r--  1 oracle dba   1570 Apr  5  2010 apxxepwd.sql
drwxr-xr-x 11 oracle dba   4096 Nov 19 13:25 builder
-r--r--r--  1 oracle dba   4371 Apr  5  2010 catapx.sql
drwxr-xr-x  2 oracle dba  20480 Dec 17 09:41 core
-r--r--r--  1 oracle dba  93963 Nov  4 07:17 coreins.sql
-r--r--r--  1 oracle dba  12162 Nov  4 07:17 devins.sql
drwxr-xr-x 12 oracle dba   4096 Oct 22 11:01 doc
-r--r--r--  1 oracle dba   2400 Apr  5  2010 endins.sql
drwxr-xr-x 26 oracle dba  36864 Nov 10 17:07 images
-rw-r--r--  1 oracle dba 729625 Mar  2 01:42 install2011-03-02_1-23-27.log
-r--r--r--  1 oracle dba   1374 Apr  5  2010 load_trans.sql
drwxr-xr-x  2 oracle dba   4096 Nov 15 08:32 owa
drwxr-xr-x  4 oracle dba   4096 Nov 10 17:06 utilities
-rw-r--r--  1 oracle dba   4775 Jun  8  2010 welcome.html
Two Java utilities are available: APEXExport and APEXExportSplitter. Both reside in the oracle/apex subdirectory, as shown in Listing 15-8.
Example 15-8. APEXExport and APEXExportSplitter Utilities
[jes@pb utilities]$ ls -al
total 36
drwxr-xr-x 4 oracle dba 4096 Nov 10 17:06 .
drwxr-xr-x 8 oracle dba 4096 Mar  2 01:53 ..
-r--r--r-- 1 oracle dba 4197 Aug  4  2010 enable_sso.sql
drwxr-xr-x 2 oracle dba 4096 Nov 10 17:06 fop
drwxr-xr-x 3 oracle dba 4096 Nov 10 17:06 oracle
-r--r--r-- 1 oracle dba 4964 Nov  4 07:17 readme.txt
-r--r--r-- 1 oracle dba 2602 Apr  5  2010 reset_image_prefix.sql
[oracle@oim utilities]$ find . -print
.
./oracle
./oracle/apex
./oracle/apex/APEXExport.class
./oracle/apex/APEXExportSplitter.class
./enable_sso.sql
./fop
./fop/fop.war
./readme.txt
./reset_image_prefix.sql
Here, we used the find command to list the files contained in the oracle/apex subdirectory. Note that we did not change the working directory to the oracle/apex subdirectory, since we need to run the command from the utilities directory, using the command shown in Listing 15-9.
Example 15-9. APEXExport Usage Information
[jes@pb utilities]$ java oracle/apex/APEXExport
Usage APEXExport -db -user -password -applicationid -workspaceid -instance -expWorkspace
                 -skipExportDate -expPubReports -expSavedReports -expIRNotif -expTeamdevdata
                 -expFeedback -deploymentSystem -expFeedbackSince -debug
    -db:              Database connect url in JDBC format
    -user:            Database username
    -password:        Database password
    -applicationid:   ID for application to be exported
    -workspaceid:     Workspace ID for which all applications to be exported or the
                      workspace to be exported
    -instance:        Export all applications
    -expWorkspace:    Export workspace identified by -workspaceid or all workspaces
                      if -workspaceid not specified
    -skipExportDate:  Exclude export date from application export files
    -expPubReports:   Export all user saved public interactive reports
    -expSavedReports: Export all user saved interactive reports
    -expIRNotif:      Export all interactive report notifications
    -expFeedback:     Export team development feedback for all workspaces or identified
                      by -workspaceid to development or deployment
    -expTeamdevdata:  Export team development data for all workspaces or identified
                      by -workspaceid
    -deploymentSystem:  Deployment system for exported feedback
    -expFeedbackSince:  Export team development feedback since date in the format YYYYMMDD
    Application Example:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user scott -password tiger -applicationid 31500
    Workspace Example:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user scott -password tiger -workspaceid 9999
    Instance Example:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user system -password manager -instance
    Export All Workspaces Example:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user system -password manager -expWorkspace
    Export Feedback to development environment:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user scott -password tiger -workspaceid 9999 -expFeedback
    Export Feedback to deployment environment EA2 since 20100308:
       APEXExport -db candy.us.oracle.com:1521:ORCL -user scott -password tiger -workspaceid 9999 -expFeedback -deploymentSystem EA2 -expFeedbackSince 20100308
Since we did not use any parameters, the command simply output some default usage help information. As you can see, the usage and parameters are pretty intuitive. For example, if you wanted to export application 101, all you would need to do is pass in the database, account, and application details, as shown in Listing 15-10.
Example 15-10. Exporting an Application
[jes@pb utilities]$ java oracle/apex/APEXExport -db localhost:1521:dbtest -user apexdemo -password pass -applicationid 108
Exporting application 108
Completed at Thu Jan 25 03:25:26 BST 2007
[jes@pb utilities]$ ls -al
total 760
drwxr-xr-x  5 jes jes    170 Jan 25 03:25 .
drwxr-xr-x 21 jes jes    714 Jan 25 03:07 ..
-rw-r--r--  1 jes jes 382591 Jan 25 03:25 f108.sql
drwxr-xr-x  3 jes jes    102 Mar 14 17:45 oracle
-r--r--r--  1 jes jes   3747 Feb 27 14:35 readme.txt
Here, we pass in the database connection string in the format hostname:port:sid (localhost:1521:dbtest) and also specify the username (or schema) that is associated with the workspace in which the application resides. Finally, we pass the ID of the application we wish to export (108 in this example). The command produces some output while it executes, showing which application is being exported.
After the command has executed, you will see that a file named f108.sql is created, which is exactly the same sort of application export file that you would get if you used the Application Builder interface to export the application. In other words, you could now use the Application Builder interface to import this application into another workspace (perhaps on another machine) if desired. You can also import the application by running the SQL script file while connected as your APEX_040000 user if you prefer to perform the import at the command line (or perhaps as part of a batch import).
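If you need to export a handful of specific applications rather than just one, you can wrap the same command in a small shell loop. This is a sketch; the application IDs and connection details are illustrative values, and the leading echo lets you preview the commands before running them for real:

```shell
#!/bin/sh
# Export several specific applications in one pass.
# The IDs and connection details below are illustrative assumptions;
# remove the leading "echo" to actually run the exports.
APPS="101 108"
for app in $APPS; do
  echo java oracle/apex/APEXExport -db localhost:1521:dbtest \
       -user apexdemo -password pass -applicationid "$app"
done
```

Each iteration produces one f<id>.sql file in the current directory, just as the single-application invocation does.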
You can also export all of the applications for a particular workspace. Unfortunately, you need to do a little work here, because the APEXExport utility requires the workspace ID rather than the workspace name (yes, it would be a nice feature if we could just pass in the workspace name instead). You can get your workspace ID by running the following query in SQL Workshop:
select v('WORKSPACE_ID') from dual
Alternatively, you can execute the following commands in SQL*Plus:
apexdemo@DBTEST> select v('WORKSPACE_ID') from dual;

V('WORKSPACE_ID')
-----------------
                0

apexdemo@DBTEST> exec wwv_flow_api.set_security_group_id;

PL/SQL procedure successfully completed.

apexdemo@DBTEST> select v('WORKSPACE_ID') from dual;

V('WORKSPACE_ID')
-----------------
 1280317158978593
You can see that before the call to wwv_flow_api.set_security_group_id, the value of V('WORKSPACE_ID') is 0. After the call, you get the correct workspace ID, which you can now use with the APEXExport command, as follows:
[jes@pb utilities]$ java oracle/apex/APEXExport -db localhost:1521:orcl -user apexdemo -password pass -workspaceid 1280317158978593
Exporting Application 100:'Sample Application'
Completed at Fri Apr 08 15:01:20 CDT 2011
Exporting Application 101:'BUGLIST'
Completed at Fri Apr 08 15:01:22 CDT 2011
Exporting Application 102:'Theme Master'
As you can see, it took around two seconds to export three applications from our workspace, which is certainly quicker than doing it via the Application Builder interface. Also note in the following listing that each application is exported into its own separate file, rather than a single file:
[jes@pb utilities]$ ls -al
total 2736
drwxr-xr-x 4 oracle dba    4096 Apr  8 15:04 .
drwxr-xr-x 8 oracle dba    4096 Mar  2 01:53 ..
-r--r--r-- 1 oracle dba    4197 Aug  4  2010 enable_sso.sql
-rw-r--r-- 1 oracle dba  564711 Apr  8 15:01 f100.sql
-rw-r--r-- 1 oracle dba 2064229 Apr  8 15:01 f101.sql
-rw-r--r-- 1 oracle dba  121448 Apr  8 15:01 f102.sql
drwxr-xr-x 2 oracle dba    4096 Nov 10 17:06 fop
drwxr-xr-x 3 oracle dba    4096 Nov 10 17:06 oracle
-r--r--r-- 1 oracle dba    4964 Nov  4 07:17 readme.txt
-r--r--r-- 1 oracle dba    2602 Apr  5  2010 reset_image_prefix.sql
You can also export all of the applications in the entire instance, as shown in Listing 15-11.
Example 15-11. Exporting All Applications in the Instance
[jes@pb utilities]$ java oracle/apex/APEXExport -db localhost:1521:orcl -user system -password pass -instance
Exporting Application 100:'Sample Application'
Completed at Fri Apr 08 15:06:40 CDT 2011
Exporting Application 101:'BUGLIST'
Completed at Fri Apr 08 15:06:40 CDT 2011
Exporting Application 102:'Theme Master'
Completed at Fri Apr 08 15:06:41 CDT 2011
Exporting Application 103:'Global App'
Completed at Fri Apr 08 15:06:41 CDT 2011
Exporting Application 104:'BUGLIST_TABTEST'
Completed at Fri Apr 08 15:06:41 CDT 2011
Exporting Application 105:'BUGLIST1'
Completed at Fri Apr 08 15:06:41 CDT 2011
Exporting Application 106:'Theme Subscriber'
Completed at Fri Apr 08 15:06:41 CDT 2011
So we now have export files for all of the applications in our test instance. Note that this has exported the applications, but you would still need to manually export the workspace itself, if required.
You might be wondering how you know which workspace these export files belong to. In a disaster recovery situation, which workspace would each application need to be installed into? The information is contained within the application export file itself. For example, if we look at the f101.sql file, we find the following:
prompt Set Credentials...
begin
  -- Assumes you are running the script connected to SQL*Plus as the Oracle
  -- user APEX_040000 or as the owner (parsing schema) of the application.
  wwv_flow_api.set_security_group_id(p_security_group_id=>nvl(wwv_flow_application_install.get_workspace_id,1280317158978593));
end;
/
You can see that the file has a call to the same wwv_flow_api.set_security_group_id procedure that we used in our earlier SQL*Plus session, except in this case, the script is passing a value in the p_security_group_id parameter. This value (1280317158978593) is the same value that we obtained when we queried v('WORKSPACE_ID'). In other words, the script will install back into the same workspace if we execute it from SQL*Plus (assuming that workspace exists in the instance in which we install it).
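If you instead want the export to install into a different workspace, APEX 4.0 provides the apex_application_install package (the public interface to the wwv_flow_application_install calls you can see in the export file) to override these values before you run the script. The following is a sketch; the target workspace name TARGETWS is an illustrative assumption:

```sql
-- Run in SQL*Plus, before executing the application export script,
-- connected as the parsing schema or a suitably privileged user.
-- TARGETWS is an illustrative workspace name.
declare
  l_workspace_id number;
begin
  select workspace_id
    into l_workspace_id
    from apex_workspaces
   where workspace = 'TARGETWS';

  apex_application_install.set_workspace_id(l_workspace_id);
  apex_application_install.generate_application_id;
  apex_application_install.generate_offset;
end;
/
-- Now run the export file:
-- @f101.sql
```

With the workspace ID, application ID, and offset overridden in the session, the nvl() call in the export file picks up your values instead of the ones baked into the file.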
We can now use a Unix cron entry (a way of scheduling commands) to run the APEXExport command at predefined intervals. First, we create a Unix shell script (called backup_apex.sh), which wraps the APEXExport command, as shown in Listing 15-12.
Example 15-12. Contents of the backup_apex.sh Script
#!/usr/bin/bash
export CLASSPATH=.:${ORACLE_HOME}/jdbc/lib/ojdbc5.jar:/opt/local/apexbkup
cd /opt/local/apexbkup/
/usr/local/bin/java oracle.apex.APEXExport -db localhost:1521:dbtest -user apexdemo -password pass -workspaceid 1280317158978593
This script sets the CLASSPATH environment variable so that the APEXExport command can find the required Java libraries (in the same way that we had to set the CLASSPATH variable at the command line before).
[jes@pb apexbkup]$ ls -al
total 8
drwxr-xr-x  3 jes jes 512 May 12 15:01 .
drwx--x--x 10 jes jes 512 Jun  4 19:49 ..
-rwxr-xr-x  1 jes jes 223 May 12 15:00 backup_apex.sh
drwxr-xr-x  3 jes jes 512 May 12 14:46 oracle
So far, we have placed the backup_apex.sh script in /opt/local/apexbkup and have also copied the directory containing the APEXExport Java class to this directory. You could locate this in a different directory and modify the backup_apex.sh script so that it points to the correct directory, but we have done it this way to keep the example simple.
We can now run the backup_apex.sh script rather than having to type the full command line, as in the earlier example:
[jes@pb ~]$ /opt/local/apexbkup/backup_apex.sh
Exporting Application 100:'Sample Application v2.0'
Completed at Thu Jan 25 10:43:55 BST 2007
Exporting Application 101:'Sample Application v2.0'
Completed at Thu Jan 25 10:43:57 BST 2007
Exporting Application 102:'OJ API'
Completed at Thu Jan 25 10:44:00 BST 2007
.
.
. extra output omitted
The exported files will be located in the /opt/local/apexbkup directory, due to the cd /opt/local/apexbkup command in our script. You could modify this if you wished, or perhaps change the script so that it copies the files to another directory (perhaps creating a new directory for each day's exports so that it is easier to locate previous exports).
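A dated-directory variant of the script could look like the following sketch. The base directory and the (commented-out) connection details are illustrative assumptions; adjust them for your own environment:

```shell
#!/bin/sh
# Sketch: write each day's exports into a dated subdirectory so that
# previous exports are easy to find. BASE and the commented-out
# connection details are illustrative assumptions.
BASE=/tmp/apexbkup
TODAY=$(date +%Y-%m-%d)
mkdir -p "$BASE/$TODAY"
cd "$BASE/$TODAY" || exit 1
# java oracle.apex.APEXExport -db localhost:1521:dbtest \
#      -user apexdemo -password pass -workspaceid 1280317158978593
echo "Exports for $TODAY will be written to $BASE/$TODAY"
```

Because the script changes into the dated directory before running APEXExport, each day's f*.sql files land in their own folder without any further changes.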
The following code shows the contents of the crontab file that we have created in our Unix account:
jes@pb[10:44am]~> crontab -l
0 1 * * * /opt/local/apexbkup/backup_apex.sh
This crontab entry means "run the /opt/local/apexbkup/backup_apex.sh script at 1 a.m. every day."
Now, this is quite a simplified example, and there are a few obvious flaws, as follows:
The username and password credentials are stored in the backup_apex.sh script. You could tie down the permissions on that file so that no one else can read it (it would still be executable by the owner of the file and via the cron entry, of course).
You cannot easily configure where the export files should be written from the APEXExport command itself (it would be a nice addition if you could), so you would need to control this from the backup_apex.sh script. This means that you would need to develop this simple example further if you wanted to use it in a production environment.
You cannot export the workspace definitions in the same way that you can export the applications. This means that you will also need to export the workspaces manually at regular intervals (where your own policy defines what "regular" means).
If you are familiar with Unix shell scripting, you can do some incredibly sophisticated things. We have set up one of our test environments with a similar automated export routine, but we have adapted the shell script so that, once the files are exported, it automatically checks them into our source control system. Alternatively, you could e-mail the exported files to some offsite location, which you could then access in the event of a problem.
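The source-control step can be sketched as follows. This assumes Git as the source control system; the repository path is an illustrative assumption, and a placeholder export file stands in for the real APEXExport output so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch: check freshly exported f*.sql files into a local Git repository.
# REPO is an illustrative path; the placeholder file below stands in for
# the files that APEXExport would have produced.
REPO=/tmp/apex-exports
mkdir -p "$REPO"
cd "$REPO" || exit 1
echo "-- application export placeholder" > f101.sql
if command -v git >/dev/null 2>&1; then
  [ -d .git ] || git init -q
  git add f101.sql
  git -c user.name=backup -c user.email=backup@example.com \
      commit -q -m "APEX exports $(date +%Y-%m-%d)" || true
fi
```

Each nightly run then becomes a dated commit, giving you a browsable history of every application export.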
These examples demonstrated just how easy (and incredibly useful) automating your application backups can be. We encourage you to use the command-line tools to reduce the burden of performing the exports manually for backup purposes.
If you have used the Application Builder interface to export your application, you might not have noticed an incredibly useful setting that enables you to export your application as it existed at a previous point in time. This is the "As of" setting, as shown in Figure 15-3.
In Figure 15-3, a value of 30 is entered in the "As of" field before performing the export. This will have the effect of creating an export file for the application as it existed 30 minutes ago; in other words, without any of the changes made in the last 30 minutes.
Suppose you have spent all day making changes to your application, but you have also accidentally changed some code that you shouldn't have, which has now broken your application. In this situation, if you restored your application from the previous day's export (that, of course, you made with the automated backup method we have already covered), you would lose all the changes you made today. However, if you use the "As of" setting to export your application to a point in time before you made those fatal changes, you will be able to effectively keep the changes you spent all day making, but lose the more recent changes that broke your application.
This method is loosely analogous to recovering the database to a previous point in time, rather than restoring it to a previous backup. If the last backup you have is yesterday's, or last week's, you are going to lose all the changes made since that backup. The "As of" setting allows you to create another export of your application that will contain changes made up to that point.
You might not be surprised to know that this ability to export your application at a previous point in time uses the same Flashback technology that we covered earlier. Behind the scenes, the application export function uses the Flashback features to query the metadata about your application at that point in time.
However, there is a limit to how far back in time you can go. This is a configurable limit, but it is nonetheless a limit. The limit relies on the undo_retention parameter setting for your database, which in a default installation is set to 900 seconds, or 15 minutes. You can query the current value on your database by connecting as a privileged user and running the following command:
sys@DBTEST> show parameter undo_retention

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_retention                       integer     21600
On our test system, the retention is set to 21,600 seconds (or 6 hours). We highly recommend that you increase the default setting of 900 seconds (15 minutes) to enable you to go further back in time for your exports (this also enables you to use the other Flashback features, such as Flashback Table and Flashback Query, within this time window). But bear in mind that increasing the retention period will increase your disk space storage requirements (since your database will need to store that extra information). Yet, there would be nothing worse than one day finding out your undo retention was just that bit too small to help you. So, we feel that it's better to think in terms of hours (or days in some circumstances), rather than minutes, when it comes to the undo_retention parameter.
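Increasing the setting is a one-line change for a privileged user. This sketch uses the six-hour value from our test system; whether the database can actually honor the retention target also depends on how your undo tablespace is sized:

```sql
-- Increase undo retention to 6 hours (21600 seconds).
-- Requires the ALTER SYSTEM privilege; scope=both changes both the
-- running instance and the stored setting when an spfile is in use.
alter system set undo_retention = 21600 scope=both;
```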
Another common need is to migrate your applications from one instance to another instance. For example, suppose you have made some changes to your application in your development environment. What is the best way to migrate those changes to your test environment and then later to your live environment? In this case, you want to upgrade your application. Another alternative is to clone your application.
To help with change management, we advise that you do not give your end users (in the production environment) the direct URL to your application. In other words, do not use URLs like this:
http://yourserver:7777/pls/apex/f?p=108:1
Instead, use the techniques we covered earlier in the "Managing URLs" section to have URLs like the following:
http://yourserver/buglist/
Using these types of URLs means that you can manage upgrades to the production environment in the following way:
Export the application from the test environment.
Import the application into the production environment but give it a new application ID (say application 208).
Test the application in production to make sure it works correctly.
Switch the method you're using to manage the http://yourserver/buglist/ URL to point to application 208 once you're sure it works correctly.
This means that rather than replacing the application in the live environment, you are installing a new application, testing it, and then switching the URL to point at it once you're satisfied with the upgrade. For example, if you are using the virtual host method to manage your URLs, you might have the entry shown in Listing 15-13 for your production application.
Example 15-13. VirtualHost Entry for a Live Application
<VirtualHost *>
    ServerName buglist.live.localdomain
    ServerAlias buglist.live.localdomain
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=108:1 [R=301]
</VirtualHost>
You can then install the new application as application 208. You can choose to either test this application by using the full URL (for example, http://buglist.localdomain/pls/apex/f?p=208:1) or set up another VirtualHost entry for testing the application in live, as shown in Listing 15-14.
Example 15-14. VirtualHost for Testing a New Application in live
<VirtualHost *>
    ServerName buglisttest.live.localdomain
    ServerAlias buglisttest.live.localdomain
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=208:1 [R=301]
</VirtualHost>
Once you install the new application in live, you can use the URL buglisttest.live.localdomain. When you're happy with the way it works, you can change the VirtualHost entry for the live application to point to 208 instead of 108, and then restart the web server (which should just take a few seconds).
This method offers some distinct advantages over replacing the currently live application, as follows:
You can test the new application in parallel with the live application still running.
You can easily regress to the old live application if you discover problems after making the switch.
There is minimal downtime. The users don't need to wait while you replace the current version with the new version.
You can improve on this further by using application aliases, which removes the need to manually change the application ID. In that case, your VirtualHost entries would reference the application alias rather than the numeric ID, as shown in Listing 15-15.
Example 15-15. Using Application Names in the VirtualHost Entry
<VirtualHost *>
    ServerName buglist.live.localdomain
    ServerAlias buglist.live.localdomain
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=BUGLIST:1 [R=301]
</VirtualHost>

<VirtualHost *>
    ServerName buglisttest.live.localdomain
    ServerAlias buglisttest.live.localdomain
    RewriteEngine On
    RewriteRule ^/$ /pls/apex/f?p=BUGLISTTEST:1 [R=301]
</VirtualHost>
Notice how we now use BUGLIST and BUGLISTTEST as the application aliases. This means that we no longer need to worry about which numeric ID the applications use and can instead rely on a distinctive name to differentiate them.
If you use aliases, you will need to give your new application a different name before you export it from your development environment. However, this may be a more manageable solution than using the numeric ID in a production environment.
In this example, once we were happy with how the new application worked, we could go into the Application Builder interface and rename the live application (BUGLIST) to something like BUGLISTOLD, and then rename the new application from BUGLISTTEST to BUGLIST. We would not even need to restart the web server. Since we did not make any configuration changes to the VirtualHost section, the http://buglist.live.localdomain URL will now just resolve to the new application (since it now uses the BUGLIST application alias).
This makes for a much simplified and controllable change-management process for migrating your applications across environments.
Sometimes you might want to test an application on another database instance. Or, perhaps you want to make some changes to the application and the underlying schema objects, and you want to do that in a completely separate environment (if, for example, you don't have the luxury of development, test, and live instances). If you're making major changes to an application, you may want to create a completely separate workspace, associated with a completely separate schema, and then clone the application and underlying schema objects from the original workspace into the new environment.
There are many different ways to perform this sort of "environment cloning," depending on your exact requirements. For example, if you require a complete clone of every workspace and every application in an instance to install on a new instance, you might find that cloning via RMAN or using database backups to make a clone of the database is the best way to do this. However, you might just need to clone your application and data within the same instance, perhaps on your laptop or to test some major changes in your development environment. In this case, you can perform the cloning in a different way, as we'll demonstrate in this section.
With APEX installed and working in your environment, you need to perform the following steps:
You don't need to perform the steps in this order. In fact, it makes sense to do them in a slightly different order because you need to have the new schema created before you create the workspace (since the workspace needs to have the schema assigned to it when the workspace is created).
In this example, we'll clone the Buglist application, which is in our APEXDEMO workspace (which is using the APEXDEMO schema).
First, we export the APEX workspace using the Export Workspace function in the APEX administration section, as shown in Figures 15-4 and 15-5. You will need to be logged in to APEX as one of the instance administrators to access this functionality.
This creates a file (called apexdemo.sql in this case), similar to the application export file, which contains all of the DML and DDL commands needed to create the workspace. One thing to be aware of is the file format that you use when you export the workspace, as shown in Figure 15-5. Your options are Unix and DOS. Make sure that when you import the workspace again, you use the correct file format. In other words, if you export it in the Unix file format, be sure to import it in the Unix format; otherwise, you may encounter problems due to the different way that line feeds and carriage returns are handled in the Unix and DOS formats.
So, we now have our workspace export. We can also export our application in one of the ways covered earlier in this chapter (either manually or by using the APEXExport command). The end result of this step is an application export file called f101.sql (for the Buglist application).
Now we get to the slightly trickier step. We need to be able to clone all of our schema objects and data. There are many different ways to do this, including the following:
Use scripts that you have maintained to create all of your schema objects.
Use a tool such as TOAD or SQL Developer to generate the scripts necessary to rebuild the schema objects.
Use the SQL*Loader tool to load the data into the new schema (after extracting it from the original schema in some way).
Use the external tables feature of the database to load data into the new tables.
Use the exp tool to export a dump file of the original schema, and then use the imp tool to import the dump file into the new schema.
Use Data Pump, which is particularly suitable for moving large amounts of data very quickly and efficiently. If the other methods prove too slow, Data Pump is your salvation (but Data Pump can take some work to configure).
All of these methods are viable options.
The script method, which is commonly used, relies on you maintaining external scripts every time you make changes to your schema objects. For example, you might have a script (called create_foo.sql) with this command:

create table foo(
  id number,
  name varchar2(20)
)
/
If you wanted to add a new column, such as for e-mail addresses, to the foo table, you would need to modify the create_foo.sql script as follows:

create table foo(
  id number,
  name varchar2(20),
  email varchar2(30)
)
/
You would also need to maintain another script to handle the situation where the foo table already exists in the old format, without the email column. You might create a new script called upgrade_foo.sql:
alter table foo add (email varchar2(30))
/
You can see how maintaining scripts like these can quickly become hard work. And you need discipline to make sure that anyone who makes changes to your schema objects also makes the necessary changes to the scripts. This is why using tools such as TOAD, SQL Developer, and others becomes an attractive option: they can generate the scripts for you.
Some advanced tools let you maintain separate versions of your schema definition and will generate the scripts required for upgrading from one version of the schema to another version automatically (although you should always check these scripts by eye first before running them just to make absolutely sure you're happy with what they will do).
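The database itself can also generate this DDL for you through the DBMS_METADATA package; a minimal sketch follows (the table name FOO is illustrative):

```sql
-- Generate the CREATE TABLE statement for one table owned by the
-- current schema; loop over user_tables to script a whole schema.
select dbms_metadata.get_ddl('TABLE', 'FOO') from dual;
```

This is a useful middle ground: you get reproducible scripts without maintaining them by hand, and you can still review the generated DDL before running it anywhere.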
In our example, we just want to completely copy the original schema. We're not upgrading in any sense, because the new schema does not exist (we need to create it). This is where most people would opt to use the exp and imp tools, which are designed to do exactly this sort of schema-level copying. Let's see how to use these tools to perform the cloning and what problems you might encounter. It is worth noting that imp and exp are likely to be deprecated in future releases of the database in favor of Data Pump Import and Export (which are covered later in this chapter).
exp and imp are command-line tools that enable you to export (exp) and import (imp) dump files containing schema objects. Using them, you can simply copy the contents of one schema into another schema.
First, we need to create a dump file of the original APEXDEMO schema, as shown in Listing 15-16.
Example 15-16. Exporting the APEXDEMO Schema with exp
[jes@pb apexdemo]$ NLS_LANG=AMERICAN_AMERICA.AL32UTF8
[jes@pb apexdemo]$ export NLS_LANG
[jes@pb apexdemo]$ echo $NLS_LANG
AMERICAN_AMERICA.AL32UTF8
[jes@pb exports]$ exp

Username: apexdemo
Password:

Enter array fetch buffer size: 4096 >
Export file: expdat.dmp >
(2)U(sers), or (3)T(ables): (2)U >
Export grants (yes/no): yes >
Export table data (yes/no): yes >
Compress extents (yes/no): yes > no

Export done in AL32UTF8 character set and UTF8 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user APEXDEMO
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user APEXDEMO
About to export APEXDEMO's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export APEXDEMO's tables via Conventional Path ...
. . exporting table        APEX_ACCESS_CONTROL          2 rows exported
. . exporting table          APEX_ACCESS_SETUP          1 rows exported
. . exporting table            APEX_CUSTOM_LOG          0 rows exported
. . exporting table                    BIG_EMP         14 rows exported
. . exporting table                    BUGLIST         19 rows exported
. . exporting table             DEMO_CUSTOMERS          7 rows exported
. . exporting table                DEMO_IMAGES         11 rows exported
. . exporting table                DEMO_ORDERS         10 rows exported
. . exporting table           DEMO_ORDER_ITEMS         16 rows exported
. . exporting table        DEMO_PAGE_HIERARCHY         18 rows exported
. . exporting table          DEMO_PRODUCT_INFO         10 rows exported
. . exporting table                DEMO_STATES         51 rows exported
. . exporting table                 DEMO_USERS          2 rows exported
. . exporting table                       DEPT          4 rows exported
. . exporting table                      DUMMY          0 rows exported
. . exporting table                        EMP         14 rows exported
. . exporting table            REPORT_HEADINGS          8 rows exported
. . exporting table            TREE_NAVIGATION          4 rows exported
. . exporting table             UPLOADED_FILES          1 rows exported
. . exporting table            USER_REPOSITORY         12 rows exported
. . exporting table              USER_SEARCHES         32 rows exported
. . exporting table          VERIFICATION_LINK          1 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
In this example, we executed the exp command without passing any parameters, so we were prompted for a username and password. We use APEXDEMO, which is the schema we want to export. It is also possible to connect as a DBA or other privileged user and then export any other user's schema.
We are then prompted for some parameters, such as whether to export grants (yes, in this example) and whether to compress extents (no, which is usually the answer you should give). After these prompts, the command begins to export each schema object. It also works out the relationships between the objects, so that the resulting export file will contain the creation of the objects in the correct order; for example, so that indexes can be created after the corresponding table has been created.
Before we ran the exp command, we set the NLS_LANG environment variable to AMERICAN_AMERICA.AL32UTF8 so that it matched the NLS parameters of our database. Before you export or import, you should make sure of the following:
Your exp client character set matches your database character set when you export.
The exp and imp client character sets match.
The imp client character set matches your database character set when you import.
If any of these character sets do not match, your data may be converted between character sets, and you can potentially lose data in the conversion. Be wary if you ever see the message "Possible charset conversion" during either the import or export procedure.
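As a quick sanity check before running exp or imp, you can confirm in the shell that NLS_LANG is set to the value you expect. This is only an illustrative sketch; the character-set string follows this chapter's example, and you should substitute whatever matches your own database:

```shell
# Illustrative sketch: set NLS_LANG to match the database character set
# (AMERICAN_AMERICA.AL32UTF8 in this chapter's example) and confirm it
# before invoking exp or imp in the same shell session.
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
echo "NLS_LANG is set to: $NLS_LANG"
```

Because exp and imp pick NLS_LANG up from the environment, the export and the subsequent import must both be run in sessions where this variable is set consistently.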
We should now have a dump file containing all the schema objects and data within the APEXDEMO schema:
[jes@pb apexdemo]$ ls -al
total 336
drwxr-xr-x    3 jes  jes     102 Feb 26 15:12 .
drwxr-xr-x   19 jes  jes     646 Feb 26 14:58 ..
-rw-r--r--    1 jes  jes  169984 Feb 26 15:13 expdat.dmp
We'll import this schema into the same instance, but create another user to receive the schema objects. You would use the same procedure if you wanted to load the objects into another instance (where you could use the same username).
We connect to the database as a privileged user and create a new user, which we'll call APEXCLONE, as shown in Listing 15-17.
Example 15-17. Creating the New APEXCLONE User
dba@DBTEST> create user apexclone
  2  identified by pass
  3  default tablespace users;

User created.

dba@DBTEST> grant connect, resource to apexclone;

Grant succeeded.

dba@DBTEST> revoke unlimited tablespace from apexclone;

Revoke succeeded.

dba@DBTEST> alter user apexclone quota unlimited on users;

User altered.
We have created a user with a password of pass (not very imaginative) and assigned the user to the default USERS tablespace. We have also granted APEXCLONE the connect and resource roles so that the user can connect to the database and create schema objects (otherwise, we would not be able to perform the import).
We have used the connect and resource roles because this is a simple example. In your own production systems, you would probably want to grant explicit system privileges, such as create table and create trigger, rather than using these roles, because the roles might grant your users more rights than they actually need. Make sure you carefully examine which rights any roles you use confer. One of the riskier privileges that comes with the resource role is unlimited tablespace, which means the user can create objects in any tablespace, regardless of any quotas that are in force. This is usually highly undesirable, so we have revoked the unlimited tablespace privilege from the APEXCLONE user and explicitly granted a quota on the USERS tablespace.
Now we can run the imp command and import the dump file we just created, as shown in Listing 15-18. Note that we did not need to set the NLS_LANG environment variable again, since we are still in the same shell session (otherwise, we would need to execute the same commands we ran before the export).
Example 15-18. Importing the New Schema Objects
[jes@pb exports]$ imp

Username: apexclone
Password:

Import file: expdat.dmp >
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V10.02.00 via conventional path

Warning: the objects were exported by APEXDEMO, not by you

import done in AL32UTF8 character set and UTF8 NCHAR character set
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > yes
. importing APEXDEMO's objects into APEXCLONE
. . importing table         "APEX_ACCESS_CONTROL"          2 rows imported
. . importing table           "APEX_ACCESS_SETUP"          1 rows imported
. . importing table             "APEX_CUSTOM_LOG"          0 rows imported
<extra output omitted>
. . importing table           "VERIFICATION_LINK"          1 rows imported
About to enable constraints...
Import terminated successfully without warnings.
We have omitted some of the output in Listing 15-18 (it shows the same tables being imported that were exported).
We can check how the new schema compares with the original by querying the user_objects view in both schemas:
apexdemo@DBTEST> select status, count(*) from user_objects
  2  group by status;

STATUS    COUNT(*)
------- ----------
INVALID          3
VALID           87

apexclone@DBTEST> select status, count(*) from user_objects
  2  group by status;

STATUS    COUNT(*)
------- ----------
INVALID          3
VALID           87
So we can see that all the objects have been copied, and the status (VALID or INVALID) of each is identical to the original schema. However, you may sometimes find that the status differs in the new schema, simply because objects that were invalid in the original schema might have been recompiled during the import, changing their status to VALID.
On the face of it, this looks like a great way to clone the schema and data (the data is also copied across, although you can omit it if you wish to re-create only the objects themselves). However, there are some potential issues with this method. Recall our previous example where we created the table foo:
create table foo(
  id number,
  name varchar2(20)
);
This is the way you would typically write DDL by hand. However, automated tools that generate DDL for you may produce something like the following for a table that contains a CLOB column:
CREATE TABLE "APEXDEMO"."FOO"
  ("ID" NUMBER,
   "NAME" VARCHAR2(20 BYTE),
   "DATA" CLOB)
  TABLESPACE "APEXDEMO"
  LOB ("DATA") STORE AS (TABLESPACE "APEXDEMO");
We actually used a GUI tool to create this table and then used the tool to show the DDL needed to re-create it. The difference here is that the schema has been specified in the create statement. In other words, rather than saying "create the foo table in the default schema," we are saying "create the foo table in the APEXDEMO schema." This might seem like quite a subtle difference; however, if we repeat our exp/imp procedure, a couple of errors appear, as shown in Listing 15-19.
Example 15-19. Import Fails with Errors
[jes@pb apexdemo]$ imp

Username: apexclone
Password:

Import file: expdat.dmp >
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V10.01.00 via conventional path

Warning: the objects were exported by APEXDEMO, not by you

import done in AL32UTF8 character set and UTF8 NCHAR character set
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > yes
. importing APEXDEMO's objects into APEXCLONE
. . importing table         "APEX_ACCESS_CONTROL"          2 rows imported
. . importing table           "APEX_ACCESS_SETUP"          1 rows imported
. . importing table             "APEX_CUSTOM_LOG"          0 rows imported
. . importing table                     "BIG_EMP"         14 rows imported
. . importing table                     "BUGLIST"         19 rows imported
. . importing table              "DEMO_CUSTOMERS"          7 rows imported
. . importing table                 "DEMO_IMAGES"         11 rows imported
. . importing table                 "DEMO_ORDERS"         10 rows imported
. . importing table            "DEMO_ORDER_ITEMS"         16 rows imported
. . importing table         "DEMO_PAGE_HIERARCHY"         18 rows imported
. . importing table           "DEMO_PRODUCT_INFO"         10 rows imported
. . importing table                 "DEMO_STATES"         51 rows imported
. . importing table                  "DEMO_USERS"          2 rows imported
. . importing table                        "DEPT"          4 rows imported
. . importing table                       "DUMMY"          0 rows imported
. . importing table                         "EMP"         14 rows imported
IMP-00017: following statement failed with ORACLE error 1950:
 "CREATE TABLE "FOO" ("ID" NUMBER, "NAME" VARCHAR2(20), "DATA" CLOB) PCTFREE"
 " 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FR"
 "EELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "APEXDEMO" LOGGING NOCOMPRE"
 "SS LOB ("DATA") STORE AS (TABLESPACE "APEXDEMO" ENABLE STORAGE IN ROW CHUN"
 "K 8192 PCTVERSION 10 NOCACHE STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GR"
 "OUPS 1 BUFFER_POOL DEFAULT))"
IMP-00003: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'APEXDEMO'
. . importing table             "REPORT_HEADINGS"          8 rows imported
. . importing table             "TREE_NAVIGATION"          4 rows imported
. . importing table             "USER_REPOSITORY"         12 rows imported
. . importing table               "USER_SEARCHES"         32 rows imported
. . importing table           "VERIFICATION_LINK"          1 rows imported
About to enable constraints...
Import terminated successfully with warnings.
This time, the import procedure was unable to create the foo table in the APEXCLONE schema. The following error message explains why it failed:
ORA-01950: no privileges on tablespace 'APEXDEMO'
Even though we are importing into the APEXCLONE schema, the import is still trying to create the table's segments in the APEXDEMO tablespace. This is why it was important to revoke the unlimited tablespace privilege after we created the APEXCLONE user; otherwise, the import would have succeeded, because the APEXCLONE user would have been able to create objects in the APEXDEMO tablespace. The import process targets the wrong tablespace because, when the foo table was created, we fully qualified both the schema and the tablespaces in the DDL, as follows:
CREATE TABLE "APEXDEMO"."FOO"
  ("ID" NUMBER,
   "NAME" VARCHAR2(20 BYTE),
   "DATA" CLOB)
  TABLESPACE "APEXDEMO"
  LOB ("DATA") STORE AS (TABLESPACE "APEXDEMO");
The problem is that when the imp process ran, it managed to rewrite the first tablespace definition it found from APEXDEMO to USERS (remember that the new APEXCLONE user uses the USERS tablespace); however, it does not rewrite the second tablespace definition, that is, the part within the STORE AS clause. If we had not specified a STORE AS clause in the DDL, the import would have worked, since the CLOB column would have been created in the user's default tablespace (in this case, USERS).
This is one of the major issues with using the exp/imp method to clone schemas when your objects contain fully qualified tablespace definitions: sometimes the imp command is unable to rewrite those definitions for the new schema and tries to create the objects in the original tablespace instead.
One solution is to run the imp command with the special parameter INDEXFILE=Y. Instead of importing the data, this creates a file containing all the DDL from the dump file, which you can then edit by hand to correct the tablespace definitions. You then use this edited file to pre-create all the schema objects (this time in the correct tablespaces) and rerun the imp command just to import the data (since the objects have already been created).
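The two-pass workflow just described can be sketched in the shell. This is only a sketch: the file names are hypothetical, the commands are assembled as strings rather than run (you would still edit the generated DDL file by hand between the two passes), and you should check the imp parameter reference for your Oracle version:

```shell
# Hypothetical sketch of the two-pass INDEXFILE=Y workflow described above.
DUMPFILE=expdat.dmp
DDLFILE=create_objects.sql

# Pass 1: write the DDL contained in the dump file to DDLFILE instead of
# importing anything; you then edit DDLFILE to fix the tablespace clauses.
PASS1="imp file=${DUMPFILE} indexfile=${DDLFILE} full=y"

# Pass 2: with the objects pre-created from the edited DDL, import the data,
# ignoring the "object already exists" errors (IGNORE=Y).
PASS2="imp file=${DUMPFILE} full=y ignore=y"

echo "$PASS1"
echo "$PASS2"
```

Splitting the job this way gives you full control over where each segment lands, at the cost of a manual editing step in the middle.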
The exp/imp method is well documented in the Oracle utilities documentation. It is also covered in Tom Kyte's excellent Expert One-on-One Oracle (Apress, 2003), as well as on Tom's "Ask Tom" web site (http://asktom.oracle.com).
While this method works, it can be extremely cumbersome, particularly if you need to perform the cloning on a regular basis. Next, we'll look at another, much easier schema cloning method.
At first glance, Data Pump looks quite similar to the exp/imp tools in that you can use it to export and import data. However, Data Pump is capable of things that are simply not possible with exp/imp, and it has the following advantages (taken from the Data Pump documentation; your own performance figures may vary):
Data Pump Export and Import utilities are typically much faster than the original export and import utilities. A single thread of Data Pump Export is about twice as fast as the original export tool, while Data Pump Import is 15-45 times faster than the original import tool.
Data Pump jobs can be restarted without loss of data, whether the stoppage was voluntary or involuntary.
Data Pump jobs support fine-grained object selection. Virtually any type of object can be included or excluded in a Data Pump job.
Data Pump supports the ability to load one instance directly from another (network import) and unload a remote instance (network export).
One of the major differences between exp/imp and Data Pump is where the files live. With exp/imp, the export file is created on the client side (you run them on the client machine, which might happen to be the same machine the database is on). With Data Pump, the export and import files reside on the server; in other words, the export file is created on the same file system the database resides on. So, setting up Data Pump does require a bit more work than using exp/imp.
Let's walk through an example. First, we need to connect to the database as a privileged user and create a directory object that specifies a directory path for the location of the files that Data Pump is going to use later, as follows:
dba@DBTEST> create directory datapump as '/home/oracle/datapump';

Directory created.
Note that this is just an example; you should decide which directory structure is appropriate for your infrastructure. We are using the /home/oracle/datapump directory. We could have given the directory object another name (such as apexdemo), but we used the name datapump just to clearly show its use. Also remember that the Oracle processes will attempt to read and write to this directory, so you must make sure the directory permissions allow the operating system user that the Oracle processes run as to access it.
We could perform the export and import of the schema as the users themselves (as we did with the exp/imp procedure). We would need to grant privileges so these users could access the directory we just created (using the grant read, write on directory datapump to <user> command). However, we want to use some advanced Data Pump parameters, which require elevated privileges not ordinarily available to normal users, such as the very dangerous-sounding import full database privilege. Thus, we are going to perform the process as a privileged user.
Now, we can perform the actual export, as shown in Listing 15-20.
Example 15-20. Using Data Pump to Export the APEXDEMO Schema
[jes@pb datapump]$ pwd /home/oracle/datapump [jes@pb datapump]$ expdp schemas=apexdemo directory=datapump dumpfile=apexdemo.dmp logfile=export.log Username: dba Password: FLASHBACK automatically enabled to preserve database integrity. Starting "SYSTEM"."SYS_EXPORT_SCHEMA_1": system/******** schemas= apexdemo directory=datapump dumpfile=apexdemo.dmp logfile=export.log Estimate in progress using BLOCKS method... Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA Total estimation using BLOCKS method: 1.687 MB Processing object type SCHEMA_EXPORT/USER Processing object type SCHEMA_EXPORT/SYSTEM_GRANT Processing object type SCHEMA_EXPORT/ROLE_GRANT Processing object type SCHEMA_EXPORT/DEFAULT_ROLE Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE Processing object type SCHEMA_EXPORT/TABLE/TABLE Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT Processing object type SCHEMA_EXPORT/TABLE/TRIGGER Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX . . exported "APEXDEMO"."USER_REPOSITORY" 8.054 KB 12 rows . . exported "APEXDEMO"."UPLOADED_FILES" 53.67 KB 1 rows . . exported "APEXDEMO"."APEX_ACCESS_CONTROL" 7.156 KB 2 rows . . exported "APEXDEMO"."APEX_ACCESS_SETUP" 5.562 KB 1 rows . . exported "APEXDEMO"."BIG_EMP" 7.820 KB 14 rows . . exported "APEXDEMO"."BUGLIST" 8.585 KB 19 rows . . exported "APEXDEMO"."DEMO_CUSTOMERS" 8.984 KB 7 rows . . exported "APEXDEMO"."DEMO_IMAGES" 5.906 KB 11 rows . . exported "APEXDEMO"."DEMO_ORDERS" 6.429 KB 10 rows . . exported "APEXDEMO"."DEMO_ORDER_ITEMS" 6.585 KB 16 rows . . exported "APEXDEMO"."DEMO_PAGE_HIERARCHY" 5.984 KB 18 rows . . exported "APEXDEMO"."DEMO_PRODUCT_INFO" 7.664 KB 10 rows . . exported "APEXDEMO"."DEMO_STATES" 6.054 KB 51 rows . . exported "APEXDEMO"."DEMO_USERS" 7.179 KB 2 rows . . exported "APEXDEMO"."DEPT" 5.648 KB 4 rows . . exported "APEXDEMO"."EMP" 7.812 KB 14 rows . . exported "APEXDEMO"."REPORT_HEADINGS" 6.062 KB 8 rows . . exported "APEXDEMO"."TREE_NAVIGATION" 6.312 KB 4 rows . . exported "APEXDEMO"."USER_SEARCHES" 6.632 KB 32 rows . . exported "APEXDEMO"."VERIFICATION_LINK" 5.593 KB 1 rows . . 
exported "APEXDEMO"."APEX_CUSTOM_LOG" 0 KB 0 rows . . exported "APEXDEMO"."DUMMY" 0 KB 0 rows . . exported "APEXDEMO"."FOO" 0 KB 0 rows Master table "SYSTEM"."SYS_EXPORT_SCHEMA_1" successfully loaded/unloaded ****************************************************************************** Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_1 is: /home/oracle/datapump/apexdemo.dmp Job "SYSTEM"."SYS_EXPORT_SCHEMA_1" successfully completed at 11:11
Note that we are actually performing the Data Pump export while connected to the same machine as the database, just so you can see the file that has been created. However, you could perform exactly the same command from a client machine, and the export file would still be created on the server (rather than the client machine). The following is the actual command used to perform the export:
expdp schemas=apexdemo directory=datapump dumpfile=apexdemo.dmp logfile=export.log
Here, we specify the schema name (apexdemo) and pass the name of the directory object we created earlier (datapump). We then see the output of the Data Pump export as it runs through the various schema objects.
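If you clone schemas regularly, it can help to parameterize the expdp invocation so the same snippet works for any schema. The sketch below just assembles the same command line shown above from shell variables; the schema and directory names follow this chapter's example and are assumptions for illustration:

```shell
# Illustrative sketch: build the expdp command line used in the text from
# variables, so the dump file and log names track the schema being exported.
SCHEMA=apexdemo
DPDIR=datapump
CMD="expdp schemas=${SCHEMA} directory=${DPDIR} dumpfile=${SCHEMA}.dmp logfile=export.log"
echo "$CMD"
```

Naming the dump file after the schema, as here, avoids accidentally overwriting one schema's export with another's when several exports share the same directory object.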
When the export has finished, we should be able to see the export file created in the directory (on the server), as follows:
[jes@pb datapump]$ ls -al
total 2240
drwxr-xr-x   4 oracle oinstall     136 Jan 27 11:11 .
drwxr-xr-x  28 oracle oinstall     952 Jan 24 23:59 ..
-rw-r-----   1 oracle oinstall 1138688 Jan 27 11:11 apexdemo.dmp
-rw-r--r--   1 oracle oinstall    4389 Jan 27 11:11 export.log
Now we drop the APEXCLONE user (the quickest way to remove all the objects we imported earlier) and re-create the user as we did before:
sys@DBTEST> drop user apexclone cascade;

User dropped.

sys@DBTEST> create user apexclone identified by pass
  2  default tablespace users;

User created.

sys@DBTEST> grant connect, resource to apexclone;

Grant succeeded.

sys@DBTEST> revoke unlimited tablespace from apexclone;

Revoke succeeded.

sys@DBTEST> alter user apexclone quota unlimited on users;

User altered.
We can now perform the import using Data Pump, as shown in Listing 15-21.
Example 15-21. Using Data Pump to Import into the APEXCLONE Schema
[jes@pb datapump]$ impdp remap_schema=APEXDEMO:APEXCLONE REMAP_TABLESPACE=APEXDEMO:USERS DIRECTORY=datapump dumpfile=apexdemo.dmp logfile=import.log Username: system Password: Master table "SYSTEM"."SYS_IMPORT_FULL_1" successfully loaded/unloaded Starting "SYSTEM"."SYS_IMPORT_FULL_1": system/******** remap_schema=APEXDEMO:APEXCLONE REMAP_TABLESPACE=APEXDEMO:USERS DIRECTORY=datapump dumpfile=apexdemo.dmp logfile=import.log Processing object type SCHEMA_EXPORT/USER ORA-31684: Object type USER:"APEXCLONE" already exists Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT Processing object type SCHEMA_EXPORT/DEFAULT_ROLE Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE Processing object type SCHEMA_EXPORT/TABLE/TABLE Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA . . imported "APEXCLONE"."USER_REPOSITORY" 8.054 KB 12 rows . . imported "APEXCLONE"."UPLOADED_FILES" 53.67 KB 1 rows . . imported "APEXCLONE"."APEX_ACCESS_CONTROL" 7.156 KB 2 rows . . imported "APEXCLONE"."APEX_ACCESS_SETUP" 5.562 KB 1 rows . . imported "APEXCLONE"."BIG_EMP" 7.820 KB 14 rows . . imported "APEXCLONE"."BUGLIST" 8.585 KB 19 rows . . imported "APEXCLONE"."DEMO_CUSTOMERS" 8.984 KB 7 rows . . imported "APEXCLONE"."DEMO_IMAGES" 5.906 KB 11 rows . . imported "APEXCLONE"."DEMO_ORDERS" 6.429 KB 10 rows . . imported "APEXCLONE"."DEMO_ORDER_ITEMS" 6.585 KB 16 rows . . imported "APEXCLONE"."DEMO_PAGE_HIERARCHY" 5.984 KB 18 rows . . imported "APEXCLONE"."DEMO_PRODUCT_INFO" 7.664 KB 10 rows . . imported "APEXCLONE"."DEMO_STATES" 6.054 KB 51 rows . . imported "APEXCLONE"."DEMO_USERS" 7.179 KB 2 rows . . imported "APEXCLONE"."DEPT" 5.648 KB 4 rows . . imported "APEXCLONE"."EMP" 7.812 KB 14 rows . . imported "APEXCLONE"."REPORT_HEADINGS" 6.062 KB 8 rows . . imported "APEXCLONE"."TREE_NAVIGATION" 6.312 KB 4 rows . . imported "APEXCLONE"."USER_SEARCHES" 6.632 KB 32 rows . . imported "APEXCLONE"."VERIFICATION_LINK" 5.593 KB 1 rows . . imported "APEXCLONE"."APEX_CUSTOM_LOG" 0 KB 0 rows . . imported "APEXCLONE"."DUMMY" 0 KB 0 rows . . 
imported "APEXCLONE"."FOO" 0 KB 0 rows Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS Processing object type SCHEMA_EXPORT/TABLE/COMMENT Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC Processing object type SCHEMA_EXPORT/PACKAGE/GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE Processing object type SCHEMA_EXPORT/PROCEDURE/GRANT/OBJECT_GRANT Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT Processing object type SCHEMA_EXPORT/TABLE/TRIGGER Processing object type SCHEMA_EXPORT/TABLE/INDEX/SE_TBL_FBM_INDEX_INDEX/INDEX Job "SYSTEM"."SYS_IMPORT_FULL_1" completed with 0 error(s) at 11:27
The following is the command we used to perform the import:
impdp remap_schema=APEXDEMO:APEXCLONE remap_tablespace=APEXDEMO:USERS DIRECTORY=datapump dumpfile=apexdemo.dmp logfile=import.log
The remap_schema parameter tells Data Pump to map the original schema (APEXDEMO) to the new schema (APEXCLONE). The remap_tablespace parameter tells Data Pump to map the original tablespace (APEXDEMO) to the new tablespace (USERS).
As you can see, the output from Data Pump is different from that of the exp/imp method. Unlike exp/imp, Data Pump Import successfully imports the foo table, even though the tablespace for the CLOB column was fully qualified as APEXDEMO in the DDL, since it was mapped to the new tablespace by the remap_tablespace parameter.
Clearly, using these parameters is a much easier way of mapping between schemas and tablespaces than trying to do it with the exp/imp tools, which cannot always successfully rewrite the tablespace references within the DDL, as you saw in the previous section.
Notice that the output from the Data Pump Import contains the following line:
ORA-31684: Object type USER:"APEXCLONE" already exists
We get this message (it is not really an error, just an informational message) because we pre-created the APEXCLONE user. Let's see what happens if we drop the user and try the import again. First, drop the user as follows:
dba@DBTEST> drop user apexclone cascade;

User dropped.
Now run the import again. You'll get the same results as in Listing 15-21.
You might be surprised to see that even though we did not pre-create the user, the import was successful. This is because the export file contains all the necessary DDL to create the user for us. However, if we let Data Pump create the user for us, the password that we know for that user will not work. For example, we'll try to log in as follows:
[jes@pb datapump]$ sqlplus
Enter user-name: apexclone
Enter password:
ERROR:
ORA-01017: invalid username/password; logon denied
What happened here is that during the export, the hashed password for the APEXDEMO user was included in the dump file, so the APEXCLONE user was created with the same hash value, as shown here:
dba@DBTEST> select username, password from dba_users
  2  where username in ('APEXDEMO', 'APEXCLONE');

USERNAME                       PASSWORD
------------------------------ ------------------------------
APEXCLONE                      E30FB3C8B61086A3
APEXDEMO                       E30FB3C8B61086A3
However, and this is where things get tricky, the login procedure uses both the username and the password when generating the hash that is compared against the value stored in the database. Because the usernames are different, the same password does not hash to the same value. So, we need to set a new password for the APEXCLONE user before we can log in:
dba@DBTEST> alter user apexclone identified by pass;

User altered.
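The effect of folding the username into the hash input can be illustrated in the shell. This sketch is purely illustrative: it uses cksum, which is not Oracle's actual password algorithm, just to show that hashing username plus password makes the same password produce different stored values for different users:

```shell
# Illustrative only: NOT Oracle's real algorithm. Because the input combines
# username and password, identical passwords hash to different values.
h1=$(printf '%s' 'APEXDEMOpass' | cksum | cut -d' ' -f1)
h2=$(printf '%s' 'APEXCLONEpass' | cksum | cut -d' ' -f1)
echo "APEXDEMO  -> $h1"
echo "APEXCLONE -> $h2"
```

This is why copying APEXDEMO's stored hash onto APEXCLONE, as the import did, produces a value that can never match what the login procedure computes for APEXCLONE's password.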
We should now be able to log in as the APEXCLONE user and check that the objects were created properly:
[jes@pb datapump]$ sqlplus
Enter user-name: apexclone
Enter password:

apexclone@DBTEST> select count(*) from user_objects;

  COUNT(*)
----------
        90
Success! As you can see, although Data Pump requires a bit more configuration before you can use it, it is far more flexible and saves time compared with the exp/imp method.
Now that we have successfully cloned the schema, we can import the workspace file and application. Since we are re-creating the application on the same instance (and the workspace specified in the workspace export already exists), we won't import the workspace file, but will instead create a new one, as shown in Figures 15-6 and 15-7.
After creating (or importing) the workspace, we can import the application. If we log in to the APEXCLONE workspace, we can use the Application Builder interface to import the application, as shown in Figure 15-8.
You can see that we are warned that the application was exported from another workspace and that the original parse-as schema in the export file is the APEXDEMO schema. We can assign APEXCLONE as the new parse-as schema. Since we are importing into the same instance, we need to choose to auto-assign a new application ID; otherwise, it would conflict with the existing application.
We now have a fully working application, running within the APEXCLONE schema, that is an exact copy of the application we had in the APEXDEMO schema.
If we used SQL*Plus to import the application, instead of using Application Builder, we would get the following error:
apexclone@DBTEST> @f108.sql
APPLICATION 108 - Buglist Application
Set Credentials...
Illegal security group id value for this schema.
Check Compatibility...
API Last Extended:20070525
Your Current Version:20070525
This import is compatible with version: 20070108
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
begin
*
ERROR at line 1:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "FLOWS_30000.WWV_FLOW_API", line 46
ORA-06512: at "FLOWS_30000.WWV_FLOW_API", line 238
ORA-06512: at line 4
The problem is that with this method, the script tries to use the workspace ID of the workspace from which the application was exported. Therefore, we need to modify the script to use the ID of the new APEXCLONE workspace, as follows:
[jes@pb exports]$ sqlplus
Enter user-name: apexclone
Enter password:

apexclone@DBTEST> exec wwv_flow_api.set_security_group_id;

PL/SQL procedure successfully completed.

apexclone@DBTEST> select wwv_flow_api.get_security_group_id
  2  from dual;

GET_SECURITY_GROUP_ID
---------------------
     5379828196761673
So we now change the line in the f108.sql script from the following:
begin
   -- Assumes you are running the script connected to SQL*Plus as the
   -- Oracle user FLOWS_30000 or as the owner (parsing schema) of
   -- the application.
   wwv_flow_api.set_security_group_id(
     p_security_group_id=>986113558690831);
end;
/
to this:
begin
   -- Assumes you are running the script connected to SQL*Plus as the
   -- Oracle user FLOWS_30000 or as the owner (parsing schema) of
   -- the application.
   wwv_flow_api.set_security_group_id(
     p_security_group_id=>5379828196761673);
end;
/
Note that if you used the original workspace export file to re-create the workspace on another instance, you would not need to perform this step, since the new workspace ID would match the original workspace ID. It is only because we are importing into a workspace with a different ID that we need to make this change.
If we now rerun the script, we will get the following error:
apexclone@DBTEST> @f108.sql
APPLICATION 108 - Buglist Application
Set Credentials...
Check Compatibility...
API Last Extended:20070525
Your Current Version:20070525
This import is compatible with version: 20070108
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
begin
*
ERROR at line 1:
ORA-20001: Application 108 was not deleted. Import will not be attempted.
ORA-06512: at "FLOWS_30000.WWV_FLOW_API", line 261
ORA-06512: at line 4
The problem here is that the script is trying to use an application ID of 108 when we already have that application installed in this instance. We can resolve this by doing a search and replace through the file and replacing the application ID of 108 with another (unique) number (such as 1108). Also, it is important to change the following line in the export file:
begin
   -- SET APPLICATION ID
   wwv_flow.g_flow_id := 1108;
   wwv_flow_api.g_id_offset := 0;
   null;
end;
Here, we have already changed the g_flow_id (which represents the application ID) from 108 to 1108. We also need to change the g_id_offset value if we are installing on an instance that already contains this application; otherwise, many of the internal IDs would conflict with those of the already installed application. So we have changed this line to the following:
wwv_flow_api.g_id_offset := 100000;
This offset is large enough to ensure that the generated IDs do not conflict with other applications in our instance (you should check which value makes sense in your own instance). Again, you would not need to do this if you were installing into a different instance (unless another application used the same ID, of course).
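These two changes can also be scripted. Note that a blind global replacement of "108" would corrupt unrelated numbers in the file, so it is safer to target the specific assignment lines. The following shell sketch does exactly that for the two lines discussed above (the helper name rewrite_app_id is ours; a real export file may reference the application ID in other places too, which you should verify before importing):

```shell
# Hypothetical helper: change the application ID and ID offset assignments
# in an APEX export file, keeping a .bak copy of the original.
rewrite_app_id() {  # usage: rewrite_app_id OLD_ID NEW_ID OFFSET FILE
  sed -i.bak \
    -e "s/wwv_flow\.g_flow_id := $1;/wwv_flow.g_flow_id := $2;/" \
    -e "s/wwv_flow_api\.g_id_offset := 0;/wwv_flow_api.g_id_offset := $3;/" \
    "$4"
}

# Example invocation matching this chapter's values:
# rewrite_app_id 108 1108 100000 f108.sql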
We can now rerun the script, and this time, it should execute successfully, as shown in Listing 15-22.
Example 15-22. Executing the f108.sql Script
[jes@pb exports]$ sqlplus
Enter user-name: apexclone
Enter password:
apexclone@DBTEST> @f108.sql
APPLICATION 1108 - Buglist Application
Set Credentials...
Check Compatibility...
API Last Extended:20070525
Your Current Version:20070525
This import is compatible with version: 20070108
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
...authorization schemes
...navigation bar entries
...application processes
...application items
...application level computations
...Application Tabs
...Application Parent Tabs
...Shared Lists of values
...Application Trees
...page groups
...comments: requires application express 2.2 or higher
...PAGE 0: 0
...PAGE 1: Report Page
...PAGE 2: Insert Form
...PAGE 3: Update Form
...PAGE 4: Success Page
...PAGE 5: Analysis
...PAGE 6: Analyze Reported
...PAGE 7: Analyze Reported
...PAGE 8: Analyze Status
...PAGE 9: Analyze Status
...PAGE 10: Analyze Reported By
...PAGE 11: Analyze Reported By
...PAGE 12: Analyze Assigned To
...PAGE 13: Analyze Assigned To
...PAGE 14: Empty Report Page
...PAGE 15: New User Registration
...PAGE 16: Help
...PAGE 17: Big Emp
...PAGE 19: Charts
...PAGE 24: Uploaded Files
...PAGE 101: Login
...lists
...breadcrumbs
...page templates for application: 1108
......Page template 1221013001094394
<extra output omitted>
...button templates
......Button Template 1223605281094424
......Button Template 1223820688094430
......Button Template 1224023773094431
......Button Template 1224205308094432
...region templates
......region template 1224405629094433
<extra output omitted>
...List Templates
......list template 1230704349094467
<extra output omitted>
...web services (9iR2 or better)
...shared queries
...report layouts
...authentication schemes
......scheme 1240217348094545
......scheme 1240527785094547
......scheme 1391009692011209
...done
We have omitted some of the output, but you can see that the script successfully completes this time.
In the Application Builder interface, we will see that the application is now available to edit and run in the APEXCLONE
workspace, as shown in Figure 15-9 (note the workspace name at the bottom of the page).
If you are slightly nervous about editing the export file in this way, feel free to use the Application Builder interface, which will automatically create the new application ID for you. However, you might want to automate this process, which is obviously easier if you use SQL*Plus.
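As one small building block for such automation, you might add a sanity check that runs before the import: confirm that the edited export file no longer sets the old application ID. A minimal sketch in shell, assuming the same example IDs as above (the function name check_app_id is ours):

```shell
# Hypothetical pre-import sanity check: fail if the export file still
# sets the old application ID (108 in this chapter's example).
check_app_id() {  # usage: check_app_id OLD_ID FILE
  if grep -q "g_flow_id := $1;" "$2"; then
    echo "error: $2 still sets application ID $1" >&2
    return 1
  fi
  echo "ok: $2 no longer sets application ID $1"
}

# Example invocation:
# check_app_id 108 f108.sql && sqlplus apexclone @f108.sql
```

A nonzero return code from the check stops the pipeline before SQL*Plus ever runs, which is cheap insurance against importing a half-edited file.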
We've covered several diverse topics in this chapter, but they are all necessary for running an efficient production environment. You can help your users a lot by taking advantage of URL management to provide friendly URLs. And we all agree on the need to back up applications, because disaster will surely befall all of us sooner or later. Finally, being able to quickly clone an APEX environment is incredibly helpful when developing and testing new versions of an application. Cloning is also a useful troubleshooting tool, enabling you to take a snapshot of an application in order to diagnose and resolve a problem.