3.4. Working with Schema Objects

A schema is a collection of database objects owned by a specific database user. In an Oracle 10g database, the schema has the same name as the database user, so the two terms are often used synonymously.

Schema objects include the segments (tables, indexes, and so on) you have seen in tablespaces as well as nonsegment database objects owned by a user. These nonsegment objects include constraints, views, synonyms, procedures, and packages. Database objects that are not owned by one user and thus are not schema objects include roles, tablespaces, and directories.

In this section, you will learn about the built-in datatypes that Oracle provides for use in your tables, how to create and manage tables, how to implement business rules as constraints on your tables, and how to improve the performance of your tables with indexes. Finally, we will briefly cover other schema objects that you can use in your applications.

3.4.1. Specifying Datatypes

Oracle10g has several built-in datatypes that you can use in your tables. These datatypes fall into six major categories:

  • Character

  • Numeric

  • Datetime

  • LOB (Large Object)

  • ROWID

  • Binary

NOTE

Oracle 10g supports additional datatypes, but we will focus on these six major categories.

3.4.1.1. Character Datatypes

Character datatypes store alphanumeric data in the database character set or the Unicode character set. The database character set is specified when the database is created and indicates which languages can be represented in the database. The US7ASCII character set supports the English language as well as any other language that uses a subset of the English alphabet. The WE8ISO8859P1 character set supports several European languages, including English, French, German, and Spanish. The Unicode character set AL16UTF16 is intended to concurrently support every known language, although there are a few not yet included, such as Egyptian hieroglyphs and cuneiform.

The database character datatypes are as follows:

CHAR(size [byte|char]), NCHAR(size) Fixed-width types that always store the full column width of data, right-padding with spaces as needed. The size specification is in bytes if you do not include the keyword char. The NCHAR variation uses the Unicode character set, and its size is always given in characters.

VARCHAR(size [byte|char]), VARCHAR2(size [byte|char]), NVARCHAR2(size) Variable-width types. Unlike their CHAR counterparts, the VARCHAR types store only the amount of data actually used. The size specification is in bytes if you do not include the keyword char. The NVARCHAR2 variation uses the Unicode character set, and its size is always given in characters. VARCHAR and VARCHAR2 are synonymous in Oracle 10g, but Oracle reserves the right to change the comparison semantics of VARCHAR in future releases, so the VARCHAR2 type is preferred.

LONG A legacy datatype that exists for backward compatibility. It stores variable-length alphanumeric data up to 2GB in size. There are many restrictions on the use of LONG columns: only one LONG column is allowed per table, tables containing a LONG column cannot be partitioned, LONG datatypes cannot be used in subqueries, and few functions work with LONG data. The CLOB datatype is the preferred datatype for character data too large for VARCHAR2.

Here is an example of the character datatypes in use:

CREATE TABLE character_samples
(name          VARCHAR2(48)
,country_code  CHAR(2)
,address       NVARCHAR2(100)
,city          NVARCHAR2(64)
);
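
Both CHAR and VARCHAR2 sizes can be given with explicit byte or char semantics, as described above. Here is a brief sketch (the table and column names are illustrative only):

CREATE TABLE character_semantics_samples
(short_code    VARCHAR2(10 BYTE)  -- at most 10 bytes
,display_name  VARCHAR2(10 CHAR)  -- at most 10 characters, regardless of byte length
);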

3.4.1.2. Numeric Datatypes

Numeric datatypes can store positive and negative fixed and floating-point numbers, zero, infinity, and the special value Not A Number.

The database numeric datatypes are as follows:

NUMBER[(precision[, scale])] Stores zero, positive numbers, and negative numbers. precision is the total number of digits and defaults to 38, the maximum. scale is the number of digits to the right of the decimal point and defaults to 0. A negative scale tells the database to round off data to the left of the decimal point. scale has a valid range of -84 to 127. Table 3.1 shows how precision and scale affect the way number values are stored.

Table 3.1. Precision, Scale, and Rounding

Specification   Actual Value   Stored Value
-------------   ------------   ------------------------------
NUMBER(11,4)    12345.6789     12345.6789
NUMBER(11,2)    12345.6789     12345.68
NUMBER(11,-2)   12345.6789     12300
NUMBER(5,2)     12345.6789     Error - Precision is too small
NUMBER(5,2)     123456         Error - Precision is too small

BINARY_FLOAT, BINARY_DOUBLE Store single-precision and double-precision floating-point data or one of the special floating-point values listed in Table 3.2.

Table 3.2. Floating-Point Constants

Constant                      Description
---------------------------   -----------------------
BINARY_FLOAT_NAN              Not a number
BINARY_FLOAT_INFINITY         Infinite
BINARY_FLOAT_MAX_NORMAL       3.40282347e+38
BINARY_FLOAT_MIN_NORMAL       1.17549435e-038
BINARY_FLOAT_MAX_SUBNORMAL    1.17549421e-038
BINARY_FLOAT_MIN_SUBNORMAL    1.40129846e-045
BINARY_DOUBLE_NAN             Not a number
BINARY_DOUBLE_INFINITY        Infinite
BINARY_DOUBLE_MAX_NORMAL      1.7976931348623157E+308
BINARY_DOUBLE_MIN_NORMAL      2.2250738585072014E-308
BINARY_DOUBLE_MAX_SUBNORMAL   2.2250738585072009E-308
BINARY_DOUBLE_MIN_SUBNORMAL   4.9406564584124654E-324

Here is an example of the number datatypes in use:

CREATE TABLE number_samples
(id        NUMBER
,cost      NUMBER(11,2)
,mass      BINARY_FLOAT
,velocity  BINARY_DOUBLE
);
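
The floating-point constants in Table 3.2 can be referenced directly in SQL. For example, here is a sketch using the number_samples table just created:

INSERT INTO number_samples (id, cost, mass, velocity)
VALUES (1, 19.99, BINARY_FLOAT_NAN, BINARY_DOUBLE_INFINITY);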

3.4.1.3. Datetime Datatypes

Oracle10g has several datetime datatypes that can store dates, time, and time periods:

DATE Stores a date and time with a one-second granularity. The date portion can be from January 1, 4712 BCE to December 31, 9999. The time portion of a DATE datatype defaults to midnight, or 00:00:00 hours, minutes, and seconds.

TIMESTAMP[(precision)] Stores a date and time with subsecond granularity. The date portion can be from January 1, 4712 BCE to December 31, 9999. precision is the number of digits of subsecond granularity. precision defaults to 6 and can range from 0 to 9.

TIMESTAMP[(precision)] WITH TIME ZONE Extends the TIMESTAMP datatype by also storing a time zone offset. This offset defines the difference (in hours and minutes) between the local time zone and UTC (Coordinated Universal Time, also known as Greenwich Mean Time or GMT). Like TIMESTAMP, precision defaults to 6 and can range from 0 to 9. Two TIMESTAMP WITH TIME ZONE values are considered equal if they represent the same chronological instant. For example, 10:00 a.m. EST is equal to 9:00 a.m. CST and to 15:00 UTC.

TIMESTAMP[(precision)] WITH LOCAL TIME ZONE A variation of TIMESTAMP WITH TIME ZONE that does not store the time zone offset with the column data. Instead, the timestamp value is converted from the local time zone to the database time zone when it is stored. Likewise, when the data is retrieved, it is converted from the database time zone back to the local time zone. Like TIMESTAMP, precision defaults to 6 and can range from 0 to 9.

INTERVAL YEAR[(precision)] TO MONTH Stores a period of time in years and months. precision is the maximum number of digits needed for the year portion of this period, with a default of 2 and a range of 0 to 9. Use the INTERVAL YEAR TO MONTH datatype to store the difference between two datetime values if yearly or monthly granularity is needed.

INTERVAL DAY[(d_precision)] TO SECOND[(s_precision)] Stores a period of time in days, hours, minutes, and seconds. d_precision is the maximum number of digits needed for the day portion of this period, with a default of 2 and a range of 0 to 9. s_precision is the number of digits to the right of the decimal point needed for the fractional seconds portion of this period, with a default of 6 and a range of 0 to 9. Use the INTERVAL DAY TO SECOND datatype to store the difference between two datetime values if granularity down to a fractional second is needed.

Here is an example of the datetime datatypes in use:

CREATE TABLE datetime_samples
(dt       DATE
,ts       TIMESTAMP(3)
,ts2      TIMESTAMP
,tstz     TIMESTAMP(3) WITH TIME ZONE
,tsltz    TIMESTAMP WITH LOCAL TIME ZONE
,long_duration  INTERVAL YEAR(4) TO MONTH
,short_duration INTERVAL DAY(3) TO SECOND(2)
);
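
To illustrate the interval types, here is a sketch of an INSERT into the table just created, using interval literals (the values are arbitrary):

INSERT INTO datetime_samples (dt, ts, long_duration, short_duration)
VALUES (SYSDATE
       ,SYSTIMESTAMP
       ,INTERVAL '2-6' YEAR TO MONTH         -- 2 years, 6 months
       ,INTERVAL '3 12:30:06' DAY TO SECOND  -- 3 days, 12 hours, 30 minutes, 6 seconds
       );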

3.4.1.4. LOB Datatypes

As the name implies, LOB datatypes store large objects of up to 4,294,967,295 (2^32 - 1) data blocks per field. With an 8KB data block size, this comes out to about 32TB. LOBs are designed for text, image, video, audio, and spatial data. When you create a table with LOB columns, you can specify a different tablespace and different attributes for the LOB data than for the rest of the table. The LOB locator, a kind of pointer, is stored inline with the row and is used to access the LOB data.

The database LOB datatypes are as follows:

CLOB Stores variable-length character data.

NCLOB Stores variable-length character data using the Unicode character set.

BLOB Stores binary variable-length data inside the database. BLOB data does not undergo character set conversion when passed between databases or between client and server processes.

BFILE Stores binary variable-length data outside the database. BFILEs are limited to a maximum of 4GB of data and even less in some operating systems.

Here is an example of the LOB datatypes in use:

CREATE TABLE lob_examples
(id           NUMBER
,name         VARCHAR2(32)
,description  VARCHAR2(4000)
,definition   CLOB
,mp3          BLOB
) TABLESPACE users
LOB (definition) STORE AS
    (TABLESPACE user3_data);

3.4.1.5. ROWID Datatypes

ROWIDs are either physical or logical addresses that uniquely identify each row in an Oracle10g table. The database ROWID datatypes are as follows:

ROWID Stores the base64-encoded physical address of any row in a heap-organized table in the database. ROWIDs incorporate the Object ID (OID), relative file number, block number, and row slot within the block. They are used internally in indexes and via the ROWID pseudocolumn in SQL. You can use ROWID datatype columns in your tables to store "row pointers" to rows in other tables.

UROWID (Universal ROWID) Stores the base64-encoded string representing the logical address of a row in an index-organized table.

Here is an example of the ROWID datatypes in use:

CREATE TABLE rowid_samples
(tab_rowid       ROWID
,iot_rowid       UROWID
);
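
To see how a ROWID column can hold a "row pointer," here is a sketch that captures the ROWID pseudocolumn of rows in the lob_examples table created earlier (the WHERE condition is arbitrary):

INSERT INTO rowid_samples (tab_rowid)
   SELECT ROWID
   FROM   lob_examples
   WHERE  id = 1;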

3.4.1.6. Binary Datatypes

Oracle10g binary datatypes can be used to store unstructured data. Unlike regular character data, binary data does not undergo character set conversion when passed from database to database via a database link or export/import utility or when passed between database client and server processes.

The database binary datatypes are as follows:

RAW(size) Stores unstructured data up to 2000 bytes in size.

LONG RAW Stores unstructured data up to 2GB in size. Like the LONG datatype, it exists to support backward compatibility, and there are several restrictions on LONG RAW columns—Oracle discourages their use. Consider using the BLOB datatype instead.

Here is an example of the binary datatypes in use:

CREATE TABLE binary_samples
(init_string    RAW(2000)
,logo_image     LONG RAW
);

3.4.2. Creating Tables

As you know from Chapter 1, "Installing Oracle 10g," tables are the primary data storage containers in an Oracle database. Data in a table is organized into rows and columns. Each column has a name and a specific datatype and size, such as CHAR(16), VARCHAR2(50), TIMESTAMP(6), or NUMBER. A row is a single occurrence of this set of columns. You can think of columns as fields and rows as records.

When you create a table, you must give it a name as well as specify the column names and datatypes. You can optionally specify many additional attributes, such as column default values, extent sizes, which tablespace to use, and so on. Table and column names have the following requirements:

  • Must be from 1 to 30 bytes in length.

  • Must begin with a letter.

  • Can include letters, numbers, the underscore symbol (_), the pound symbol (#), and the dollar symbol ($). (However, Oracle discourages the use of pound and dollar symbols in names.)

  • Cannot be a reserved word such as NUMBER or INDEX.

If the name is enclosed in double quotation marks (" "), the only requirement is that the name be from 1 to 30 bytes long and not contain an embedded double quotation mark. Each column name must be unique within a table, and the table name must be unique within the namespace for tables, views, sequences, private synonyms, procedures, functions, packages, materialized views, and user-defined types. The namespace is simply the domain of allowable names for the set of schema objects that it serves.

In addition to the namespace shared by tables and views, the database has separate namespaces for each of the following:

  • Indexes

  • Constraints

  • Clusters

  • Database triggers

  • Private database links

  • Dimensions

  • Roles

  • Public synonyms

  • Public database links

  • Tablespaces

  • Profiles

  • Parameter files (PFILEs)

For example, if you have a view named BOOKS, you cannot name a table BOOKS (tables and views share a namespace), although you can create an index named BOOKS (indexes and tables have separate namespaces) and a constraint named BOOKS (constraints and tables have separate namespaces).
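The following sketch illustrates the separate namespaces, assuming a simple BOOKS table; because indexes have their own namespace, the index can share the table's name:

CREATE TABLE books
(book_id  NUMBER
,title    VARCHAR2(100)
);

-- Legal: indexes are in a different namespace than tables
CREATE INDEX books ON books (title);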

In the following sections, you will see how to create and manage tables.

3.4.2.1. Creating a Table

To create a table, use the CREATE TABLE statement. At a minimum, you need to list the column names and datatypes for the table. Here is an example:

CREATE TABLE change_log
(log_id      NUMBER
,who         VARCHAR2(64)
,when        TIMESTAMP
,what        VARCHAR2(200)
);

Commas delimit or separate the column definitions, which start with the column name: log_id, who, when, and what are the columns in this example. You can add attributes to your table definition, such as the tablespace in which you want your table stored:

CREATE TABLE change_log
(log_id      NUMBER
,who         VARCHAR2(64)
,when        TIMESTAMP
,what        VARCHAR2(200)
) TABLESPACE users;

After you create the table, you can display the structure of a table with the SQL*Plus DESCRIBE command:

SQL> describe change_log
 Name                    Null?    Type
 ----------------------- -------- ----------------
 LOG_ID                           NUMBER
 WHO                               VARCHAR2(64)
 WHEN                              TIMESTAMP(6)
 WHAT                             VARCHAR2(200)

3.4.2.2. Creating a Table Using a Query

When you create a table based on a query, you do not need to specify the column attributes; they are inherited from the existing schema objects. The query used in a CREATE TABLE AS SELECT statement can select from a single table, from a view, or from a join of multiple tables.

This table creation syntax is frequently identified with the abbreviation CTAS (Create Table As Select):

CREATE TABLE january2004_log
NOLOGGING COMPRESS
TABLESPACE archives
AS SELECT * FROM change_log
WHERE when BETWEEN TO_DATE('01-JAN-2004','DD-Mon-YYYY')
 AND TO_DATE('31-JAN-2004','DD-Mon-YYYY');

SQL> describe january2004_log
 Name                    Null?    Type
 ----------------------- -------- ----------------
 LOG_ID                           NUMBER
 WHO                              VARCHAR2(64)
 WHEN                             TIMESTAMP(6)
 WHAT                             VARCHAR2(200)

The option NOLOGGING tells the database not to log the contents of the table to the redo log and not to log subsequent direct-path insert operations to the redo log. The COMPRESS option tells the database to add the data to the table using data compression, thus requiring less disk space. The TABLESPACE option tells the database where to store the table.

3.4.2.3. Creating a Temporary Table

You can create a temporary table whose contents are transitory and only visible to the session that inserted data into it. The definition of the table persists, but the data in a temporary table lasts only for either the duration of the transaction (ON COMMIT DELETE ROWS) or for the duration of the session (ON COMMIT PRESERVE ROWS). Here is an example:

CREATE GLOBAL TEMPORARY TABLE my_session
(category      VARCHAR2(16)
,running_count NUMBER
) ON COMMIT DELETE ROWS;

Programs can manipulate data in temporary tables or join them to permanent tables in the same manner as any other table.

3.4.2.4. Setting Default Values

You can set default values for the columns in your table. When subsequent INSERT statements do not explicitly populate these columns, the database assigns the default value to the column.

Having a default value does not ensure that the column will always have a value. An INSERT or an UPDATE statement can always explicitly set a column to NULL unless there is a NOT NULL constraint on the column.

NOTE

See the section "Creating Constraints" later in this chapter for more information on creating or using constraints.

Default values can be any SQL expression that does not reference a PL/SQL function, other columns, or the pseudocolumns ROWNUM, NEXTVAL, CURRVAL, LEVEL, or PRIOR. To implement more complex rules in PL/SQL for assigning default values, you can use a BEFORE INSERT trigger.

NOTE

See Chapter 7, "Managing Data with SQL, PL/SQL, and Utilities," for more information on creating triggers.
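
As a rough illustration, a BEFORE INSERT trigger on the change_log table created earlier might assign a default derived from PL/SQL logic; the choice of the operating system user name here is just an example:

CREATE OR REPLACE TRIGGER change_log_defaults
BEFORE INSERT ON change_log
FOR EACH ROW
BEGIN
   -- Assign a value only when the column was not supplied
   IF :NEW.who IS NULL THEN
      :NEW.who := SYS_CONTEXT('USERENV', 'OS_USER');
   END IF;
END;
/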

Default values are defined as part of a column specification. This definition can be made at table creation time, like this:

CREATE TABLE change_log
(log_id      NUMBER
,who         VARCHAR2(64) DEFAULT USER
,when        TIMESTAMP    DEFAULT SYSTIMESTAMP
,what        VARCHAR2(200)
) TABLESPACE users;

To define a default value after a table has been created, use the ALTER TABLE statement to modify the column specification, like this:

ALTER TABLE change_log MODIFY
who         VARCHAR2(64) DEFAULT USER;

3.4.2.5. Adding Comments to a Table or a Column

You can add descriptive comments to your tables and columns in order to better describe the content or usage of these database objects. Comments can be a maximum of 4,000 bytes in length and can have embedded white space and punctuation.

Use the COMMENT ON statement to assign a comment to either a table or a column, like this:

COMMENT ON TABLE change_log IS
  'This table is where you record changes
 to the configuration of the DEMO system';

COMMENT ON COLUMN change_log.log_id IS
'System generated key for change log table
 Populated with the change_seq sequence.';

The comment must be enclosed in single quotes but can span physical lines. To display the comments on a table, query the DBA_TAB_COMMENTS, ALL_TAB_COMMENTS, or USER_TAB_COMMENTS data dictionary views:

SELECT owner, table_name, comments
FROM all_tab_comments
WHERE table_name = 'CHANGE_LOG';


OWNER  TABLE_NAME  COMMENTS
------ ----------- ---------------------------------------
CHIP   CHANGE_LOG  This table is where you record changes
                   to the configuration of the DEMO system

To display the comments on a column, query the DBA_COL_COMMENTS, ALL_COL_COMMENTS, or USER_COL_COMMENTS data dictionary views:

SELECT table_name, column_name, comments
FROM user_col_comments
WHERE table_name = 'CHANGE_LOG'
AND   column_name = 'LOG_ID';


TABLE_NAME COLUMN COMMENTS
---------- ------ -----------------------------------------
CHANGE_LOG LOG_ID System generated key for change log table
                  Populated with the change_seq sequence.

3.4.2.6. Renaming a Table

You can use the RENAME statement to change the name of a table, view, sequence, or private synonym—objects that share a namespace with tables. The syntax is RENAME old_name TO new_name. When you rename a table, the database invalidates all objects that depend on (refer to) the table, such as views, procedures, or synonyms. The database automatically alters associated indexes, grants, and constraints to reference the new name.

To change the name of the change_log table to demo_change_log, execute this:

RENAME change_log TO demo_change_log;

Another way to rename a table is with the RENAME TO clause of the ALTER TABLE statement. For example:

ALTER TABLE change_log RENAME TO demo_change_log;

Neither syntax is preferred; both forms are fully supported.

3.4.2.7. Adding and Dropping Columns in a Table

One task that you will need to perform while managing tables is adding new columns to an existing table. Use the ALTER TABLE statement to add columns. When adding a single column, use the syntax ALTER TABLE table_name ADD column_spec.

When you add multiple columns to a table, enclose a comma-delimited list of column specifications with parentheses. The column specification includes the column name, the column's datatype, and any default value that the column will have.

For example, to add a column named HOW to the change_log table, execute the following SQL:

ALTER TABLE change_log ADD how VARCHAR2(45);

To add the two columns—HOW and WHY—to the change_log table, use the syntax with parentheses, like this:

ALTER TABLE change_log ADD
(how     VARCHAR2(45)
,why     VARCHAR2(60)
);

To remove a column from a table, use the ALTER TABLE DROP COLUMN statement, as in this example:

ALTER TABLE change_log DROP COLUMN how;

To drop multiple columns, you don't use the keyword COLUMN, and instead enclose the comma-delimited list of columns in parentheses:

ALTER TABLE change_log DROP (how,why);

3.4.2.8. Modifying Columns

You may need to occasionally make changes to the columns of a table—increase or decrease the size of a column, rename a column, or assign a default value to a column. You use the ALTER TABLE MODIFY statement to make these column-level changes. As with the ADD and DROP options, you have two syntactical options: one for modifying a single column and one for modifying multiple columns.

To make changes to a single column, you specify the column name together with the new characteristics. For example, to change the column WHAT from VARCHAR2(200) to VARCHAR2(250), you execute:

ALTER TABLE change_log MODIFY what VARCHAR2(250);

To change multiple columns, enclose a comma-delimited list of modified column specs in parentheses, like this:

ALTER TABLE change_log MODIFY
(what    VARCHAR2(250)
,who     VARCHAR2(50)  DEFAULT user
);

3.4.2.9. Viewing the Attributes of a Table

You can use several data dictionary views to display the attributes of a table. The first place to start is usually with the DESCRIBE command from SQL*Plus or iSQL*Plus:

SQL> describe employees
 Name                    Null?     Type
 ----------------------- -------- ----------------
 EMPLOYEE_ID              NOT NULL NUMBER
 HIRE_DATE                NOT NULL DATE
 FIRST_NAME                        VARCHAR2(42)
 LAST_NAME                         VARCHAR2(42)
 PAYROLL_ID                        VARCHAR2(10)
 DEPT_NBR                          NUMBER
 MANAGER                           NUMBER

The DESCRIBE command displays the column names, datatypes, and nullability of each column. To see a table's physical attributes, query the ALL_TABLES or USER_TABLES dictionary view, like this:

SELECT * FROM user_tables
WHERE table_name = 'EMPLOYEES';

To see what constraints are declared on a table, use the ALL_CONSTRAINTS and ALL_CONS_COLUMNS views, like this:

SELECT constraint_name, constraint_type, r_constraint_name
FROM user_constraints
WHERE table_name = 'EMPLOYEES';

                               CONS
CONSTRAINT_NAME                TYPE   R_CONSTRAINT_NAME
------------------------------ ------ ---------------------
NN_EMP_ID                       C
SYS_C005286                     C
EMPLOYEES_PK                    P
UNIQ_PAYROLL_ID                 U
EMP_DEPT_FK                     R      DEPARTMENT_PK
MGR_EMP_FK                      R      EMPLOYEES_PK
HIRE_DATE_CHECK                 C

The CONSTRAINT_TYPE column indicates the kind of constraint:

  • C is for Check

  • P is for Primary Key

  • R is for Referential (or Foreign Key)

  • U is for Unique

NOT NULL constraints are stored both as a column attribute and as a check constraint. The SEARCH_CONDITION column of the USER_CONSTRAINTS view is only applicable for CHECK constraints:

SELECT search_condition
FROM user_constraints
WHERE constraint_name = 'SYS_C005286';

SEARCH_CONDITION
------------------------------------------------
"HIRE_DATE" IS NOT NULL

The columns participating in a FOREIGN KEY constraint can be found in the ALL_CONS_COLUMNS view, like this:

SELECT column_name, position
FROM all_cons_columns
WHERE constraint_name = 'EMP_DEPT_FK';

COLUMN_NAME                       POSITION
------------------------------ ----------
DEPT_NBR                                 1

3.4.2.10. Viewing the Contents of a Table

To view the contents of a table, simply select the columns of interest from the table. You can use the column list reported in the DESCRIBE command to limit your column selection or you can specify * for the column list to view all columns in the table. Use a WHERE clause to filter the rows, like this:

SELECT * FROM employees;

SELECT employee_id, first_name, last_name, hire_date
FROM employees
WHERE hire_date > SYSDATE - 90
;

3.4.3. Working With Constraints

Constraints enforce business rules in the database. In other words, they limit the acceptable data values for a table. Constraints are optional schema objects that depend on tables. Although you can have a table without any constraints, you cannot create a constraint without a table.

Oracle lets you create several types of constraints on your tables to enforce your business rules, including the following:

  • NOT NULL

  • UNIQUE

  • PRIMARY KEY

  • REFERENTIAL

  • CHECK

You can create constraints together with the table in the CREATE TABLE statement. After you create a table, you add or remove a constraint from a table with an ALTER TABLE statement. You specify the constraint information with either the in-line syntax as a column attribute or the out-of-line syntax as part of the table definition. Constraints do not require a name; if you do not name the constraint, Oracle generates one for you. However, the generated names are simply numbers prefixed with SYS_C and may not be very meaningful.
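
You can spot system-generated constraint names in your own schema with a quick dictionary query, like this:

SELECT constraint_name, constraint_type, table_name
FROM   user_constraints
WHERE  constraint_name LIKE 'SYS\_C%' ESCAPE '\';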

The following sections describe each of the constraint types in detail.

3.4.3.1. Working with NOT NULL Constraints

By default, all columns in a table allow NULL as a valid value. A NULL represents unknown or nonexistent information. Some business rules can be enforced with a NOT NULL constraint. For example, an employee may not be considered a valid employee if their hire date is not known. You enforce this business rule by placing a NOT NULL constraint on the hire_date column of the employees table. Any INSERT or UPDATE statement then fails if the protected column does not have a value.

NOT NULL constraints must be declared together with the column definition using in-line syntax.

Here is an example using the NOT NULL constraint:

CREATE TABLE employees
(employee_id     NUMBER  CONSTRAINT nn_emp_id  NOT NULL
,hire_date       DATE                          NOT NULL
,first_name      VARCHAR2(42)
,last_name       VARCHAR2(42)
);

3.4.3.2. Working with UNIQUE Constraints

A UNIQUE constraint ensures that each occurrence of the columns protected by this constraint is different from all other occurrences in the table. UNIQUE constraints cannot be created on columns of type CLOB, NCLOB, BLOB, LONG, LONG RAW, or TIMESTAMP WITH TIME ZONE.

Here is how you create an employees table that has a UNIQUE constraint on the payroll_ id column using the out-of-line syntax:

CREATE TABLE employees
(employee_id    NUMBER       NOT NULL
,hire_date      DATE         NOT NULL
,first_name     VARCHAR2(42)
,last_name       VARCHAR2(42)
,payroll_id      VARCHAR2(10)
,CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
);

Using the in-line syntax, the statement looks like this:

CREATE TABLE employees
(employee_id     NUMBER     NOT NULL
,hire_date       DATE       NOT NULL
,first_name      VARCHAR2(42)
,last_name       VARCHAR2(42)
,payroll_id      VARCHAR2(10)
    CONSTRAINT uniq_payroll_id UNIQUE
);

No two rows in this table can have the same value in payroll_id. NULL values do not count as a distinct value, so this employees table can have multiple rows with a NULL payroll_id. To ensure that payroll_id is always present, you need a NOT NULL constraint.

The database uses an index to help enforce this constraint. The index is usually a unique index, and if you create the UNIQUE constraint together with the table, the database automatically creates a unique index on the columns protected by the UNIQUE constraint, and the name of the index defaults to the name of the constraint. To assign attributes to this index, take advantage of the USING INDEX clause, like this:

CREATE TABLE employees
(employee_id    NUMBER        NOT NULL
,hire_date      DATE          NOT NULL
,first_name     VARCHAR2(42)
,last_name       VARCHAR2(42)
,payroll_id      VARCHAR2(10)
,CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
  USING INDEX TABLESPACE indx
);

You can add a UNIQUE constraint after the table is built by using an ALTER TABLE statement, like this:

ALTER TABLE employees ADD
CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
  USING INDEX TABLESPACE indx
;

When adding a UNIQUE constraint to an existing table, Oracle raises an exception, and the statement fails if there are any duplicate keys (that is, violations of the constraint). To assist you in identifying any duplicates, you can specify an EXCEPTIONS INTO clause. First, create an exceptions table with the utlexcpt.sql (using ROWID) or utlexpt1.sql (using UROWID) script located in the oracle_home/rdbms/admin directory. These scripts create a table named EXCEPTIONS that contains columns for the ROWID, owner, table_name, and constraint that is in violation. You can use this information to fix any data that violates the constraint and then try the ALTER statement again to create the constraint.

Here is an example:

ALTER TABLE employees ADD
   CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
   USING INDEX TABLESPACE indx
   EXCEPTIONS INTO exceptions;
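
Once the statement fails, you can join the EXCEPTIONS table back to the source table to see the offending rows. This sketch assumes the default column names created by utlexcpt.sql:

SELECT e.employee_id, e.payroll_id
FROM   employees  e
      ,exceptions x
WHERE  e.rowid      = x.row_id
AND    x.constraint = 'UNIQ_PAYROLL_ID';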

3.4.3.3. Working with PRIMARY KEY Constraints

Primary keys are defined in the relational model as the unique identifier for any tuple in an entity. PRIMARY KEY constraints enforce the validity of a primary key, so a table can have only one PRIMARY KEY constraint. A PRIMARY KEY constraint implicitly includes both NOT NULL constraints on each column in the key and a UNIQUE constraint, enforced with an index, on all columns in the key. You can create a PRIMARY KEY constraint at the same time that you create a table.

Here is an example of creating a PRIMARY KEY constraint together with a table, using out-of-line syntax:

CREATE TABLE employees
(employee_id    NUMBER        NOT NULL
,hire_date      DATE          NOT NULL
,first_name     VARCHAR2(42)
,last_name       VARCHAR2(42)
,payroll_id      VARCHAR2(10)
,CONSTRAINT employees_pk PRIMARY KEY (employee_id)
  USING INDEX TABLESPACE indx
);

As with UNIQUE constraints, PRIMARY KEY constraints cannot be created on columns of type CLOB, NCLOB, BLOB, LONG, LONG RAW, or TIMESTAMP WITH TIME ZONE.

3.4.3.4. Working with FOREIGN KEY Constraints

FOREIGN KEY constraints are also known as referential integrity constraints because they enforce referential integrity: they ensure that data values referenced in one table are defined in another table. FOREIGN KEY constraints tie these two tables together in a parent/child or referenced/dependent relationship.

When you declare a FOREIGN KEY constraint, you identify the columns in one table whose values must also appear in the primary key or unique key of another table. The table with the primary or unique key is known as the parent or referenced table, and the table with the FOREIGN KEY constraint is known as the child or dependent table.

As with UNIQUE and PRIMARY KEY constraints, FOREIGN KEY constraints cannot be created on columns of type CLOB, NCLOB, BLOB, LONG, LONG RAW, or TIMESTAMP WITH TIME ZONE.

Here is an example of creating a parent table (DEPARTMENTS) and child table (EMPLOYEES) with a PRIMARY KEY constraint on the parent and a FOREIGN KEY constraint on the child table, using out-of-line syntax:

CREATE TABLE departments
(dept_nbr      NUMBER   NOT NULL
   CONSTRAINT department_pk PRIMARY KEY
,dept_name     VARCHAR2(32)
,manager_id    NUMBER
);

CREATE TABLE employees
(employee_id    NUMBER        NOT NULL
,hire_date      DATE          NOT NULL
,first_name     VARCHAR2(42)
,last_name       VARCHAR2(42)
,payroll_id      VARCHAR2(10)
,dept_nbr       NUMBER
,CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
  USING INDEX TABLESPACE indx
,CONSTRAINT employees_pk PRIMARY KEY (employee_id)
  USING INDEX TABLESPACE indx
,CONSTRAINT emp_dept_fk FOREIGN KEY (dept_nbr)
  REFERENCES departments(dept_nbr)
);

In this example, each employee belongs to a department, so we put a DEPT_NBR column in the EMPLOYEES table to hold each employee's department number. The DEPARTMENTS table defines all the valid department numbers, and the FOREIGN KEY constraint ensures that any DEPT_NBR value appearing in the EMPLOYEES table is also defined in the DEPARTMENTS table; in essence, an employee must belong to a valid department. You implement this relationship or rule with a FOREIGN KEY constraint. By default, FOREIGN KEY columns allow NULLs.

By default, the database raises an exception and does not allow you to delete rows from a parent table if those rows are referenced in the child table. If this behavior isn't what you want, you can tell the database to maintain referential integrity automatically in a couple of ways: by deleting the child rows when the parent row is deleted (the ON DELETE CASCADE clause) or by setting the foreign key columns in the child table to NULL (the ON DELETE SET NULL clause), like this:

ALTER TABLE employees
   ADD CONSTRAINT emp_dept_fk FOREIGN KEY (dept_nbr)
     REFERENCES departments(dept_nbr) ON DELETE CASCADE;

ALTER TABLE departments ADD CONSTRAINT
   dept_mgr_fk FOREIGN KEY (manager_id) REFERENCES
     employees(employee_id) ON DELETE SET NULL;

The first statement tells the database that deleting a department should cause a cascading deletion of that department's employees. The second statement tells the database that deleting an employee who is a department manager should cause that department's MANAGER_ID column to revert to NULL.

3.4.3.4.1. A Self-Referencing Foreign Key

The parent and child tables do not always have to be separate tables; they can be separate columns of the same table. This configuration is known as a self-referencing foreign key. An example of a self-referencing foreign key can be added to the EMPLOYEES table used in the previous section. The business rule to be enforced requires that each employee report to a manager and that the manager be a valid employee. To add this rule to the EMPLOYEES table, add the MANAGER column together with a FOREIGN KEY constraint that references the EMPLOYEES table, like this:

ALTER TABLE employees ADD
   (manager    NUMBER
   ,CONSTRAINT mgr_emp_fk FOREIGN KEY (manager)
      REFERENCES employees(employee_id)
      ON DELETE SET NULL
   );

3.4.3.4.2. Deferred Constraint Checking

Your programs might temporarily violate a FOREIGN KEY constraint, without really violating the underlying business rule, if they add data to both tables participating in the constraint within the scope of a single transaction.

For example, if you hire a new employee and create a new department for that person to manage, you need to add a row to both the EMPLOYEES table (which references the new department) as well as the DEPARTMENTS table (which references the new employee). Although this temporary violation will not go against the intent of the business rule, you will need to create the constraints with some additional options, like this:

ALTER TABLE employees
   ADD CONSTRAINT emp_dept_fk FOREIGN KEY (dept_nbr)
   REFERENCES departments(dept_nbr) ON DELETE CASCADE
   DEFERRABLE;

ALTER TABLE departments ADD CONSTRAINT
   dept_mgr_fk FOREIGN KEY (manager_id) REFERENCES
     employees(employee_id) ON DELETE SET NULL
     DEFERRABLE INITIALLY DEFERRED;

By default, the database checks that a FOREIGN KEY constraint is satisfied at the end of each statement. You define this behavior with the keywords INITIALLY IMMEDIATE. Also by default, the database will not allow programs to defer constraint checking to the end of the transaction. You define this behavior with the keywords NOT DEFERRABLE.

When you create a constraint, you can tell the database to allow either immediate or deferred constraint checking by specifying the keyword DEFERRABLE. If you normally want a DEFERRABLE constraint to be deferred, create it with the INITIALLY DEFERRED option. Only DEFERRABLE constraints can be set to INITIALLY DEFERRED. Once you create a constraint, you cannot change its deferrability (for example, from NOT DEFERRABLE to DEFERRABLE); instead, you must drop and re-create the constraint with the new specification.
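
With the constraints declared DEFERRABLE, a program can defer the checking within a transaction and let the database validate everything at commit time. Here is a sketch (the key values are arbitrary):

SET CONSTRAINT emp_dept_fk DEFERRED;

INSERT INTO employees (employee_id, hire_date, dept_nbr)
   VALUES (1001, SYSDATE, 90);    -- department 90 does not exist yet
INSERT INTO departments (dept_nbr, dept_name, manager_id)
   VALUES (90, 'Research', 1001);

COMMIT;   -- the deferred constraints are checked here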

3.4.3.5. Working with CHECK Constraints

CHECK constraints verify that the values in a column or set of columns satisfy a specified condition, which must evaluate to a Boolean. If the condition evaluates to FALSE, the database raises an exception, and the INSERT or UPDATE statement fails. The condition cannot include subqueries, references to other tables, or calls to functions that are not deterministic. A function is deterministic if it always returns the same result when passed the same input parameters. Examples of deterministic functions include SQRT, TO_DATE, and SUBSTR. The functions SYSDATE, USER, and DBTIMEZONE are not deterministic. The condition must be a Boolean SQL expression enclosed in parentheses.

Add a CHECK constraint to ensure that every employee's hire date is later than the company's founding date, like this:

ALTER TABLE employees ADD CONSTRAINT
   validate_hire_date CHECK
   (hire_date > TO_DATE('15-Apr-1999','DD-Mon-YYYY'));

3.4.3.6. Modifying Constraints

Once created, constraints can be dropped, disabled (temporarily not enforced), enabled (enforced again), or renamed. You make these changes to constraints using an ALTER TABLE statement. Take care in disabling UNIQUE or PRIMARY KEY constraints, because disabling these constraints results in the supporting index being dropped (unless you also specify KEEP INDEX).

To drop a constraint, use an ALTER TABLE statement with the constraint name, like this:

ALTER TABLE employees DROP CONSTRAINT validate_hire_date;

Because there can be only one PRIMARY KEY constraint on a table, you can drop it by simply specifying DROP PRIMARY KEY without actually using the constraint's name. If FOREIGN KEY constraints reference your PRIMARY KEY or UNIQUE constraint, you need to drop these dependent constraints before or in conjunction (using the CASCADE keyword) with the PRIMARY KEY constraint:

ALTER TABLE employees DROP PRIMARY KEY CASCADE;

To rename a constraint, give the old and new names:

ALTER TABLE employees
   RENAME CONSTRAINT validate_hire_date TO hire_date_check;

When bulk loading data into a table, it is often more efficient to disable FOREIGN KEY and CHECK constraints, load the data, and then re-enable these constraints, like this:

ALTER TABLE employees DISABLE CONSTRAINT mgr_emp_fk;
-- bulk load the table
ALTER TABLE employees ENABLE  CONSTRAINT mgr_emp_fk;

Disabling either a PRIMARY KEY or a UNIQUE constraint may drop the supporting index and therefore may not be desirable for a load process that uses that index.

3.4.4. Working with Indexes

Indexes are optional data structures built on tables. Indexes can improve data retrieval performance by providing a direct access method instead of the default full table scan. You can build Btree or bitmap indexes on one or more columns in a table. An index key is one data value stored in the index. A Btree index sorts the keys into a balanced tree structure and stores each key together with the ROWIDs of the table rows containing that value. In a bitmap index, a bitmap is created for each key. Each bitmap contains one bit for every row in the table, forming the equivalent of a two-dimensional matrix; a bit is set when the corresponding row contains that bitmap's key value.

Btree indexes are the default index type, can be unique or non-unique, and are appropriate for medium- to high-cardinality columns (those having many distinct values). Btree indexes support row-level locking and so are appropriate for multiuser, transactional applications. The indexes supporting PRIMARY KEY or UNIQUE constraints are Btree indexes.

Bitmap indexes, on the other hand, are best for multiple combinations of low- to medium-cardinality columns (you cannot create a unique bitmap index), and they do not support row-level locking. Bitmap indexes are best in environments in which changes to data are limited and controlled, such as many data warehousing applications. Because bitmap indexes cannot efficiently handle changes to the indexed data, they are often dropped prior to data loading and then re-created after the load.

In the following section, we will show you how to create, drop, and manage indexes.

3.4.4.1. Creating and Dropping Indexes

To create an index, you first need a table. In the CREATE INDEX statement, you tell the database the name of the new index, which table to create the index on, and which columns to include. If multiple columns will be included, use a comma-delimited list.

To create a Btree index on the DEPT_NBR column of the EMPLOYEES table used in the preceding sections, use a CREATE INDEX statement, like this:

CREATE INDEX emp_dept_nbr ON employees (dept_nbr)
   TABLESPACE indx;

A unique index requires the additional keyword UNIQUE, like this:

CREATE UNIQUE INDEX dname_uix ON departments (dept_name);

If you frequently access employees by seniority, you can create a multicolumn index on both department number and hire date, like this:

CREATE INDEX emp_seniority ON
   employees (dept_nbr, hire_date)
   TABLESPACE indx;

To create three single-column bitmap indexes on the STATE, REGION, and METRO_AREA columns of the GEOGRAPHY table, execute the following:

CREATE BITMAP INDEX state_bix  ON geography (state);
CREATE BITMAP INDEX region_bix ON geography (region);
CREATE BITMAP INDEX metro_bix  ON geography (metro_area);

To drop an index, use a DROP INDEX statement, like these:

DROP INDEX emp_seniority;
DROP INDEX state_bix;

3.4.4.2. Managing Indexes

You can perform several maintenance actions on an index, including rebuilding an index, moving it to a new tablespace, coalescing it, or renaming the index. All these actions are performed with different clauses of an ALTER INDEX statement.

To rebuild an index, which will shrink its size and possibly reduce the Btree depth (making it more efficient), use the REBUILD clause, like this:

ALTER INDEX emp_seniority REBUILD;

To move an index from one tablespace to another, you specify a new tablespace in conjunction with REBUILD:

ALTER INDEX uniq_payroll_id REBUILD TABLESPACE hr_indx;

Coalescing an index is like a quick-and-dirty rebuild. Instead of re-creating the Btree, a coalesce combines entries in nearby leaf blocks with the intent of freeing space from some of the leaf blocks. The space and resources required for a coalesce are much less than for a full rebuild, but when you coalesce an index, you cannot move the index or reduce the depth of the Btree. Here is an example of coalescing an index:

ALTER INDEX uniq_payroll_id COALESCE;

Renaming an index is beneficial if you have a poorly named index. It also comes in handy for certain high-availability maintenance tasks: you can stage the new version of an index under a temporary name and then, in a short deployment window, rename the old and new indexes. Here is an example of renaming an index:

ALTER INDEX sys_c001428 RENAME TO employee_pk;

3.4.5. Working with Views

Views are virtual tables consisting of a stored query. A view is created with a query that incorporates one or more base tables. Although views are most often used for selecting data, some views can also be inserted into, updated, or deleted from. You can use views to restrict access to a subset of a table, or to make an awkward join appear as a simple table, reducing a query's complexity.

The data dictionary is a good example of a collection of complex joins appearing as simpler tables. The ALL_CONSTRAINTS view, for example, is a complex nine-table join that also includes two subqueries. Navigating the base tables used in the dictionary views is a bit daunting, but the dictionary views themselves are much more usable because they hide a great deal of the underlying complexity.

Likewise, some data is deemed too sensitive for general access, such as the passwords associated with a database link. This data is stored in the underlying base table link$ but is not revealed in the dictionary views DBA_DB_LINKS or ALL_DB_LINKS. Because access to the base table is restricted while the dictionary views are widely accessible, the database can hide the sensitive base-table data yet still expose the rest.

To create a view named empv that is based on a combination of data in both the EMPLOYEES and DEPARTMENTS tables, you can execute the following:

CREATE OR REPLACE VIEW empv
(employee_name
,department
,manager
,hire_date
) AS SELECT
 E.first_name||' '||E.last_name
,D.dept_name
,M.first_name||' '||M.last_name
,E.hire_date
FROM  employees   E
     ,departments D
     ,employees   M
WHERE E.dept_nbr   = D.dept_nbr
 AND  D.manager_id = M.employee_id
;

The OR REPLACE keywords tell the database to replace the view definition if it already exists; without them, the statement fails if the view already exists. The column list in the parentheses is the list of column names as they appear in the view, and these names correspond positionally with the column expressions in the query: the first expression in the query maps to the first column name in the view, the second expression to the second column name, and so on. You can choose not to include sensitive columns, such as salary or commission, in the view definition and grant privileges only on the view, thus restricting access to sensitive data.
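
For example, assuming a role named hr_clerk (hypothetical), you might grant access to the view while leaving the base tables protected:

GRANT SELECT ON empv TO hr_clerk;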

To remove a view from the database, use a DROP VIEW statement, like this:

DROP VIEW empv;

3.4.6. Working with Sequences

Sequences are schema objects that generate unique integers. They are frequently used to generate primary key values.

The CREATE SEQUENCE statement creates new sequences. You must, at a minimum, give the sequence a name. You can also assign the following attributes to the sequence in the CREATE statement:

START WITH Defines the initial value for the sequence.

INCREMENT BY Defines how the subsequent sequence numbers will be generated. The default is 1, and valid values are nonzero integers of less than 29 digits. A negative INCREMENT BY results in a descending sequence that generates progressively lower numbers. A positive INCREMENT BY results in an ascending sequence that generates progressively higher numbers.

MAXVALUE Defines the highest value that the sequence can generate. The default is the special keyword NOMAXVALUE, which evaluates to 10^27 for an ascending sequence and to -1 for a descending sequence.

MINVALUE Defines the lowest value that the sequence can generate. The default is the special keyword NOMINVALUE, which evaluates to 1 for an ascending sequence and to -10^26 for a descending sequence.

CACHE Defines how many values will be preallocated and held in memory. The default is 20, and the valid values range from 2 to the number returned from the formula:

CEIL((MAXVALUE - MINVALUE) / ABS(INCREMENT BY))
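
For example, a sequence defined with MINVALUE 1, MAXVALUE 1000, and INCREMENT BY 1 could specify a CACHE value of at most CEIL((1000 - 1) / 1) = 999.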

To create a sequence called employees_seq that generates unique integers, starting with 100500 and incrementing by 1, use the following statement:

CREATE SEQUENCE employees_seq START WITH 100500 NOMAXVALUE NOMINVALUE;

To request the next integer from the sequence, reference the pseudocolumn NEXTVAL in a SQL statement, like this:

SELECT employees_seq.nextval FROM dual;

These values can be used in INSERT statements or in UPDATE statements, like this:

INSERT INTO employees (employee_id, hire_date)
   SELECT employees_seq.NEXTVAL, hired_date
   FROM merged_employees;

You can use the ALTER SEQUENCE statement to modify the INCREMENT BY, MAXVALUE, MINVALUE, CYCLE, and CACHE attributes, but not the START WITH attribute, like this:

ALTER SEQUENCE employees_seq INCREMENT BY -1;

If you need to reset the next value that a sequence will generate, drop it, re-create it, and then re-grant any privileges on it. To drop a sequence, use the DROP SEQUENCE statement, like this:

DROP SEQUENCE employees_seq;
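
If the goal is to reset the sequence rather than remove it, a minimal sketch might look like this (the new starting value and the hr_clerk role are hypothetical):

CREATE SEQUENCE employees_seq START WITH 200500 NOMAXVALUE NOMINVALUE;

GRANT SELECT ON employees_seq TO hr_clerk;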
