Data Definition
This chapter covers how one creates the database structures that
will hold one's data. In a relational database, the raw data is
stored in tables, so the majority of this chapter is devoted to
explaining how tables are created and modified and what features are
available to control what data is stored in the tables.
Subsequently, we discuss how tables can be organized into
schemas, and how privileges can be assigned to tables. Finally,
we will briefly look at other features that affect the data storage,
such as views, functions, and triggers. Detailed information on
these topics is found in .
Table Basics
A table in a relational database is much like a table on paper: It
consists of rows and columns. The number and order of the columns
is fixed, and each column has a name. The number of rows is
variable -- it reflects how much data is stored at a given moment.
SQL does not make any guarantees about the order of the rows in a
table. When a table is read, the rows will appear in random order,
unless sorting is explicitly requested. This is covered in . Furthermore, SQL does not assign unique
identifiers to rows, so it is possible to have several completely
identical rows in a table. This is a consequence of the
mathematical model that underlies SQL but is usually not desirable.
Later in this chapter we will see how to deal with this issue.
Each column has a data type. The data type constrains the set of
possible values that can be assigned to a column and assigns
semantics to the data stored in the column so that it can be used
for computations. For instance, a column declared to be of a
numerical type will not accept arbitrary text strings, and the data
stored in such a column can be used for mathematical computations.
By contrast, a column declared to be of a character string type
will accept almost any kind of data but it does not lend itself to
mathematical calculations, although other operations such as string
concatenation are available.
PostgreSQL includes a sizable set of
built-in data types that fit many applications. Users can also
define their own data types. Most built-in data types have obvious
names and semantics, so we defer a detailed explanation to . Some of the frequently used data types are
integer for whole numbers, numeric for
possibly fractional numbers, text for character
strings, date for dates, time for
time-of-day values, and timestamp for values
containing both date and time.
To create a table, you use the aptly named CREATE
TABLE command. In this command you specify at least a
name for the new table, the names of the columns and the data type
of each column. For example:
CREATE TABLE my_first_table (
first_column text,
second_column integer
);
This creates a table named my_first_table with
two columns. The first column is named
first_column and has a data type of
text; the second column has the name
second_column and the type integer.
The table and column names follow the identifier syntax explained
in . The type names are
usually also identifiers, but there are some exceptions. Note that the
column list is comma-separated and surrounded by parentheses.
Of course, the previous example was heavily contrived. Normally,
you would give names to your tables and columns that convey what
kind of data they store. So let's look at a more realistic
example:
CREATE TABLE products (
product_no integer,
name text,
price numeric
);
(The numeric type can store fractional components, as
would be typical of monetary amounts.)
When you create many interrelated tables it is wise to choose a
consistent naming pattern for the tables and columns. For
instance, there is a choice of using singular or plural nouns for
table names, both of which are favored by some theorist or other.
There is a limit on how many columns a table can contain.
Depending on the column types, it is between 250 and 1600.
However, defining a table with anywhere near this many columns is
highly unusual and often a questionable design.
If you no longer need a table, you can remove it using the
DROP TABLE command. For example:
DROP TABLE my_first_table;
DROP TABLE products;
Attempting to drop a table that does not exist is an error.
Nevertheless, it is common in SQL script files to unconditionally
try to drop each table before creating it, ignoring the error
messages.
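For example, a script might unconditionally recreate a table like this (the error from the first command is expected on the first run and can be ignored):

```sql
DROP TABLE products;   -- fails harmlessly if the table does not exist yet
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric
);
```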
If you need to modify a table that already exists, see later in this chapter.
With the tools discussed so far you can create fully functional
tables. The remainder of this chapter is concerned with adding
features to the table definition to ensure data integrity,
security, or convenience. If you are eager to fill your tables with
data now you can skip ahead to and read the
rest of this chapter later.
System Columns
Every table has several system columns that are
implicitly defined by the system. Therefore, these names cannot be
used as names of user-defined columns. (Note that these
restrictions are separate from whether the name is a key word or
not; quoting a name will not allow you to escape these
restrictions.) You do not really need to be concerned about these
columns, just know they exist.
oid
The object identifier (object ID) of a row. This is a serial
number that is automatically added by
PostgreSQL to all table rows (unless
the table was created using WITHOUT OIDS, in which
case this column is not present). This column is of type
oid (same name as the column); see for more information about the type.
tableoid
The OID of the table containing this row. This column is
particularly handy for queries that select from inheritance
hierarchies, since without it, it's difficult to tell which
individual table a row came from. The
tableoid can be joined against the
oid column of
pg_class to obtain the table name.
xmin
The identity (transaction ID) of the inserting transaction for
this tuple. (Note: In this context, a tuple is an individual
state of a row; each update of a row creates a new tuple for the
same logical row.)
cmin
The command identifier (starting at zero) within the inserting
transaction.
xmax
The identity (transaction ID) of the deleting transaction, or
zero for an undeleted tuple. It is possible for this column to
be nonzero in a visible tuple: That usually indicates that the
deleting transaction hasn't committed yet, or that an attempted
deletion was rolled back.
cmax
The command identifier within the deleting transaction, or zero.
ctid
The physical location of the tuple within its table. Note that
although the ctid can be used to
locate the tuple very quickly, a row's
ctid will change each time it is
updated or moved by VACUUM FULL. Therefore
ctid is useless as a long-term row
identifier. The OID, or even better a user-defined serial
number, should be used to identify logical rows.
OIDs are 32-bit quantities and are assigned from a single cluster-wide
counter. In a large or long-lived database, it is possible for the
counter to wrap around. Hence, it is bad practice to assume that OIDs
are unique, unless you take steps to ensure that they are unique.
Recommended practice when using OIDs for row identification is to create
a unique constraint on the OID column of each table for which the OID will
be used. Never assume that OIDs are unique across tables; use the
combination of tableoid and row OID if you need a
database-wide identifier. (Future releases of
PostgreSQL are likely to use a separate
OID counter for each table, so that tableoid
must be included to arrive at a globally unique identifier.)
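As a sketch of this recommended practice, a unique index on the oid system column can be created explicitly (the table and index names here are arbitrary):

```sql
CREATE TABLE mytable (
    data text
);

-- guarantee that OIDs are unique within this table, even if the
-- cluster-wide OID counter wraps around
CREATE UNIQUE INDEX mytable_oid_idx ON mytable (oid);
```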
Transaction identifiers are also 32-bit quantities. In a long-lived
database it is possible for transaction IDs to wrap around. This
is not a fatal problem given appropriate maintenance procedures;
see for details. However, it is
unwise to depend on uniqueness of transaction IDs over the long term
(more than one billion transactions).
Command
identifiers are also 32-bit quantities. This creates a hard limit
of 2^32 (4 billion) SQL commands
within a single transaction. In practice this limit is not a
problem --- note that the limit is on number of
SQL commands, not number of tuples processed.
Default Values
A column can be assigned a default value. When a new row is
created and no values are specified for some of the columns, the
columns will be filled with their respective default values. A
data manipulation command can also request explicitly that a column
be set to its default value, without knowing what this value is.
(Details about data manipulation commands are in .)
If no default value is declared explicitly, the null value is the
default value. This usually makes sense because a null value can
be thought to represent unknown data.
In a table definition, default values are listed after the column
data type. For example:
CREATE TABLE products (
product_no integer,
name text,
price numeric DEFAULT 9.99
);
The default value may be a scalar expression, which will be
evaluated whenever the default value is inserted
(not when the table is created).
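For instance, the default can be an expression such as CURRENT_DATE, which is evaluated at the time of each insertion rather than once at table creation (the orders table here is only an illustration):

```sql
CREATE TABLE orders (
    order_id integer,
    order_date date DEFAULT CURRENT_DATE,  -- evaluated when the row is inserted
    quantity integer DEFAULT 1             -- a constant is also an expression
);
```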
Constraints
Data types are a way to limit the kind of data that can be stored
in a table. For many applications, however, the constraint they
provide is too coarse. For example, a column containing a product
price should probably only accept positive values. But there is no
data type that accepts only positive numbers. Another issue is
that you might want to constrain column data with respect to other
columns or rows. For example, in a table containing product
information, there should only be one row for each product number.
To that end, SQL allows you to define constraints on columns and
tables. Constraints give you as much control over the data in your
tables as you wish. If a user attempts to store data in a column
that would violate a constraint, an error is raised. This applies
even if the value came from the default value definition.
Check Constraints
A check constraint is the most generic constraint type. It allows
you to specify that the value in a certain column must satisfy an
arbitrary expression. For instance, to require positive product
prices, you could use:
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0)
);
As you see, the constraint definition comes after the data type,
just like default value definitions. Default values and
constraints can be listed in any order. A check constraint
consists of the key word CHECK followed by an
expression in parentheses. The check constraint expression should
involve the column thus constrained, otherwise the constraint
would not make much sense.
You can also give the constraint a separate name. This clarifies
error messages and allows you to refer to the constraint when you
need to change it. The syntax is:
CREATE TABLE products (
product_no integer,
name text,
price numeric CONSTRAINT positive_price CHECK (price > 0)
);
So, to specify a named constraint, use the key word
CONSTRAINT followed by an identifier followed
by the constraint definition.
A check constraint can also refer to several columns. Say you
store a regular price and a discounted price, and you want to
ensure that the discounted price is lower than the regular price.
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0),
discounted_price numeric CHECK (discounted_price > 0),
CHECK (price > discounted_price)
);
The first two constraints should look familiar. The third one
uses a new syntax. It is not attached to a particular column,
instead it appears as a separate item in the comma-separated
column list. Column definitions and these constraint
definitions can be listed in mixed order.
We say that the first two constraints are column constraints, whereas the
third one is a table constraint because it is written separately
from the column definitions. Column constraints can also be
written as table constraints, while the reverse is not necessarily
possible. The above example could also be written as
CREATE TABLE products (
product_no integer,
name text,
price numeric,
CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0),
CHECK (price > discounted_price)
);
or even
CREATE TABLE products (
product_no integer,
name text,
price numeric CHECK (price > 0),
discounted_price numeric,
CHECK (discounted_price > 0 AND price > discounted_price)
);
It's a matter of taste.
It should be noted that a check constraint is satisfied if the
check expression evaluates to true or the null value. Since most
expressions will evaluate to the null value if one operand is null
they will not prevent null values in the constrained columns. To
ensure that a column does not contain null values, the not-null
constraint described in the next section should be used.
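To illustrate, the check constraint on price from the earlier example does not reject a null value, because the expression price > 0 then evaluates to null rather than false:

```sql
CREATE TABLE products (
    product_no integer,
    name text,
    price numeric CHECK (price > 0)
);

INSERT INTO products VALUES (1, 'widget', -1);    -- fails: check is false
INSERT INTO products VALUES (2, 'gadget', NULL);  -- succeeds: check is null
```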
Not-Null Constraints
A not-null constraint simply specifies that a column must not
assume the null value. A syntax example:
CREATE TABLE products (
product_no integer NOT NULL,
name text NOT NULL,
price numeric
);
A not-null constraint is always written as a column constraint. A
not-null constraint is functionally equivalent to creating a check
constraint CHECK (column_name
IS NOT NULL), but in
PostgreSQL creating an explicit
not-null constraint is more efficient. The drawback is that you
cannot give explicit names to not-null constraints created that
way.
Of course, a column can have more than one constraint. Just write
the constraints after one another:
CREATE TABLE products (
product_no integer NOT NULL,
name text NOT NULL,
price numeric NOT NULL CHECK (price > 0)
);
The order doesn't matter, and it does not necessarily determine the
order in which the constraints are checked.
The NOT NULL constraint has an inverse: the
NULL constraint. This does not mean that the
column must be null, which would surely be useless. Instead, this
simply defines the default behavior that the column may be null.
The NULL constraint is not defined in the SQL
standard and should not be used in portable applications. (It was
only added to PostgreSQL to be
compatible with other database systems.) Some users, however,
like it because it makes it easy to toggle the constraint in a
script file. For example, you could start with
CREATE TABLE products (
product_no integer NULL,
name text NULL,
price numeric NULL
);
and then insert the NOT key word where desired.
In most database designs the majority of columns should be marked
not null.
Unique Constraints
Unique constraints ensure that the data contained in a column or a
group of columns is unique with respect to all the rows in the
table. The syntax is
CREATE TABLE products (
product_no integer UNIQUE,
name text,
price numeric
);
when written as a column constraint, and
CREATE TABLE products (
product_no integer,
name text,
price numeric,
UNIQUE (product_no)
);
when written as a table constraint.
If a unique constraint refers to a group of columns, the columns
are listed separated by commas:
CREATE TABLE example (
a integer,
b integer,
c integer,
UNIQUE (a, c)
);
It is also possible to assign names to unique constraints:
CREATE TABLE products (
product_no integer CONSTRAINT must_be_different UNIQUE,
name text,
price numeric
);
In general, a unique constraint is violated when there are (at
least) two rows in the table where the values of each of the
corresponding columns that are part of the constraint are equal.
However, null values are not considered equal in this
comparison. That means that, in the presence of a multicolumn
unique constraint it is possible to store an unlimited number of
rows that contain a null value in at least one of the constrained
columns. This behavior conforms to the SQL standard, but we have
heard that other SQL databases may not follow this rule. So be
careful when developing applications that are intended to be
portable.
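For example, with the two-column unique constraint shown above, any number of rows that agree in column a but have a null in column c are accepted:

```sql
CREATE TABLE example (
    a integer,
    b integer,
    c integer,
    UNIQUE (a, c)
);

INSERT INTO example VALUES (1, 2, NULL);  -- accepted
INSERT INTO example VALUES (1, 3, NULL);  -- also accepted: nulls never match
INSERT INTO example VALUES (1, 4, 5);     -- accepted
INSERT INTO example VALUES (1, 6, 5);     -- fails: (1, 5) already exists
```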
Primary Keys
Technically, a primary key constraint is simply a combination of a
unique constraint and a not-null constraint. So, the following
two table definitions accept the same data:
CREATE TABLE products (
product_no integer UNIQUE NOT NULL,
name text,
price numeric
);
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
Primary keys can also constrain more than one column; the syntax
is similar to unique constraints:
CREATE TABLE example (
a integer,
b integer,
c integer,
PRIMARY KEY (a, c)
);
A primary key indicates that a column or group of columns can be
used as a unique identifier for rows in the table. (This is a
direct consequence of the definition of a primary key. Note that
a unique constraint does not, in fact, provide a unique identifier
because it does not exclude null values.) This is useful both for
documentation purposes and for client applications. For example,
a GUI application that allows modifying row values probably needs
to know the primary key of a table to be able to identify rows
uniquely.
A table can have at most one primary key (while it can have many
unique and not-null constraints). Relational database theory
dictates that every table must have a primary key. This rule is
not enforced by PostgreSQL, but it is
usually best to follow it.
Foreign Keys
A foreign key constraint specifies that the values in a column (or
a group of columns) must match the values appearing in some row
of another table.
We say this maintains the referential
integrity between two related tables.
Say you have the product table that we have used several times already:
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
Let's also assume you have a table storing orders of those
products. We want to ensure that the orders table only contains
orders of products that actually exist. So we define a foreign
key constraint in the orders table that references the products
table:
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products (product_no),
quantity integer
);
Now it is impossible to create orders with
product_no entries that do not appear in the
products table.
We say that in this situation the orders table is the
referencing table and the products table is
the referenced table. Similarly, there are
referencing and referenced columns.
You can also shorten the above command to
CREATE TABLE orders (
order_id integer PRIMARY KEY,
product_no integer REFERENCES products,
quantity integer
);
because in absence of a column list the primary key of the
referenced table is used as referenced column.
A foreign key can also constrain and reference a group of columns.
As usual, it then needs to be written in table constraint form.
Here is a contrived syntax example:
CREATE TABLE t1 (
a integer PRIMARY KEY,
b integer,
c integer,
FOREIGN KEY (b, c) REFERENCES other_table (c1, c2)
);
Of course, the number and type of the constrained columns needs to
match the number and type of the referenced columns.
A table can contain more than one foreign key constraint. This is
used to implement many-to-many relationships between tables. Say
you have tables about products and orders, but now you want to
allow one order to contain possibly many products (which the
structure above did not allow). You could use this table structure:
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
CREATE TABLE orders (
order_id integer PRIMARY KEY,
shipping_address text,
...
);
CREATE TABLE order_items (
product_no integer REFERENCES products,
order_id integer REFERENCES orders,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
Note also that the primary key overlaps with the foreign keys in
the last table.
We know that the foreign keys disallow creation of orders that
do not relate to any products. But what if a product is removed
after an order is created that references it? SQL allows you to
specify that as well. Intuitively, we have a few options:
Disallow deleting a referenced product
Delete the orders as well
Something else?
To illustrate this, let's implement the following policy on the
many-to-many relationship example above: When someone wants to
remove a product that is still referenced by an order (via
order_items), we disallow it. If someone
removes an order, the order items are removed as well.
CREATE TABLE products (
product_no integer PRIMARY KEY,
name text,
price numeric
);
CREATE TABLE orders (
order_id integer PRIMARY KEY,
shipping_address text,
...
);
CREATE TABLE order_items (
product_no integer REFERENCES products ON DELETE RESTRICT,
order_id integer REFERENCES orders ON DELETE CASCADE,
quantity integer,
PRIMARY KEY (product_no, order_id)
);
Restricting and cascading deletes are the two most common options.
RESTRICT can also be written as NO
ACTION and it's also the default if you do not specify
anything. There are two other options for what should happen with
the foreign key columns when a primary key is deleted:
SET NULL and SET DEFAULT.
Note that these do not excuse you from observing any constraints.
For example, if an action specifies SET DEFAULT
but the default value would not satisfy the foreign key, the
deletion of the primary key will fail.
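As a sketch, a variant of the order_items table could set the product reference to null when the product is deleted. Note that product_no can then no longer be part of the primary key, since primary key columns must not be null; the item_id column is an invented surrogate key for this illustration:

```sql
CREATE TABLE order_items (
    item_id integer PRIMARY KEY,
    product_no integer REFERENCES products ON DELETE SET NULL,
    order_id integer REFERENCES orders ON DELETE CASCADE,
    quantity integer
);
```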
Analogous to ON DELETE there is also
ON UPDATE which is invoked when a primary key
is changed (updated). The possible actions are the same.
More information about updating and deleting data is in .
Finally, we should mention that a foreign key must reference
columns that are either a primary key or form a unique constraint.
If the foreign key references a unique constraint, there are some
additional possibilities regarding how null values are matched.
These are explained in the reference documentation for
.
Inheritance
This section needs to be rethought. Some of the
information should go into the following chapters.
Let's create two tables. The capitals table contains
state capitals which are also cities. Naturally, the
capitals table should inherit from cities.
CREATE TABLE cities (
name text,
population float,
altitude int -- (in ft)
);
CREATE TABLE capitals (
state char(2)
) INHERITS (cities);
In this case, a row of capitals inherits all
attributes (name, population, and altitude) from its
parent, cities. The type of the attribute name is
text, a native PostgreSQL type for variable length
ASCII strings. The type of the attribute population is
float, a native PostgreSQL type for double precision
floating-point numbers. State capitals have an extra
attribute, state, that shows their state. In PostgreSQL,
a table can inherit from zero or more other tables,
and a query can reference either all rows of a
table or all rows of a table plus all of its
descendants.
The inheritance hierarchy is actually a directed acyclic graph.
For example, the following query finds the names of all cities,
including state capitals, that are located at an altitude
over 500ft:
SELECT name, altitude
FROM cities
WHERE altitude > 500;
which returns:
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Madison | 845
On the other hand, the following query finds
all the cities that are not state capitals and
are situated at an altitude over 500ft:
SELECT name, altitude
FROM ONLY cities
WHERE altitude > 500;
name | altitude
-----------+----------
Las Vegas | 2174
Mariposa | 1953
Here the ONLY
before cities indicates that the query should
be run over only cities and not tables below cities in the
inheritance hierarchy. Many of the commands that we
have already discussed -- SELECT,
UPDATE and DELETE --
support this ONLY
notation.
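As a brief sketch based on the cities example, these commands affect only rows stored in cities itself, leaving rows of capitals untouched:

```sql
-- raise the altitude of a plain city; capitals are not considered
UPDATE ONLY cities SET altitude = altitude + 10 WHERE name = 'Mariposa';

-- remove low-lying plain cities; capitals are not considered
DELETE FROM ONLY cities WHERE altitude < 500;
```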
In some cases you may wish to know which table a particular tuple
originated from. There is a system column called
TABLEOID in each table which can tell you the
originating table:
SELECT c.tableoid, c.name, c.altitude
FROM cities c
WHERE c.altitude > 500;
which returns:
tableoid | name | altitude
----------+-----------+----------
139793 | Las Vegas | 2174
139793 | Mariposa | 1953
139798 | Madison | 845
(If you try to reproduce this example, you will probably get
different numeric OIDs.) By doing a join with
pg_class you can see the actual table names:
SELECT p.relname, c.name, c.altitude
FROM cities c, pg_class p
WHERE c.altitude > 500 and c.tableoid = p.oid;
which returns:
relname | name | altitude
----------+-----------+----------
cities | Las Vegas | 2174
cities | Mariposa | 1953
capitals | Madison | 845
Deprecated
In previous versions of PostgreSQL, the
default was not to get access to child tables. This was found to
be error prone and is also in violation of the SQL standard. Under the old
syntax, to get the sub-tables you appended * to the table name.
For example
SELECT * from cities*;
You can still explicitly specify scanning child tables by appending
*, as well as explicitly specify not scanning child tables by
writing ONLY. But beginning in version 7.1, the default
behavior for an undecorated table name is to scan its child tables
too, whereas before the default was not to do so. To get the old
default behavior, set the configuration option
SQL_Inheritance to off, e.g.,
SET SQL_Inheritance TO OFF;
or add a line in your postgresql.conf file.
A limitation of the inheritance feature is that indexes (including
unique constraints) and foreign key constraints only apply to single
tables, not to their inheritance children. Thus, in the above example,
specifying that another table's column REFERENCES cities(name)>
would allow the other table to contain city names but not capital names.
This deficiency will probably be fixed in some future release.
Modifying Tables
When you create a table and you realize that you made a mistake, or
the requirements of the application changed, then you can drop the
table and create it again. But this is not a convenient option if
the table is already filled with data, or if the table is
referenced by other database objects (for instance a foreign key
constraint). Therefore PostgreSQL
provides a family of commands to make modifications on existing
tables.
You can
Add columns,
Remove columns,
Add constraints,
Remove constraints,
Change default values,
Rename columns,
Rename tables.
All these actions are performed using the ALTER
TABLE command.
Adding a Column
To add a column, use this command:
ALTER TABLE products ADD COLUMN description text;
The new column will initially be filled with null values in the
existing rows of the table.
You can also define a constraint on the column at the same time,
using the usual syntax:
ALTER TABLE products ADD COLUMN description text CHECK (description <> '');
A new column cannot have a not-null constraint since the column
initially has to contain null values. But you can add a not-null
constraint later. Also, you cannot define a default value on a
new column. According to the SQL standard, this would have to
fill the new columns in the existing rows with the default value,
which is not implemented yet. But you can adjust the column
default later on.
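A common workaround, sketched here, is to add the column without constraints, fill in the existing rows, and then tighten the definition (the value 'none' is just a placeholder choice):

```sql
ALTER TABLE products ADD COLUMN description text;

-- fill the rows that existed before the column was added
UPDATE products SET description = 'none';

-- now the constraint and the default can be added
ALTER TABLE products ALTER COLUMN description SET NOT NULL;
ALTER TABLE products ALTER COLUMN description SET DEFAULT 'none';
```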
Removing a Column
To remove a column, use this command:
ALTER TABLE products DROP COLUMN description;
Adding a Constraint
To add a constraint, the table constraint syntax is used. For example:
ALTER TABLE products ADD CHECK (name <> '');
ALTER TABLE products ADD CONSTRAINT some_name UNIQUE (product_no);
ALTER TABLE products ADD FOREIGN KEY (product_group_id) REFERENCES product_groups;
To add a not-null constraint, which cannot be written as a table
constraint, use this syntax:
ALTER TABLE products ALTER COLUMN product_no SET NOT NULL;
The constraint will be checked immediately, so the table data must
satisfy the constraint before it can be added.
Removing a Constraint
To remove a constraint you need to know its name. If you gave it
a name then that's easy. Otherwise the system assigned a
generated name, which you need to find out. The
psql command \d
tablename can be helpful
here; other interfaces might also provide a way to inspect table
details. Then the command is:
ALTER TABLE products DROP CONSTRAINT some_name;
This works the same for all constraint types except not-null
constraints. To drop a not null constraint use
ALTER TABLE products ALTER COLUMN product_no DROP NOT NULL;
(Recall that not-null constraints do not have names.)
Changing the Default
To set a new default for a column, use a command like this:
ALTER TABLE products ALTER COLUMN price SET DEFAULT 7.77;
To remove any default value, use
ALTER TABLE products ALTER COLUMN price DROP DEFAULT;
This is equivalent to setting the default to null, at least in
PostgreSQL. As a consequence, it is not an error
to drop a default where one hadn't been defined, because the
default is implicitly the null value.
Renaming a Column
To rename a column:
ALTER TABLE products RENAME COLUMN product_no TO product_number;
Renaming a Table
To rename a table:
ALTER TABLE products RENAME TO items;
Privileges
When you create a database object, you become its owner. By
default, only the owner of an object can do anything with the
object. In order to allow other users to use it,
privileges must be granted. (There are also
users that have the superuser privilege. Those users can always
access any object.)
To change the owner of a table, index, sequence, or view, use the
ALTER TABLE command.
There are several different privileges: SELECT,
INSERT, UPDATE, DELETE,
RULE, REFERENCES, TRIGGER,
CREATE, TEMPORARY, EXECUTE,
USAGE, and ALL PRIVILEGES. For complete
information on the different types of privileges supported by
PostgreSQL, refer to the
GRANT reference page. The following sections
and chapters will also show you how those privileges are used.
The right to modify or destroy an object is always the privilege of
the owner only.
To assign privileges, the GRANT command is
used. So, if joe is an existing user, and
accounts is an existing table, the privilege to
update the table can be granted with
GRANT UPDATE ON accounts TO joe;
The user executing this command must be the owner of the table. To
grant a privilege to a group, use
GRANT SELECT ON accounts TO GROUP staff;
The special user
name PUBLIC can
be used to grant a privilege to every user on the system. Writing
ALL in place of a specific privilege specifies that all
privileges will be granted.
To revoke a privilege, use the fittingly named
REVOKE command:
REVOKE ALL ON accounts FROM PUBLIC;
The special privileges of the table owner (i.e., the right to do
DROP, GRANT, REVOKE, etc.)
are always implicit in being the owner,
and cannot be granted or revoked. But the table owner can choose
to revoke his own ordinary privileges, for example to make a
table read-only for himself as well as others.
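For instance, assuming joe owns the accounts table, he could make it read-only even for himself; the GRANT restores the read access that the blanket REVOKE removed (user and table names follow the earlier examples):

```sql
REVOKE ALL ON accounts FROM joe;   -- joe is the owner here
GRANT SELECT ON accounts TO joe;   -- allow reading only
```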
Schemas
A PostgreSQL database cluster
contains one or more named databases. Users and groups of users are
shared across the entire cluster, but no other data is shared across
databases. Any given client connection to the server can access
only the data in a single database, the one specified in the connection
request.
Users of a cluster do not necessarily have the privilege to access every
database in the cluster. Sharing of user names means that there
cannot be different users named, say, joe in two databases
in the same cluster; but the system can be configured to allow
joe access to only some of the databases.
A database contains one or more named schemas>, which
in turn contain tables. Schemas also contain other kinds of named
objects, including data types, functions, and operators. The same
object name can be used in different schemas without conflict; for
example, both schema1 and myschema may
contain tables named mytable. Unlike databases,
schemas are not rigidly separated: a user may access objects in any
of the schemas in the database he is connected to, if he has
privileges to do so.
There are several reasons why one might want to use schemas:
To allow many users to use one database without interfering with
each other.
To organize database objects into logical groups to make them
more manageable.
Third-party applications can be put into separate schemas so
they cannot collide with the names of other objects.
Schemas are analogous to directories at the operating system level,
except that schemas cannot be nested.
Creating a Schema
To create a separate schema, use the command CREATE
SCHEMA. Give the schema a name of your choice. For
example:
CREATE SCHEMA myschema;
To create or access objects in a schema, write a
qualified name consisting of the schema name and
table name separated by a dot:
schema.table
Actually, the even more general syntax
database.schema.table
can be used too, but at present this is just for pro-forma compliance
with the SQL standard; if you write a database name it must be the
same as the database you are connected to.
So to create a table in the new schema, use
CREATE TABLE myschema.mytable (
...
);
This works anywhere a table name is expected, including the table
modification commands and the data access commands discussed in
the following chapters.
To drop a schema if it's empty (all objects in it have been
dropped), use
DROP SCHEMA myschema;
To drop a schema including all contained objects, use
DROP SCHEMA myschema CASCADE;
See for a description of the general
mechanism behind this.
Often you will want to create a schema owned by someone else
(since this is one of the ways to restrict the activities of your
users to well-defined namespaces). The syntax for that is:
CREATE SCHEMA schemaname AUTHORIZATION username;
You can even omit the schema name, in which case the schema name
will be the same as the user name. See for how this can be useful.
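As a sketch (the user name joe here is hypothetical; the user must already exist):

```sql
-- Create a schema named "sales" that is owned by user joe
CREATE SCHEMA sales AUTHORIZATION joe;

-- Omitting the schema name creates a schema named after the owner,
-- in this case a schema also called "joe"
CREATE SCHEMA AUTHORIZATION joe;
```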
Schema names beginning with pg_ are reserved for
system purposes and may not be created by users.
The Public Schema
In the previous sections we created tables without specifying any
schema names. By default, such tables (and other objects) are
automatically put into a schema named public. Every new
database contains such a schema. Thus, the following are equivalent:
CREATE TABLE products ( ... );
and
CREATE TABLE public.products ( ... );
The Schema Search Path
Qualified names are tedious to write, and it's often best not to
wire a particular schema name into applications anyway. Therefore
tables are often referred to by unqualified names,
which consist of just the table name. The system determines which table
is meant by following a search path, which is a list
of schemas to look in. The first matching table in the search path
is taken to be the one wanted. If there is no match in the search
path, an error is reported, even if matching table names exist
in other schemas in the database.
The first schema named in the search path is called the current schema.
Aside from being the first schema searched, it is also the schema in
which new tables will be created if the CREATE TABLE
command does not specify a schema name.
To show the current search path, use the following command:
SHOW search_path;
In the default setup this returns:
search_path
--------------
$user,public
The first element specifies that a schema with the same name as
the current user is to be searched. Since no such schema exists
yet, this entry is ignored. The second element refers to the
public schema that we have seen already.
The first schema in the search path that exists is the default
location for creating new objects. That is the reason that by
default objects are created in the public schema. When objects
are referenced in any other context without schema qualification
(table modification, data modification, or query commands) the
search path is traversed until a matching object is found.
Therefore, in the default configuration, any unqualified access
again can only refer to the public schema.
To put our new schema in the path, we use
SET search_path TO myschema,public;
(We omit the $user here because we have no
immediate need for it.) And then we can access the table without
schema qualification:
DROP TABLE mytable;
Also, since myschema is the first element in
the path, new objects would by default be created in it.
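A brief sketch of this behavior, continuing with the myschema example from above:

```sql
SET search_path TO myschema,public;

-- Unqualified, so it is created in the first schema of the path:
-- this table is myschema.mytable
CREATE TABLE mytable ( ... );

-- An unqualified reference also finds myschema.mytable first,
-- even if a table of the same name existed in public
SELECT * FROM mytable;
```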
We could also have written
SET search_path TO myschema;
Then we no longer have access to the public schema without
explicit qualification. There is nothing special about the public
schema except that it exists by default. It can be dropped, too.
See also for other ways to access
the schema search path.
The search path works in the same way for data type names, function names,
and operator names as it does for table names. Data type and function
names can be qualified in exactly the same way as table names. If you
need to write a qualified operator name in an expression, there is a
special provision: you must write
OPERATOR(schema.operator)
This is needed to avoid syntactic ambiguity. An example is
SELECT 3 OPERATOR(pg_catalog.+) 4;
In practice one usually relies on the search path for operators,
so as not to have to write anything so ugly as that.
Schemas and Privileges
By default, users cannot see the objects in schemas they do not
own. To allow that, the owner of the schema needs to grant the
USAGE privilege on the schema. To allow users
to make use of the objects in the schema, additional privileges
may need to be granted, as appropriate for the object.
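For example (a sketch; the user joe and the objects in myschema are hypothetical):

```sql
-- Allow user joe to look up objects contained in myschema ...
GRANT USAGE ON SCHEMA myschema TO joe;

-- ... and additionally allow him to read one of its tables
GRANT SELECT ON myschema.mytable TO joe;
```

Note that USAGE on the schema alone is not enough to read a table; the table-level privilege is also required.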
A user can also be allowed to create objects in someone else's
schema. To allow that, the CREATE privilege on
the schema needs to be granted. Note that by default, everyone
has the CREATE privilege on the schema
public. This allows all users that manage to
connect to a given database to create objects there. If you do
not want to allow that, you can revoke that privilege:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
(The first public is the schema, the second
public means every user. In the
first sense it is an identifier, in the second sense it is a
reserved word, hence the different capitalization; recall the
guidelines from .)
The System Catalog Schema
In addition to public and user-created schemas, each
database contains a pg_catalog schema, which contains
the system tables and all the built-in data types, functions, and
operators. pg_catalog is always effectively part of
the search path. If it is not named explicitly in the path then
it is implicitly searched before searching the path's
schemas. This ensures that built-in names will always be
findable. However, you may explicitly place
pg_catalog at the end of your search path if you
prefer to have user-defined names override built-in names.
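For instance (a sketch using the myschema example from above), placing pg_catalog last means names in myschema and public shadow built-in names of the same spelling:

```sql
-- pg_catalog is now searched last instead of implicitly first
SET search_path TO myschema, public, pg_catalog;
```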
In PostgreSQL versions before 7.3,
table names beginning with pg_ were reserved. This is
no longer true: you may create such a table name if you wish, in
any non-system schema. However, it's best to continue to avoid
such names, to ensure that you won't suffer a conflict if some
future version defines a system table named the same as your
table. (With the default search path, an unqualified reference to
your table name would be resolved as the system table instead.)
System tables will continue to follow the convention of having
names beginning with pg_, so that they will not
conflict with unqualified user-table names so long as users avoid
the pg_ prefix.
Usage Patterns
Schemas can be used to organize your data in many ways. There are
a few usage patterns that are recommended and are easily supported by
the default configuration:
If you do not create any schemas then all users access the
public schema implicitly. This simulates the situation where
schemas are not available at all. This setup is mainly
recommended when there is only a single user or a few cooperating
users in a database. This setup also allows smooth transition
from the non-schema-aware world.
You can create a schema for each user with the same name as
that user. Recall that the default search path starts with
$user, which resolves to the user name.
Therefore, if each user has a separate schema, they access their
own schemas by default.
If you use this setup then you might also want to revoke access
to the public schema (or drop it altogether), so users are
truly constrained to their own schemas.
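A sketch of this setup (the user names alice and bob are hypothetical; the users must already exist):

```sql
-- One schema per user, named after and owned by that user,
-- so the default $user entry in the search path resolves to it
CREATE SCHEMA alice AUTHORIZATION alice;
CREATE SCHEMA bob AUTHORIZATION bob;

-- Optionally prevent users from creating objects in public
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
```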
To install shared applications (tables to be used by everyone,
additional functions provided by third parties, etc.), put them
into separate schemas. Remember to grant appropriate
privileges to allow the other users to access them. Users can
then refer to these additional objects by qualifying the names
with a schema name, or they can put the additional schemas into
their path, as they choose.
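As a sketch (the schema name addons and the table lookup_table are hypothetical):

```sql
-- A separate schema for shared, third-party objects
CREATE SCHEMA addons;
GRANT USAGE ON SCHEMA addons TO PUBLIC;
-- (plus per-object privileges as appropriate)

-- Users can then qualify names explicitly ...
SELECT * FROM addons.lookup_table;

-- ... or add the schema to their search path
SET search_path TO "$user", public, addons;
```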
Portability
In the SQL standard, the notion of objects in the same schema
being owned by different users does not exist. Moreover, some
implementations do not allow you to create schemas that have a
different name than their owner. In fact, the concepts of schema
and user are nearly equivalent in a database system that
implements only the basic schema support specified in the
standard. Therefore, many users consider qualified names to
really consist of
username.tablename.
This is how PostgreSQL will effectively
behave if you create a per-user schema for every user.
Also, there is no concept of a public schema in the
SQL standard. For maximum conformance to the standard, you should
not use (perhaps even remove) the public schema.
Of course, some SQL database systems might not implement schemas
at all, or provide namespace support by allowing (possibly
limited) cross-database access. If you need to work with those
systems, then maximum portability would be achieved by not using
schemas at all.
Other Database Objects
Tables are the central objects in a relational database structure,
because they hold your data. But they are not the only objects
that exist in a database. Many other kinds of objects can be
created to make the use and management of the data more efficient
or convenient. They are not discussed in this chapter, but we give
you a list here so that you are aware of what is possible.
Views
Functions, operators, data types, domains
Triggers and rewrite rules
Dependency Tracking
When you create complex database structures involving many tables
with foreign key constraints, views, triggers, functions, etc. you
will implicitly create a net of dependencies between the objects.
For instance, a table with a foreign key constraint depends on the
table it references.
To ensure the integrity of the entire database structure,
PostgreSQL makes sure that you cannot
drop objects that other objects still depend on. For example,
attempting to drop the products table we had considered in , with the orders table depending on
it, would result in an error message such as this:
DROP TABLE products;
NOTICE: constraint $1 on table orders depends on table products
ERROR: Cannot drop table products because other objects depend on it
Use DROP ... CASCADE to drop the dependent objects too
The error message contains a useful hint: if you do not want to
bother deleting all the dependent objects individually, you can run
DROP TABLE products CASCADE;
and all the dependent objects will be removed. In this case, it
doesn't remove the orders table, it only removes the foreign key
constraint. (If you want to check what DROP ... CASCADE will do,
run DROP without CASCADE and read the NOTICE messages.)
All drop commands in PostgreSQL support
specifying CASCADE. Of course, the nature of
the possible dependencies varies with the type of the object. You
can also write RESTRICT instead of
CASCADE to get the default behavior which is to
restrict drops of objects that other objects depend on.
According to the SQL standard, specifying either
RESTRICT or CASCADE is
required. No database system actually implements it that way, but
whether the default behavior is RESTRICT or
CASCADE varies across systems.
Foreign key constraint dependencies and serial column dependencies
from PostgreSQL versions prior to 7.3
are not maintained or created during the
upgrade process. All other dependency types will be properly
created during an upgrade.