[alpha_api]: javascript:alphaApi()
[alpha_implementation]: javascript:alphaImplementation()

Object Relational Tutorial {@name=datamapping}
============

In this tutorial we will cover a basic SQLAlchemy object-relational mapping scenario, where we store and retrieve Python objects from a database representation.  The database schema will begin with one table, and will later develop into several.  The tutorial is in doctest format, meaning each `>>>` line represents something you can type at a Python command prompt, and the following text represents the expected return value.

## Imports

A quick check to verify that we are on at least **version 0.4** of SQLAlchemy:

    {python}
    >>> import sqlalchemy
    >>> sqlalchemy.__version__ # doctest:+SKIP
    0.4.0
    
First, let's import the symbols we need to get started with our database connection, as well as those we'll use to tell SQLAlchemy about the database tables we want to work with.

    {python}
    >>> from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData

Many users prefer to just say `from sqlalchemy import *`, or `import sqlalchemy as sa`, for this step.

## Connecting

For this tutorial we will use an in-memory-only SQLite database.   This is an easy way to test things without needing to have an actual database defined anywhere.  To connect we use `create_engine()`:

    {python}
    >>> engine = create_engine('sqlite:///:memory:', echo=True)
    
The `echo` flag is a shortcut to setting up SQLAlchemy logging, which is accomplished via Python's standard `logging` module.  With it enabled, we'll see all the generated SQL produced.  If you are working through this tutorial and want less output generated, set it to `False`.   This tutorial will format the SQL behind a popup window so it doesn't get in our way; just click the "SQL" links to see what's being generated.
    
## Define and Create a Table {@name=tables}

Next we want to tell SQLAlchemy about our tables.  We will start with just a single table called `users`, which will store records for the end-users of our application (let's assume it's a website).  We define our tables all within a catalog called `MetaData`, using the `Table` construct, which resembles regular SQL CREATE TABLE syntax:

    {python}
    >>> metadata = MetaData()
    >>> users_table = Table('users', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('name', String(40)),
    ...     Column('fullname', String(100)),
    ...     Column('password', String(15))
    ... )

Everything about how to define `Table` objects, as well as how to create them automatically from an existing database, is described in [metadata](rel:metadata).

Next, to tell the `MetaData` we'd actually like to create our `users_table` for real inside the SQLite database, we use `create_all()`, passing it the `engine` instance which points to our database.  This will check for the presence of each table before creating it, so it's safe to call multiple times:

    {python}
    {sql}>>> metadata.create_all(engine) # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
    PRAGMA table_info("users")
    {}
    CREATE TABLE users (
        id INTEGER NOT NULL, 
        name VARCHAR(40), 
        fullname VARCHAR(100), 
        password VARCHAR(15), 
        PRIMARY KEY (id)
    )
    None
    COMMIT

So now our database is created, our initial schema is present, and our SQLAlchemy application knows all about the tables and columns in the database; this information will be re-used by the Object Relational Mapper, as we'll see now.
 
## Define a Python Class to be Mapped {@name=mapping}

So let's create a rudimentary `User` class to be mapped in the database.  This class will for starters have three attributes: `name`, `fullname` and `password`.  It need only subclass Python's built-in `object` class (i.e. it's a new-style class).  We will give it a constructor so that it may conveniently be instantiated with its attributes at once, as well as a `__repr__` method so that we can get a nice string representation of it:

    {python}
    >>> class User(object):
    ...     def __init__(self, name, fullname, password):
    ...         self.name = name
    ...         self.fullname = fullname
    ...         self.password = password
    ...
    ...     def __repr__(self):
    ...        return "<User(%r,%r, %r)>" % (self.name, self.fullname, self.password)

## Setting up the Mapping

With our `users_table` as well as our `User` class, we want to map the two together.  That's where the SQLAlchemy ORM package comes in.  We'll use the `mapper` function to create a **mapping** between `users_table` and `User`:

    {python}
    >>> from sqlalchemy.orm import mapper
    >>> mapper(User, users_table) # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
    <sqlalchemy.orm.mapper.Mapper object at 0x...>
    
The `mapper()` function creates a new `Mapper` object and stores it away for future reference.  It also **instruments** the attributes on our `User` class, corresponding to the columns of `users_table`.  The `id`, `name`, `fullname`, and `password` columns in our `users_table` are now instrumented upon our `User` class, meaning it will keep track of all changes to these attributes, and can save and load their values to/from the database.  Let's create our first user, 'ed', and ensure that the object has all three of these attributes:

    {python}
    >>> ed_user = User('ed', 'Ed Jones', 'edspassword')
    >>> ed_user.name
    'ed'
    >>> ed_user.password
    'edspassword'
    >>> str(ed_user.id)
    'None'
    
What was that last `id` attribute?  That was placed there by the `Mapper`, to track the value of the `id` column in the `users_table`.  Since our `User` doesn't exist in the database, its `id` is `None`.  When we save the object, it will be populated automatically with its new id.

## Creating a Session

We're now ready to start talking to the database.  The ORM's "handle" to the database is the `Session`.  Whenever you need to have a conversation with the database, you instantiate one of these objects.  
    
    {python}
    >>> from sqlalchemy.orm import Session
    >>> session = Session(engine)
    
The above `Session` is associated with our SQLite `engine`, but it hasn't opened any connections yet.  When it's first used, it retrieves a connection from a pool of connections stored in the `engine`, and holds onto it until we commit all changes and/or close the session object.  With most database configurations, there's also a transaction in progress (one notable exception to this is MySQL, when you use its default table style of MyISAM).  There are many options available to modify this behavior, but we'll go with this straightforward version to start.

## Saving Objects

So saving our `User` is as easy as issuing `save()`:

    {python}
    >>> session.save(ed_user)
    
But you'll notice nothing has happened yet.  Well, let's pretend something did, and try to query for our user.  This is done using the `query()` method on `Session`.  We create a new query representing the set of all `User` objects first.  Then we narrow the results by "filtering" down to the user we want; that is, the user whose `name` attribute is `"ed"`.  Finally we call `first()`, which tells the `Query`, "we'd like the first result in this list".

    {python}
    {sql}>>> session.query(User).filter_by(name='ed').first() # doctest:+ELLIPSIS,+NORMALIZE_WHITESPACE
    BEGIN
    INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
    ['ed', 'Ed Jones', 'edspassword']
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name = ? ORDER BY users.oid 
     LIMIT 1 OFFSET 0
    ['ed']
    {stop}<User('ed','Ed Jones', 'edspassword')>

And we get back our new user.  If you view the generated SQL, you'll see that the `Session` issued an `INSERT` statement before querying for us.  The `Session` stores whatever you put into it in memory, and at certain points in time issues a **flush**, which sends SQL to the database to store the objects.  You can also invoke this operation manually; however, when the `Session` is configured to `autoflush`, it's usually not needed.
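
The flush behavior can be sketched in plain Python.  This is an illustrative toy, not SQLAlchemy's actual unit-of-work implementation; the `ToySession` class and its methods are invented here purely to show that `save()` only records an object in memory, and that a query forces pending changes out first:

```python
class ToySession:
    def __init__(self):
        self.pending = []   # objects saved but not yet written out
        self.flushed = []   # stands in for rows in the database

    def save(self, obj):
        # save() only records the object in memory; no SQL yet
        self.pending.append(obj)

    def flush(self):
        # the flush writes out everything pending ("INSERT ...")
        self.flushed.extend(self.pending)
        self.pending = []

    def query(self):
        # autoflush: pending changes are written before the SELECT runs
        self.flush()
        return list(self.flushed)

s = ToySession()
s.save('ed')
print(s.flushed)   # [] -- nothing in the "database" yet
print(s.query())   # ['ed'] -- the query triggered the flush first
```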

OK, let's do some more operations.  We'll create and save three more users:

    {python}
    >>> session.save(User('wendy', 'Wendy Williams', 'foobar'))
    >>> session.save(User('mary', 'Mary Contrary', 'xxg527'))
    >>> session.save(User('fred', 'Fred Flinstone', 'blah'))

Also, Ed has already decided his password isn't too secure, so let's change it:
    
    {python}
    >>> ed_user.password = 'f8s7ccs'
    
Then we'll permanently store everything that's been changed and added to the database.  We do this via `commit()`:

    {python}
    {sql}>>> session.commit()
    UPDATE users SET password=? WHERE users.id = ?
    ['f8s7ccs', 1]
    INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
    ['wendy', 'Wendy Williams', 'foobar']
    INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
    ['mary', 'Mary Contrary', 'xxg527']
    INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
    ['fred', 'Fred Flinstone', 'blah']
    COMMIT

`commit()` flushes the remaining changes to the database, and commits the transaction.  The connection resources referenced by the session are now returned to the connection pool.  Subsequent operations with this session will automatically be started in a **new** transaction, which will again re-acquire connection resources.

Let's take a look at Ed's `id` attribute again, just to make sure it's set up:

    {python}
    >>> ed_user.id
    1

After each `INSERT` operation, the `Session` assigns all newly generated ids and defaults to the mapped object instance.

One crucial thing to note about the `Session` is that each object instance is cached within it, based on its primary key identifier.  The reason for this cache is not so much performance as maintaining an **identity map** of instances.  This map guarantees that whenever you work with a particular `User` object in a session, **you always get the same instance back**.  As below, reloading Ed gives us the same instance back:

    {python}
    {sql}>>> ed_user is session.query(User).filter_by(name='ed').one() # doctest: +NORMALIZE_WHITESPACE
    BEGIN
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name = ? ORDER BY users.oid 
    LIMIT 2 OFFSET 0
    ['ed']
    {stop}True
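
The identity map idea can be sketched in a few lines of plain Python.  This is purely illustrative (the `IdentityMap` and `Record` classes here are invented, and SQLAlchemy's real implementation is considerably more involved): loaded objects are keyed by class and primary key, so a second load of the same key hands back the cached instance:

```python
class IdentityMap:
    def __init__(self):
        self._instances = {}

    def get(self, cls, pk, loader):
        # key loaded objects by (class, primary key); load at most once
        key = (cls, pk)
        if key not in self._instances:
            self._instances[key] = loader()
        return self._instances[key]

class Record:
    def __init__(self, id):
        self.id = id

imap = IdentityMap()
a = imap.get(Record, 1, lambda: Record(1))
b = imap.get(Record, 1, lambda: Record(1))
print(a is b)   # True -- the second "load" returned the cached instance
```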

## Querying

A whirlwind tour through querying.

A `Query` is created from the `Session`, relative to a particular class we wish to load.

    {python}
    >>> query = session.query(User)

Once we have a query, we can start loading objects.  The Query object, when first created, represents all the instances of its main class.  You can iterate through it directly:

    {python}
    {sql}>>> for user in session.query(User):
    ...     print user.name
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users ORDER BY users.oid
    []
    {stop}ed
    wendy
    mary
    fred

...and the SQL will be issued at the point where the query is evaluated as a list.  If you apply array slices before iterating, LIMIT and OFFSET are applied to the query:

    {python}
    {sql}>>> for u in session.query(User)[1:3]: #doctest: +NORMALIZE_WHITESPACE
    ...    print u
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users ORDER BY users.oid 
    LIMIT 2 OFFSET 1
    []
    {stop}<User('wendy','Wendy Williams', 'foobar')>
    <User('mary','Mary Contrary', 'xxg527')>

Narrowing the results down is accomplished either with `filter_by()`, which uses keyword arguments:

    {python}
    {sql}>>> for user in session.query(User).filter_by(name='ed', fullname='Ed Jones'):
    ...    print user
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.fullname = ? AND users.name = ? ORDER BY users.oid
    ['Ed Jones', 'ed']
    {stop}<User('ed','Ed Jones', 'f8s7ccs')>

...or `filter()`, which uses SQL expression language constructs.  These allow you to use regular Python operators with the properties on your mapped class:

    {python}
    {sql}>>> for user in session.query(User).filter(User.name=='ed'):
    ...    print user
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name = ? ORDER BY users.oid
    ['ed']
    {stop}<User('ed','Ed Jones', 'f8s7ccs')>

You can also use the `Column` constructs attached to the `users_table` object to construct SQL expressions:

    {python}
    {sql}>>> for user in session.query(User).filter(users_table.c.name=='ed'):
    ...    print user
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name = ? ORDER BY users.oid
    ['ed']
    {stop}<User('ed','Ed Jones', 'f8s7ccs')>

Most common SQL operators are available, such as `LIKE`:

    {python}
    {sql}>>> session.query(User).filter(User.name.like('%ed'))[1] # doctest: +NORMALIZE_WHITESPACE
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name LIKE ? ORDER BY users.oid 
     LIMIT 1 OFFSET 1
    ['%ed']
    {stop}<User('fred','Fred Flinstone', 'blah')>

Note that above, our array index of `1` applied the appropriate LIMIT/OFFSET and returned a scalar result immediately.

The `all()`, `one()`, and `first()` methods immediately issue SQL without using an iterative context or array index.  `all()` returns a list:

    {python}
    >>> query = session.query(User).filter(User.name.like('%ed'))

    {sql}>>> query.all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name LIKE ? ORDER BY users.oid
    ['%ed']
    {stop}[<User('ed','Ed Jones', 'f8s7ccs')>, <User('fred','Fred Flinstone', 'blah')>]

`first()` applies a limit of one and returns the first result as a scalar:

    {python}
    {sql}>>> query.first()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name LIKE ? ORDER BY users.oid 
     LIMIT 1 OFFSET 0
    ['%ed']
    {stop}<User('ed','Ed Jones', 'f8s7ccs')>

and `one()` applies a limit of *two*, raising an error unless exactly one row is returned (no more, no less):

    {python}
    {sql}>>> try:  
    ...     user = query.one() 
    ... except Exception, e: 
    ...     print e
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name LIKE ? ORDER BY users.oid 
     LIMIT 2 OFFSET 0
    ['%ed']
    {stop}Multiple rows returned for one()
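
Why a limit of two?  Fetching at most two rows is enough to distinguish the three cases `one()` cares about, without pulling the whole result set over the wire.  A hypothetical sketch of the decision logic (not SQLAlchemy's actual code):

```python
def one(rows, limit=2):
    # fetch at most two rows: enough to tell "none", "exactly one",
    # and "more than one" apart, without loading everything
    rows = rows[:limit]
    if len(rows) == 0:
        raise ValueError("No rows returned for one()")
    if len(rows) > 1:
        raise ValueError("Multiple rows returned for one()")
    return rows[0]

print(one(['ed']))   # ed
```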

All `Query` methods that don't return a result instead return a new `Query` object, with modifications applied.  Therefore you can call many query methods successively to build up the criteria you want:

    {python}
    >>> query = session.query(User).filter(User.id<2).filter_by(name='ed')
    {sql}>>> query.filter(User.fullname=='Ed Jones').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.id < ? AND users.name = ? AND users.fullname = ? ORDER BY users.oid
    [2, 'ed', 'Ed Jones']
    {stop}[<User('ed','Ed Jones', 'f8s7ccs')>]
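
This "generative" behavior can be sketched in plain Python.  The `ToyQuery` class below is invented for illustration and is nothing like the real `Query` internals; it just shows that each call returns a *new* object carrying the accumulated criteria, leaving the original untouched:

```python
class ToyQuery:
    def __init__(self, criteria=()):
        self._criteria = tuple(criteria)

    def filter(self, criterion):
        # return a new query; the original is unchanged
        return ToyQuery(self._criteria + (criterion,))

    def __repr__(self):
        return "SELECT ... WHERE " + " AND ".join(self._criteria)

q1 = ToyQuery().filter("id < 2")
q2 = q1.filter("name = 'ed'")
print(q1)   # SELECT ... WHERE id < 2
print(q2)   # SELECT ... WHERE id < 2 AND name = 'ed'
```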

If you need to use conjunctions other than `AND`, all SQL conjunctions are available explicitly within expressions, such as `and_()` and `or_()`, when using `filter()`:

    {python}
    >>> from sqlalchemy import and_, or_
    
    {sql}>>> session.query(User).filter(
    ...    and_(User.id<224, or_(User.name=='ed', User.name=='wendy'))
    ...    ).all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.id < ? AND (users.name = ? OR users.name = ?) ORDER BY users.oid
    [224, 'ed', 'wendy']
    {stop}[<User('ed','Ed Jones', 'f8s7ccs')>, <User('wendy','Wendy Williams', 'foobar')>]
    
You also have full ability to use literal strings to construct SQL.  For a single criterion, use a string with `filter()`:

    {python}
    {sql}>>> for user in session.query(User).filter("id<224").all():
    ...     print user.name
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE id<224 ORDER BY users.oid
    []
    {stop}ed
    wendy
    mary
    fred
    
Bind parameters can be specified with string-based SQL, using a colon.  To specify the values, use the `params()` method:

    {python}
    {sql}>>> session.query(User).filter("id<:value and name=:name").params(value=224, name='fred').one() # doctest: +NORMALIZE_WHITESPACE
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE id<? and name=? ORDER BY users.oid 
    LIMIT 2 OFFSET 0
    [224, 'fred']
    {stop}<User('fred','Fred Flinstone', 'blah')>

Note that when we use constructed SQL expressions, bind parameters are generated for us automatically; we don't need to worry about them.
       
To use an entirely string-based statement, use `from_statement()`; just ensure that the columns clause of the statement contains the column names normally used by the mapper (below illustrated using an asterisk):

    {python}
    {sql}>>> result = session.query(User).from_statement("SELECT * FROM users").all()
    SELECT * FROM users
    []

`from_statement()` can also accommodate full `select()` constructs.  These are described in [sql](rel:sql):

    {python}
    >>> from sqlalchemy import select, func
    
    {sql}>>> session.query(User).from_statement(
    ...     select([users_table], select([func.max(users_table.c.name)]).label('maxuser')==users_table.c.name) 
    ...    ).all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE (SELECT max(users.name) 
    FROM users) = users.name
    []
    {stop}[<User('wendy','Wendy Williams', 'foobar')>]
    
There's also a way to combine scalar results with objects, using `add_column()`.  This is often used for functions and aggregates.  When `add_column()` (or its cousin `add_entity()`, described later) is used, tuples are returned:

    {python}
    {sql}>>> for r in session.query(User).add_column(select([func.max(users_table.c.name)]).label('maxuser')):
    ...     print r
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, (SELECT max(users.name) 
    FROM users) AS maxuser 
    FROM users ORDER BY users.oid
    []
    {stop}(<User('ed','Ed Jones', 'f8s7ccs')>, u'wendy')
    (<User('wendy','Wendy Williams', 'foobar')>, u'wendy')
    (<User('mary','Mary Contrary', 'xxg527')>, u'wendy')
    (<User('fred','Fred Flinstone', 'blah')>, u'wendy')

## Building a One-to-Many Relation

We've spent a lot of time dealing with just one class and one table.  Let's now look at how SQLAlchemy deals with two tables which have a relationship to each other.  Let's say that the users in our system can also store any number of email addresses associated with their username.  This implies a basic one-to-many association from the `users_table` to a new table which stores email addresses, which we will call `addresses`.  We will also create a relationship between this new table and the users table, using a `ForeignKey`:

    {python}
    >>> from sqlalchemy import ForeignKey
    
    >>> addresses_table = Table('addresses', metadata, 
    ...     Column('id', Integer, primary_key=True),
    ...     Column('email_address', String(100), nullable=False),
    ...     Column('user_id', Integer, ForeignKey('users.id')))
    
Another call to `create_all()` will skip over our `users` table and build just the new `addresses` table:

    {python}
    {sql}>>> metadata.create_all(engine) # doctest: +NORMALIZE_WHITESPACE
    PRAGMA table_info("users")
    {}
    PRAGMA table_info("addresses")
    {}
    CREATE TABLE addresses (
        id INTEGER NOT NULL, 
        email_address VARCHAR(100) NOT NULL, 
        user_id INTEGER, 
        PRIMARY KEY (id), 
         FOREIGN KEY(user_id) REFERENCES users (id)
    )
    None
    COMMIT

For our ORM setup, we're going to start all over again.  We will first close out our `Session` and clear all `Mapper` objects:

    {python}
    >>> from sqlalchemy.orm import clear_mappers
    >>> session.close()
    >>> clear_mappers()
    
Our `User` class, still around, reverts to being just a plain old class.  Let's create an `Address` class to represent a user's email address:

    {python}
    >>> class Address(object):
    ...     def __init__(self, email_address):
    ...         self.email_address = email_address
    ...
    ...     def __repr__(self):
    ...         return "<Address(%r)>" % self.email_address

Now comes the fun part.  We define a mapper for each class, and associate them using a function called `relation()`.  We can define each mapper in any order we want:

    {python}
    >>> from sqlalchemy.orm import relation
    
    >>> mapper(User, users_table, properties={    # doctest: +ELLIPSIS
    ...     'addresses':relation(Address, backref='user')
    ... })
    <sqlalchemy.orm.mapper.Mapper object at 0x...>
    
    >>> mapper(Address, addresses_table) # doctest: +ELLIPSIS
    <sqlalchemy.orm.mapper.Mapper object at 0x...>

Above, the new thing we see is that `User` has defined a relation named `addresses`, which will reference a list of `Address` objects.  How does it know it's a list?  SQLAlchemy figures it out for you, based on the foreign key relationship between `users_table` and `addresses_table`.

## Working with Related Objects and Backreferences {@name=relation_backref}

Now when we create a `User`, it automatically has this collection present:

    {python}
    >>> jack = User('jack', 'Jack Bean', 'gjffdd')
    >>> jack.addresses
    []
    
We are free to add `Address` objects, and the `session` will take care of everything for us.

    {python}
    >>> jack.addresses.append(Address(email_address='jack@google.com'))
    >>> jack.addresses.append(Address(email_address='j25@yahoo.com'))
    
Before we save into the `Session`, let's examine one other thing that's happened here.  The `addresses` collection is present on our `User` because we added a `relation()` with that name.  But also within the `relation()` function is the keyword `backref`.  This keyword indicates that we wish to make a **bi-directional relationship**: not only did we generate a one-to-many relationship called `addresses` on the `User` class, we also generated a **many-to-one** relationship on the `Address` class.  This relationship is self-updating, without any data being flushed to the database, as we can see on one of Jack's addresses:

    {python}
    >>> jack.addresses[1]
    <Address('j25@yahoo.com')>
    
    >>> jack.addresses[1].user
    <User('jack','Jack Bean', 'gjffdd')>
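
The in-memory bookkeeping behind a backref can be sketched in plain Python.  The `ToyUser`, `ToyAddress` and `RelatedList` classes below are invented for illustration (SQLAlchemy instruments its own collection classes); appending to one side simply sets the reverse reference on the other:

```python
class ToyAddress:
    def __init__(self, email):
        self.email = email
        self.user = None   # the many-to-one side, filled in by the backref

class RelatedList(list):
    # a list that keeps the reverse reference in sync on append
    def __init__(self, user):
        super().__init__()
        self._user = user

    def append(self, address):
        address.user = self._user   # update the other side, in memory only
        super().append(address)

class ToyUser:
    def __init__(self, name):
        self.name = name
        self.addresses = RelatedList(self)

jack = ToyUser('jack')
jack.addresses.append(ToyAddress('j25@yahoo.com'))
print(jack.addresses[0].user is jack)   # True -- both directions in sync
```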
    
Let's save into the session, then close out the session and create a new one...so that we can see how `Jack` and his email addresses come back to us:

    {python}
    >>> session.save(jack)
    {sql}>>> session.commit()
    BEGIN
    INSERT INTO users (name, fullname, password) VALUES (?, ?, ?)
    ['jack', 'Jack Bean', 'gjffdd']
    INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
    ['jack@google.com', 5]
    INSERT INTO addresses (email_address, user_id) VALUES (?, ?)
    ['j25@yahoo.com', 5]
    COMMIT
    
    >>> session = Session(engine)
    
Querying for Jack, we get just Jack back.  No SQL is yet issued for Jack's addresses:

    {python}
    {sql}>>> jack = session.query(User).filter_by(name='jack').one()
    BEGIN
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users 
    WHERE users.name = ? ORDER BY users.oid 
     LIMIT 2 OFFSET 0
    ['jack']
    
    >>> jack
    <User(u'jack',u'Jack Bean', u'gjffdd')>
    
Let's look at the `addresses` collection.  Watch the SQL:

    {python}
    {sql}>>> jack.addresses
    SELECT addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id 
    FROM addresses 
    WHERE ? = addresses.user_id ORDER BY addresses.oid
    [5]
    {stop}[<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
    
When we accessed the `addresses` collection, SQL was suddenly issued.  This is an example of a **lazy loading relation**.
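
A plain-Python sketch of lazy loading (illustrative only; `LazyCollection` and `load_addresses` are invented names, not SQLAlchemy internals): the collection defers its "SQL" until first accessed, then caches the result:

```python
queries = []

def load_addresses():
    # stands in for the SELECT ... FROM addresses statement
    queries.append("SELECT ... FROM addresses WHERE user_id = ?")
    return ['jack@google.com', 'j25@yahoo.com']

class LazyCollection:
    def __init__(self, loader):
        self._loader = loader
        self._loaded = None

    def __iter__(self):
        if self._loaded is None:
            self._loaded = self._loader()   # "SQL" fires here, only once
        return iter(self._loaded)

addresses = LazyCollection(load_addresses)
print(len(queries))      # 0 -- constructing the collection issued no SQL
print(list(addresses))   # first access triggers the load
print(len(queries))      # 1
list(addresses)
print(len(queries))      # 1 -- the result was cached
```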

To reduce the number of queries (dramatically, in many cases), we can apply an **eager load** to the query operation.  We clear out the session to ensure that a full reload occurs:

    {python}
    >>> session.clear()
    
Then apply an **option** to the query, indicating that we'd like `addresses` to load "eagerly".  SQLAlchemy then constructs a join between the `users` and `addresses` tables:

    {python}
    >>> from sqlalchemy.orm import eagerload
    
    {sql}>>> jack = session.query(User).options(eagerload('addresses')).filter_by(name='jack').one() #doctest: +NORMALIZE_WHITESPACE
    SELECT addresses_1.id AS addresses_1_id, addresses_1.email_address AS addresses_1_email_address, 
    addresses_1.user_id AS addresses_1_user_id, users.id AS users_id, users.name AS users_name, 
    users.fullname AS users_fullname, users.password AS users_password 
    FROM (SELECT users.id AS users_id, users.oid AS users_oid 
        FROM users 
    WHERE users.name = ? ORDER BY users.oid LIMIT 2 OFFSET 0) AS tbl_row_count, 
     users LEFT OUTER JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id 
    WHERE users.id = tbl_row_count.users_id ORDER BY tbl_row_count.oid, addresses_1.oid
    ['jack']
    
    >>> jack
    <User(u'jack',u'Jack Bean', u'gjffdd')>
    
    >>> jack.addresses
    [<Address(u'jack@google.com')>, <Address(u'j25@yahoo.com')>]
    
If you think that query is elaborate, it is!  But SQLAlchemy is just getting started.  Note that when using eager loading, *nothing* changes as far as the ultimate results returned.  The "loading strategy", as it's called, is designed to be completely transparent in all cases, and is for optimization purposes only.  Any query criteria you use to load objects, including ordering, limiting, other joins, etc., should return identical results regardless of the combination of lazily- and eagerly-loaded relationships present.

## Querying with Joins {@name=joins}

Which brings us to the next big topic.  What if we want to create joins that *do* change the results?  For that, another `Query` tornado is coming....

One way to join two tables together is just to compose a SQL expression.   Below we make one up using the `id` and `user_id` attributes on our mapped classes:

    {python}
    >>> query = session.query(User).filter(User.id==Address.user_id)
    {sql}>>> query.filter(Address.email_address=='jack@google.com').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users, addresses 
    WHERE users.id = addresses.user_id AND addresses.email_address = ? ORDER BY users.oid
    ['jack@google.com']
    {stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]

Or we can make a real JOIN construct; below we use the `join()` function available on `Table` to create a `Join` object, then tell the `Query` to use it as our FROM clause:

    {python}
    >>> query = session.query(User).select_from(users_table.join(addresses_table))
    {sql}>>> query.filter(Address.email_address=='jack@google.com').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users JOIN addresses ON users.id = addresses.user_id 
    WHERE addresses.email_address = ? ORDER BY users.oid
    ['jack@google.com']
    {stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]

Note that the `join()` construct has no problem figuring out the correct join condition between `users_table` and `addresses_table`; the `ForeignKey` we constructed says it all.

The easiest way to join is automatically, using the `join()` method on `Query`.  Just give this method the path from A to B, using the name of a mapped relationship directly:

    {python}
    >>> query = session.query(User).join('addresses')
    {sql}>>> query.filter(Address.email_address=='jack@google.com').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users JOIN addresses ON users.id = addresses.user_id 
    WHERE addresses.email_address = ? ORDER BY users.oid
    ['jack@google.com']
    {stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]

Each time `join()` is called on `Query`, the **joinpoint** of the query is moved to the endpoint of the join.  As above, when we joined from `users_table` to `addresses_table`, all subsequent criteria used by `filter_by()` are against the `addresses` table.  When you `join()` again, the joinpoint starts back from the root.  We can also backtrack to the beginning explicitly using `reset_joinpoint()`.  This instruction places the joinpoint back at the root `users` table, where subsequent `filter_by()` criteria are again against `users`:

    {python}
    >>> query = session.query(User).join('addresses').filter_by(email_address='jack@google.com')
    {sql}>>> query.reset_joinpoint().filter_by(name='jack').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users JOIN addresses ON users.id = addresses.user_id 
    WHERE addresses.email_address = ? AND users.name = ? ORDER BY users.oid
    ['jack@google.com', 'jack']
    {stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]

In all cases, we can get the `User` and the matching `Address` objects back at the same time, by telling the session we want both.  This returns the results as a list of tuples:

    {python}
    >>> query = session.query(User).add_entity(Address).join('addresses')
    {sql}>>> query.filter(Address.email_address=='jack@google.com').all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password, addresses.id AS addresses_id, addresses.email_address AS addresses_email_address, addresses.user_id AS addresses_user_id 
    FROM users JOIN addresses ON users.id = addresses.user_id 
    WHERE addresses.email_address = ? ORDER BY users.oid
    ['jack@google.com']
    {stop}[(<User(u'jack',u'Jack Bean', u'gjffdd')>, <Address(u'jack@google.com')>)]

Another common scenario is the need to join on the same table more than once.  For example, if we want to find a `User` who has two distinct email addresses, both `jack@google.com` as well as `j25@yahoo.com`, we need to join to the `addresses` table twice.  SQLAlchemy does provide `Alias` objects which can accomplish this; but far easier is just to tell `join()` to alias for you:

    {python}
    >>> query = session.query(User).join('addresses', aliased=True)
    >>> query = query.filter(Address.email_address=='jack@google.com')
    >>> query = query.join('addresses', aliased=True).filter(Address.email_address=='j25@yahoo.com')
    {sql}>>> query.all()
    SELECT users.id AS users_id, users.name AS users_name, users.fullname AS users_fullname, users.password AS users_password 
    FROM users JOIN addresses AS addresses_1 ON users.id = addresses_1.user_id JOIN addresses AS addresses_2 ON users.id = addresses_2.user_id 
    WHERE addresses_1.email_address = ? AND addresses_2.email_address = ? ORDER BY users.oid
    ['jack@google.com', 'j25@yahoo.com']
    {stop}[<User(u'jack',u'Jack Bean', u'gjffdd')>]

The key thing which occurred above is that our SQL criteria were **aliased** appropriately, corresponding to the alias generated in the most recent `join()` call.  This is an example of where SQLAlchemy's relational construct engine makes a complicated task simple.