diff --git a/docs/build/autogenerate.rst b/docs/build/autogenerate.rst
new file mode 100644
index 0000000..6958841
--- /dev/null
+++ b/docs/build/autogenerate.rst
@@ -0,0 +1,293 @@
+Auto Generating Migrations
+===========================
+
+Alembic can view the status of the database and compare against the table metadata
+in the application, generating the "obvious" migrations based on a comparison. This
+is achieved using the ``--autogenerate`` option to the ``alembic revision`` command,
+which places so-called *candidate* migrations into our new migrations file. We
+review and modify these by hand as needed, then proceed normally.
+
+To use autogenerate, we first need to modify our ``env.py`` so that it gets access
+to a table metadata object that contains the target. Suppose our application
+has a `declarative base <http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#synopsis>`_
+in ``myapp.mymodel``. This base contains a :class:`~sqlalchemy.schema.MetaData` object which
+contains :class:`~sqlalchemy.schema.Table` objects defining our database. We make sure this
+is loaded in ``env.py`` and then passed to :meth:`.EnvironmentContext.configure` via the
+``target_metadata`` argument. The ``env.py`` sample script already has a
+variable declaration near the top for our convenience, where we replace ``None``
+with our :class:`~sqlalchemy.schema.MetaData`. Starting with::
+
+ # add your model's MetaData object here
+ # for 'autogenerate' support
+ # from myapp import mymodel
+ # target_metadata = mymodel.Base.metadata
+ target_metadata = None
+
+we change to::
+
+ from myapp.mymodel import Base
+ target_metadata = Base.metadata
+
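For context, a minimal ``myapp/mymodel.py`` along these lines would provide the ``Base`` used above (a sketch using the hypothetical names from this example and the modern ``sqlalchemy.orm.declarative_base`` import path; real applications will have their own module layout):

```python
# hypothetical myapp/mymodel.py -- a minimal declarative setup whose
# MetaData is what env.py hands to target_metadata
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Account(Base):
    __tablename__ = 'account'

    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(50), nullable=False)

# Base.metadata now contains the 'account' Table object
```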
+If we look later in the script, down in ``run_migrations_online()``,
+we can see the directive passed to :meth:`.EnvironmentContext.configure`::
+
+ def run_migrations_online():
+ engine = engine_from_config(
+ config.get_section(config.config_ini_section), prefix='sqlalchemy.')
+
+ connection = engine.connect()
+ context.configure(
+ connection=connection,
+ target_metadata=target_metadata
+ )
+
+ trans = connection.begin()
+ try:
+ context.run_migrations()
+ trans.commit()
+ except:
+ trans.rollback()
+ raise
+
+We can then use the ``alembic revision`` command in conjunction with the
+``--autogenerate`` option. Suppose
+our :class:`~sqlalchemy.schema.MetaData` contained a definition for the ``account`` table,
+and the database did not. We'd get output like::
+
+ $ alembic revision --autogenerate -m "Added account table"
+ INFO [alembic.context] Detected added table 'account'
+ Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done
+
+We can then view our file ``27c6a30d7c24.py`` and see that a rudimentary migration
+is already present::
+
+ """empty message
+
+ Revision ID: 27c6a30d7c24
+ Revises: None
+ Create Date: 2011-11-08 11:40:27.089406
+
+ """
+
+ # revision identifiers, used by Alembic.
+ revision = '27c6a30d7c24'
+ down_revision = None
+
+ from alembic import op
+ import sqlalchemy as sa
+
+ def upgrade():
+ ### commands auto generated by Alembic - please adjust! ###
+ op.create_table(
+ 'account',
+ sa.Column('id', sa.Integer()),
+ sa.Column('name', sa.String(length=50), nullable=False),
+ sa.Column('description', sa.VARCHAR(200)),
+ sa.Column('last_transaction_date', sa.DateTime()),
+ sa.PrimaryKeyConstraint('id')
+ )
+ ### end Alembic commands ###
+
+ def downgrade():
+ ### commands auto generated by Alembic - please adjust! ###
+ op.drop_table("account")
+ ### end Alembic commands ###
+
+The migration hasn't actually run yet, of course. We do that via the usual ``upgrade``
+command. We should also go into our migration file and alter it as needed, adjusting
+the generated directives and adding any other directives they depend on -
+in particular, data migrations that run in between the creates, alters, and drops.
+
+Autogenerate will by default detect:
+
+* Table additions, removals.
+* Column additions, removals.
+* Change of nullable status on columns.
+* Basic changes in indexes and explicitly-named unique constraints.
+
+.. versionadded:: 0.6.1 Support for autogenerate of indexes and unique constraints.
+
+Autogenerate can *optionally* detect:
+
+* Change of column type. This will occur if you set
+ the :paramref:`.EnvironmentContext.configure.compare_type` parameter
+ to ``True``, or to a custom callable.
+ The feature works well in most cases,
+ but is off by default so that it can be tested on the target schema
+ first. It can also be customized by passing a callable here; see the
+ function's documentation for details.
+* Change of server default. This will occur if you set
+ the :paramref:`.EnvironmentContext.configure.compare_server_default`
+  parameter to ``True``, or to a custom callable.
+ This feature works well for simple cases but cannot always produce
+ accurate results. The Postgresql backend will actually invoke
+ the "detected" and "metadata" values against the database to
+ determine equivalence. The feature is off by default so that
+ it can be tested on the target schema first. Like type comparison,
+ it can also be customized by passing a callable; see the
+ function's documentation for details.
+
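As a sketch of the custom-callable form, a ``compare_type`` callable receives the reflected and metadata columns and types, and returns ``True`` (the types differ), ``False`` (they are equivalent), or ``None`` (fall back to the default comparison). The policy below, which treats all string types as equivalent regardless of length, is purely illustrative:

```python
# illustrative callable for context.configure(compare_type=my_compare_type)
import sqlalchemy as sa

def my_compare_type(context, inspected_column,
                    metadata_column, inspected_type, metadata_type):
    # hypothetical policy: never report a diff between string types,
    # e.g. ignore VARCHAR length changes
    if isinstance(inspected_type, sa.String) and \
            isinstance(metadata_type, sa.String):
        return False
    # None means "use Alembic's default comparison"
    return None
```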
+Autogenerate can *not* detect:
+
+* Changes of table name. These will come out as an add/drop of two different
+ tables, and should be hand-edited into a name change instead.
+* Changes of column name. Like table name changes, these are detected as
+ a column add/drop pair, which is not at all the same as a name change.
+* Anonymously named constraints. Give your constraints a name,
+ e.g. ``UniqueConstraint('col1', 'col2', name="my_name")``
+* Special SQLAlchemy types such as :class:`~sqlalchemy.types.Enum` when generated
+  on a backend which doesn't support ENUM directly - this is because the
+  representation of such a type in the non-supporting database, i.e. a
+  CHAR + CHECK constraint, could be any kind of CHAR + CHECK. For SQLAlchemy
+  to determine that this is actually an ENUM would only be a guess, something
+  that's generally a bad idea.
+ To implement your own "guessing" function here, use the
+ :meth:`sqlalchemy.events.DDLEvents.column_reflect` event
+ to alter the SQLAlchemy type passed for certain columns and possibly
+ :meth:`sqlalchemy.events.DDLEvents.after_parent_attach` to intercept
+ unwanted CHECK constraints.
+
+Autogenerate can't currently, but will *eventually* detect:
+
+* Some free-standing constraint additions and removals,
+ like CHECK and FOREIGN KEY - these are not fully implemented.
+* Sequence additions, removals - not yet implemented.
+
+
+.. _autogen_render_types:
+
+Rendering Custom Types in Autogenerate
+--------------------------------------
+
+The methodology Alembic uses to generate SQLAlchemy type constructs
+as Python code is plain old ``__repr__()``. SQLAlchemy's built-in types
+for the most part have a ``__repr__()`` that faithfully renders a
+Python-compatible constructor call, but there are some exceptions, particularly
+in those cases when a constructor accepts arguments that aren't compatible
+with ``__repr__()``, such as a pickling function.
+
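As a sketch, a custom type whose ``__repr__()`` faithfully round-trips back to its own constructor call might look like the following (``MySpecialType`` and its ``precision`` argument are hypothetical):

```python
# hypothetical custom type with a __repr__() that reproduces
# a working constructor call for use in migration scripts
import sqlalchemy.types as types

class MySpecialType(types.TypeDecorator):
    impl = types.Text

    def __init__(self, precision=None):
        self.precision = precision
        super().__init__()

    def __repr__(self):
        # renders e.g. "MySpecialType(precision=8)"
        return "MySpecialType(precision=%r)" % self.precision
```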
+When building a custom type that will be rendered into a migration script,
+it is often necessary to explicitly give the type a ``__repr__()`` that will
+faithfully reproduce the constructor for that type. Beyond that, it is
+usually also necessary to change how the enclosing module or package
+is rendered;
+this is accomplished using the :paramref:`.EnvironmentContext.configure.render_item`
+configuration option::
+
+ def render_item(type_, obj, autogen_context):
+ """Apply custom rendering for selected items."""
+
+ if type_ == 'type' and isinstance(obj, MySpecialType):
+ return "mypackage.%r" % obj
+
+ # default rendering for other objects
+ return False
+
+ def run_migrations_online():
+ # ...
+
+ context.configure(
+ connection=connection,
+ target_metadata=target_metadata,
+ render_item=render_item,
+ # ...
+ )
+
+ # ...
+
+Above, we also need to make sure our ``MySpecialType`` includes an appropriate
+``__repr__()`` method, which is invoked when the object is interpolated with
+the ``%r`` format specifier.
+
+The callable we use for :paramref:`.EnvironmentContext.configure.render_item`
+can also add imports to our migration script. The ``autogen_context`` passed in
+contains an entry called ``autogen_context['imports']``, which is a Python
+``set()`` to which we can add new imports. For example, if ``MySpecialType``
+were in a module called ``mymodel.types``, we can add the import for it
+as we encounter the type::
+
+ def render_item(type_, obj, autogen_context):
+ """Apply custom rendering for selected items."""
+
+ if type_ == 'type' and isinstance(obj, MySpecialType):
+ # add import for this type
+ autogen_context['imports'].add("from mymodel import types")
+ return "types.%r" % obj
+
+ # default rendering for other objects
+ return False
+
+The finished migration script will include our imports where the
+``${imports}`` expression is used, producing output such as::
+
+ from alembic import op
+ import sqlalchemy as sa
+ from mymodel import types
+
+ def upgrade():
+ op.add_column('sometable', Column('mycolumn', types.MySpecialType()))
+
+.. _autogen_module_prefix:
+
+Controlling the Module Prefix
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When using :paramref:`.EnvironmentContext.configure.render_item`, note that
+we deliver not just the reproduction of the type, but we can also deliver the
+"module prefix", which is a module namespace from which our type can be found
+within our migration script. When Alembic renders SQLAlchemy types, it will
+typically use the value of
+:paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`,
+which defaults to ``"sa."``, to achieve this::
+
+ Column("my_column", sa.Integer())
+
+When we use a custom type that is not within the ``sqlalchemy.`` module namespace,
+by default Alembic will use the **value of __module__ for the custom type**::
+
+ Column("my_column", myapp.models.utils.types.MyCustomType())
+
+Above, our custom type is rendered using the full, very specific path that
+``__module__`` reports. It's good practice not to let this long and arbitrary
+name render into our migration scripts, as it would then be hardcoded into
+all of them; instead, we should create a module that exists explicitly for
+custom types that our migration files will use. Suppose
+we call it ``myapp.migration_types``::
+
+ # myapp/migration_types.py
+
+ from myapp.models.utils.types import MyCustomType
+
+We can provide the name of this module to our autogenerate context using the
+:paramref:`.EnvironmentContext.configure.user_module_prefix`
+option::
+
+ def run_migrations_online():
+ # ...
+
+ context.configure(
+ connection=connection,
+ target_metadata=target_metadata,
+ user_module_prefix="myapp.migration_types.",
+ # ...
+ )
+
+ # ...
+
+We then get a migration like::
+
+ Column("my_column", myapp.migration_types.MyCustomType())
+
+Now, when we inevitably refactor our application to move ``MyCustomType``
+somewhere else, we need only modify the ``myapp.migration_types`` module,
+instead of searching and replacing all instances within our migration scripts.
+
+.. versionchanged:: 0.7.0
+ :paramref:`.EnvironmentContext.configure.user_module_prefix`
+ no longer defaults to the value of
+ :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
+ when left at ``None``; the ``__module__`` attribute is now used.
+
+.. versionadded:: 0.6.3 Added :paramref:`.EnvironmentContext.configure.user_module_prefix`.
+