Runtime Objects
The “runtime” of Alembic involves the EnvironmentContext
and MigrationContext objects.   These are the objects that are
in play once the env.py script is loaded up by a command and
a migration operation proceeds.
The Environment Context
The EnvironmentContext class provides most of the
API used within an env.py script.  Within env.py,
the instantiated EnvironmentContext is made available
via a special proxy module called alembic.context.   That is,
you can import alembic.context like a regular Python module,
and each name you call upon it is ultimately routed towards the
current EnvironmentContext in use.
In particular, the key method used within env.py is EnvironmentContext.configure(),
which establishes all the details about how the database will be accessed.
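For illustration, here is a minimal sketch of an online-mode env.py; the database URL and metadata are placeholders, and the standard templates generated by alembic init additionally provide an offline code path:

    from sqlalchemy import create_engine, MetaData

    from alembic import context

    # placeholder values; a real env.py would import the application's
    # metadata and read the URL from the Alembic config
    target_metadata = MetaData()
    engine = create_engine("sqlite:///example.db")

    with engine.connect() as connection:
        # configure() sets up the MigrationContext against this connection
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
        )
        with context.begin_transaction():
            context.run_migrations()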
- class alembic.runtime.environment.EnvironmentContext(config: Config, script: ScriptDirectory, **kw: Any)

A configurational facade made available in an env.py script.

The EnvironmentContext acts as a facade to the more nuts-and-bolts objects of MigrationContext as well as certain aspects of Config, within the context of the env.py script that is invoked by most Alembic commands.

EnvironmentContext is normally instantiated when a command in alembic.command is run. It then makes itself available in the alembic.context module for the scope of the command. From within an env.py script, the current EnvironmentContext is available by importing this module.

EnvironmentContext also supports programmatic usage. At this level, it acts as a Python context manager, that is, it is intended to be used with the with: statement. A typical use of EnvironmentContext:

    from alembic.config import Config
    from alembic.script import ScriptDirectory

    config = Config()
    config.set_main_option("script_location", "myapp:migrations")
    script = ScriptDirectory.from_config(config)


    def my_function(rev, context):
        '''do something with revision "rev", which
        will be the current database revision,
        and "context", which is the MigrationContext
        that the env.py will create'''


    with EnvironmentContext(
        config,
        script,
        fn=my_function,
        as_sql=False,
        starting_rev="base",
        destination_rev="head",
        tag="sometag",
    ):
        script.run_env()

The above script will invoke the env.py script within the migration environment. If and when env.py calls MigrationContext.run_migrations(), the my_function() function above will be called by the MigrationContext, given the context itself as well as the current revision in the database.

Note: For most API usages other than full blown invocation of migration scripts, the MigrationContext and ScriptDirectory objects can be created and used directly. The EnvironmentContext object is only needed when you need to actually invoke the env.py module present in the migration environment.

Construct a new EnvironmentContext.

Parameters:
- config – a Config instance.
- script – a ScriptDirectory instance.
- **kw – keyword options that will ultimately be passed along to the MigrationContext when EnvironmentContext.configure() is called.
 
- begin_transaction() → _ProxyTransaction | ContextManager[None, bool | None]

Return a context manager that will enclose an operation within a “transaction”, as defined by the environment’s offline and transactional DDL settings.

e.g.:

    with context.begin_transaction():
        context.run_migrations()

begin_transaction() is intended to “do the right thing” regardless of calling context:

- If is_transactional_ddl() is False, returns a “do nothing” context manager which otherwise produces no transactional state or directives.
- If is_offline_mode() is True, returns a context manager that will invoke the DefaultImpl.emit_begin() and DefaultImpl.emit_commit() methods, which will produce the string directives BEGIN and COMMIT on the output stream, as rendered by the target backend (e.g. SQL Server would emit BEGIN TRANSACTION).
- Otherwise, calls sqlalchemy.engine.Connection.begin() on the current online connection, which returns a sqlalchemy.engine.Transaction object. This object demarcates a real transaction and is itself a context manager, which will roll back if an exception is raised.

Note that a custom env.py script which has more specific transactional needs can of course manipulate the Connection directly to produce transactional state in “online” mode.
- config: Config = None

An instance of Config representing the configuration file contents as well as other variables set programmatically within it.
- configure(connection: Connection | None = None, url: str | URL | None = None, dialect_name: str | None = None, dialect_opts: Dict[str, Any] | None = None, transactional_ddl: bool | None = None, transaction_per_migration: bool = False, output_buffer: TextIO | None = None, starting_rev: str | None = None, tag: str | None = None, template_args: Dict[str, Any] | None = None, render_as_batch: bool = False, target_metadata: MetaData | Sequence[MetaData] | None = None, include_name: IncludeNameFn | None = None, include_object: IncludeObjectFn | None = None, include_schemas: bool = False, process_revision_directives: ProcessRevisionDirectiveFn | None = None, compare_type: bool | CompareType = True, compare_server_default: bool | CompareServerDefault = False, render_item: RenderItemFn | None = None, literal_binds: bool = False, upgrade_token: str = 'upgrades', downgrade_token: str = 'downgrades', alembic_module_prefix: str = 'op.', sqlalchemy_module_prefix: str = 'sa.', user_module_prefix: str | None = None, on_version_apply: OnVersionApplyFn | None = None, **kw: Any) → None

Configure a MigrationContext within this EnvironmentContext which will provide database connectivity and other configuration to a series of migration scripts.

Many methods on EnvironmentContext require that this method has been called in order to function, as they ultimately need to have database access or at least access to the dialect in use. Those which do are documented as such.

The important thing needed by configure() is a means to determine what kind of database dialect is in use. An actual connection to that database is needed only if the MigrationContext is to be used in “online” mode.

If the is_offline_mode() function returns True, then no connection is needed here. Otherwise, the connection parameter should be present as an instance of sqlalchemy.engine.Connection.

This function is typically called from the env.py script within a migration environment. It can be called multiple times for an invocation. The most recent Connection for which it was called is the one that will be operated upon by the next call to run_migrations().

General parameters:

Parameters:
- connection – a Connection to use for SQL execution in “online” mode. When present, it is also used to determine the type of dialect in use.
- url – a string database url, or a sqlalchemy.engine.url.URL object. The type of dialect to be used will be derived from this if connection is not passed.
- dialect_name – string name of a dialect, such as “postgresql”, “mssql”, etc. The type of dialect to be used will be derived from this if connection and url are not passed.
- dialect_opts – dictionary of options to be passed to the dialect constructor.
- transactional_ddl – Force the usage of “transactional” DDL on or off; this otherwise defaults to whether or not the dialect in use supports it.
- transaction_per_migration – if True, nest each migration script in a transaction rather than the full series of migrations to run.
- output_buffer – a file-like object that will be used for textual output when the --sql option is used to generate SQL scripts. Defaults to sys.stdout if not passed here and also not present on the Config object. The value here overrides that of the Config object.
- output_encoding – when using --sql to generate SQL scripts, apply this encoding to the string output.
- literal_binds – when using --sql to generate SQL scripts, pass through the literal_binds flag to the compiler so that any literal values that would ordinarily be bound parameters are converted to plain strings. Warning: dialects can typically only handle simple datatypes like strings and numbers for auto-literal generation. Datatypes like dates, intervals, and others may still require manual formatting, typically using Operations.inline_literal(). Note: the literal_binds flag is ignored on SQLAlchemy versions prior to 0.8 where this feature is not supported.
- starting_rev – Override the “starting revision” argument when using --sql mode.
- tag – a string tag for usage by custom env.py scripts. Set via the --tag option, can be overridden here.
- template_args – dictionary of template arguments which will be added to the template argument environment when running the “revision” command. Note that the script environment is only run within the “revision” command if the --autogenerate option is used, or if the option “revision_environment=true” is present in the alembic.ini file.
- version_table – The name of the Alembic version table. The default is 'alembic_version'.
- version_table_schema – Optional schema to place version table within.
- version_table_pk – boolean, whether the Alembic version table should use a primary key constraint for the “value” column; this only takes effect when the table is first created. Defaults to True; setting to False should not be necessary and is here for backwards compatibility reasons.
- on_version_apply – a callable or collection of callables to be run for each migration step. The callables will be run in the order they are given, once for each migration step, after the respective operation has been applied but before its transaction is finalized. Each callable accepts no positional arguments and the following keyword arguments, as illustrated in the sketch that follows this list:
  - ctx: the MigrationContext running the migration,
  - step: a MigrationInfo representing the step currently being applied,
  - heads: a collection of version strings representing the current heads,
  - run_args: the **kwargs passed to run_migrations().
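For example, a minimal sketch of such a callable, used here only to log each step as it is applied (the logger name is arbitrary):

    import logging

    from alembic import context

    log = logging.getLogger("alembic.env")


    def report_version_apply(*, ctx, step, heads, run_args):
        # invoked once per migration step, after the operation has been
        # applied but before its transaction is finalized
        log.info("applied %s; current heads: %s", step, sorted(heads))


    context.configure(
        # ... connection / url and other options as usual ...
        on_version_apply=report_version_apply,
    )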
 
 
Parameters specific to the autogenerate feature, when alembic revision is run with the --autogenerate feature:

Parameters:
- target_metadata – a sqlalchemy.schema.MetaData object, or a sequence of MetaData objects, that will be consulted during autogeneration. The tables present in each MetaData will be compared against what is locally available on the target Connection to produce candidate upgrade/downgrade operations.
- compare_type – Indicates type comparison behavior during an autogenerate operation. Defaults to True, turning on type comparison, which has good accuracy on most backends. See Comparing Types for an example as well as information on other type comparison options. Set to False to disable type comparison. A callable can also be passed to provide custom type comparison; see Comparing Types for additional details. Changed in version 1.12.0: The default value of EnvironmentContext.configure.compare_type has been changed to True.
- compare_server_default – Indicates server default comparison behavior during an autogenerate operation. Defaults to False, which disables server default comparison. Set to True to turn on server default comparison, which has varied accuracy depending on backend.

To customize server default comparison behavior, a callable may be specified which can filter server default comparisons during an autogenerate operation. The format of this callable is:

    def my_compare_server_default(context, inspected_column,
                metadata_column, inspected_default, metadata_default,
                rendered_metadata_default):
        # return True if the defaults are different,
        # False if not, or None to allow the default implementation
        # to compare these defaults
        return None

    context.configure(
        # ...
        compare_server_default = my_compare_server_default
    )

inspected_column is a dictionary structure as returned by sqlalchemy.engine.reflection.Inspector.get_columns(), whereas metadata_column is a sqlalchemy.schema.Column from the local model environment.

A return value of None indicates to allow default server default comparison to proceed. Note that some backends such as Postgresql actually execute the two defaults on the database side to compare for equivalence.
- include_name – A callable function which is given the chance to return True or False for any database reflected object based on its name, including database schema names when the EnvironmentContext.configure.include_schemas flag is set to True.

The function accepts the following positional arguments:

- name: the name of the object, such as schema name or table name. Will be None when indicating the default schema name of the database connection.
- type: a string describing the type of object; currently "schema", "table", "column", "index", "unique_constraint", or "foreign_key_constraint"
- parent_names: a dictionary of “parent” object names, that are relative to the name being given. Keys in this dictionary may include: "schema_name", "table_name" or "schema_qualified_table_name".

E.g.:

    def include_name(name, type_, parent_names):
        if type_ == "schema":
            return name in ["schema_one", "schema_two"]
        else:
            return True

    context.configure(
        # ...
        include_schemas = True,
        include_name = include_name
    )
- include_object – A callable function which is given the chance to return True or False for any object, indicating if the given object should be considered in the autogenerate sweep.

The function accepts the following positional arguments:

- object: a SchemaItem object such as a Table, Column, Index, UniqueConstraint, or ForeignKeyConstraint object
- name: the name of the object. This is typically available via object.name.
- type: a string describing the type of object; currently "table", "column", "index", "unique_constraint", or "foreign_key_constraint"
- reflected: True if the given object was produced based on table reflection, False if it’s from a local MetaData object.
- compare_to: the object being compared against, if available, else None.

E.g.:

    def include_object(object, name, type_, reflected, compare_to):
        if (type_ == "column" and
                not reflected and
                object.info.get("skip_autogenerate", False)):
            return False
        else:
            return True

    context.configure(
        # ...
        include_object = include_object
    )

For the use case of omitting specific schemas from a target database when EnvironmentContext.configure.include_schemas is set to True, the schema attribute can be checked for each Table object passed to the hook, however it is much more efficient to filter on schemas before reflection of objects takes place using the EnvironmentContext.configure.include_name hook.
- render_as_batch – if True, commands which alter elements within a table will be placed under a with batch_alter_table(): directive, so that batch migrations will take place.
- include_schemas – If True, autogenerate will scan across all schemas located by the SQLAlchemy get_schema_names() method, and include all differences in tables found across all those schemas. When using this option, you may want to also use the EnvironmentContext.configure.include_name parameter to specify a callable which can filter the tables/schemas that get included.
- render_item – Callable that can be used to override how any schema item, i.e. column, constraint, type, etc., is rendered for autogenerate. The callable receives a string describing the type of object, the object, and the autogen context. If it returns False, the default rendering method will be used. If it returns None, the item will not be rendered in the context of a Table construct, that is, can be used to skip columns or constraints within op.create_table():

    def my_render_column(type_, col, autogen_context):
        if type_ == "column" and isinstance(col, MySpecialCol):
            return repr(col)
        else:
            return False

    context.configure(
        # ...
        render_item = my_render_column
    )

Available values for the type string include: "column", "primary_key", "foreign_key", "unique", "check", "type", "server_default".
- upgrade_token – When autogenerate completes, the text of the candidate upgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to upgrades.
- downgrade_token – When autogenerate completes, the text of the candidate downgrade operations will be present in this template variable when script.py.mako is rendered. Defaults to downgrades.
- alembic_module_prefix – When autogenerate refers to Alembic alembic.operations constructs, this prefix will be used (i.e. op.create_table). Defaults to “op.”. Can be None to indicate no prefix.
- sqlalchemy_module_prefix – When autogenerate refers to SQLAlchemy Column or type classes, this prefix will be used (i.e. sa.Column("somename", sa.Integer)). Defaults to “sa.”. Can be None to indicate no prefix. Note that when dialect-specific types are rendered, autogenerate will render them using the dialect module name, i.e. mssql.BIT(), postgresql.UUID().
- user_module_prefix – When autogenerate refers to a SQLAlchemy type (e.g. TypeEngine) where the module name is not under the sqlalchemy namespace, this prefix will be used within autogenerate. If left at its default of None, the __module__ attribute of the type is used to render the import module. It’s a good practice to set this and to have all custom types be available from a fixed module space, in order to future-proof migration files against reorganizations in modules.
- process_revision_directives – a callable function that will be passed a structure representing the end result of an autogenerate or plain “revision” operation, which can be manipulated to affect how the alembic revision command ultimately outputs new revision scripts. The structure of the callable is:

    def process_revision_directives(context, revision, directives):
        pass

The directives parameter is a Python list containing a single MigrationScript directive, which represents the revision file to be generated. This list as well as its contents may be freely modified to produce any set of commands. The section Customizing Revision Generation shows an example of doing this. The context parameter is the MigrationContext in use, and revision is a tuple of revision identifiers representing the current revision of the database.

The callable is invoked at all times when the --autogenerate option is passed to alembic revision. If --autogenerate is not passed, the callable is invoked only if the revision_environment variable is set to True in the Alembic configuration, in which case the given directives collection will contain empty UpgradeOps and DowngradeOps collections for .upgrade_ops and .downgrade_ops. The --autogenerate option itself can be inferred by inspecting context.config.cmd_opts.autogenerate.

The callable function may optionally be an instance of a Rewriter object. This is a helper object that assists in the production of autogenerate-stream rewriter functions.
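As an illustrative sketch of the pattern described above, the hook below prevents a new revision file from being generated when autogenerate detects no changes; the UpgradeOps.is_empty() check is the essential part, while the surrounding structure is an assumption about a typical env.py:

    def process_revision_directives(context, revision, directives):
        # only act when --autogenerate was actually passed
        if context.config.cmd_opts.autogenerate:
            script = directives[0]
            if script.upgrade_ops.is_empty():
                # emptying the list suppresses generation of the file
                directives[:] = []


    context.configure(
        # ...
        process_revision_directives=process_revision_directives,
    )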
 
Parameters specific to individual backends:

Parameters:
- mssql_batch_separator – The “batch separator” which will be placed between each statement when generating offline SQL Server migrations. Defaults to GO. Note this is in addition to the customary semicolon ; at the end of each statement; SQL Server considers the “batch separator” to denote the end of an individual statement execution, and cannot group certain dependent operations in one step.
- oracle_batch_separator – The “batch separator” which will be placed between each statement when generating offline Oracle migrations. Defaults to /. Oracle doesn’t add a semicolon between statements like most other backends.
 
 
- execute(sql: Executable | str, execution_options: Dict[str, Any] | None = None) → None

Execute the given SQL using the current change context.

The behavior of execute() is the same as that of Operations.execute(). Please see that function’s documentation for full detail including caveats and limitations.

This function requires that a MigrationContext has first been made available via configure().
- get_bind() → Connection

Return the current ‘bind’.

In “online” mode, this is the sqlalchemy.engine.Connection currently being used to emit SQL to the database.

This function requires that a MigrationContext has first been made available via configure().
- get_context() → MigrationContext

Return the current MigrationContext object.

If EnvironmentContext.configure() has not been called yet, raises an exception.
- get_head_revision() → str | Tuple[str, ...] | None

Return the hex identifier of the ‘head’ script revision.

If the script directory has multiple heads, this method raises a CommandError; EnvironmentContext.get_head_revisions() should be preferred.

This function does not require that the MigrationContext has been configured.
- get_head_revisions() → str | Tuple[str, ...] | None

Return the hex identifier of the ‘heads’ script revision(s).

This returns a tuple containing the version number of all heads in the script directory.

This function does not require that the MigrationContext has been configured.
- get_revision_argument() → str | Tuple[str, ...] | None

Get the ‘destination’ revision argument.

This is typically the argument passed to the upgrade or downgrade command.

If it was specified as head, the actual version number is returned; if specified as base, None is returned.

This function does not require that the MigrationContext has been configured.
- get_starting_revision_argument() → str | Tuple[str, ...] | None

Return the ‘starting revision’ argument, if the revision was passed using start:end.

This is only meaningful in “offline” mode. Returns None if no value is available or was configured.

This function does not require that the MigrationContext has been configured.
- get_tag_argument() → str | None

Return the value passed for the --tag argument, if any.

The --tag argument is not used directly by Alembic, but is available for custom env.py configurations that wish to use it; particularly for offline generation scripts that wish to generate tagged filenames.

This function does not require that the MigrationContext has been configured.

See also: EnvironmentContext.get_x_argument() - a newer and more open ended system of extending env.py scripts via the command line.
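As a hypothetical illustration of the “tagged filenames” idea, an env.py running in offline mode could route the --sql output to a file named after the tag; the file naming scheme and URL here are assumptions for the sketch, not behavior Alembic provides on its own:

    from alembic import context

    db_url = "sqlite:///example.db"  # placeholder URL
    tag = context.get_tag_argument() or "untagged"

    # write the offline SQL script to a tag-specific file
    with open(f"migration_{tag}.sql", "w") as buf:
        context.configure(url=db_url, output_buffer=buf, literal_binds=True)
        with context.begin_transaction():
            context.run_migrations()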
- get_x_argument(as_dictionary: Literal[False]) → List[str]
- get_x_argument(as_dictionary: Literal[True]) → Dict[str, str]
- get_x_argument(as_dictionary: bool = False) → List[str] | Dict[str, str]

Return the value(s) passed for the -x argument, if any.

The -x argument is an open ended flag that allows any user-defined value or values to be passed on the command line, then available here for consumption by a custom env.py script.

The return value is a list, returned directly from the argparse structure. If as_dictionary=True is passed, the x arguments are parsed using key=value format into a dictionary that is then returned. If there is no = in the argument, the value is an empty string.

Changed in version 1.13.1: Support as_dictionary=True when arguments are passed without the = symbol.

For example, to support passing a database URL on the command line, the standard env.py script can be modified like this:

    cmd_line_url = context.get_x_argument(
        as_dictionary=True).get('dbname')
    if cmd_line_url:
        engine = create_engine(cmd_line_url)
    else:
        engine = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

This then takes effect by running the alembic script as:

    alembic -x dbname=postgresql://user:pass@host/dbname upgrade head

This function does not require that the MigrationContext has been configured.
- is_offline_mode() → bool

Return True if the current migrations environment is running in “offline mode”.

This is True or False depending on the --sql flag passed.

This function does not require that the MigrationContext has been configured.
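This flag is what the standard env.py templates use to choose between their offline and online code paths, roughly along these lines (a simplified sketch of the generated template):

    from alembic import context


    def run_migrations_offline():
        ...  # configure() with a URL only; SQL is emitted to the output buffer


    def run_migrations_online():
        ...  # configure() with a live Connection


    if context.is_offline_mode():
        run_migrations_offline()
    else:
        run_migrations_online()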
- is_transactional_ddl() → bool

Return True if the context is configured to expect a transactional DDL capable backend.

This defaults to the type of database in use, and can be overridden by the transactional_ddl argument to configure().

This function requires that a MigrationContext has first been made available via configure().
- run_migrations(**kw: Any) → None

Run migrations as determined by the current command line configuration as well as versioning information present (or not) in the current database connection (if one is present).

The function accepts optional **kw arguments. If these are passed, they are sent directly to the upgrade() and downgrade() functions within each target revision file. By modifying the script.py.mako file so that the upgrade() and downgrade() functions accept arguments, parameters can be passed here so that contextual information, usually information to identify a particular database in use, can be passed from a custom env.py script to the migration functions.

This function requires that a MigrationContext has first been made available via configure().
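For instance, a sketch of that technique, assuming script.py.mako has been edited so that migration functions accept an engine_name keyword argument (the argument name is purely illustrative):

    # in env.py, after context.configure(...)
    from alembic import context

    with context.begin_transaction():
        context.run_migrations(engine_name="engine1")

    # in a revision file generated from the customized script.py.mako
    # (op and sa are imported by the revision template as usual)
    def upgrade(engine_name):
        if engine_name == "engine1":
            op.add_column("mytable", sa.Column("newcol", sa.Integer()))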
- script: ScriptDirectory = None

An instance of ScriptDirectory which provides programmatic access to version files within the versions/ directory.
 
The Migration Context
The MigrationContext handles the actual work to be performed
against a database backend as migration operations proceed.  It is generally
not exposed to the end-user, except when the
on_version_apply callback hook is used.
- class alembic.runtime.migration.MigrationContext(dialect: Dialect, connection: Connection | None, opts: Dict[str, Any], environment_context: EnvironmentContext | None = None)

Represent the database state made available to a migration script.

MigrationContext is the front end to an actual database connection, or alternatively a string output stream given a particular database dialect, from an Alembic perspective.

When inside the env.py script, the MigrationContext is available via the EnvironmentContext.get_context() method, which is available at alembic.context:

    # from within env.py script
    from alembic import context

    migration_context = context.get_context()

For usage outside of an env.py script, such as for utility routines that want to check the current version in the database, the MigrationContext.configure() method can be used to create new MigrationContext objects. For example, to get at the current revision in the database using MigrationContext.get_current_revision():

    # in any application, outside of an env.py script
    from alembic.migration import MigrationContext
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://mydatabase")
    conn = engine.connect()

    context = MigrationContext.configure(conn)
    current_rev = context.get_current_revision()

The above context can also be used to produce Alembic migration operations with an Operations instance:

    # in any application, outside of the normal Alembic environment
    from alembic.operations import Operations

    op = Operations(context)
    op.alter_column("mytable", "somecolumn", nullable=True)

- autocommit_block() → Iterator[None]

Enter an “autocommit” block, for databases that support AUTOCOMMIT isolation levels.

This special directive is intended to support the occasional database DDL or system operation that specifically has to be run outside of any kind of transaction block. The PostgreSQL database platform is the most common target for this style of operation, as many of its DDL operations must be run outside of transaction blocks, even though the database overall supports transactional DDL.

The method is used as a context manager within a migration script, by calling on Operations.get_context() to retrieve the MigrationContext, then invoking MigrationContext.autocommit_block() using the with: statement:

    def upgrade():
        with op.get_context().autocommit_block():
            op.execute("ALTER TYPE mood ADD VALUE 'soso'")

Above, a PostgreSQL “ALTER TYPE..ADD VALUE” directive is emitted, which must be run outside of a transaction block at the database level. The MigrationContext.autocommit_block() method makes use of the SQLAlchemy AUTOCOMMIT isolation level setting, which against the psycopg2 DBAPI corresponds to the connection.autocommit setting, to ensure that the database driver is not inside of a DBAPI level transaction block.

Warning: As is necessary, the database transaction preceding the block is unconditionally committed. This means that the run of migrations preceding the operation will be committed, before the overall migration operation is complete.

It is recommended that when an application includes migrations with “autocommit” blocks, that EnvironmentContext.configure.transaction_per_migration be used so that the calling environment is tuned to expect short per-file migrations whether or not one of them has an autocommit block.
- begin_transaction(_per_migration: bool = False) → _ProxyTransaction | ContextManager[None, bool | None]

Begin a logical transaction for migration operations.

This method is used within an env.py script to demarcate where the outer “transaction” for a series of migrations begins. Example:

    def run_migrations_online():
        connectable = create_engine(...)

        with connectable.connect() as connection:
            context.configure(
                connection=connection, target_metadata=target_metadata
            )

            with context.begin_transaction():
                context.run_migrations()

Above, MigrationContext.begin_transaction() is used to demarcate where the outer logical transaction occurs around the MigrationContext.run_migrations() operation.

A “logical” transaction means that the operation may or may not correspond to a real database transaction. If the target database supports transactional DDL (or EnvironmentContext.configure.transactional_ddl is true), the EnvironmentContext.configure.transaction_per_migration flag is not set, and the migration is against a real database connection (as opposed to using “offline” --sql mode), a real transaction will be started. If --sql mode is in effect, the operation would instead correspond to a string such as “BEGIN” being emitted to the string output.

The returned object is a Python context manager that should only be used in the context of a with: statement as indicated above. The object has no other guaranteed API features present.
- property bind: Connection | None

Return the current “bind”.

In online mode, this is an instance of sqlalchemy.engine.Connection, and is suitable for ad-hoc execution of any kind of usage described in the SQLAlchemy Core documentation as well as for usage with the sqlalchemy.schema.Table.create() and sqlalchemy.schema.MetaData.create_all() methods of Table, MetaData.

Note that when “standard output” mode is enabled, this bind will be a “mock” connection handler that cannot return results and is only appropriate for a very limited subset of commands.
- classmethod configure(connection: Connection | None = None, url: str | URL | None = None, dialect_name: str | None = None, dialect: Dialect | None = None, environment_context: EnvironmentContext | None = None, dialect_opts: Dict[str, str] | None = None, opts: Any | None = None) → MigrationContext

Create a new MigrationContext.

This is a factory method usually called by EnvironmentContext.configure().

Parameters:
- connection – a Connection to use for SQL execution in “online” mode. When present, it is also used to determine the type of dialect in use.
- url – a string database url, or a sqlalchemy.engine.url.URL object. The type of dialect to be used will be derived from this if connection is not passed.
- dialect_name – string name of a dialect, such as “postgresql”, “mssql”, etc. The type of dialect to be used will be derived from this if connection and url are not passed.
- opts – dictionary of options. Most other options accepted by EnvironmentContext.configure() are passed via this dictionary.
 
 
- execute(sql: Executable | str, execution_options: Dict[str, Any] | None = None) → None

Execute a SQL construct or string statement.

The underlying execution mechanics are used; that is, if this is “offline mode” the SQL is written to the output buffer, otherwise the SQL is emitted on the current SQLAlchemy connection.
- get_current_heads() → Tuple[str, ...]

Return a tuple of the current ‘head versions’ that are represented in the target database.

For a migration stream without branches, this will be a single value, synonymous with that of MigrationContext.get_current_revision(). However, when multiple unmerged branches exist within the target database, the returned tuple will contain a value for each head.

If this MigrationContext was configured in “offline” mode, that is with as_sql=True, the starting_rev parameter is returned in a one-length tuple.

If no version table is present, or if there are no revisions present, an empty tuple is returned.
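As a sketch of typical usage outside of env.py, the database can be checked for being fully up to date by comparing its current heads with the heads of the script directory; the config file path and database URL below are placeholders:

    from sqlalchemy import create_engine

    from alembic.config import Config
    from alembic.runtime.migration import MigrationContext
    from alembic.script import ScriptDirectory

    config = Config("alembic.ini")               # placeholder config path
    script = ScriptDirectory.from_config(config)
    engine = create_engine("sqlite:///app.db")   # placeholder database URL

    with engine.connect() as conn:
        context = MigrationContext.configure(conn)
        db_heads = set(context.get_current_heads())

    # the database is fully migrated if its heads match the script heads
    up_to_date = db_heads == set(script.get_heads())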
- get_current_revision() → str | None

Return the current revision, usually that which is present in the alembic_version table in the database.

This method is intended to be used only for a migration stream that does not contain unmerged branches in the target database; if there are multiple branches present, an exception is raised. The MigrationContext.get_current_heads() method should be preferred over this method going forward in order to be compatible with branch migration support.

If this MigrationContext was configured in “offline” mode, that is with as_sql=True, the starting_rev parameter is returned instead, if any.
- run_migrations(**kw: Any) → None

Run the migration scripts established for this MigrationContext, if any.

The commands in alembic.command will set up a function that is ultimately passed to the MigrationContext as the fn argument. This function represents the “work” that will be done when MigrationContext.run_migrations() is called, typically from within the env.py script of the migration environment. The “work function” then provides an iterable of version callables and other version information which in the case of the upgrade or downgrade commands are the list of version scripts to invoke. Other commands yield nothing, in the case that a command wants to run some other operation against the database such as the current or stamp commands.

Parameters:
- **kw – keyword arguments here will be passed to each migration callable, that is the upgrade() or downgrade() method within revision scripts.
 
- stamp(script_directory: ScriptDirectory, revision: str) → None

Stamp the version table with a specific revision.

This method calculates those branches to which the given revision can apply, and updates those branches as though they were migrated towards that revision (either up or down). If no current branches include the revision, it is added as a new branch head.
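For programmatic use outside of the alembic stamp command, a sketch might look like the following; the config path, database URL and revision identifier are placeholders:

    from sqlalchemy import create_engine

    from alembic.config import Config
    from alembic.runtime.migration import MigrationContext
    from alembic.script import ScriptDirectory

    config = Config("alembic.ini")               # placeholder config path
    script = ScriptDirectory.from_config(config)
    engine = create_engine("sqlite:///app.db")   # placeholder database URL

    with engine.begin() as conn:
        context = MigrationContext.configure(conn)
        # record the given revision in the alembic_version table without
        # running any migration scripts
        context.stamp(script, "ae1027a6acf")     # placeholder revision id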