author     Alex Grönholm <alex.gronholm@nextday.fi>    2017-10-21 14:35:03 +0300
committer  Alex Grönholm <alex.gronholm@nextday.fi>    2017-10-21 15:08:49 +0300
commit     7263f9d4fd595478c918981b3deb391df6b191eb
tree       720d82d028d9570a89ca19ecefb722b0ddb62f2f
parent     6e2e380854901cca22d69dea258299d9d1d897b0
download   apscheduler-7263f9d4fd595478c918981b3deb391df6b191eb.tar.gz
Updated the docs to conform to the 99 column limit
 docs/contributing.rst   |  42
 docs/extending.rst      |  72
 docs/migration.rst      |  40
 docs/userguide.rst      | 208
 docs/versionhistory.rst |  20
 5 files changed, 209 insertions(+), 173 deletions(-)
diff --git a/docs/contributing.rst b/docs/contributing.rst
index 7b63c2c..fdb0660 100644
--- a/docs/contributing.rst
+++ b/docs/contributing.rst
@@ -2,8 +2,8 @@
 Contributing to APScheduler
 ###########################

-If you wish to add a feature or fix a bug in APScheduler, you need to follow certain procedures and rules to get your
-changes accepted. This is to maintain the high quality of the code base.
+If you wish to add a feature or fix a bug in APScheduler, you need to follow certain procedures and
+rules to get your changes accepted. This is to maintain the high quality of the code base.


 Contribution Process
@@ -18,9 +18,9 @@ Contribution Process
 7. Push the changes to your Github fork
 8. Make a pull request on Github

-There is no need to update the change log -- this will be done prior to the next release at the latest.
-Should the test suite fail even before your changes (which should be rare), make sure you're at least not adding to the
-failures.
+There is no need to update the change log -- this will be done prior to the next release at the
+latest. Should the test suite fail even before your changes (which should be rare), make sure
+you're at least not adding to the failures.


 Development Dependencies
@@ -32,33 +32,33 @@ To fully run the test suite, you will need at least:

 * A Redis server
 * A Zookeeper server

-For other dependencies, it's best to look in tox.ini and install what is appropriate for the Python version you're
-using.
+For other dependencies, it's best to look in tox.ini and install what is appropriate for the Python
+version you're using.


 Code Style
 ==========

-This project uses PEP 8 rules with a maximum column limit of 120 characters instead of the standard 79.
+This project uses PEP 8 rules with its maximum allowed column limit of 99 characters.
+This limit applies to all text files (source code, tests, documentation).

-In particular, remember to group the imports correctly (standard library imports first, third party libs second,
-project libraries third, conditional imports last). The PEP 8 checker does not check for this.
-If in doubt, just follow the surrounding code style as closely as possible.
+In particular, remember to group the imports correctly (standard library imports first, third party
+libs second, project libraries third, conditional imports last). The PEP 8 checker does not check
+for this. If in doubt, just follow the surrounding code style as closely as possible.


 Testing
 =======

-Running the test suite is done using the tox utility. This will test the code base against all supported Python
-versions and checks for PEP 8 violations as well.
+Running the test suite is done using the tox utility. This will test the code base against all
+supported Python versions and checks for PEP 8 violations as well.

-Since running the tests on every supported Python version can take quite a long time, it is recommended that during the
-development cycle py.test is used directly. Before finishing, tox should however be used to make sure the code works on
-all supported Python versions.
+Since running the tests on every supported Python version can take quite a long time, it is
+recommended that during the development cycle py.test is used directly. Before finishing, tox
+should however be used to make sure the code works on all supported Python versions.

 Any nontrivial code changes must be accompanied with the appropriate tests.
-The tests should not only maintain the coverage, but should test any new functionality or bug fixes reasonably well.
-If you're fixing a bug, first make sure you have a test which fails against the unpatched codebase and succeeds against
-the fixed version. Naturally, the test suite has to pass on every Python version. If setting up all the required Python
-interpreters seems like too much trouble, make sure that it at least passes on the lowest supported versions of both
-Python 2 and 3.
+The tests should not only maintain the coverage, but should test any new functionality or bug fixes
+reasonably well. If you're fixing a bug, first make sure you have a test which fails against the
+unpatched codebase and succeeds against the fixed version. Naturally, the test suite has to pass on
+every Python version. If setting up all the required Python interpreters seems like too much
+trouble, make sure that it at least passes on the lowest supported versions of both Python 2 and 3.
diff --git a/docs/extending.rst b/docs/extending.rst
index 4347e82..111f047 100644
--- a/docs/extending.rst
+++ b/docs/extending.rst
@@ -2,25 +2,29 @@
 Extending APScheduler
 #####################

-This document is meant to explain how to develop your custom triggers, job stores, executors and schedulers.
+This document is meant to explain how to develop your custom triggers, job stores, executors and
+schedulers.


 Custom triggers
 ---------------

 The built-in triggers cover the needs of the majority of all users.
-However, some users may need specialized scheduling logic. To that end, the trigger system was made pluggable.
+However, some users may need specialized scheduling logic. To that end, the trigger system was made
+pluggable.

 To implement your scheduling logic, subclass :class:`~apscheduler.triggers.base.BaseTrigger`.
-Look at the interface documentation in that class. Then look at the existing trigger implementations.
-That should give you a good idea what is expected of a trigger implementation.
+Look at the interface documentation in that class. Then look at the existing trigger
+implementations. That should give you a good idea what is expected of a trigger implementation.
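The trigger interface described in those docs boils down to one method, ``get_next_fire_time(previous_fire_time, now)``. As a rough, dependency-free sketch of the logic such a subclass implements (the ``EveryNSecondsTrigger`` class here is invented for illustration and uses naive datetimes; a real trigger must subclass ``BaseTrigger`` and will be handed timezone-aware datetimes):

```python
from datetime import datetime, timedelta


class EveryNSecondsTrigger:
    """Toy stand-in for a BaseTrigger subclass: fires every `seconds` seconds."""

    def __init__(self, seconds):
        self.interval = timedelta(seconds=seconds)

    def get_next_fire_time(self, previous_fire_time, now):
        # First run: fire immediately; afterwards, one interval after the
        # previous fire time. Returning None would end the job's schedule.
        if previous_fire_time is None:
            return now
        return previous_fire_time + self.interval


trigger = EveryNSecondsTrigger(30)
start = datetime(2017, 10, 21, 12, 0, 0)
first = trigger.get_next_fire_time(None, start)
second = trigger.get_next_fire_time(first, start + timedelta(seconds=1))
print(first, second)
```

The scheduler calls this method repeatedly with the previous fire time (or ``None`` on the first call) until it returns ``None``, at which point the job is removed.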

-To use your trigger, you can use :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` like this::
+To use your trigger, you can use :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` like
+this::

     trigger = MyTrigger(arg1='foo')
     scheduler.add_job(target, trigger)

-You can also register it as a plugin so you can use can use the alternate form of ``add_jobstore``::
+You can also register it as a plugin so you can use can use the alternate form of
+``add_jobstore``::

     scheduler.add_job(target, 'my_trigger', arg1='foo')

@@ -37,26 +41,27 @@ This is done by adding an entry point in your project's :file:`setup.py`:

 Custom job stores
 -----------------

-If you want to store your jobs in a fancy new NoSQL database, or a totally custom datastore, you can implement your
-own job store by subclassing :class:`~apscheduler.jobstores.base.BaseJobStore`.
+If you want to store your jobs in a fancy new NoSQL database, or a totally custom datastore, you
+can implement your own job store by subclassing :class:`~apscheduler.jobstores.base.BaseJobStore`.

-A job store typically serializes the :class:`~apscheduler.job.Job` objects given to it, and constructs new Job objects
-from binary data when they are loaded from the backing store. It is important that the job store restores the
-``_scheduler`` and ``_jobstore_alias`` attribute of any Job that it creates. Refer to existing implementations for
-examples.
+A job store typically serializes the :class:`~apscheduler.job.Job` objects given to it, and
+constructs new Job objects from binary data when they are loaded from the backing store. It is
+important that the job store restores the ``_scheduler`` and ``_jobstore_alias`` attribute of any
+Job that it creates. Refer to existing implementations for examples.

-It should be noted that :class:`~apscheduler.jobstores.memory.MemoryJobStore` is special in that it does not
-deserialize the jobs. This comes with its own problems, which it handles in its own way.
+It should be noted that :class:`~apscheduler.jobstores.memory.MemoryJobStore` is special in that it
+does not deserialize the jobs. This comes with its own problems, which it handles in its own way.

 If your job store does serialize jobs, you can of course use a serializer other than pickle.
-You should, however, use the ``__getstate__`` and ``__setstate__`` special methods to respectively get and set the Job
-state. Pickle uses them implicitly.
+You should, however, use the ``__getstate__`` and ``__setstate__`` special methods to respectively
+get and set the Job state. Pickle uses them implicitly.

 To use your job store, you can add it to the scheduler like this::

     jobstore = MyJobStore()
     scheduler.add_jobstore(jobstore, 'mystore')

-You can also register it as a plugin so you can use can use the alternate form of ``add_jobstore``::
+You can also register it as a plugin so you can use can use the alternate form of
+``add_jobstore``::

     scheduler.add_jobstore('my_jobstore', 'mystore')

@@ -74,24 +79,28 @@
 Custom executors
 ----------------

 If you need custom logic for executing your jobs, you can create your own executor classes.
-One scenario for this would be if you want to use distributed computing to run your jobs on other nodes.
+One scenario for this would be if you want to use distributed computing to run your jobs on other
+nodes.

 Start by subclassing :class:`~apscheduler.executors.base.BaseExecutor`.
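The ``__getstate__``/``__setstate__`` convention that the job store section above describes can be sketched without APScheduler at all: pickle invokes both methods implicitly, and a non-pickle serializer can call them explicitly. The ``DemoJob`` class below is invented for illustration (the real ``Job`` class defines these methods itself, and the job store is expected to restore the runtime-only attributes afterwards):

```python
import pickle


class DemoJob:
    """Minimal sketch of the __getstate__/__setstate__ serialization pattern."""

    def __init__(self, func_name, args):
        self.func_name = func_name
        self.args = args
        self._scheduler = object()  # runtime-only state, must not be serialized

    def __getstate__(self):
        # Only the durable fields are handed to the serializer.
        return {'func_name': self.func_name, 'args': self.args}

    def __setstate__(self, state):
        self.func_name = state['func_name']
        self.args = state['args']
        self._scheduler = None  # the job store must restore this afterwards


job = DemoJob('tick', (1, 2))
restored = pickle.loads(pickle.dumps(job))
print(restored.func_name, restored.args, restored._scheduler)
```

A custom serializer (JSON, msgpack, ...) would call ``job.__getstate__()`` itself to obtain the same dict, and feed it back through ``__setstate__`` on load.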
 The responsibilities of an executor are as follows:

 * Performing any initialization when ``start()`` is called
 * Releasing any resources when ``shutdown()`` is called
- * Keeping track of the number of instances of each job running on it, and refusing to run more than the maximum
+ * Keeping track of the number of instances of each job running on it, and refusing to run more
+   than the maximum
 * Notifying the scheduler of the results of the job

-If your executor needs to serialize the jobs, make sure you either use pickle for it, or invoke the ``__getstate__`` and
-``__setstate__`` special methods to respectively get and set the Job state. Pickle uses them implicitly.
+If your executor needs to serialize the jobs, make sure you either use pickle for it, or invoke the
+``__getstate__`` and ``__setstate__`` special methods to respectively get and set the Job state.
+Pickle uses them implicitly.

 To use your executor, you can add it to the scheduler like this::

     executor = MyExecutor()
     scheduler.add_executor(executor, 'myexecutor')

-You can also register it as a plugin so you can use can use the alternate form of ``add_executor``::
+You can also register it as a plugin so you can use can use the alternate form of
+``add_executor``::

     scheduler.add_executor('my_executor', 'myexecutor')

@@ -108,11 +117,13 @@
 Custom schedulers
 -----------------

-A typical situation where you would want to make your own scheduler subclass is when you want to integrate it with your
+A typical situation where you would want to make your own scheduler subclass is when you want to
+integrate it with your
 application framework of choice.

-Your custom scheduler should always be a subclass of :class:`~apscheduler.schedulers.base.BaseScheduler`.
-But if you're not adapting to a framework that relies on callbacks, consider subclassing
+Your custom scheduler should always be a subclass of
+:class:`~apscheduler.schedulers.base.BaseScheduler`. But if you're not adapting to a framework that
+relies on callbacks, consider subclassing
 :class:`~apscheduler.schedulers.blocking.BlockingScheduler` instead.

 The most typical extension points for scheduler subclasses are:
@@ -127,9 +138,10 @@ The most typical extension points for scheduler subclasses are:
 * :meth:`~apscheduler.schedulers.base.BaseScheduler._create_default_executor`
   override if you need to use an alternative default executor

-.. important:: Remember to call the superclass implementations of overridden methods, even abstract ones
-   (unless they're empty).
+.. important:: Remember to call the superclass implementations of overridden methods, even abstract
+   ones (unless they're empty).

-The most important responsibility of the scheduler subclass is to manage the scheduler's sleeping based on the return
-values of ``_process_jobs()``. This can be done in various ways, including setting timeouts in ``wakeup()`` or running
-a blocking loop in ``start()``. Again, see the existing scheduler classes for examples.
+The most important responsibility of the scheduler subclass is to manage the scheduler's sleeping
+based on the return values of ``_process_jobs()``. This can be done in various ways, including
+setting timeouts in ``wakeup()`` or running a blocking loop in ``start()``. Again, see the existing
+scheduler classes for examples.
diff --git a/docs/migration.rst b/docs/migration.rst
index d7bc201..72c07fd 100644
--- a/docs/migration.rst
+++ b/docs/migration.rst
@@ -21,17 +21,18 @@
 Scheduler changes
 -----------------

 * The concept of "standalone mode" is gone. For ``standalone=True``, use
-  :class:`~apscheduler.schedulers.blocking.BlockingScheduler` instead, and for ``standalone=False``, use
-  :class:`~apscheduler.schedulers.background.BackgroundScheduler`. BackgroundScheduler matches the old default
-  semantics.
-* Job defaults (like ``misfire_grace_time`` and ``coalesce``) must now be passed in a dictionary as the
-  ``job_defaults`` option to :meth:`~apscheduler.schedulers.base.BaseScheduler.configure`. When supplying an ini-style
-  configuration as the first argument, they will need a corresponding ``job_defaults.`` prefix.
-* The configuration key prefix for job stores was changed from ``jobstore.`` to ``jobstores.`` to match the dict-style
-  configuration better.
-* The ``max_runs`` option has been dropped since the run counter could not be reliably preserved when replacing a job
-  with another one with the same ID. To make up for this, the ``end_date`` option was added to cron and interval
-  triggers.
+  :class:`~apscheduler.schedulers.blocking.BlockingScheduler` instead, and for
+  ``standalone=False``, use :class:`~apscheduler.schedulers.background.BackgroundScheduler`.
+  BackgroundScheduler matches the old default semantics.
+* Job defaults (like ``misfire_grace_time`` and ``coalesce``) must now be passed in a dictionary as
+  the ``job_defaults`` option to :meth:`~apscheduler.schedulers.base.BaseScheduler.configure`. When
+  supplying an ini-style configuration as the first argument, they will need a corresponding
+  ``job_defaults.`` prefix.
+* The configuration key prefix for job stores was changed from ``jobstore.`` to ``jobstores.`` to
+  match the dict-style configuration better.
+* The ``max_runs`` option has been dropped since the run counter could not be reliably preserved
+  when replacing a job with another one with the same ID. To make up for this, the ``end_date``
+  option was added to cron and interval triggers.
 * The old thread pool is gone, replaced by ``ThreadPoolExecutor``.
   This means that the old ``threadpool`` options are no longer valid.
   See :ref:`scheduler-config` on how to configure executors.
@@ -42,10 +43,10 @@ Scheduler changes
 * The ``shutdown_threadpool`` and ``close_jobstores`` options have been removed from the
   :meth:`~apscheduler.schedulers.base.BaseScheduler.shutdown` method.
   Executors and job stores are now always shut down on scheduler shutdown.
-* :meth:`~apscheduler.scheduler.Scheduler.unschedule_job` and :meth:`~apscheduler.scheduler.Scheduler.unschedule_func`
-  have been replaced by :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job`.
-  You can also unschedule a job by using the job handle returned from
-  :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job`.
+* :meth:`~apscheduler.scheduler.Scheduler.unschedule_job` and
+  :meth:`~apscheduler.scheduler.Scheduler.unschedule_func` have been replaced by
+  :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job`. You can also unschedule a job by
+  using the job handle returned from :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job`.

 Job store changes
 -----------------
@@ -60,11 +61,12 @@ Use SQLAlchemyJobStore with SQLite instead.

 Trigger changes
 ---------------

-From 3.0 onwards, triggers now require a pytz timezone. This is normally provided by the scheduler, but if you were
-instantiating triggers manually before, then one must be supplied as the ``timezone`` argument.
+From 3.0 onwards, triggers now require a pytz timezone. This is normally provided by the scheduler,
+but if you were instantiating triggers manually before, then one must be supplied as the
+``timezone`` argument.

-The only other backwards incompatible change was that ``get_next_fire_time()`` takes two arguments now: the previous
-fire time and the current datetime.
+The only other backwards incompatible change was that ``get_next_fire_time()`` takes two arguments
+now: the previous fire time and the current datetime.

 From v1.x to 2.0
diff --git a/docs/userguide.rst b/docs/userguide.rst
index 736fbfc..a0e45fa 100644
--- a/docs/userguide.rst
+++ b/docs/userguide.rst
@@ -22,8 +22,8 @@ If, for some reason, pip won't work, you can manually `download the APScheduler

 Code examples
 -------------

-The source distribution contains the :file:`examples` directory where you can find many working examples for using
-APScheduler in different ways. The examples can also be
+The source distribution contains the :file:`examples` directory where you can find many working
+examples for using APScheduler in different ways. The examples can also be
 `browsed online <https://github.com/agronholm/apscheduler/tree/master/examples/?at=master>`_.


@@ -37,36 +37,37 @@ APScheduler has four kinds of components:

 * executors
 * schedulers

-*Triggers* contain the scheduling logic. Each job has its own trigger which determines when the job should be run next.
-Beyond their initial configuration, triggers are completely stateless.
+*Triggers* contain the scheduling logic. Each job has its own trigger which determines when the job
+should be run next. Beyond their initial configuration, triggers are completely stateless.

-*Job stores* house the scheduled jobs. The default job store simply keeps the jobs in memory, but others store them in
-various kinds of databases. A job's data is serialized when it is saved to a persistent job store, and deserialized when
-it's loaded back from it. Job stores (other than the default one) don't keep the job data in memory, but act as
-middlemen for saving, loading, updating and searching jobs in the backend. Job stores must never be shared between
-schedulers.
+*Job stores* house the scheduled jobs. The default job store simply keeps the jobs in memory, but
+others store them in various kinds of databases. A job's data is serialized when it is saved to a
+persistent job store, and deserialized when it's loaded back from it. Job stores (other than the
+default one) don't keep the job data in memory, but act as middlemen for saving, loading, updating
+and searching jobs in the backend. Job stores must never be shared between schedulers.

-*Executors* are what handle the running of the jobs. They do this typically by submitting the designated callable in a
-job to a thread or process pool. When the job is done, the executor notifies the scheduler which then emits an
-appropriate event.
+*Executors* are what handle the running of the jobs. They do this typically by submitting the
+designated callable in a job to a thread or process pool. When the job is done, the executor
+notifies the scheduler which then emits an appropriate event.

-*Schedulers* are what bind the rest together. You typically have only one scheduler running in your application.
-The application developer doesn't normally deal with the job stores, executors or triggers directly. Instead, the
-scheduler provides the proper interface to handle all those. Configuring the job stores and executors is done through
-the scheduler, as is adding, modifying and removing jobs.
+*Schedulers* are what bind the rest together. You typically have only one scheduler running in your
+application. The application developer doesn't normally deal with the job stores, executors or
+triggers directly. Instead, the scheduler provides the proper interface to handle all those.
+Configuring the job stores and executors is done through the scheduler, as is adding, modifying and
+removing jobs.


 Choosing the right scheduler, job store(s), executor(s) and trigger(s)
 ----------------------------------------------------------------------

-Your choice of scheduler depends mostly on your programming environment and what you'll be using APScheduler for.
-Here's a quick guide for choosing a scheduler:
+Your choice of scheduler depends mostly on your programming environment and what you'll be using
+APScheduler for. Here's a quick guide for choosing a scheduler:

 * :class:`~apscheduler.schedulers.blocking.BlockingScheduler`:
   use when the scheduler is the only thing running in your process
 * :class:`~apscheduler.schedulers.background.BackgroundScheduler`:
-  use when you're not using any of the frameworks below, and want the scheduler to run in the background inside your
-  application
+  use when you're not using any of the frameworks below, and want the scheduler to run in the
+  background inside your application
 * :class:`~apscheduler.schedulers.asyncio.AsyncIOScheduler`:
   use if your application uses the asyncio module
 * :class:`~apscheduler.schedulers.gevent.GeventScheduler`:
@@ -80,19 +81,20 @@ Here's a quick guide for choosing a scheduler:

 Simple enough, yes?

-To pick the appropriate job store, you need to determine whether you need job persistence or not. If you always recreate
-your jobs at the start of your application, then you can probably go with the default
-(:class:`~apscheduler.jobstores.memory.MemoryJobStore`). But if you need your jobs to persist over scheduler restarts or
-application crashes, then your choice usually boils down to what tools are used in your programming environment.
-If, however, you are in the position to choose freely, then
-:class:`~apscheduler.jobstores.sqlalchemy.SQLAlchemyJobStore` on a `PostgreSQL <http://www.postgresql.org/>`_ backend is
-the recommended choice due to its strong data integrity protection.
+To pick the appropriate job store, you need to determine whether you need job persistence or not.
+If you always recreate your jobs at the start of your application, then you can probably go with
+the default (:class:`~apscheduler.jobstores.memory.MemoryJobStore`). But if you need your jobs to
+persist over scheduler restarts or application crashes, then your choice usually boils down to what
+tools are used in your programming environment. If, however, you are in the position to choose
+freely, then :class:`~apscheduler.jobstores.sqlalchemy.SQLAlchemyJobStore` on a
+`PostgreSQL <http://www.postgresql.org/>`_ backend is the recommended choice due to its strong data
+integrity protection.

 Likewise, the choice of executors is usually made for you if you use one of the frameworks above.
-Otherwise, the default :class:`~apscheduler.executors.pool.ThreadPoolExecutor` should be good enough for most purposes.
-If your workload involves CPU intensive operations, you should consider using
-:class:`~apscheduler.executors.pool.ProcessPoolExecutor` instead to make use of multiple CPU cores.
-You could even use both at once, adding the process pool executor as a secondary executor.
+Otherwise, the default :class:`~apscheduler.executors.pool.ThreadPoolExecutor` should be good
+enough for most purposes. If your workload involves CPU intensive operations, you should consider
+using :class:`~apscheduler.executors.pool.ProcessPoolExecutor` instead to make use of multiple CPU
+cores. You could even use both at once, adding the process pool executor as a secondary executor.

 When you schedule a job, you need to choose a _trigger_ for it. The trigger determines the logic
 by which the dates/times are calculated when the job will be run. APScheduler comes with three
@@ -114,16 +116,18 @@ documentation pages.

 Configuring the scheduler
 -------------------------

-APScheduler provides many different ways to configure the scheduler. You can use a configuration dictionary or you can
-pass in the options as keyword arguments. You can also instantiate the scheduler first, add jobs and configure the
-scheduler afterwards. This way you get maximum flexibility for any environment.
+APScheduler provides many different ways to configure the scheduler. You can use a configuration
+dictionary or you can pass in the options as keyword arguments. You can also instantiate the
+scheduler first, add jobs and configure the scheduler afterwards. This way you get maximum
+flexibility for any environment.

 The full list of scheduler level configuration options can be found on the API reference of the
-:class:`~apscheduler.schedulers.base.BaseScheduler` class. Scheduler subclasses may also have additional options which
-are documented on their respective API references. Configuration options for individual job stores and executors can
-likewise be found on their API reference pages.
+:class:`~apscheduler.schedulers.base.BaseScheduler` class. Scheduler subclasses may also have
+additional options which are documented on their respective API references. Configuration options
+for individual job stores and executors can likewise be found on their API reference pages.

-Let's say you want to run BackgroundScheduler in your application with the default job store and the default executor::
+Let's say you want to run BackgroundScheduler in your application with the default job store and
+the default executor::

     from apscheduler.schedulers.background import BackgroundScheduler
@@ -132,11 +136,11 @@ Let's say you want to run BackgroundScheduler in your application with the defau

     # Initialize the rest of the application here, or before the scheduler initialization

-This will get you a BackgroundScheduler with a MemoryJobStore named "default" and a ThreadPoolExecutor named "default"
-with a default maximum thread count of 10.
+This will get you a BackgroundScheduler with a MemoryJobStore named "default" and a
+ThreadPoolExecutor named "default" with a default maximum thread count of 10.

-Now, suppose you want more. You want to have *two* job stores using *two* executors and you also want to tweak the
-default values for new jobs and set a different timezone.
+Now, suppose you want more. You want to have *two* job stores using *two* executors and you also
+want to tweak the default values for new jobs and set a different timezone.
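As a sketch of the shape such a dictionary-based configuration takes (the option names follow the surrounding docs, but the job store and executor values below are placeholder strings so the snippet stays dependency-free; in real code they would be actual instances such as ``MongoDBJobStore()`` or ``ThreadPoolExecutor(20)``):

```python
# Placeholder strings stand in for real objects like MongoDBJobStore(),
# SQLAlchemyJobStore(url='sqlite:///jobs.sqlite'), ThreadPoolExecutor(20)
# and ProcessPoolExecutor(5).
jobstores = {
    'mongo': '<MongoDBJobStore instance>',
    'default': '<SQLAlchemyJobStore instance>',
}
executors = {
    'default': '<ThreadPoolExecutor instance>',
    'processpool': '<ProcessPoolExecutor instance>',
}
job_defaults = {
    'coalesce': False,    # run each missed run separately instead of merging them
    'max_instances': 3,   # allow up to three concurrent instances of one job
}

# The scheduler would then be built roughly like this:
# scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors,
#                                 job_defaults=job_defaults, timezone=utc)
print(sorted(jobstores), sorted(executors), job_defaults)
```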

 The following three examples are completely equivalent, and will get you:

 * a MongoDBJobStore named "mongo"
@@ -229,12 +233,15 @@ Method 3::

 Starting the scheduler
 ----------------------

-Starting the scheduler is done by simply calling :meth:`~apscheduler.schedulers.base.BaseScheduler.start` on the
-scheduler. For schedulers other than `~apscheduler.schedulers.blocking.BlockingScheduler`, this call will return
-immediately and you can continue the initialization process of your application, possibly adding jobs to the scheduler.
+Starting the scheduler is done by simply calling
+:meth:`~apscheduler.schedulers.base.BaseScheduler.start` on the scheduler. For schedulers other
+than `~apscheduler.schedulers.blocking.BlockingScheduler`, this call will return immediately and
+you can continue the initialization process of your application, possibly adding jobs to the
+scheduler.

-For BlockingScheduler, you will only want to call :meth:`~apscheduler.schedulers.base.BaseScheduler.start` after you're
-done with any initialization steps.
+For BlockingScheduler, you will only want to call
+:meth:`~apscheduler.schedulers.base.BaseScheduler.start` after you're done with any initialization
+steps.

 .. note:: After the scheduler has been started, you can no longer alter its settings.

@@ -247,15 +254,17 @@ There are two ways to add jobs to a scheduler:

 #. by calling :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job`
 #. by decorating a function with :meth:`~apscheduler.schedulers.base.BaseScheduler.scheduled_job`

-The first way is the most common way to do it. The second way is mostly a convenience to declare jobs that don't change
-during the application's run time. The :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` method returns a
-:class:`apscheduler.job.Job` instance that you can use to modify or remove the job later.
+The first way is the most common way to do it. The second way is mostly a convenience to declare
+jobs that don't change during the application's run time. The
+:meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` method returns a
+:class:`apscheduler.job.Job` instance that you can use to modify or remove the job later.

-You can schedule jobs on the scheduler **at any time**. If the scheduler is not yet running when the job is added, the
-job will be scheduled *tentatively* and its first run time will only be computed when the scheduler starts.
+You can schedule jobs on the scheduler **at any time**. If the scheduler is not yet running when
+the job is added, the job will be scheduled *tentatively* and its first run time will only be
+computed when the scheduler starts.

-It is important to note that if you use an executor or job store that serializes the job, it will add a couple
-requirements on your job:
+It is important to note that if you use an executor or job store that serializes the job, it will
+add a couple requirements on your job:

 #. The target callable must be globally accessible
 #. Any arguments to the callable must be serializable
@@ -263,9 +272,9 @@ requirements on your job:

 Of the builtin job stores, only MemoryJobStore doesn't serialize jobs.
 Of the builtin executors, only ProcessPoolExecutor will serialize jobs.

-.. important:: If you schedule jobs in a persistent job store during your application's initialization, you **MUST**
-   define an explicit ID for the job and use ``replace_existing=True`` or you will get a new copy of the job every time
-   your application restarts!
+.. important:: If you schedule jobs in a persistent job store during your application's
+   initialization, you **MUST** define an explicit ID for the job and use ``replace_existing=True``
+   or you will get a new copy of the job every time your application restarts!

 .. tip:: To run a job immediately, omit ``trigger`` argument when adding the job.
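The "globally accessible" requirement is easy to demonstrate with plain ``pickle``: a module-level function is serialized by reference and restored fine, while a lambda, which is not importable under its qualified name, is rejected (the names ``tick`` and ``anonymous`` are invented for this demonstration):

```python
import pickle


def tick():                 # module-level, globally accessible: picklable
    return 'tick'


anonymous = lambda: 'tick'  # __qualname__ is '<lambda>', so pickle cannot find it

roundtripped = pickle.loads(pickle.dumps(tick))

try:
    pickle.dumps(anonymous)
    lambda_picklable = True
except (pickle.PicklingError, AttributeError):
    lambda_picklable = False

print(roundtripped(), lambda_picklable)
```

The same reasoning applies to the job's arguments: anything passed to the callable has to survive the serializer used by the job store or executor.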
Removing jobs ------------- -When you remove a job from the scheduler, it is removed from its associated job store and will not be executed anymore. -There are two ways to make this happen: +When you remove a job from the scheduler, it is removed from its associated job store and will not +be executed anymore. There are two ways to make this happen: -#. by calling :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job` with the job's ID and job store alias +#. by calling :meth:`~apscheduler.schedulers.base.BaseScheduler.remove_job` with the job's ID and + job store alias #. by calling :meth:`~apscheduler.job.Job.remove` on the Job instance you got from :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` @@ -284,7 +294,8 @@ The latter method is probably more convenient, but it requires that you store so :class:`~apscheduler.job.Job` instance you received when adding the job. For jobs scheduled via the :meth:`~apscheduler.schedulers.base.BaseScheduler.scheduled_job`, the first way is the only way. -If the job's schedule ends (i.e. its trigger doesn't produce any further run times), it is automatically removed. +If the job's schedule ends (i.e. its trigger doesn't produce any further run times), it is +automatically removed. Example:: @@ -300,9 +311,9 @@ Same, using an explicit job ID:: Pausing and resuming jobs ------------------------- -You can easily pause and resume jobs through either the :class:`~apscheduler.job.Job` instance or the scheduler itself. -When a job is paused, its next run time is cleared and no further run times will be calculated for it until the job is -resumed. To pause a job, use either method: +You can easily pause and resume jobs through either the :class:`~apscheduler.job.Job` instance or +the scheduler itself. When a job is paused, its next run time is cleared and no further run times +will be calculated for it until the job is resumed. 
+To pause a job, use either method:

 * :meth:`apscheduler.job.Job.pause`
 * :meth:`apscheduler.schedulers.base.BaseScheduler.pause_job`

@@ -318,26 +329,29 @@ Getting a list of scheduled jobs
 To get a machine processable list of the scheduled jobs, you can use the
 :meth:`~apscheduler.schedulers.base.BaseScheduler.get_jobs` method. It will return a list of
-:class:`~apscheduler.job.Job` instances. If you're only interested in the jobs contained in a particular job store,
-then give a job store alias as the second argument.
+:class:`~apscheduler.job.Job` instances. If you're only interested in the jobs contained in a
+particular job store, then give a job store alias as the second argument.

-As a convenience, you can use the :meth:`~apscheduler.schedulers.base.BaseScheduler.print_jobs` method which will print
-out a formatted list of jobs, their triggers and next run times.
+As a convenience, you can use the :meth:`~apscheduler.schedulers.base.BaseScheduler.print_jobs`
+method which will print out a formatted list of jobs, their triggers and next run times.


 Modifying jobs
 --------------

 You can modify any job attributes by calling either :meth:`apscheduler.job.Job.modify` or
-:meth:`~apscheduler.schedulers.base.BaseScheduler.modify_job`. You can modify any Job attributes except for ``id``.
+:meth:`~apscheduler.schedulers.base.BaseScheduler.modify_job`. You can modify any Job attributes
+except for ``id``.

 Example::

     job.modify(max_instances=6, name='Alternate name')

 If you want to reschedule the job -- that is, change its trigger, you can use either
-:meth:`apscheduler.job.Job.reschedule` or :meth:`~apscheduler.schedulers.base.BaseScheduler.reschedule_job`.
-These methods construct a new trigger for the job and recalculate its next run time based on the new trigger.
+:meth:`apscheduler.job.Job.reschedule` or
+:meth:`~apscheduler.schedulers.base.BaseScheduler.reschedule_job`.
+These methods construct a new trigger for the job and recalculate its next run time based on the
+new trigger.

 Example::

@@ -351,8 +365,8 @@ To shut down the scheduler::

     scheduler.shutdown()

-By default, the scheduler shuts down its job stores and executors and waits until all currently executing jobs are
-finished. If you don't want to wait, you can do::
+By default, the scheduler shuts down its job stores and executors and waits until all currently
+executing jobs are finished. If you don't want to wait, you can do::

     scheduler.shutdown(wait=False)

@@ -383,9 +397,10 @@ Limiting the number of concurrently executing instances of a job
 ----------------------------------------------------------------

 By default, only one instance of each job is allowed to be run at the same time.
-This means that if the job is about to be run but the previous run hasn't finished yet, then the latest run is
-considered a misfire. It is possible to set the maximum number of instances for a particular job that the scheduler will
-let run concurrently, by using the ``max_instances`` keyword argument when adding the job.
+This means that if the job is about to be run but the previous run hasn't finished yet, then the
+latest run is considered a misfire. It is possible to set the maximum number of instances for a
+particular job that the scheduler will let run concurrently, by using the ``max_instances`` keyword
+argument when adding the job.

 .. _missed-job-executions:

@@ -393,22 +408,25 @@ let run concurrently, by using the ``max_instances`` keyword argument when addin
 Missed job executions and coalescing
 ------------------------------------

-Sometimes the scheduler may be unable to execute a scheduled job at the time it was scheduled to run.
-The most common case is when a job is scheduled in a persistent job store and the scheduler is shut down and restarted
-after the job was supposed to execute. When this happens, the job is considered to have "misfired".
-The scheduler will then check each missed execution time against the job's ``misfire_grace_time`` option (which can be
-set on per-job basis or globally in the scheduler) to see if the execution should still be triggered.
-This can lead into the job being executed several times in succession.
+Sometimes the scheduler may be unable to execute a scheduled job at the time it was scheduled to
+run. The most common case is when a job is scheduled in a persistent job store and the scheduler
+is shut down and restarted after the job was supposed to execute. When this happens, the job is
+considered to have "misfired". The scheduler will then check each missed execution time against the
+job's ``misfire_grace_time`` option (which can be set on per-job basis or globally in the
+scheduler) to see if the execution should still be triggered. This can lead into the job being
+executed several times in succession.

-If this behavior is undesirable for your particular use case, it is possible to use `coalescing` to roll all these
-missed executions into one. In other words, if coalescing is enabled for the job and the scheduler sees one or more
-queued executions for the job, it will only trigger it once. No misfire events will be sent for the "bypassed" runs.
+If this behavior is undesirable for your particular use case, it is possible to use `coalescing` to
+roll all these missed executions into one. In other words, if coalescing is enabled for the job and
+the scheduler sees one or more queued executions for the job, it will only trigger it once. No
+misfire events will be sent for the "bypassed" runs.

 .. note::
-   If the execution of a job is delayed due to no threads or processes being available in the pool, the executor may
-   skip it due to it being run too late (compared to its originally designated run time).
-   If this is likely to happen in your application, you may want to either increase the number of threads/processes in
-   the executor, or adjust the ``misfire_grace_time`` setting to a higher value.
+   If the execution of a job is delayed due to no threads or processes being available in the
+   pool, the executor may skip it due to it being run too late (compared to its originally
+   designated run time). If this is likely to happen in your application, you may want to either
+   increase the number of threads/processes in the executor, or adjust the ``misfire_grace_time``
+   setting to a higher value.

 .. _scheduler-events:

@@ -416,14 +434,14 @@ Scheduler events
 ----------------

-It is possible to attach event listeners to the scheduler. Scheduler events are fired on certain occasions, and may
-carry additional information in them concerning the details of that particular event.
-It is possible to listen to only particular types of events by giving the appropriate ``mask`` argument to
-:meth:`~apscheduler.schedulers.base.BaseScheduler.add_listener`, OR'ing the different constants together.
-The listener callable is called with one argument, the event object.
+It is possible to attach event listeners to the scheduler. Scheduler events are fired on certain
+occasions, and may carry additional information in them concerning the details of that particular
+event. It is possible to listen to only particular types of events by giving the appropriate
+``mask`` argument to :meth:`~apscheduler.schedulers.base.BaseScheduler.add_listener`, OR'ing the
+different constants together. The listener callable is called with one argument, the event object.

-See the documentation for the :mod:`~apscheduler.events` module for specifics on the available events and their
-attributes.
+See the documentation for the :mod:`~apscheduler.events` module for specifics on the available
+events and their attributes.
 Example::

diff --git a/docs/versionhistory.rst b/docs/versionhistory.rst
index 34da4c0..ab80329 100644
--- a/docs/versionhistory.rst
+++ b/docs/versionhistory.rst
@@ -145,8 +145,8 @@ APScheduler, see the :doc:`migration section <migration>`.
 * A wider variety of target callables can now be scheduled so that the jobs are still serializable
   (static methods on Python 3.3+, unbound methods on all except Python 3.2)

-* Attempting to serialize a non-serializable Job now raises a helpful exception during serialization.
-  Thanks to Jeremy Morgan for pointing this out.
+* Attempting to serialize a non-serializable Job now raises a helpful exception during
+  serialization. Thanks to Jeremy Morgan for pointing this out.

 * Fixed table creation with SQLAlchemyJobStore on MySQL/InnoDB

@@ -160,8 +160,8 @@ APScheduler, see the :doc:`migration section <migration>`.

 * Added support for timezones (special thanks to Curtis Vogt for help with this one)

-* Split the old Scheduler class into BlockingScheduler and BackgroundScheduler and added integration for
-  asyncio (PEP 3156), Gevent, Tornado, Twisted and Qt event loops
+* Split the old Scheduler class into BlockingScheduler and BackgroundScheduler and added
+  integration for asyncio (PEP 3156), Gevent, Tornado, Twisted and Qt event loops

 * Overhauled the job store system for much better scalability

@@ -171,11 +171,13 @@ APScheduler, see the :doc:`migration section <migration>`.
 * Dropped the max_runs option and run counting of jobs since it could not be implemented reliably

-* Adding jobs is now done exclusively through ``add_job()`` -- the shortcuts to triggers were removed
+* Adding jobs is now done exclusively through ``add_job()`` -- the shortcuts to triggers were
+  removed

 * Added the ``end_date`` parameter to cron and interval triggers

-* It is now possible to add a job directly to an executor without scheduling, by omitting the trigger argument
+* It is now possible to add a job directly to an executor without scheduling, by omitting the
+  trigger argument

 * Replaced the thread pool with a pluggable executor system

@@ -205,11 +207,13 @@
 2.0.3
 -----

-* The scheduler now closes the job store that is being removed, and all job stores on shutdown() by default
+* The scheduler now closes the job store that is being removed, and all job stores on shutdown() by
+  default

 * Added the ``last`` expression in the day field of CronTrigger (thanks rcaselli)

-* Raise a TypeError when fields with invalid names are passed to CronTrigger (thanks Christy O'Reilly)
+* Raise a TypeError when fields with invalid names are passed to CronTrigger (thanks Christy
+  O'Reilly)

 * Fixed the persistent.py example by shutting down the scheduler on Ctrl+C