path: root/src/buildstream/_scheduler
Commit message · Author · Date · Files · Lines (-/+)
* Remove unnecessary _platform.multiprocessing [aevri/nomp] (Angelos Evripiotis, 2019-08-20; 2 files, -21/+27)
  It turns out we don't need to use multiprocessing.Manager() queues when using the 'spawn' method - the regular multiprocessing queues are also picklable, if passed as parameters to the new process. Thanks to @BenjaminSchubert for pointing this out.
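  The constraint the commit above works around can be sketched in a few lines: a regular multiprocessing queue refuses naive pickling, and may only cross a process boundary by being handed to Process() as an argument, where the 'spawn' machinery pickles it under a special context. A minimal, standalone illustration:

  ```python
  import multiprocessing
  import pickle

  # A regular multiprocessing.Queue cannot be pickled by hand; it is only
  # picklable while the 'spawn' machinery is shipping it to a new process.
  q = multiprocessing.get_context("spawn").Queue()

  naive_pickle_failed = False
  try:
      pickle.dumps(q)
  except RuntimeError:
      # "Queue objects should only be shared between processes through
      # inheritance" - i.e. pass it via Process(target=..., args=(q,))
      naive_pickle_failed = True

  assert naive_pickle_failed
  ```

  So no Manager() server process is needed: passing the queue in the Process args is enough.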
* _context.py: Add disable_fork() method (Jürg Billeter, 2019-08-20; 1 file, -0/+2)
  Calling disable_fork() will prevent the scheduler from running but will allow communication with casd in the main process.
* _scheduler: Remove cache size job (Jürg Billeter, 2019-08-20; 5 files, -197/+2)
  Cache size will be tracked by buildbox-casd.
* _scheduler: Remove cleanup job (Jürg Billeter, 2019-08-20; 3 files, -108/+1)
  Cache expiry will be managed by buildbox-casd.
* _frontend/app.py: Stop handling Element instances directly (Tom Pollard, 2019-08-19; 1 file, -3/+7)
  App shouldn't need to inspect Elements directly when handling failures, and after frontend process separation pickling the object is not practical.
* Support pickling jobs if the platform requires it (Angelos Evripiotis, 2019-08-16; 1 file, -6/+44)
  Add support for using `multiprocessing.Manager` and the associated queues. Downgrade the queue event callback guarantees accordingly. In later work we may be able to support callbacks in all scenarios. Pickle and unpickle the child job if the platform requires it.
* Abstract mp Queue usage, prep to spawn processes (Angelos Evripiotis, 2019-08-16; 2 files, -25/+19)
  Pave the way to supporting starting processes by the 'spawn' method, by abstracting our usage of `multiprocessing.Queue`. This means we can easily switch to using a multiprocessing.Manager() and associated queues instead when necessary.
* job.py: Report error when job process unexpectedly dies (#1089) [tmewett/report-weird-return-codes] (Tom Mewett, 2019-08-12; 1 file, -1/+5)
* _message.py: Use element_name & element_key instead of unique_id [tpollard/messageobject] (Tom Pollard, 2019-08-08; 4 files, -61/+81)
  Adding the element full name and display key into all element related messages removes the need to look up the plugintable via a plugin unique_id just to retrieve the same values for logging and widget frontend display. Relying on plugintable state is also incompatible if the frontend will be running in a different process, as it will exist in multiple states. The element full name is now displayed instead of the unique_id, such as in the debugging widget. It is also displayed in place of 'name' (i.e. including any junction prefix) to be more informative.
* job: fix exception caught from enum translation (Benjamin Schubert, 2019-07-31; 1 file, -1/+1)
  The exception was incorrectly marked as 'KeyError', but enums throw 'ValueError' instead.
* types: Add a 'FastEnum' implementation and replace Enum with it (Benjamin Schubert, 2019-07-29; 2 files, -10/+12)
  'Enum' has a big performance impact on the running code. Replacing it with a safe subset of functionality removes much of this overhead without losing the benefits of using enums (safe comparisons, uniqueness).
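  The idea can be sketched with a small hypothetical factory (this is not BuildStream's actual FastEnum implementation): members are plain singleton instances, so attribute access and 'is' comparison skip enum.Enum's descriptor machinery, while value lookup still raises ValueError like enum.Enum does (which is the behaviour the fix above relies on):

  ```python
  def fast_enum(name, **members):
      """Hypothetical sketch of an Enum stand-in: members become singleton
      instances of a fresh class, and from_value() translates a raw value
      back to a member, raising ValueError on unknown values."""
      cls = type(name, (), {})
      by_value = {}
      for member_name, value in members.items():
          member = cls()
          member.name, member.value = member_name, value
          setattr(cls, member_name, member)
          by_value[value] = member

      def from_value(value):
          try:
              return by_value[value]
          except KeyError:
              # mirror enum.Enum: unknown values raise ValueError, not KeyError
              raise ValueError("{!r} is not a valid {}".format(value, name)) from None

      cls.from_value = staticmethod(from_value)
      return cls

  JobStatus = fast_enum("JobStatus", OK=0, FAIL=1, SKIPPED=2)
  assert JobStatus.from_value(1) is JobStatus.FAIL
  ```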
* job: try pickling child jobs if BST_TEST_SUITE [aevri/pickle] (Angelos Evripiotis, 2019-07-24; 1 file, -0/+7)
  If we're running BuildStream tests then pickle child jobs. This ensures that we keep things picklable, whilst we work towards being able to support platforms that need to use the 'spawn' method of starting processes.
* Make ChildJobs and friends picklable (Angelos Evripiotis, 2019-07-24; 2 files, -0/+147)
  Pave the way toward supporting the 'spawn' method of creating jobs, by adding support for pickling ChildJobs. Introduce a new 'jobpickler' module that provides an entrypoint for this functionality. This also makes replays of jobs possible, which has made the debugging of plugins much easier for me.
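  The replay idea follows directly from picklability: once a job round-trips through pickle, the same serialized blob can be loaded and run again at will. A toy sketch (the class and method names here are illustrative stand-ins, not BuildStream's real ChildJob or jobpickler API):

  ```python
  import io
  import pickle

  class ChildJob:
      """Hypothetical picklable job: holds only plain data plus an action,
      so it can be dumped once and replayed many times."""
      def __init__(self, action_name, element_name):
          self.action_name = action_name
          self.element_name = element_name

      def child_action(self):
          return "{} {}".format(self.action_name, self.element_name)

  def pickle_child_job(job, stream):
      # the real jobpickler has far more to handle (plugin classes, state,
      # etc.); this is only the round-trip skeleton
      pickle.dump(job, stream)

  buf = io.BytesIO()
  pickle_child_job(ChildJob("fetch", "base.bst"), buf)

  buf.seek(0)
  replayed = pickle.load(buf)     # could be loaded again for another replay
  assert replayed.child_action() == "fetch base.bst"
  ```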
* scheduler: rm unused _exclusive_* members (Angelos Evripiotis, 2019-07-09; 1 file, -9/+0)
  The {,un}register_exclusive_interest() mechanism is used for this now.
* Store core state for the frontend separately (Jonathan Maw, 2019-07-09; 7 files, -35/+52)
* Stream: Fix the existence of duplicate queues (Jonathan Maw, 2019-07-09; 1 file, -0/+11)
  It was possible for multiple Queues of the same type to exist. Currently, there is no desired reason for this to happen. These changes add an explicit function call to the Scheduler that destroys the queues, to be used before constructing the next list of queues to pass into the Scheduler; this call is made everywhere we construct queues. Further, Stream.fetch_subprojects is altered because there is currently no reason to preserve the Stream's list of queues before running.
* Queue: Make queues store counts of skipped/processed elements (Jonathan Maw, 2019-07-09; 1 file, -9/+9)
  We only seem to generate the list so we can get its length, so it is more efficient to store only a count of skipped/processed elements. failed_elements needs to remain a list for the moment, as it's used to retry a failed element job.
* job: only pass Messenger to child, not all Context (Angelos Evripiotis, 2019-07-05; 1 file, -6/+8)
  Reduce the amount of context shared with child jobs, by only sending the messenger portion of it rather than the whole thing. Also send the logdir. This also means that we will need to pickle less stuff when using the 'spawn' method of multi-processing, as opposed to the 'fork' method.
* Refactor, use context.messenger directly (Angelos Evripiotis, 2019-07-05; 2 files, -6/+6)
  Instead of having methods in Context forward calls on to the Messenger, have folks call the Messenger directly. Remove the forwarding methods in Context.
* Refactor: message handlers take 'is_silenced' (Angelos Evripiotis, 2019-07-05; 1 file, -4/+4)
  Remove the need to pass the Context object to message handlers, by passing what is usually requested from the context instead. This paves the way to sharing less information with some child jobs - they won't need the whole context object, just the messenger.
* _scheduler: don't pass whole queue to child job (Angelos Evripiotis, 2019-07-04; 7 files, -32/+65)
  Stop passing the scheduler's job queues across to child jobs via the 'action_cb' parameter. Instead pass a module-level function, which will pickle nicely. This isn't much of a problem while we are in the 'fork' multiprocessing model. As we move towards supporting the 'spawn' model for win32, we need to consider what we will be pickling and unpickling to cross the process boundary.
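  Why a module-level function "pickles nicely": pickle serializes such functions by reference (module plus qualified name), while a closure over scheduler state cannot be serialized at all. A small demonstration, using a stdlib function as the module-level example and a hypothetical make_action_cb as the closure:

  ```python
  import pickle
  from os.path import basename

  # Module-level functions pickle by reference, so one can be passed as an
  # 'action_cb' and survive a 'spawn' process boundary:
  restored = pickle.loads(pickle.dumps(basename))
  assert restored("/work/base.bst") == "base.bst"

  # A callback closed over mutable scheduler state (here, a job queue
  # stand-in) cannot be pickled:
  def make_action_cb(job_queue):
      return lambda element: job_queue.append(element)

  closure_failed = False
  try:
      pickle.dumps(make_action_cb([]))
  except Exception:
      closure_failed = True
  assert closure_failed
  ```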
* _scheduler/./queue.py: remove unused 'e' vars (Angelos Evripiotis, 2019-07-04; 1 file, -2/+2)
* jobs/job: send ChildJob the context, not scheduler (Angelos Evripiotis, 2019-06-19; 1 file, -6/+6)
  Instead of passing the whole scheduler to the ChildJob, only pass the part that is used - the context. Reducing the amount of shared state makes it easier to follow what's going on, and will make it more economical to move away from the 'fork' model later.
* _scheduler/scheduler.py: Remove unused elapsed_time() calls [tpollard/elapsedtime] (Tom Pollard, 2019-06-13; 1 file, -5/+2)
  Both run() and the App callback _ticker_callback() call & return elapsed_time(), which is unused by the respective callers and as such is unnecessary.
* queue.py: Use heapq for the ready queue [jennis/push_based_pipeline] (James Ennis, 2019-06-07; 1 file, -3/+4)
  This patch includes assigning a _depth to each element once the pipeline has been sorted. This is necessary as we need to store elements in the heapq sorted by their depth.
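  The mechanics of a depth-ordered ready heap can be sketched as follows. Note that elements themselves are not comparable, so a monotonic counter is needed as a tiebreaker in the heap tuples; which depth gets priority here is an illustrative assumption, not necessarily BuildStream's actual ordering:

  ```python
  import heapq
  from itertools import count

  ready = []
  tiebreaker = count()  # heap tuples need a total order; elements lack one

  def push_ready(element_name, depth):
      # smallest depth pops first in this sketch
      heapq.heappush(ready, (depth, next(tiebreaker), element_name))

  push_ready("app.bst", 0)
  push_ready("base.bst", 2)
  push_ready("libs.bst", 1)

  popped = [heapq.heappop(ready)[2] for _ in range(len(ready))]
  assert popped == ["app.bst", "libs.bst", "base.bst"]
  ```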
* queue.py: Push-based queues (James Ennis, 2019-06-07; 4 files, -55/+89)
  * Queue.enqueue() and Queue.harvest_jobs() now exhibit push-based behaviour, although most of the logic from Queue.enqueue() has been moved to Queue._enqueue_element()
  * QueueStatus.WAIT has been replaced with QueueStatus.PENDING to reflect the new push-based nature of the queues
  * There now exists a virtual method in Queue, register_pending_element(), which is used to register an element that is not immediately ready to be processed in the queue with specific callbacks. These callbacks will enqueue the element when called.
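  The push-based flow described above can be sketched with two toy classes (a simplified stand-in, not the real Queue/Element API): ready elements are enqueued immediately, and pending ones register a callback that enqueues them when they become ready:

  ```python
  class Element:
      """Hypothetical element with a ready flag and ready-callbacks."""
      def __init__(self, name, ready=False):
          self.name = name
          self._ready = ready
          self._callbacks = []

      def is_ready(self):
          return self._ready

      def on_ready(self, callback):
          self._callbacks.append(callback)

      def mark_ready(self):
          self._ready = True
          for callback in self._callbacks:
              callback(self)

  class Queue:
      """Sketch of the push-based flow from the commit above."""
      def __init__(self):
          self.jobs = []

      def enqueue(self, element):
          if element.is_ready():
              self._enqueue_element(element)
          else:
              self.register_pending_element(element)

      def _enqueue_element(self, element):
          self.jobs.append(element)

      def register_pending_element(self, element):
          # the element pushes itself into the queue once it is ready
          element.on_ready(self._enqueue_element)

  q = Queue()
  waiting = Element("base.bst")
  q.enqueue(Element("app.bst", ready=True))
  q.enqueue(waiting)
  assert [e.name for e in q.jobs] == ["app.bst"]

  waiting.mark_ready()   # the callback pushes it into the queue
  assert [e.name for e in q.jobs] == ["app.bst", "base.bst"]
  ```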
* Use 'is' when comparing against JobStatus [aevri/job_msg_enum] (Angelos Evripiotis, 2019-06-06; 8 files, -10/+10)
  Since JobStatus is an enum, it's clearer to compare using 'is' - equality comparison will fail in the same cases, but might lull folks into thinking that comparison with an integer would also work.
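  Concretely, with enum.Enum (the JobStatus values here are illustrative):

  ```python
  from enum import Enum

  class JobStatus(Enum):
      OK = 0
      FAIL = 1

  status = JobStatus.OK
  assert status is JobStatus.OK   # identity works: members are singletons
  assert status == JobStatus.OK   # equality gives the same answer...
  assert status != 0              # ...but '==' against a raw int is quietly
                                  # False, which 'is' never invites you to try
  ```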
* _scheduler/jobs/job: make JobStatus an enum (Angelos Evripiotis, 2019-06-06; 1 file, -1/+2)
  This provides some minor guards against mistakes, and we'll be able to do type-checking later. This does open the possibility of problems if folks mistakenly try to pass off an integer as a JobStatus.
* _scheduler/jobs/job: use enum for return codes (Angelos Evripiotis, 2019-06-06; 1 file, -14/+18)
* _scheduler/jobs/job: use enum for message types (Angelos Evripiotis, 2019-06-06; 1 file, -11/+21)
* Rename (spawn, fork) -> 'start process' (Angelos Evripiotis, 2019-06-06; 3 files, -21/+20)
  Avoid confusion by not referring to starting another process as 'spawning'. Note that 'spawn' is a process creation method, which is an alternative to forking. Say 'create child process' instead of 'fork' where it doesn't harm understanding. Although we currently only use the 'fork' method for creating subprocesses, there are reasons for us to support 'spawn' in the future. More information on forking and spawning: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods
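  In Python's multiprocessing module the distinction is explicit: 'fork' and 'spawn' are named start methods, and a context object lets code select one without touching the global default. 'fork' is unavailable on Windows, where 'spawn' is the only option, which is why supporting 'spawn' matters for win32:

  ```python
  import multiprocessing

  # Which start methods this platform supports ('spawn' is always present;
  # 'fork' and 'forkserver' only on POSIX):
  methods = multiprocessing.get_all_start_methods()
  assert "spawn" in methods

  # Select a start method explicitly via a context, leaving the global
  # default untouched:
  ctx = multiprocessing.get_context("spawn")
  assert ctx.get_start_method() == "spawn"
  ```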
* _scheduler/jobs: refactor, defensive send_message (Angelos Evripiotis, 2019-06-06; 2 files, -30/+36)
  Simplify the custom 'handle_message' / 'send_message' protocol by not requiring a message_type. These message types share a namespace with the base Job implementation, which could cause trouble. Introduce a new private '_send_message' to implement the old functionality. Subclasses are free to pack a message type into their own messages; this isn't necessary at present and simplifies existing subclass code.
* jobs/job: lint fixes, overhang + unused var (Angelos Evripiotis, 2019-06-06; 1 file, -2/+2)
* _scheduler/jobs/job: elaborate on 'simple' objects (Angelos Evripiotis, 2019-06-05; 1 file, -5/+12)
* _scheduler/jobs/job: refactor, use send_message (Angelos Evripiotis, 2019-06-05; 1 file, -7/+5)
* _scheduler/jobs/job: document send_message (Angelos Evripiotis, 2019-06-05; 1 file, -3/+13)
* _scheduler/jobs: split jobs into parent and child (Angelos Evripiotis, 2019-06-05; 4 files, -62/+195)
  Make it clearer what happens in which process by splitting out a 'ChildJob', which encapsulates the work that happens in the child process. This also makes it possible to control what is transferred to the child process. This is very useful for adding support for the 'spawn' method of creating child processes as opposed to the 'fork' method.
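  The shape of the split can be sketched with two toy classes (names and methods are illustrative, not the real Job API): the parent half keeps scheduler plumbing that never needs to cross the process boundary, while the child half holds only picklable state, so it alone would be shipped under 'spawn':

  ```python
  import pickle

  class ChildJob:
      """Hypothetical child half: only plain, picklable data, and the work
      that must run in the child process."""
      def __init__(self, action_name, logfile):
          self.action_name = action_name
          self.logfile = logfile

      def child_process(self):
          return "{} (log: {})".format(self.action_name, self.logfile)

  class Job:
      """Hypothetical parent half: holds parent-only state such as the
      scheduler, which may not be picklable and never needs to be."""
      def __init__(self, scheduler, action_name, logfile):
          self._scheduler = scheduler              # stays in the parent
          self._child = ChildJob(action_name, logfile)

      def start(self):
          # under 'spawn', only the ChildJob would be pickled across the
          # process boundary; simulate that round-trip here
          shipped = pickle.loads(pickle.dumps(self._child))
          return shipped.child_process()

  result = Job(object(), "build", "build.log").start()
  assert result == "build (log: build.log)"
  ```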
* cachesizejob: remove redundant child_process_data (Angelos Evripiotis, 2019-06-05; 1 file, -3/+0)
  This just reimplements the default behaviour; clearer to remove it.
* jobs/job: Add a full stop to Job explanation (Angelos Evripiotis, 2019-06-05; 1 file, -1/+1)
* jobs: refactor, use new set_message_unique_id (Angelos Evripiotis, 2019-05-23; 2 files, -26/+37)
  Ease the burden on subclasses of Job slightly, by providing a new set_message_unique_id() method. It ensures that created Message instances will use that id. This removes the need to override the message() method, so it is no longer in the 'abstract method' section. Enable callers of Job's message() method to override the 'unique_id'.
* _scheduler/jobs/job: mv _parent* above _child* (Angelos Evripiotis, 2019-05-22; 1 file, -131/+131)
  Move the parent-specific methods above the child-specific methods. This makes slightly more sense chronologically, as the parent creates the child. It will also make diffs cleaner when splitting parent and child into separate classes.
* Move source from 'buildstream' to 'src/buildstream' (Chandan Singh, 2019-05-21; 16 files, -0/+2449)
  This was discussed in #1008. Fixes #1009.