| Commit message | Author | Age | Files | Lines |
|
* Add osprofiler WSGI middleware. This middleware is used for two things:
1) It checks that the person who wants to trace is trusted and knows
the secret HMAC key.
2) It starts tracing when the proper trace headers are present
and adds the first WSGI trace point with info about the HTTP request.
* Add initialization of osprofiler at service start.
Currently that includes creating an oslo.messaging notifier instance
to send notifications to the Ceilometer backend.
oslo-spec: https://review.openstack.org/#/c/103825/
python-novaclient change: https://review.openstack.org/#/c/254699/
based on: https://review.openstack.org/#/c/105096/
Co-Authored-By: Boris Pavlovic <boris@pavlovic.me>
Co-Authored-By: Munoz, Obed N <obed.n.munoz@intel.com>
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Co-Authored-By: Tovin Seven <vinhnt@vn.fujitsu.com>
Implements: blueprint osprofiler-support-in-nova
Change-Id: I82d2badc8c1fcec27c3fce7c3c20e0f3b76414f1
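The trust check described in point 1 can be sketched with the stdlib alone. This is a simplified illustration of the idea, not the real osprofiler middleware API; the key value and payload shape are purely hypothetical:

```python
import hashlib
import hmac

# Hypothetical configured secret; in nova this would come from config.
SECRET_KEY = b"hmac-secret"

def sign(payload: bytes, key: bytes = SECRET_KEY) -> str:
    """HMAC-sign a trace payload with the shared secret."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def is_trusted(payload: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Accept a trace request only if the caller knew the secret key.
    compare_digest() gives a constant-time comparison."""
    return hmac.compare_digest(sign(payload, key), signature)
```

Only a caller that can produce a valid signature for its trace headers gets tracing started; requests with a bad or missing signature are simply not traced.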
|
The config options of the section
"nova/netconf" were moved to the
new central location
"nova/conf/netconf.py".
Change-Id: I8a17b6f00b15e03de55385fc0206bdc82441304a
Depends-On: I0da2ad7daa942b85c3395dc4861c6e18368ece88
Implements: blueprint centralize-config-options-newton
|
In some modules the global LOG is no longer used, and the import
of logging is unused as well. This patch removes the unused logging
imports and LOG variables.
Change-Id: I28572c325f8c31ff38161010047bba00c5d5b833
|
This links Service.reset() to the manager, which for compute will
rebuild the compute_rpcapi and thus re-determine the appropriate
RPC versions.
Change-Id: Ifec7f6ff604d1e5f3663633065e9a55baacffec8
|
The eventlet_backdoor, loopingcall, periodic_task,
service, sslutils, systemd, and threadgroup modules were removed
from nova; they are now imported from the oslo.service
library instead.
Co-Authored-By: Marian Horban <mhorban@mirantis.com>
Depends-On: I305cf53bad6213c151395e93d656b53a8a28e1db
Change-Id: Iaef67e16af3d69f845742f7bdcb43667bf1576ee
|
Convert the use of the incubated version of the log module
to the new oslo.log library.
Sync oslo-incubator modules to update their imports as well.
Co-Authored-By: Doug Hellmann <doug@doughellmann.com>
Change-Id: Ic4932e3f58191869c30bd07a010a6e9fdcb2a12c
|
The oslo team recommends that everyone switch to the
non-namespaced versions of the libraries. This updates the hacking
rules to include a check that prevents oslo.* imports from
creeping back in.
This commit includes:
- using oslo_utils instead of oslo.utils
- using oslo_serialization instead of oslo.serialization
- using oslo_db instead of oslo.db
- using oslo_i18n instead of oslo.i18n
- using oslo_middleware instead of oslo.middleware
- using oslo_config instead of oslo.config
- using oslo_messaging instead of "from oslo import messaging"
- using oslo_vmware instead of oslo.vmware
Change-Id: I3e2eb147b321ce3e928817b62abcb7d023c5f13f
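The kind of hacking check this adds can be sketched as a one-line-at-a-time regex scan. The rule text here is illustrative, not Nova's actual check, and the sketch only covers the dotted form (the older "from oslo import messaging" spelling would need a second pattern):

```python
import re

# Matches "import oslo.foo" / "from oslo.foo import bar", the deprecated
# namespaced form, but not the new "oslo_foo" form.
OSLO_NAMESPACE_RE = re.compile(r"^\s*(?:import|from)\s+oslo\.(\w+)")

def check_oslo_namespace_imports(logical_line):
    """Return a warning string for deprecated oslo.* imports, else None."""
    match = OSLO_NAMESPACE_RE.match(logical_line)
    if match:
        pkg = match.group(1)
        return "use oslo_%s instead of oslo.%s" % (pkg, pkg)
    return None
```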
|
Provides a place for any driver to set up
graceful-shutdown code.
The VMware driver's proper driver-lifecycle code is included.
This is critical in environments where the VMware
driver is set up and torn down at high frequency.
Prevents runaway vSphere sessions by closing stateless
HTTP management sessions gracefully.
Change-Id: I67a91613643540243ab1210b333ed8e121f05802
related to bug: 1262288
Closes-bug: 1292583
|
We don't need to have the vi modelines in each source file,
it can be set in a user's vimrc if required.
Also a check is added to hacking to detect if they are re-added.
Change-Id: I347307a5145b2760c69085b6ca850d6a9137ffc6
Closes-Bug: #1229324
|
The oslo.messaging library takes the existing RPC code from oslo and
wraps it in a sane API with well defined semantics around which we can
make a commitment to retain compatibility in future.
The patch is large, but the changes can be summarized as:
* oslo.messaging>=1.3.0a4 is required; a proper 1.3.0 release will be
pushed before the icehouse release candidates.
* The new rpc module has init() and cleanup() methods which manage the
global oslo.messaging transport state. The TRANSPORT and NOTIFIER
globals are conceptually similar to the current RPCIMPL global,
except we're free to create and use alternate Transport objects
in e.g. the cells code.
* The rpc.get_{client,server,notifier}() methods are just helpers
which wrap the global messaging state, specify serializers and
specify the use of the eventlet executor.
* In oslo.messaging, a request context is expected to be a dict so
we add a RequestContextSerializer which can serialize to and from
dicts using RequestContext.{to,from}_dict()
* The allowed_rpc_exception_modules configuration option is replaced
by an allowed_remote_exmods get_transport() parameter. This is not
something that users ever need to configure, but it is something
each project using oslo.messaging needs to be able to customize.
* The nova.rpcclient module is removed; it was only a helper class
to allow us split a lot of the more tedious changes out of this
patch.
* Finalizing the port from RpcProxy to RPCClient is straightforward.
We put the default topic, version and namespace into a Target and
construct the client using that.
* Porting endpoint classes (like ComputeManager) just involves setting
a target attribute on the class.
* The @client_exceptions() decorator has been renamed to
@expected_exceptions since it's used on the server side to designate
exceptions we expect the decorated method to raise.
* We maintain a global NOTIFIER object and create specializations of
it with specific publisher IDs in order to avoid notification driver
loading overhead.
* rpc.py contains transport aliases for backwards compatibility
purposes. setup.cfg also contains notification driver aliases for
backwards compat.
* The messaging options are moved about in nova.conf.sample because
the options are advertised via a oslo.config.opts entry point and
picked up by the generator.
* We use messaging.ConfFixture in tests to override oslo.messaging
config options, rather than making assumptions about the options
registered by the library.
The porting of cells code is particularly tricky:
* messaging.TransportURL parse() and str() replaces the
[un]parse_transport_url() methods. Note the complication that an
oslo.messaging transport URL can actually have multiple hosts in
order to support message broker clustering. Also the complication
of transport aliases in rpc.get_transport_url().
* proxy_rpc_to_manager() is fairly nasty. Right now, we're proxying
the on-the-wire message format over this call, but you can't supply
such messages to oslo.messaging's cast()/call() methods. Rather than
change the inter-cell RPC API to suit oslo.messaging, we instead
just unpack the topic, server, method and args from the message on
the remote side.
cells_api.RPCClientCellsProxy is a mock RPCClient implementation
which allows us to wrap up an RPC in the message format currently
used for inter-cell RPCs.
* Similarly, proxy_rpc_to_manager uses the on-the-wire format for
exception serialization, but this format is an implementation detail
of oslo.messaging's transport drivers. So, we need to duplicate the
exception serialization code in cells.messaging. We may find a way
to reconcile this in future - for example, an ExceptionSerializer
class might work, but with the current format it might be difficult
for the deserializer to generically detect a serialized exception.
* CellsRPCDriver.start_servers() and InterCellRPCAPI._get_client()
need close review, but they're pretty straightforward ports of code
to listen on some specialized topics and connect to a remote cell
using its transport URL.
blueprint: oslo-messaging
Change-Id: Ib613e6300f2c215be90f924afbd223a3da053a69
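The RpcProxy-to-RPCClient pattern above ("put the default topic, version and namespace into a Target and construct the client using that") can be shown with a stdlib-only stand-in. The Target fields mirror oslo.messaging's, but get_client() here is a toy that merely records what the real helper would pin down (transport, serializer, version cap):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Target:
    """Where a message goes and what API version it speaks."""
    topic: str
    namespace: Optional[str] = None
    version: str = "1.0"
    server: Optional[str] = None

def get_client(target, version_cap=None, serializer=None):
    # The real nova.rpc.get_client wraps the global TRANSPORT; this toy
    # just bundles the arguments to show the shape of the call.
    return {"target": target, "version_cap": version_cap,
            "serializer": serializer}

compute_target = Target(topic="compute", version="3.0")
client = get_client(compute_target, version_cap="3.35")
```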
|
The update_service_capabilities method of this class has been unused
since it was disabled as a periodic task in commit
674ef05b0ac668e16529a3ddc2b8fe26dc729dad. The compute manager's update
to a v3 rpc api removed the last references to publishing capabilities,
so this can now be completely removed.
After removing the capabilities bits, the only value left in this class
was providing an instance of the scheduler rpcapi class. This seemed
like overkill, so I removed this class completely and added the
scheduler rpcapi instance to the compute manager directly.
Related to blueprint rpc-major-version-updates-icehouse
Related to blueprint no-compute-fanout-to-scheduler
Change-Id: I72fd16fc96938a9ea9d509e24319a5273de900a9
|
A previous change added support for the 3.x rpc interface but retained
compatibility with 2.x as a transition point. This commit removes the
old API from the server side.
The object_compat decorator is unused as of this commit. However, I'm
leaving it, because it's going to start getting used very soon again as
more of the compute rpc api gets updated to accept Instance objects.
UpgradeImpact
Part of blueprint rpc-major-version-updates-icehouse
Change-Id: I31c21055163e94b712d337568b16b9b7a224b52f
|
Add a temporary nova.notifier.Notifier helper class which translates
oslo.messaging.Notifier compatible calls into openstack.common.notifier
compatible calls.
This allows us to port the notifier code over to the oslo.messaging API
before actually switching over oslo.messaging fully.
This patch contains no functional changes at all, except that all
notifications go through this temporary helper class.
Some notes on the new API:
* The notifier API is changed so that what was previously global state
is now encapsulated in a Notifier object. This object also includes
the publisher_id and has error()/info()/etc. methods rather than
just notify().
* The notify_decorator() helper wasn't carried across to the new API
because its semantics are a bit weird. Something along these lines
could be added in future, though.
* We use a fake Notifier implementation for tests because there's no
API in oslo.messaging to actually get the notifications queued
up in the fake notification driver, which is a bit dumb. However,
this feels like the right thing to do anyway. We're not wanting
to test oslo.messaging.Notifier itself, but rather we want to test
how we call it.
blueprint: oslo-messaging
Change-Id: I262163c7e05e6a6fb79265e904ce761fc3ac5806
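The fake-notifier approach from the last bullet can be sketched like this. The class is a test double of my own, not oslo.messaging's API; it shows how tests can assert on notification calls instead of exercising a real driver:

```python
class FakeNotifier:
    """Records notifications instead of emitting them, so tests can
    inspect exactly what the code under test tried to notify."""

    def __init__(self, publisher_id):
        self.publisher_id = publisher_id
        self.notifications = []

    def _notify(self, priority, ctxt, event_type, payload):
        self.notifications.append({
            "priority": priority,
            "publisher_id": self.publisher_id,
            "event_type": event_type,
            "payload": payload,
        })

    def info(self, ctxt, event_type, payload):
        self._notify("INFO", ctxt, event_type, payload)

    def error(self, ctxt, event_type, payload):
        self._notify("ERROR", ctxt, event_type, payload)

notifier = FakeNotifier("compute.host1")
notifier.info({}, "compute.instance.create.end", {"instance_id": "42"})
```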
|
Now that nothing is using publish_service_capabilities it can be
disabled, but not removed, so as not to break compute rpcapi backwards
compatibility.
Part of bp no-compute-fanout-to-scheduler.
Change-Id: I80c49c46138fd6ee89cb08ffbbced72ada4de72e
|
In review I5bf7795fca21627566ef4f688d45dc83bb953d1b (commit 3349417) we
passed an rpc_connection argument to pre_start_hook() so that we could
create additional queues.
See review Idf12c418a8ce1bc873e7ad6f702351e95d31aca3 for how this was
intended to be used. With oslo.messaging, we'd support doing something
like this by adding support for multiple targets to RPCServer since the
concept of a messaging connection isn't really exposed by the API in
quite the same way.
In any case, this rpc_connection parameter has never been used since it
was introduced, so let's remove it. Let's also move the invocation of
this hook back to where it originally was.
blueprint: oslo-messaging
Change-Id: I9181a3567a1b5a8a6077e78c85f4d28660f275a6
|
Previously, _ was monkey-patched into builtins whenever
certain modules were imported. This removes that and
simply imports it where it is needed.
Change-Id: I0af2c6d8a230e94440d655d13cab9107ac20d13c
|
This framework was merged a year ago and AFAICT hasn't seen use beyond
the two initial Wikimedia extensions.
The framework basically allows a single plugin to register API
extensions and notification drivers. Both of these can be done by
directly using config opts like osapi_compute_extension and
notification_driver, so this framework really only helps if we expected
(and wanted) lots of these plugins in the wild.
Change-Id: I09a11f9931ee0436a56e8b0d925683b54f73b104
|
This adds the base infrastructure for a unified object model in
Nova. It provides a somewhat-magical metaclass which serves to
automatically register objects that are capable of being remoted.
Magic remoting of methods happens via two decorators, allowing
a given nova service to globally declare that such actions should
happen over RPC to avoid DB accesses.
For more details, see blueprint unified-object-model
Change-Id: Iecd54ca22666f730c41d2278f74b86d922624a4e
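The "somewhat-magical metaclass" can be illustrated with a minimal registry. This is a from-scratch sketch of the registration idea, not Nova's actual object code, and the class names are illustrative:

```python
class ObjectRegistry(type):
    """Metaclass that records every concrete subclass in a registry,
    so remotable objects can later be looked up by class name."""
    registry = {}

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # don't register the abstract base itself
            ObjectRegistry.registry[name] = cls
        return cls

class NovaObjectBase(metaclass=ObjectRegistry):
    """Abstract base; subclassing it is all a class needs to do to
    become discoverable for remoting."""

class Instance(NovaObjectBase):
    fields = ("uuid", "host")
```

Registration happening at class-definition time is what makes it feel magical: no explicit register() call ever appears in the subclass.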
|
Add an additional argument to the base create_rpc_dispatcher() method.
When a manager wants to override create_rpc_dispatcher() to add more
callbacks in their own rpc namespaces, it can just call the parent class
with the additional APIs to include.
Change-Id: I9cba8b176b35f55ba9d71365d0a8bf25d2ae311f
|
Convert nova to using the oslo periodic tasks implementation. There
are no functional changes in this review.
Change-Id: I767e0ad17781d5f9d5e987e0a4ad65796243ae5c
|
Each service implemented the get_backdoor_port method individually. This
patch moves the implementation of this method to the base rpc API
instead, and removes the now unnecessary code from each of the services.
The server side method was left on all of the managers for rpc backwards
compatibility. They can be removed on the next major rpc version bump
of those APIs.
Part of blueprint base-rpc-api.
Change-Id: Ia8838fafd80eb86a1c2d66f5e97370042d8d8c53
|
This patch adds an rpc API that is exposed by all services. The methods
in this API exist in their own namespace and are versioned independently
of the main API for the service.
The first method for this API is a simple ping() method. This method
exists in the conductor rpc API already, and could be more generally
useful. Other methods will be added in later patches.
The base rpc API will be exposed from all services automatically unless
they override the create_rpc_dispatcher method in the base manager
class. All services need to pass a service_name into the base manager
constructor. Some services already did this, but now it's needed for
all of them.
Implements blueprint base-rpc-api.
Change-Id: I02ab1970578bc53ba26461b533d06d1055c2d88e
|
Update time calculations in the periodic task handling to use the
timeutils module from oslo. This provides benefits for unit testing,
since we can easily override the time functions to simulate time
differences without having to actually sleep and make the unit tests
run unnecessarily long.
Resolves bug 1098979.
Change-Id: I1e6a0a0b1622a3f8c37c42376f5261f5f2dbf6fe
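The testing benefit can be sketched with a tiny overridable clock. This mirrors the shape of the timeutils override helpers but is a stdlib-only stand-in with illustrative names:

```python
import time

_time_override = None  # when set, all "now" reads come from here

def now_ts():
    """The single point through which periodic-task code reads the clock."""
    return _time_override if _time_override is not None else time.time()

def set_time_override(ts):
    global _time_override
    _time_override = ts

def advance_time_seconds(secs):
    """Simulate the passage of time instantly, with no real sleeping."""
    global _time_override
    _time_override += secs
```

A test can now jump the clock past a 60-second interval in a single call instead of sleeping for a minute.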
|
The cfg API is now available via the oslo-config library, so switch to
it and remove the copied-and-pasted version.
Add the 2013.1b4 tarball to tools/pip-requires - this will be changed
to 'oslo-config>=2013.1' when oslo-config is published to pypi. This
will happen in time for grizzly final.
Add dependency_links to setup.py so that oslo-config can be installed
from the tarball URL specified in pip-requires.
Remove the 'deps = pep8==1.3.3' from tox.ini as it means all the other
deps get installed with easy_install which can't install oslo-config
from the URL.
Make tools/hacking.py include oslo in IMPORT_EXCEPTIONS like it already
does for paste. It turns out imp.find_module() doesn't correctly handle
namespace packages.
Retain dummy cfg.py file until keystoneclient middleware has been
updated (I18c450174277c8e2d15ed93879da6cd92074c27a).
Change-Id: I4815aeb8a9341a31a250e920157f15ee15cfc5bc
|
This change adds a parameter to allow periodic tasks that would
otherwise have to wait for a lengthy initial interval to run immediately
when the periodic tasks are started. This patch also applies the new
facility to the _sync_power_states task in the compute manager to
get more timely access to OpenStack server state when the compute node
starts up.
Change-Id: I461a5dbaa7d18919941ec31402910eef188d8e8c
|
These methods were added in 3f7353d14183a93099c99dc2fc72614265f1c72c but
are not used anywhere in the code. Just remove them.
Change-Id: I2b55901bb6119d83586fb2ffa877bbd00a8d43b3
|
I was calculating the time to wait for the next run of a periodic
task incorrectly.
Resolves bug 1098819.
Change-Id: Ida60b69014aa06229111e58024e35268262f18fb
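The corrected arithmetic amounts to measuring the remaining wait from "now" against the task's due time. A sketch, in my own formulation rather than the patch's exact code:

```python
def seconds_until_next_run(last_run_ts, spacing, now_ts):
    """A task is due at last_run + spacing; the wait is whatever part of
    that is still in the future, clamped to zero if already overdue."""
    due = last_run_ts + spacing
    return max(0.0, due - now_ts)
```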
|
The my_ip, host and use_ipv6 options are used all over the codebase
and they're pretty well related to each other. Create a new netconf
module for them to live in.
There are now no options registered globally in nova.config!
blueprint: scope-config-opts
Change-Id: Ifde37839ae6f38e6bf99dff1e80b8e25fd68ed25
|
This review allows periodic tasks to be enabled or disabled in the
decorator, as well as by specifying an interval which is negative.
The spacing between runs of a periodic task is now specified in
seconds, with zero meaning the default spacing which is currently 60
seconds.
There is also a new argument to the decorator which indicates if a
periodic task _needs_ to be run in the nova-compute process. There is
also a flag (run_external_periodic_tasks) which can be used to move
these periodic tasks out of the nova-compute process.
I also remove the periodic_interval flag to services, as the interval
between runs is now dynamic based on the number of seconds that a
periodic task wants to wait for its next run. For callers who want to
twiddle the sleep period (for example unit tests), there is a
create() argument periodic_interval_max which lets the period that
periodic_tasks() specifies be overridden. This is not exposed as a
flag because I cannot see a use case for that; it is needed for unit
testing, however.
DocImpact. Resolves bug 939087.
Change-Id: I7f245a88b8d229a481c1b65a4c0f1e2769bf3901
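The decorator semantics described above (spacing in seconds, zero meaning the default, a negative interval or enabled=False disabling the task) can be sketched like this; the attribute and parameter names are illustrative, not the exact nova API:

```python
DEFAULT_SPACING = 60  # seconds; stands in for the documented default

def periodic_task(spacing=0, enabled=True):
    """Mark a function as a periodic task and record its run policy."""
    def decorator(fn):
        fn._periodic_task = True
        fn._periodic_enabled = enabled and spacing >= 0
        fn._periodic_spacing = spacing if spacing > 0 else DEFAULT_SPACING
        return fn
    return decorator

@periodic_task(spacing=120)
def sync_power_states():
    pass

@periodic_task(spacing=-1)  # negative interval disables the task
def image_cache_manager():
    pass
```

A runner would then iterate the decorated functions, skipping those with `_periodic_enabled` false and sleeping dynamically based on each task's `_periodic_spacing`.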
|
This makes it possible to send multiple capabilities in a single RPC.
Change-Id: I7c1b75eada17181c4fe08c55992b34d66276f498
|
If the user sets, say, image_cache_manager_interval=-1, then
don't add the task to the list of periodic tasks.
Also remove extra braces that I had added.
DocImpact
Fixes LP #1084232
Change-Id: Ieecd67ddbc70b815a88f40e72ca2899787d75988
|
The only reason for importing nova.config now is where one of the
options defined in that file is needed. Rather than importing
nova.config using an import statement, use CONF.import_opt() so
that it is clear which option we actually require.
In future, we will move many options out of nova.config so many
of these import_opt() calls will either go away or cause a module
other than nova.config to be imported.
Change-Id: I0646efddecdf2530903afd50c1f4364cb1d5dce1
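A stdlib-only toy of the import_opt() idea (not the real oslo.config API): the caller names both the option it needs and the module whose import registers it, making the dependency explicit instead of a bare import statement:

```python
import importlib

class Conf:
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts.setdefault(name, default)

    def import_opt(self, name, module_str):
        # Import the module for its registration side effect, then verify
        # the option really did get registered.
        importlib.import_module(module_str)
        if name not in self._opts:
            raise KeyError("%s did not register option %s" % (module_str, name))

CONF = Conf()
# In nova, importing nova.netconf would register my_ip; here we pre-register
# it and use a stdlib module purely as a stand-in import target.
CONF.register_opt("my_ip", "127.0.0.1")
CONF.import_opt("my_ip", "json")
```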
|
Modules import nova.config for two reasons right now - firstly, to
reference nova.config.CONF and, secondly, if they use one of the
options defined in nova.config.
Often modules import nova.openstack.common.cfg and nova.config,
which is a bit pointless since they could just use cfg.CONF if
they only import nova.config in order to reference CONF.
Let's just use cfg.CONF everywhere and we can explicitly state
where we actually require options defined in nova.config.
Change-Id: Ie4184a74e3e78c99658becb18dce1c2087e450bb
|
The pre_start_hook allows a manager to perform some additional service
setup before the service starts reading messages from message queues.
This patch moves the hook just a bit so that it's after creating the rpc
connection, but before creating the default queues, and most importantly
still before the service starts reading from the queues.
The pre_start_hook now gets the rpc_connection as an argument. That
will allow this hook to set up some additional queues beyond the ones
that are set up by default in the base service code.
Change-Id: I5bf7795fca21627566ef4f688d45dc83bb953d1b
|
Now that options have all moved from nova.flags to nova.config, we can
safely remove the nova.flags imports and replace them with nova.config
imports.
Change-Id: Ic077a72dd6419bbf1e1babe71acfa43c4e8b55c8
|
This adds an rpc call for compute and network that will return
the eventlet_backdoor port for the service.
Change-Id: I95fdb5ca9bce9f3128300e3b5601fb2b2fc5e82f
Signed-off-by: Matthew Treinish <treinish@linux.vnet.ibm.com>
|
Part 1 of 6: blueprint general-bare-metal-provisioning-framework.
This patch includes updates on scheduler and compute codes for
multiple capabilities. This feature is needed in bare-metal
provisioning which is implemented in later patches --- a bare-metal
nova-compute manages multiple bare-metal nodes where instances are
provisioned. Nova DB's compute_nodes entry needs to be created for
each bare-metal node, and a scheduler can choose an appropriate
bare-metal node to provision an instance.
With this patch, one service entry with multiple compute_node entries
can be registered by nova-compute. A distinct 'node name' is given for
each node and is stored at compute_node['hypervisor_hostname'].
We also added a new column "node" to the "instances" table in Nova DB to
associate instances with compute_node. FilterScheduler puts <nodename>
into the column when it provisions the instance, and nova-compute
respects <nodename> when running/stopping instances and when calculating
resources.
Also, 'capability' is extended from a dictionary to a list of
dictionaries to describe the multiple capabilities of the multiple
nodes.
Change-Id: I527febe4dbd887b2e6596ce7226c1ae3386e2ae6
Co-authored-by: Mikyung Kang <mkkang@isi.edu>
Co-authored-by: David Kang <dkang@isi.edu>
Co-authored-by: Ken Igarashi <igarashik@nttdocomo.co.jp>
Co-authored-by: Arata Notsu <notsu@virtualtech.jp>
|
Use the global CONF variable instead of FLAGS. This is purely a cleanup
since FLAGS is already just another reference to CONF.
We leave the nova.flags imports until a later cleanup commit since
removing them may cause unpredictable problems due to config options not
being registered.
Change-Id: Ib110ba8d1837780e90b0d3fe13f8e6b68ed15f65
|
Adds pre_start_hook() and post_start_hook() and fixes a couple of hard
coded binary name checks in service.py
Change-Id: I062790a88ed7f15a6f28961d6ddc1f230e19e0cb
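The hook placement can be sketched as follows. The method names follow the commit; everything else is an illustrative reduction of service.py, with a toy manager to make the ordering observable:

```python
class Service:
    """Run the manager's hooks around the start of message consumption."""

    def __init__(self, manager):
        self.manager = manager
        self.consuming = False

    def start(self):
        self.manager.pre_start_hook()   # before queues are consumed
        self.consuming = True           # stand-in for creating consumers
        self.manager.post_start_hook()  # after consumption has begun

class RecordingManager:
    """Toy manager that records hook invocation order."""
    def __init__(self):
        self.calls = []
    def pre_start_hook(self):
        self.calls.append("pre")
    def post_start_hook(self):
        self.calls.append("post")

svc = Service(RecordingManager())
svc.start()
```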
|
Fixes bug 1071254.
Changes:
* Add new rpc-api (fanout) of compute, "publish_service_capabilities".
This rpc-api urges services to send their capabilities to the scheduler.
* The scheduler calls publish_service_capabilities right after it starts.
With these, the scheduler gets to know the capabilities earlier.
Now we can expect that the scheduler always holds the capabilities, so it
is reasonable to change HostManager to ignore hosts whose capabilities are
"None", since that becomes a rare case; this will make scheduling more
reliable. This will be achieved by another patch.
Change-Id: If6582765011fd5e1b794bfdc068e17630ba381cb
|
Partially addresses bug #1045152
On a heavily loaded compute node, it can be observed that periodic tasks
take so long to run that the report_state() looping call can be blocked from
running long enough that the scheduler thinks the host is dead.
Reduce the chance of this happening by yielding to another greenthread
after each periodic task has completed, and after each iteration of loops
in some methods that scale linearly with the number of instances.
Change-Id: If2b125708da8298b20497e2e08e52280c102f1e1
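The fairness fix boils down to a yield point after each unit of work. Under eventlet the idiom is sleep(0), for which time.sleep(0) stands in here; this is a stdlib-only sketch, not the patch's code:

```python
import time

def run_periodic_tasks(tasks):
    """Run each task, yielding control between tasks so a long task list
    cannot starve the report_state heartbeat greenthread."""
    results = []
    for task in tasks:
        results.append(task())
        time.sleep(0)  # cooperative yield point (eventlet: sleep(0))
    return results
```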
|
For blueprint novaplugins.
Change-Id: Id4a5ae3ebb91f941956e2f73ecfd9ea1d290a235
|
I only just moved logging from nova to common, so behavior should remain the same.
Change-Id: I1d7304ca200f9d024bb7244d25be2f9a670318fb
|
Final patch for blueprint common-rpc.
This patch removes nova.rpc in favor of the copy in openstack-common.
Change-Id: I9c2f6bdbe8cd0c44417f75284131dbf3c126d1dd
|
Part of blueprint versioned-rpc-apis.
One side effect of this change was that nova.scheduler.api was removed
in favor of nova.scheduler.rpcapi. In this case, the api was just a
direct wrapper around rpc usage. For other APIs, I've been following
the pattern that the rpcapi module provides the rpc client wrapper, and
if any other client-side logic is needed, that's where an api module is
used.
Change-Id: Ibd0292936f9afc77aeb5d040660bfa857861eed1
|
Part of blueprint versioned-rpc-apis.
This commit includes the base support for versioned RPC APIs. It
introduces the RpcProxy and RpcDispatcher classes that have common code
for handling versioning on the client and server sides, respectively.
RPC APIs will be converted one at a time using this infrastructure.
Change-Id: I07bd82e9ff60c356123950e466caaffdfce79eba
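The core compatibility rule behind versioned RPC APIs, same major version with the server's minor at least the requested minor, can be written down directly; this is my own condensed version of the check, not the commit's code:

```python
def version_is_compatible(imp_version, version):
    """Can a server implementing imp_version handle a request pinned to
    version? Majors must match exactly; the server's minor must be at
    least the requested minor."""
    imp_major, imp_minor = (int(p) for p in imp_version.split("."))
    major, minor = (int(p) for p in version.split("."))
    return imp_major == major and imp_minor >= minor
```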
|
* Make modules use getLogger(__name__) and log to the result
Change-Id: Ib6d69b4be140ec89affc86ed11e65e422d551df1
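The convention in one self-contained snippet: each module names its logger after itself, so log output and per-module level configuration follow the package hierarchy:

```python
import logging

LOG = logging.getLogger(__name__)  # logger named after this module

def refresh_cache():
    # Emitted under this module's logger, so operators can raise or lower
    # verbosity per module via logging configuration.
    LOG.debug("refreshing cache")
    return True
```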
|
Change-Id: Ida7cf1ff0cbf94ad82c7a75708c79ad7bb27f7fd
|