path: root/libnm/libnm.ver
* nm-setting-bridge: add 'multicast-querier' bridge option [ac/bridge_options] (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* nm-setting-bridge: add 'multicast-query-use-ifaddr' bridge option (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* nm-setting-bridge: add 'multicast-router' bridge option (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  Also add related unit test.

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* nm-setting-bridge: add 'vlan-stats-enabled' bridge option (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* nm-setting-bridge: add 'vlan-protocol' bridge option (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  Also add related unit test.

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* nm-setting-bridge: add 'group-address' bridge option (Antonio Cardace, 2020-04-06; 1 file, +1/-0)

  Also add related unit test.

  https://bugzilla.redhat.com/show_bug.cgi?id=1755768
* Add domain_match mode for wifi certificate domain comparison (Niklas Goerke, 2020-03-23; 1 file, +2/-0)

  https://gitlab.freedesktop.org/NetworkManager/NetworkManager/issues/308
  https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/437
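The difference between a suffix match and the new full-domain match can be sketched as follows. This is illustrative logic only, not libnm's or wpa_supplicant's implementation; the two helper names are invented for the example:

```python
def domain_suffix_match(cert_domain: str, pattern: str) -> bool:
    # Suffix matching: the pattern must match whole labels from the
    # right, so "example.com" accepts "radius.example.com".
    cd = cert_domain.lower().split(".")
    p = pattern.lower().split(".")
    return len(cd) >= len(p) and cd[-len(p):] == p

def domain_match(cert_domain: str, pattern: str) -> bool:
    # Full-domain matching: the certificate domain must equal the
    # configured domain exactly (case-insensitively).
    return cert_domain.lower() == pattern.lower()
```

With the pattern "example.com", a certificate for "radius.example.com" passes the suffix match but fails the full-domain match, which is the stricter behavior the new mode provides.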
* libnm: add nm_client_dbus_set_property() API (Thomas Haller, 2020-03-23; 1 file, +2/-0)

  Similar to nm_client_dbus_call(), but useful for setting a D-Bus property on NetworkManager's D-Bus interface.

  Note that we currently have various synchronous APIs for setting D-Bus properties (like nm_client_networking_set_enabled()). Synchronous API does not play well with the content of NMClient's cache and was thus deprecated. However, until now no async variant existed. Instead of adding multiple async operations, I think it should be sufficient to add only nm_client_dbus_set_property(). It's still reasonably convenient to use for setting a property.
* libnm: add nm_client_dbus_call() API (Thomas Haller, 2020-03-23; 1 file, +2/-0)

  Add an API for calling D-Bus methods on arbitrary objects of NetworkManager's D-Bus interface. Of course, this is basically just a call to g_dbus_connection_call(), using the current name owner, nm_client_get_dbus_connection() and nm_client_get_main_context(). All of this could also be achieved without this new API. However, nm_client_dbus_call() also gracefully handles the case where the current name owner is %NULL.

  It's a valid concern whether such API is useful, as users already have all the pieces to do it themselves. I think it is.
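The main convenience the commit describes is the graceful handling of a missing name owner. A minimal sketch of that guard, in Python with invented names (the real API wraps g_dbus_connection_call() in C):

```python
class DBusCallError(Exception):
    """Raised when the call cannot even be attempted."""

def client_dbus_call(name_owner, call_fn, object_path, interface, method, args):
    """Sketch of nm_client_dbus_call()'s value-add: if the service
    currently has no name owner (NetworkManager is not running), fail
    with a clean error instead of dispatching a doomed call.
    call_fn stands in for the underlying connection-call primitive."""
    if name_owner is None:
        raise DBusCallError("NetworkManager is not running (no name owner)")
    return call_fn(name_owner, object_path, interface, method, args)
```

Without the guard, the caller would have to track the name owner itself before every call; with it, the "service not running" case becomes an ordinary error result.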
* nm-setting-bond: add API to libnm to get the normalized bond option value (Antonio Cardace, 2020-03-06; 1 file, +1/-0)

  Add nm_setting_bond_get_option_normalized(). The purpose of this API is to retrieve a bond option's normalized value, i.e. the value that NetworkManager will actually apply to the bond when activating the connection. This takes into account the default values that NM assumes for some options.

  For example, if you create a connection:

    $ nmcli c add type bond con-name nm-bond ifname bond0 bond.options mode=0

  calling nm_setting_bond_get_option_normalized(s_bond, "miimon") would return "100": even if not specified, NetworkManager enables miimon for bond connections.

  Another example:

    $ nmcli c add type bond con-name nm-bond ifname bond0 bond.options mode=0,arp_interval=100

  Here the same call would return NULL, because NetworkManager disables miimon if 'arp_interval' is set explicitly but 'miimon' is not.
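The normalization rule described for 'miimon' can be sketched directly from the commit message. This is a toy model of just that one rule, not the real function, which handles many more options:

```python
def get_option_normalized(options: dict, name: str):
    """Sketch of the 'miimon' rule that nm_setting_bond_get_option_normalized()
    is described to apply: NetworkManager assumes miimon=100 by default,
    unless arp_interval is set explicitly while miimon is not, in which
    case miimon is disabled (None here, NULL in the C API)."""
    if name in options:
        return options[name]
    if name == "miimon":
        if "arp_interval" in options:
            return None   # explicit arp monitoring disables miimon
        return "100"      # NM's assumed default
    return None
```

With the options from the first nmcli example above, the sketch returns "100"; with arp_interval also set, it returns None, matching the two examples in the message.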
* libnm: move nm_setting_ip6_config_get_ra_timeout() to "libnm_1_22_8" symbol version (Thomas Haller, 2020-02-17; 1 file, +6/-2)

  nm_setting_ip6_config_get_ra_timeout() was backported to the nm-1-22 branch and will be released as 1.22.8. As such, on the stable branch the symbol will be placed in a separate symbol version ("libnm_1_22_8").

  To support the upgrade path from 1.22.8+ to 1.23+, we want this symbol also present on master. At that point, we don't need to duplicate the symbol. Just add the same linker symbol version also to master.
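Since this file (libnm.ver) is the linker version script in question, the change amounts to adding a version stanza along these lines. The stanza shape follows standard GNU ld version-script syntax; the exact predecessor version node shown is an assumption for illustration:

```
libnm_1_22_8 {
global:
	nm_setting_ip6_config_get_ra_timeout;
} libnm_1_22_0;
```

Because the same stanza exists on both the stable branch and master, a binary linked against 1.22.8 finds the identical versioned symbol after upgrading to 1.23+.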
* libnm,cli,ifcfg-rh: add ipv6.ra-timeout configuration option (Thomas Haller, 2020-02-17; 1 file, +1/-0)
* libnm/secret-agent: rework NMSecretAgentOld (Thomas Haller, 2020-01-28; 1 file, +4/-0)

  Note that the name "NMSecretAgentOld" comes from when libnm was forked from libnm-glib. There was a plan to rework the secret agent API and replace it with a better one. That didn't happen (yet); instead, our one and only agent implementation is still lacking. Don't add a new API; instead, try to improve the existing one without breaking existing users. Just get over the fact that the name "NMSecretAgentOld" is ugly.

  Also note how nm-applet uses NMSecretAgentOld. It subtypes a class AppletAgent. The constructor applet_agent_new() calls the synchronous g_initable_init() initialization with auto-register enabled. As it was, g_initable_init() would call nm_secret_agent_old_register(), and if the "Register" call failed, initialization failed for good. There are even unit tests that test this behavior. This is bad behavior: it means that when you start nm-applet without NetworkManager running, it will fail to create the AppletAgent instance. It would hence be the responsibility of the applet to recover from this situation (e.g. by retrying after a timeout or watching the D-Bus name owner). Of course, nm-applet doesn't do that and won't recover from such a failure.

  NMSecretAgentOld must try hard not to fail and must recover automatically. The user of the API is not interested in implementing the registration, unregistration and retry handling. Instead, it should just work best-effort and transparently to the user of the API.

  Differences:

  - no longer use gdbus-codegen generated bindings; use GDBusConnection directly instead. The generated proxies complicate the code by introducing an additional, stateful layer.

  - properly handle GMainContext and synchronous initialization by using an internal GMainContext. With this, NMSecretAgentOld can be used in a multi-threaded context with a separate GMainContext. This does not mean that the object itself became thread safe, but the GMainContext gives the means to coordinate multi-threaded access.

  - there are no more blocking calls except g_initable_init(), which iterates an internal GMainContext until initialization completes.

  - obtaining the Unix user ID with "GetConnectionUnixUser" to authenticate the server is now done asynchronously and only once per name owner.

  - NMSecretAgentOld now registers/exports the Agent D-Bus object already during initialization and stays registered as long as the instance is alive. This is because registering a D-Bus object usually does not fail, unless the D-Bus path is already taken. Such an error would mean that another agent is registered for the same GDBusConnection, which would likely be a bug in the caller. Hence, such an issue is truly non-recoverable and should be reported early to the user. There is a change in behavior compared to before, where the D-Bus object would only be registered while the instance was enabled. This makes a difference if the user intended to keep the NMSecretAgentOld instance around in an unregistered state. Note that nm_secret_agent_old_destroy() was added to really unregister the D-Bus object; a destroyed instance can no longer be registered.

  - the API no longer fully exposes the current registration state. The user either enables or disables the agent. Then, in the background, NMSecretAgentOld will register and serve requests as they come. It will also always automatically re-register, and it can de facto no longer fail. That is, there might be a failure to register, the NetworkManager peer might not be authenticated (non-root), there might be some other error, or NetworkManager might not be running. But such errors are not exposed to the user. The instance is just not able to provide the secrets in those cases, but it may recover if the problem gets resolved.

  - in particular, it makes no sense that nm_secret_agent_old_register*() fails, returns an error, or waits until registration is complete. This API is now only there to enable/disable the agent. It is idempotent and won't fail (there is a catch, see the next point). In particular, nm_secret_agent_old_unregister*() cannot fail anymore.

  - however, with the previous point there is a problem/race. When you create an NMSecretAgentOld instance and immediately afterwards activate a profile, you want to be sure that the registration is complete first. Otherwise, NetworkManager might fail the activation because no secret agent is registered yet. A partial solution is that g_initable_init()/g_async_initable_init_async() block until registration is complete (with or without success). That means, if NetworkManager is running, initializing the NMSecretAgentOld will wait until registration is complete (or failed). However, that does not solve the race if NetworkManager was not running when the instance was created. To solve that race, the user may call nm_secret_agent_old_register_async() and wait for the command to finish before starting to activate. While async registration no longer fails (in the sense of leaving the agent permanently disconnected), it tries to ensure that we are successfully registered and ready to serve requests. By using this API correctly, the race can be avoided and the user can know that the instance is now ready to serve requests.
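The enable/disable-plus-auto-re-register behavior described above can be modeled as a small state machine. This is a toy sketch of the semantics, with invented names; it is not the NMSecretAgentOld implementation:

```python
class AgentState:
    """Toy model of the reworked NMSecretAgentOld: the user only
    enables/disables the agent; (re-)registration toward the current
    NetworkManager name owner happens automatically and never fails
    permanently."""

    def __init__(self):
        self.enabled = False
        self.registered = False
        self.name_owner = None  # D-Bus name owner of NetworkManager, or None

    def set_enabled(self, enabled: bool):
        # Idempotent; never raises, mirroring register*/unregister*.
        self.enabled = enabled
        self._sync()

    def on_name_owner_changed(self, owner):
        # NetworkManager restarted or stopped: re-register automatically.
        self.name_owner = owner
        self._sync()

    def _sync(self):
        # Registered iff enabled and NetworkManager is currently running.
        self.registered = self.enabled and self.name_owner is not None
```

The point of the model: the caller never observes a registration error, only an agent that is (temporarily) unable to serve secrets while NetworkManager is down, and that recovers by itself when the name owner reappears.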
* libnm/secret-agent: add dbus-connection and main-context properties to NMSecretAgentOld (Thomas Haller, 2020-01-28; 1 file, +2/-0)

  NMSecretAgentOld is built very much around a GDBusConnection, and GDBusConnection is built around GMainContext. That means an NMSecretAgentOld instance is strongly related to these two, because NMSecretAgentOld registers to signals on D-Bus using the GDBusConnection. Hence, that GDBusConnection instance and the calling GMainContext become central to the NMSecretAgentOld instance. Also, the GMainContext is the way to synchronize access to the NMSecretAgentOld; used properly, this allows using the API in a multi-threaded context.

  Expose these two in the public API. Since NMSecretAgentOld is part of libnm and supposed to provide a flexible API, this is just useful to have. Also allow providing a GDBusConnection as a construct-only property; this way, the instance can be used independently of g_bus_get() and the user has full control. There is no setter for the GMainContext, because it just takes the g_main_context_get_thread_default() instance at the time of construction.
* core,libnm: add VRF support (Beniamino Galvani, 2020-01-14; 1 file, +2/-0)

  Add VRF support to the daemon. When the device we are activating is a VRF or a VRF's slave, put routes in the table specified by the VRF connection. Also, introduce a VRF device type in libnm.
* libnm-core,cli: add VRF setting (Beniamino Galvani, 2020-01-14; 1 file, +3/-0)

  Add new VRF setting and connection types to libnm-core and support them in nmcli.
* client: add nm_client_get_object_by_path() and nm_object_get_client() API (Thomas Haller, 2020-01-08; 1 file, +2/-0)

  When iterating the GMainContext of the NMClient instance, D-Bus events get processed. That means every time you iterate the context (or "return to the main loop"), the content of the cache might change completely. It makes sense to keep a reference to an NMObject instance, do something, and afterwards check whether the instance can still be found in the cache. Add an API for that: nm_object_get_client() lets the caller know whether the object is still cached.

  Likewise, while NMClient abstracts D-Bus, it should still provide a way to look up an NMObject by D-Bus path. Add nm_client_get_object_by_path() for that.

  https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/384
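The lookup semantics described above (cache keyed by D-Bus path; an evicted object reports a NULL client) can be sketched with two toy classes. The class names are invented for the example; this is not libnm code:

```python
class CachedObject:
    """Stands in for NMObject: knows which client cache, if any, it is in."""
    def __init__(self):
        self.client = None  # nm_object_get_client() analogue

class ClientCache:
    """Stands in for NMClient's object cache, keyed by D-Bus object path."""
    def __init__(self):
        self._by_path = {}

    def add(self, path: str, obj: CachedObject):
        obj.client = self
        self._by_path[path] = obj

    def remove(self, path: str):
        obj = self._by_path.pop(path)
        obj.client = None  # object outlives the cache entry, but reports it

    def get_object_by_path(self, path: str):
        # nm_client_get_object_by_path() analogue; None when not cached.
        return self._by_path.get(path)
```

The pattern for callers is then: keep a reference across a main-loop iteration, and afterwards check `obj.client is not None` (or re-look-up by path) before trusting the object's state.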
* libnm: move nm_client_get_capabilities() to separate linker version (Thomas Haller, 2019-12-24; 1 file, +5/-1)

  nm_client_get_capabilities() was backported to 1.22.2. Add it to the appropriate linker version.

  Officially (and according to the docs), nm_client_get_capabilities() still first appears in libnm 1.24.0. However, as it got backported to 1.22.2, it needs to be part of a different symbol version on 1.22. Instead of adding the symbol twice (once for libnm_1_24_0 and once for libnm_1_22_2), move it only to the libnm_1_22_2 symbol version, also on master.
* libnm: add nm_client_get_capabilities() to expose server Capabilities (Thomas Haller, 2019-12-21; 1 file, +1/-0)

  I hesitated to add this to libnm, because it's hardly used. However, we already fetch the property during GetManagedObjects(), so we should make it accessible instead of requiring the user to make another D-Bus call.
* libnm: allow to enable/disable fetching of permissions in NMClient (Thomas Haller, 2019-12-10; 1 file, +1/-0)

  Currently, NMClient by default always fetches the permissions ("GetPermissions()") and refreshes them on the "CheckPermissions" signal. Fetching permissions is relatively expensive, while they are not used most of the time. Allow the user to opt out of this.

  For that, add an NMClientInstanceFlags flag to enable/disable automatic fetching. Also add a "permissions-state" property that allows the user to understand whether the cached permissions are up to date or not.

  This is a bit of an awkward API for handling this. E.g. you cannot explicitly request permissions; you can only enable/disable fetching them, and then watch the "permissions-state" property to know whether you are ready. It's done this way because it fits the previous model and extends the API with a relatively small number of new functions and properties.
* libnm: add NMClient:instance-flags property (Thomas Haller, 2019-12-10; 1 file, +6/-0)

  Add a flags property to control the behavior of NMClient. Possible future use cases:

  - currently, NMClient always automatically fetches permissions. Often that is not used and the user could opt out of it.

  - currently, using sync init creates an internal GMainContext. This has an overhead and may be undesirable. We could implement another "sync" initialization that would merely iterate the caller's mainloop until the initialization completes. A flag would allow opting in.

  - currently, NMClient always fetches all connection settings automatically. Via a flag the user could opt out of that. Instead, NMClient could provide an API so the user can request settings as they are needed.
* libnm: add nm_ip_address_cmp_full() function (Thomas Haller, 2019-11-28; 1 file, +2/-0)

  Not being able to compare two NMIPAddress instances is a major limitation. Add nm_ip_address_cmp_full().

  The reason for adding a "cmp()" function instead of an "equals()" function is that cmp is more useful. We only want to add one of the two, so choose the more powerful one. Yes, usually it's also not the variant we want or the variant that is convenient to use; such is life.

  Compare this to:

  - nm_ip_route_equal_full(), which is an equal() method and not a cmp().
  - nm_ip_route_equal_full(), which has a guint flags argument instead of a typedef for an enum with a proper generated GType.
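The "cmp is more powerful than equals" argument can be shown in a few lines: equality falls out of a comparator as `cmp(a, b) == 0`, while a comparator additionally gives a total order for sorting and searching. A toy sketch over (family, address, prefix) tuples, with invented helper names:

```python
def ip_address_cmp(a: dict, b: dict) -> int:
    """Toy three-way comparison (-1, 0, 1) over address fields, in the
    spirit of nm_ip_address_cmp_full() (which also takes compare flags)."""
    ka = (a["family"], a["address"], a["prefix"])
    kb = (b["family"], b["address"], b["prefix"])
    return (ka > kb) - (ka < kb)

def ip_address_equal(a: dict, b: dict) -> bool:
    # equals() is derivable from cmp(); the reverse is not true.
    return ip_address_cmp(a, b) == 0
```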
* libnm: add ipvx.dhcp-hostname-flags properties (Beniamino Galvani, 2019-11-28; 1 file, +2/-0)

  When using the dhclient DHCP backend, users can tweak the behavior in the dhclient configuration file. One of the options that was reported as useful in the past was the FQDN flags [1] [2]. Add native support for FQDN flags to NM by introducing new ipv{4,6}.dhcp-hostname-flags properties.

  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1684595
  [2] https://bugzilla.redhat.com/show_bug.cgi?id=1255507
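The properties hold a bitmask of DHCP FQDN option flags. A sketch of how such a flags value combines, using names that mirror libnm's NMDhcpHostnameFlags; the numeric values here are assumptions for illustration, not taken from the libnm headers:

```python
from enum import IntFlag

class DhcpHostnameFlags(IntFlag):
    """Illustrative FQDN flag bits for ipv{4,6}.dhcp-hostname-flags.
    Names mirror NMDhcpHostnameFlags; values are assumed for the sketch."""
    NONE = 0
    FQDN_SERV_UPDATE = 0x1   # ask the DHCP server to perform the DNS update
    FQDN_ENCODED = 0x2       # use canonical wire-format encoding of the FQDN
    FQDN_NO_UPDATE = 0x4     # request that no DNS update be performed
```

A profile would then set a combination such as FQDN_SERV_UPDATE | FQDN_ENCODED, which previously required hand-editing the dhclient configuration.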
* libnm: add nm_client_get_main_context() function (Thomas Haller, 2019-11-26; 1 file, +1/-0)

  The NMClient is associated with a certain GMainContext. Add a getter function that exposes that context. The context is really not internal API of NMClient, because the user must iterate this context and be aware of it.
* libnm: fix leaking internal GMainContext for synchronously initialized NMClient (Thomas Haller, 2019-11-26; 1 file, +1/-0)

  NMClient makes asynchronous D-Bus calls via g_dbus_connection_call(). This references the current GMainContext to later invoke the asynchronous callback. Even when we cancel the asynchronous call, the callback will still be invoked (later) to complete the request. In particular, this means that when we destroy (unref) an NMClient, there are quite possibly pending requests in the GMainContext. Although they are cancelled, they keep the GMainContext alive.

  With synchronous initialization, we have an internal GMainContext. When we destroy the NMClient, we cannot just unhook the integrated source; instead, we need to keep it integrated in the caller's main context as long as there are pending requests. Add a mechanism to track those pending requests and fix the leak for the internal GMainContext. Also expose the same mechanism to the user via a new API called nm_client_get_context_busy_watcher(). This allows the user to know when it can stop iterating the main context and when all resources are reclaimed.

  For example, the following will lead to a crash:

    for i in range(1, 2000):
        nmc = NM.Client.new(None)

  This creates a number of NMClient instances and destroys them again. Note that here the GMainContext is never iterated, because synchronous initialization does not iterate the caller's context. So, while we correctly unref and dispose the created NMClient instances, there are pending requests left in the inner GMainContext. These pile up, and soon the program will crash because it runs out of file descriptors. We can have a similar problem with asynchronous initialization, when we create a new GMainContext per client and don't iterate it after we are done with the client.

  Note that this patch does not avoid the problem in general. The problem cannot be avoided; the user must iterate the main context at some point, otherwise resources (memory and file descriptors) will be leaked.

  Fixes: ce0e898fb476 ('libnm: refactor caching of D-Bus objects in NMClient')
  https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/347
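The "context busy watcher" idea maps naturally onto object lifetime: pending requests keep a watcher object alive, and its finalization signals that iteration may stop. A Python model of those semantics (relying on CPython's deterministic reference counting; names invented for the sketch):

```python
import weakref

class ContextBusyWatcher:
    """Stand-in for the object returned by nm_client_get_context_busy_watcher():
    each pending request holds a strong reference to it; its finalization
    means no requests are pending and iteration of the context may stop."""

events = []
watcher = ContextBusyWatcher()
weakref.finalize(watcher, events.append, "context-idle")

pending_requests = [watcher, watcher]  # two in-flight async calls hold refs
del watcher                            # the client itself dropped its ref

pending_requests.pop()                 # first request completes
still_busy = not events                # watcher alive: keep iterating

pending_requests.pop()                 # last request completes
now_idle = bool(events)                # finalized: safe to stop iterating
```

In libnm the same effect is achieved with GObject weak references on the watcher; the caller iterates the GMainContext until the watcher is disposed.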
* libnm: refactor caching of D-Bus objects in NMClient (Thomas Haller, 2019-11-25; 1 file, +1/-0)
No longer use GDBusObjectManagerClient and gdbus-codegen generated classes for the NMClient cache. Instead, use GDBusConnection directly and a custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager data.

CHANGES
-------

- This is a complete rework. I think the previous implementation was difficult to understand. There were unfixed bugs and nobody understood the code well enough to fix them. Maybe somebody out there understood the code, but I certainly did not. At least nobody provided patches to fix those issues. I do believe that this implementation is more straightforward and easier to understand. It removes a lot of layers of code. Whether this claim of simplicity is true, each reader must decide for himself/herself. Note that it is still fairly complex.

- There was a lingering performance issue with large numbers of D-Bus objects. The patch tries hard to make the implementation scale well. Of course, when we cache N objects that have N-to-M references to each other, we are still fundamentally O(N*M) in runtime and memory consumption (with M being the number of references between objects). But each part should behave efficiently and well.

- Play well with GMainContext. libnm code (NMClient) is generally not thread safe. However, it should work to use multiple instances in parallel, as long as each access to an NMClient goes through the caller's GMainContext. This follows glib's style and effectively allows using NMClient in a multi-threaded scenario. This implies sticking to a main context upon construction and ensuring that callbacks are only invoked when iterating that context. Also, NMClient itself shall never iterate the caller's context. This also means libnm must never use g_idle_add() or g_timeout_add(), as those enqueue sources in the g_main_context_default() context.

- Get the ordering of messages right. All events are consistently enqueued in a GMainContext and processed strictly in order. For example, previously "nm-object.c" tried to combine signals and emit them in an idle handler. That is wrong: signals must be emitted in the right order and when they happen. Note that when using GInitable's synchronous initialization to initialize the NMClient instance, NMClient internally still operates fully asynchronously. In that case NMClient has an internal main context.

- NMClient takes over most of the functionality. When using D-Bus' ObjectManager interface, one needs to handle basically the entire state of the D-Bus interface. That cannot be separated well into distinct parts, and even if you try, you just end up having closely related code in different source files. Spreading related code does not make it easier to understand, on the contrary. That means NMClient is inherently complex, as it contains most of the logic. I think that is not avoidable, but it's not as bad as it sounds.

- NMClient processes D-Bus messages and state changes in separate steps. First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and keeps track of the changed data. Then we update the GObject instances (_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally, we emit all signals and notifications that were collected (_dbus_handle_changes_commit()).
  Note that, for example, during the initial GetManagedObjects() reply, NMClient receives a large amount of state at once. But we first apply all the changes to our GObject instances before emitting any signals. The result is that signals are always emitted at a moment when the cache is consistent. The unavoidable downside is that when you receive a property-changed signal, possibly many other properties have changed already and more signals are about to be emitted.

- NMDeviceWifi no longer modifies the content of the cache from the client side during poke_wireless_devices_with_rf_status(). The content of the cache should be determined by D-Bus alone and follow what the NetworkManager service exposes. Local modifications should be avoided.

- This aims to bring no API/ABI change, though it does of course bring various subtle changes in behavior. Those should all be for the better, but the goal is not to break any existing clients. This does change internal (albeit externally visible) API, like dropping the NM_OBJECT_DBUS_OBJECT_MANAGER property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.

- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin and NMSecretAgentOld. These are independent of NMClient/NMObject and should be reworked separately.

- While we no longer use generated classes from gdbus-codegen, we don't need more glue code than before. Also before, we constructed NMPropertiesInfo and had a large amount of code to propagate properties from NMDBus* to NMObject. That got completely reworked, but did not fundamentally change: you still need about the same effort to create the NMLDBusMetaIface. Not using generated bindings did not make anything worse (which tells about the usefulness of generated code, at least in the way it was used).

- NMLDBusMetaIface and other meta data is static and immutable. This avoids copying it around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U() have compile-time checks to ensure the property types match. It's pretty hard to misuse them, because otherwise it won't compile.

- The meta data now explicitly encodes the expected D-Bus types and makes sure never to accept wrong data. That would only matter when the server (accidentally or intentionally) exposes unexpected types on D-Bus. I don't think that was previously ensured in all cases. For example, demarshal_generic() only cared about the GObject property type; it didn't know the expected D-Bus type.

- Previously, GDBusObjectManager would sometimes emit warnings (g_log()). Those probably indicated real bugs. In any case, it prevented us from running CI with G_DEBUG=fatal-warnings, because there would be just too many unrelated crashes. Now we log debug messages that can be enabled with "LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error. Together with G_DEBUG=fatal-warnings, this turns them into assertions. Note that such "assertion failures" might also happen because of a server bug (or change). Thus these are not common assertions that indicate a bug in libnm and are not armed unless explicitly requested. In our CI we should now always run with LIBNM_CLIENT_DEBUG=warning,error and G_DEBUG=fatal-warnings to catch bugs. Note that currently NetworkManager has bugs in this regard, so enabling this will result in assertion failures. That should be fixed first.

- Note that this changes the order in which we emit "notify:devices" and "device-added" signals. I think it makes the most sense to emit first "device-removed", then "notify:devices", and finally "device-added" signals. This changes behavior from commit 52ae28f6e5bf ('libnm: queue added/removed signals and suppress uninitialized notifications'), but I don't think that users should actually rely on the order. Still, the new order makes the most sense to me.
- In NetworkManager, profiles can be invisible to the user by setting "connection.permissions". Such profiles are hidden by NMClient's nm_client_get_connections() and their "connection-added"/"connection-removed" signals. Note that NMActiveConnection's nm_active_connection_get_connection() and NMDevice's nm_device_get_available_connections() still expose such hidden NMRemoteConnection instances. This behavior was preserved.

NUMBERS
-------

I compared 3 versions of libnm:

  [1] 962297f9085d, current tip of the nm-1-20 branch
  [2] 4fad8c7c642e, current master, immediate parent of this patch
  [3] this patch

All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31. The libraries were built with

  $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug

Note that the RPM build already stripped the library.

---

N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue on Fedora 31 generating wrong ELF notes. Usually libnm is smaller, but in these tests it had large (and bogus) ELF notes. Anyway, the point is to show the relative sizes, so it doesn't matter.

  [1] 4075552 (102.7%)
  [2] 3969624 (100.0%)
  [3] 3705208 ( 93.3%)

---

N2) `size /usr/lib64/libnm.so.0.1.0`:

         text              data            bss            dec               hex     filename
  [1] 1314569 (102.0%)  69980 ( 94.8%)  10632 ( 80.4%)  1395181 (101.4%)  1549ed  /usr/lib64/libnm.so.0.1.0
  [2] 1288410 (100.0%)  73796 (100.0%)  13224 (100.0%)  1375430 (100.0%)  14fcc6  /usr/lib64/libnm.so.0.1.0
  [3] 1229066 ( 95.4%)  65248 ( 88.4%)  13400 (101.3%)  1307714 ( 95.1%)  13f442  /usr/lib64/libnm.so.0.1.0

---

N3) Performance test with test-client.py.
With checkout of [2], run

```
prepare_checkout() {
    rm -rf /tmp/nm-test && \
    git checkout -B test 4fad8c7c642e && \
    git clean -fdx && \
    ./autogen.sh --prefix=/tmp/nm-test && \
    make -j 5 install && \
    make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
    NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
    for i in {1..10}; do
        NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
    done
    echo "done!"
}
prepare_checkout
prepare_test
time do_test
```

  [1] real 2m14.497s (101.3%)   user 5m26.651s (100.3%)   sys 1m40.453s (101.4%)
  [2] real 2m12.800s (100.0%)   user 5m25.619s (100.0%)   sys 1m39.065s (100.0%)
  [3] real 1m54.915s ( 86.5%)   user 4m18.585s ( 79.4%)   sys 1m32.066s ( 92.9%)

---

N4) Performance. Run NetworkManager from build [2] and set up a large number of profiles (551 profiles and 515 devices, mostly unrealized). This setup is already at the edge of what NetworkManager currently can handle. Of course, that is a different issue; here we just check how long plain `nmcli` takes on the system.
```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}
do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}
do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}
do_test
```

[1] Performance counter stats for 'nmcli' (50 runs):

          456.33 msec task-clock:u       # 1.093 CPUs utilized         ( +- 0.44% )
               0      context-switches:u # 0.000 K/sec
               0      cpu-migrations:u   # 0.000 K/sec
           5,900      page-faults:u      # 0.013 M/sec                 ( +- 0.02% )
   1,408,675,453      cycles:u           # 3.087 GHz                   ( +- 0.48% )
   1,594,741,060      instructions:u     # 1.13 insn per cycle         ( +- 0.02% )
     368,744,018      branches:u         # 808.061 M/sec               ( +- 0.02% )
       4,566,058      branch-misses:u    # 1.24% of all branches       ( +- 0.76% )
         0.41761 +- 0.00282 seconds time elapsed                       ( +- 0.68% )

[2] Performance counter stats for 'nmcli' (50 runs):

          477.99 msec task-clock:u       # 1.088 CPUs utilized         ( +- 0.36% )
               0      context-switches:u # 0.000 K/sec
               0      cpu-migrations:u   # 0.000 K/sec
           5,948      page-faults:u      # 0.012 M/sec                 ( +- 0.03% )
   1,471,133,482      cycles:u           # 3.078 GHz                   ( +- 0.36% )
   1,655,275,369      instructions:u     # 1.13 insn per cycle         ( +- 0.02% )
     382,595,152      branches:u         # 800.433 M/sec               ( +- 0.02% )
       4,746,070      branch-misses:u    # 1.24% of all branches       ( +- 0.49% )
         0.43923 +- 0.00242 seconds time elapsed                       ( +- 0.55% )

[3] Performance counter stats for 'nmcli' (50 runs):

          352.36 msec task-clock:u       # 1.027 CPUs utilized         ( +- 0.32% )
               0      context-switches:u # 0.000 K/sec
               0      cpu-migrations:u   # 0.000 K/sec
           4,790      page-faults:u      # 0.014 M/sec                 ( +- 0.26% )
   1,092,341,186      cycles:u           # 3.100 GHz                   ( +- 0.26% )
   1,209,045,283      instructions:u     # 1.11 insn per cycle         ( +- 0.02% )
     281,708,462      branches:u         # 799.499 M/sec               ( +- 0.01% )
       3,101,031      branch-misses:u    # 1.10% of all branches       ( +- 0.61% )
         0.34296 +- 0.00120 seconds time elapsed                       ( +- 0.35% )

---

N5) Same setup as N4), but run `PAGER= /bin/time -v nmcli`:

[1] Command being timed: "nmcli"
    User time (seconds): 0.42
    System time (seconds): 0.04
    Percent of CPU this job got: 107%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 34456
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 6128
    Voluntary context switches: 1298
    Involuntary context switches: 1106
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

[2] Command being timed: "nmcli"
    User time (seconds): 0.44
    System time (seconds): 0.04
    Percent of CPU this job got: 108%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 34452
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 6169
    Voluntary context switches: 1849
    Involuntary context switches: 142
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

[3] Command being timed: "nmcli"
    User time (seconds): 0.32
    System time (seconds): 0.02
    Percent of CPU this job got: 102%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 29196
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 5059
    Voluntary context switches: 919
    Involuntary context switches: 685
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

---

N6) Same setup as N4), but run `nmcli monitor` and look at `ps aux` for the RSS size.

      USER  PID      %CPU %MEM VSZ    RSS   TTY    STAT START TIME COMMAND
  [1] me    1492900  21.0 0.2  461348 33248 pts/10 Sl+  15:02 0:00 nmcli monitor
  [2] me    1490721   5.0 0.2  461496 33548 pts/10 Sl+  15:00 0:00 nmcli monitor
  [3] me    1495801  16.5 0.1  459476 28692 pts/10 Sl+  15:04 0:00 nmcli monitor
* libnm: export interface flagsBeniamino Galvani2019-11-221-0/+1
| | | | Add libnm support for the new InterfaceFlags property of NMDevice.
* core: export interface flags of devicesBeniamino Galvani2019-11-221-0/+1
| | | | | | Add a new read-only "InterfaceFlags" property to the Device interface to export via D-Bus kernel flags and possibly other NM specific flags. At the moment IFF_UP and IFF_LOWERUP are implemented.
* libnm: adjust symbol versioning after backporting 802-1x.optional to 1.20.6Beniamino Galvani2019-11-061-1/+5
| | | | | | NM 1.22 is not released yet and 1.20.6 will happen before 1.22.0, so we can introduce the new API with version libnm_1_20_6 in both releases without having duplicate symbols on 1.22.
* libnm: add NM_CLIENT_DBUS_NAME_OWNER propertyThomas Haller2019-10-181-0/+1
| | | | | | | | It's not yet implemented. But obviously it's interesting to get the name owner to which the NMClient is currently connected. Not only that: the name-owner property also tells whether NM is currently running or not.
* libnm: add NM_CLIENT_DBUS_CONNECTION propertyThomas Haller2019-10-181-0/+1
| | | | | | The used GDBusConnection should be configurable when creating the NMClient instance. Automatically choosing one of the g_bus_get() singletons is fine by default, but it's an unnecessary limitation.
* all: add 802-1x.optional propertyBeniamino Galvani2019-10-151-0/+1
| | | | | | Introduce a 802-1x.optional boolean property that can be used to let the connection succeed even after an authentication timeout or failure.
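As a hedged illustration of how such a setting property is toggled from nmcli (the profile name "corp-wifi" is hypothetical), the property lives in the 802-1x setting:

```shell
# Hypothetical profile name. With 802-1x.optional enabled, an 802.1X
# authentication timeout or failure no longer fails the whole activation.
nmcli connection modify corp-wifi 802-1x.optional yes
```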
* libnm: add nm_client_reload()Beniamino Galvani2019-09-171-0/+2
| | | | | Introduce libnm API to reload NM configuration through the Reload() D-Bus method.
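At the D-Bus level, nm_client_reload() corresponds to invoking the manager's Reload() method. A minimal sketch using busctl (requires a running NetworkManager and sufficient privileges; the flags word 0 means "reload everything"):

```shell
# Call org.freedesktop.NetworkManager.Reload() directly over D-Bus.
# The trailing "u 0" is the flags argument (0 = reload everything).
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager Reload u 0
```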
* libnm: export reload flagsBeniamino Galvani2019-09-171-0/+1
| | | | | Flags to the manager Reload() method are stable API but not exposed in a public header. Export them.
* setting-gsm: add auto-config propertyLubomir Rintel2019-09-111-0/+5
| | | | | | | | | | This will make NetworkManager look up APN, username, and password in the Mobile Broadband Provider database. It is mutually exclusive with the apn, username and password properties. If any of those is set, the connection will be normalized to auto-config=false. This makes it convenient for the user to turn off the automatism by just setting the apn.
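For instance (connection name "my-mobile" is hypothetical), a GSM profile relying on the provider database might be created like this:

```shell
# Hypothetical connection name; APN, username and password are looked up
# in the Mobile Broadband Provider database instead of being set here.
nmcli connection add type gsm con-name my-mobile ifname '*' gsm.auto-config yes
```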
* wireguard: support configuring policy routing to avoid routing loopsThomas Haller2019-07-291-0/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For WireGuard (like for all IP-tunnels and IP-based VPNs), the IP addresses of the peers must be reached outside the tunnel/VPN itself. For VPN connections, NetworkManager usually adds a direct /32 route to the external VPN gateway to the underlying device. For WireGuard that is not done, because injecting a route to another device is ugly and error prone. Worse: WireGuard with automatic roaming and multiple peers makes this more complicated. This is commonly a problem when setting the default-route via the VPN, but there are also other subtle setups where special care must be taken to prevent such routing loops. WireGuard's wg-quick provides a simple, automatic solution by adding two policy routing rules and relying on the WireGuard packets having a fwmark set (see [1]). Let's also do that. Add new properties "wireguard.ip4-auto-default-route" and "wireguard.ip6-auto-default-route" to enable/disable this. Note that the default value lets NetworkManager automatically choose whether to enable it (depending on whether there are any peers that have a default route). This means, common scenarios should now work well without additional configuration. Note that this is also a change in behavior and upon package upgrade NetworkManager may start adding policy routes (if there are peers that have a default-route). This is a change in behavior, as the user already clearly had this setup working and configured some working solution already. The new automatism picks the rule priority automatically and adds the default-route to the routing table that has the same number as the fwmark. If any of this is unsuitable, then the user is free to disable this automatism. Note that since 1.18.0 NetworkManager supports policy routing (*). 
That means, what this automatism does can also be achieved via explicit configuration of the profile, which gives the user more flexibility to adjust all parameters explicitly. (*) But only since 1.20.0 does NetworkManager support the "suppress_prefixlength" rule attribute, so exactly this rule-based solution cannot be configured with NetworkManager 1.18.0. [1] https://www.wireguard.com/netns/#improved-rule-based-routing
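The wg-quick-style setup that this automatism emulates looks roughly like the following sketch (the fwmark/table number 51820 and the interface name wg0 are illustrative assumptions; NetworkManager picks the actual values):

```shell
# Sketch of wg-quick's rule-based routing (illustrative values, not
# NetworkManager's literal commands). Packets not carrying WireGuard's
# own fwmark look up table 51820, which holds the tunnel default route;
# the suppress_prefixlength 0 rule lets more specific routes in the
# main table win before the default route in table 51820 applies.
ip -4 rule add not fwmark 51820 table 51820
ip -4 rule add table main suppress_prefixlength 0
ip -4 route add default dev wg0 table 51820
```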
* core,libnm: add AddConnection2() D-Bus API to block autoconnect from the startThomas Haller2019-07-251-0/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | It should be possible to add a profile with autoconnect blocked from the start. Update2() has a %NM_SETTINGS_UPDATE2_FLAG_BLOCK_AUTOCONNECT flag to block autoconnect, and so we need something similar when adding a connection. As the existing AddConnection() and AddConnectionUnsaved() API is not extensible, add AddConnection2() that has flags and room for additional arguments. Then add and implement the new flag %NM_SETTINGS_ADD_CONNECTION2_FLAG_BLOCK_AUTOCONNECT for AddConnection2(). Note that libnm's nm_client_add_connection2() API can completely replace the existing nm_client_add_connection_async() call. In particular, it will automatically prefer to call the D-Bus methods AddConnection() and AddConnectionUnsaved(), in order to work with server versions older than 1.20. The purpose of this is that when upgrading the package, the running NetworkManager might still be older than the installed libnm. Since nm_client_add_connection2_finish() also has a result output, the caller needs to decide whether it cares about that result. Hence it has an argument ignore_out_result, which allows falling back to the old API. One might argue that a caller who doesn't care about the output results while still wanting to be backward compatible should choose to call nm_client_add_connection_async() or nm_client_add_connection2(). But instead, it's more convenient if the new function can fully replace the old one, so that the caller does not need to switch which start/finish method to call. https://bugzilla.redhat.com/show_bug.cgi?id=1677068
* libnm,core: Add ConnectivityCheckUri property and accessorsIain Lane2019-07-221-0/+1
| | | | | | | So that applications like GNOME Shell can hit the same URI to show the captive portal login page. https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/209
* libnm,core: add support for "suppress_prefixlength" rule attributeThomas Haller2019-07-161-0/+2
| | | | | | | | | | | | | | | WireGuard's wg-quick configures such rules to avoid routing loops. While we currently don't have an automatic solution for this, at least we should support it via explicit user configuration. One problem is that suppress_prefixlength is relatively new and kernel might not support this attribute. That can lead to odd results, because the profile is valid but it cannot be configured on the current kernel. But this is a general problem, and it would require a general solution. The solution cannot be to only support rule attributes that are supported by the oldest possible kernel. It's not clear how much of a problem there really is, or which general solution is required (if any).
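With the new attribute, such a rule can be expressed as explicit user configuration, for example (the profile name "wg0" and the priority are illustrative assumptions):

```shell
# Add a policy routing rule with the suppress_prefixlength attribute to a
# hypothetical profile; the rule string follows "ip rule add" syntax.
nmcli connection modify wg0 +ipv4.routing-rules \
    "priority 32765 table main suppress_prefixlength 0"
```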
* libnm,cli,ifcfg-rh: add connection:wait-device-timeout propertyThomas Haller2019-07-101-0/+1
| | | | | | | | | | | | | | | | | Initscripts already honor the DEVTIMEOUT variable (rh #1171917). Don't make this a property only supported by initscripts. Every useful property should also be supported by keyfile and it should be accessible via D-Bus. Also, I will soon drop NMSIfcfgConnection, so handling this would require extra code. It's easier when DEVTIMEOUT is a regular property of the connection profile. The property is not yet implemented. ifcfg-rh still uses the old implementation, and keyfile is not yet adjusted. Since both keyfile and ifcfg-rh will both be rewritten soon, this property will be implemented then.
* libnm-core: add ovs-dpdk settingLubomir Rintel2019-06-141-0/+3
|
* libnm: belatedly expose nm_ethtool_optname_is_feature() in libnmThomas Haller2019-06-111-0/+4
| | | | | | | | Also, plan right away to backport this symbol all the way back to 1.14.8. As such, we only need to add it once, with the right linker version "libnm_1_14_8". But still, the symbol first appears in the major release 1.20.0.
* libnm: add nm_setting_ethtool_get_optnames() functionThomas Haller2019-06-111-0/+1
| | | | | | | | | | | | | It's rather limiting if we have no API to ask NMSettingEthtool which options are set. Note that currently NMSettingEthtool only supports offload features. In the future, it should also support other options like coalesce or ring options. Hence, this returns all option names, not only features. If a caller needs to know whether the name is an option name, he/she should call nm_ethtool_optname_is_feature().
* libnm/modem: add APN getterlr/modem-propertiesLubomir Rintel2019-06-051-0/+1
|
* libnm/modem: add network id getterLubomir Rintel2019-06-051-0/+1
|
* libnm/modem: add device id getterLubomir Rintel2019-06-051-0/+5
|
* all: support bridge vlan rangesBeniamino Galvani2019-04-181-1/+1
| | | | | | | | | | | | | In some cases it is convenient to specify ranges of bridge vlans, as already supported by iproute2 and natively by kernel. With this commit it becomes possible to add a range in this way: nmcli connection modify eth0-slave +bridge-port.vlans "100-200 untagged" vlan ranges can't be PVIDs because only one PVID vlan can exist. https://bugzilla.redhat.com/show_bug.cgi?id=1652910 (cherry picked from commit 70935157771b1de39f27d20e50112efcc50d1f5c)
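As a rough illustration of the accepted syntax (this sketch is not NetworkManager's actual parser), a vlan spec such as "100-200 untagged" splits into a start id, an end id, and optional flags:

```shell
# Illustrative parser for a bridge-port.vlans spec like "100-200 untagged";
# NOT NetworkManager's own implementation. Prints "start end flags".
parse_vlan_spec() {
    local spec="$1" range flags start end
    range="${spec%% *}"              # leading token: "100-200" or a single id
    flags="${spec#"$range"}"         # remainder: " untagged", " pvid untagged", or ""
    flags="${flags# }"
    start="${range%%-*}"
    end="${range#*-}"                # equals $start when no '-' is present
    printf '%s %s %s\n' "$start" "$end" "${flags:-none}"
}

parse_vlan_spec "100-200 untagged"   # -> 100 200 untagged
parse_vlan_spec "10 pvid untagged"   # -> 10 10 pvid untagged
```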
* core/qdisc: add support for attributesLubomir Rintel2019-04-121-0/+3
|
* libnm: add API to NMSettingIPConfig for routing rulesThomas Haller2019-03-271-0/+5
|
* libnm: add NMIPRoutingRule APIThomas Haller2019-03-271-1/+45
| | | | | | | | | | | | Add NMIPRoutingRule API with a few basic rule properties. More properties will be added later as we want to support them. Also, add to/from functions for string/GVariant representations. These will be needed to persist/load/exchange rules. The to-string format follows the `ip rule add` syntax, with the aim to be partially compatible. Full compatibility is not possible though, for various reasons (see code comment).
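To give a feel for that to-string format, here is a hedged sketch that composes an `ip rule add`-style string from a few rule properties. It is an illustration of the syntax only, not libnm's actual nm_ip_routing_rule_to_string() implementation:

```shell
# Illustrative composer for an "ip rule add"-style rule string from a few
# properties (priority, from-prefix, table, suppress_prefixlength);
# NOT libnm's actual implementation. Empty arguments are skipped.
rule_to_string() {
    local priority="$1" from="$2" table="$3" suppress="$4" out=""
    [ -n "$priority" ] && out="priority $priority"
    [ -n "$from" ]     && out="$out from $from"
    [ -n "$table" ]    && out="$out table $table"
    [ -n "$suppress" ] && out="$out suppress_prefixlength $suppress"
    # Trim the possible leading space and print the composed string.
    printf '%s\n' "${out# }"
}

rule_to_string 5 "192.168.0.0/16" 100 ""   # -> priority 5 from 192.168.0.0/16 table 100
```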