Commit message | Author | Age | Files | Lines
* iwd: Update KnownNetwork.AutoConnect on NM connection changes (th/pr/670) Andrew Zaborowski 2020-11-12 1 -12/+74
| | | | | | | | | | Watch NMSettingConnection's changes using the NM_SETTINGS_CONNECTION_UPDATED_INTERNAL signal and update IWD KnownNetwork's AutoConnect property when NMSettingConnection's autoconnect property changes. The "notify::" NM_SETTING_CONNECTION_AUTOCONNECT signals don't seem to be emitted.
* iwd: Roughly respect the NMDevice::autoconnect propertyAndrew Zaborowski2020-11-121-29/+78
| | | | | | | | | | | | | | | | | | | | Watch the NMDevice::autoconnect property to disable IWD autoconnect if requested by user. We have no way to re-enable it when the device is idle though. Make sure to not disable IWD's autoconnect in .deactivate() if not necessary. There's not much we can do if we have to call Station.Disconnect() but we can avoid calling it if unnecessary -- a slight optimization regardless of the autoconnect block flags. Fortunately NM and IWD block autoconnect on a manual deactivation in the same way (in MANAGED mode) and unblock it on an activation in the same way too (in MANAGED mode). Also if wifi.iwd.autoconnect is in use, unset NM_DEVICE_AUTOCONNECT_BLOCKED_MANUAL_DISCONNECT under the same conditions as IWD normally would. This could be made optional but with wifi.iwd.autoconnect by default we should follow IWD's autoconnect logic.
* iwd: Add the wifi.iwd.autoconnect settingAndrew Zaborowski2020-11-123-56/+608
| | | | | | | | | | | | | | | | | | | | | | If this setting is true (or missing) we skip most of the D-Bus Disconnect() calls whose purpose was to keep IWD's internal autoconnect mechanism always disabled. We use IWD's Station.State property updates, and secrets requests through our IWD agent, to find out when IWD is trying to connect and create "assumed" activations on the NM side to mirror the IWD state. This is quite complicated due to the many possible combinations of NMDevice's state and IWD's state. A lot of them are "impossible" but we try to be careful to consider all the different possibilities. NM has a nice API for "assuming" connections; I'm not sure if that was designed only for when NM starts or also for when a connection is made "under" it dynamically, but in any case this *could* be used for our use case here. However there were a few minor reasons I didn't want to use it, which are listed in the comment in assume_connection(). Those can be fixed or improved in NMManager / NMDevice and NMDeviceIwd could then start using that API, though there's really not much difference for NMDeviceIwd either way. For now I think it's fine if we create a normal "managed"-type connection.
* iwd: Order objects from g_dbus_object_manager_get_objectsAndrew Zaborowski2020-11-122-2/+80
| | | | | | | | | | | | | | | Before we call interface_added for all interfaces and objects returned from g_dbus_object_manager_get_objects(), order the objects based on the interfaces present on them. This is to avoid processing Network.KnownNetwork properties referring to KnownNetwork objects that we haven't processed yet, and new Station.ConnectedNetwork properties referring to Network objects we haven't processed yet. In NMDeviceIwd make sure we don't emit unnecessary re-checks if device is not yet enabled because now we're always going to be adding the APs (representing IWD Network objects) before the device is enabled, i.e. before the nm_device_iwd_set_dbus_object() call, when NM first connects to IWD.
* iwd: Create mirror connections for non-802.1X IWD known networksAndrew Zaborowski2020-11-123-94/+269
| | | | | | | | | | | | | | Until now we'd only create mirror NMSettingsConnection objects for IWD KnownNetwork objects of the "8021x" type in the NMIwdManager class. Now create mirror connections, or track existing matching NMSettingsConnections, for every Known Network, for three reasons: * to allow NMDeviceIwd to easily look up the NMSettingsConnection matching an externally-triggered connection, specifically when we let IWD autoconnect, * to allow users to "forget" those Known Networks, * to allow us to synchronize the autoconnectable property between NM and IWD to later allow users toggling it (not done yet).
* iwd: Stop using _nm_utils_ssid_to_utf8()Andrew Zaborowski2020-11-124-111/+132
| | | | | | | | | | | | | | | | _nm_utils_ssid_to_utf8() can be quite heavy and also carries this comment: "Again, this function should be used for debugging and display purposes _only_." In most places where we used it, we have already validated the connection's SSID to be valid UTF-8, so we can simply g_strndup() it now, even in the two places where we actually only needed it for display purposes. And we definitely don't need or want the locale-specific conversions done in _nm_utils_ssid_to_utf8() when the SSID is *not* UTF-8. In mirror_8021x_connection we also optimize the lookup loop to avoid validating and strdup'ing all the SSIDs.
* iwd: Validate UTF-8 SSID early in check_connection_compatible/complete_connection Andrew Zaborowski 2020-11-12 1 -30/+46
| | | | | | | | | | | | | | | | | | IWD only supports UTF-8 SSIDs internally; any BSS whose SSID doesn't validate as UTF-8 is ignored. There's also no way to ask IWD to connect to such a network/start AP/Adhoc etc. because SSIDs are passed as D-Bus strings. So validate that connection SSIDs are UTF-8 early in check_connection_compatible/complete_connection and refactor check_connection slightly to avoid duplication. Since NMWifiAPs are created by us, we already know those have valid SSIDs, so once we've also checked new NMConnections in check_connection_compatible there should be no possibility that an SSID anywhere else in the code is not UTF-8. We should be able to treat the GBytes values as UTF-8 without redundant validation or the complex locale-dependent conversions in _nm_utils_ssid_to_utf8.
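The validate-then-copy flow the two commits above describe can be sketched in plain C. This is a hedged illustration: the real code uses GLib's g_utf8_validate() and g_strndup(), and the simplified validator below only checks lead/continuation byte structure, while the real one also rejects overlong encodings, surrogates and out-of-range code points. The function names are illustrative, not NetworkManager's.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for GLib's g_utf8_validate(): checks only the
 * lead/continuation byte structure of each sequence. */
static bool ssid_is_utf8(const uint8_t *ssid, size_t len)
{
    size_t i = 0;

    while (i < len) {
        uint8_t b = ssid[i];
        size_t cont;

        if (b < 0x80)
            cont = 0;
        else if ((b & 0xE0) == 0xC0)
            cont = 1;
        else if ((b & 0xF0) == 0xE0)
            cont = 2;
        else if ((b & 0xF8) == 0xF0)
            cont = 3;
        else
            return false; /* stray continuation or invalid lead byte */

        if (i + 1 + cont > len)
            return false; /* truncated multi-byte sequence */

        for (size_t j = 1; j <= cont; j++) {
            if ((ssid[i + j] & 0xC0) != 0x80)
                return false;
        }
        i += 1 + cont;
    }
    return true;
}

/* Once the SSID is known-valid UTF-8, a plain bounded copy is enough
 * (the commit replaces _nm_utils_ssid_to_utf8() with g_strndup()). */
static char *ssid_to_string(const uint8_t *ssid, size_t len)
{
    char *s;

    if (!ssid_is_utf8(ssid, len))
        return NULL;
    s = malloc(len + 1);
    if (s) {
        memcpy(s, ssid, len);
        s[len] = '\0';
    }
    return s;
}
```

The point of validating once at the check_connection_compatible/complete_connection boundary is that every later consumer can do the cheap copy without re-validating.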
* iwd: Never lock to BSSID in complete_connectionAndrew Zaborowski2020-11-121-4/+1
| | | | | | | The AP BSSIDs created by the iwd backend are made up so never lock the connections to them. It probably wouldn't matter as long as the iwd backend is used but the fake BSSID could stay in the connection properties even if the user switches to wpa_supplicant.
* iwd: Move scheduling periodic scan out of set_current_ap()Andrew Zaborowski2020-11-121-4/+7
| | | | | | | | | | | | set_current_ap() would always call schedule_periodic_scan() but: first, it would do nothing when current_ap was non-NULL, because schedule_periodic_scan makes sure not to auto-scan when connected. Secondly, state_changed() already calls schedule_periodic_scan indirectly through set_can_scan(), so normally when we disconnect and current_ap becomes NULL we already trigger a scan. The only situation where we didn't is when a connection is cancelled during NEED_AUTH, because IWD's state doesn't change, so we add a schedule_periodic_scan() call in network_connect_cb() on error.
* iwd: Rename can_connect and iwd_connectionAndrew Zaborowski2020-11-121-15/+15
| | | | | | | | Rename NMDeviceIwdPrivate.can_connect to .nm_autoconnect in preparation to also add .iwd_autoconnect. Rename misnamed local variable iwd_connection to nm_connection, we'll need a new iwd_connection variable later.
* iwd: Allow scanning in NM_DEVICE_STATE_NEED_AUTHAndrew Zaborowski2020-11-121-1/+1
| | | | | | In this state, same as in DISCONNECTED or ACTIVATED, allow scanning if IWD is in the "connected" or "disconnected" states as there's no reason not to scan.
* iwd: Handle the net.connman.iwd.Agent.Cancel() methodAndrew Zaborowski2020-11-122-4/+63
| | | | | | | | | | Implement a Cancel method on our IWD secrets agent DBus object. This results in a call to nm_device_iwd_agent_query() for the device currently handling the request and the @invocation parameter is NULL to signal that the current query is being cancelled. nm_device_iwd_agent_query doesn't do much with this call just yet but the handling will be necessary when IWD autoconnect is used by NM.
* iwd: Fix a use after freeAndrew Zaborowski2020-11-121-2/+3
| | | | | | | In connection_removed we use the id.name that gets g_freed a few lines further down. Fixes: bea6c403677f ('wifi/iwd: handle forgetting connection profiles')
* core/trivial: fix clang-format code formattingThomas Haller2020-11-121-2/+2
|
* manager: Keep volatile/external connections while referenced by async_op_lstAndrew Zaborowski2020-11-121-2/+34
| | | | | | | | | | | | | | | | | | | | If an NMSettingsConnection with the VOLATILE or EXTERNAL flag is created and passed to nm_manager_activate_connection(), it's immediately scheduled for deletion in an idle callback and will likely be deleted before the authorization step in nm_manager_activate_connection() finishes, so the activation is aborted. This is because there's no NMActiveConnection in priv->active_connection_lst_head referencing it until _internal_activate_device(). Change active_connection_find_by_connection() to also look for connections in priv->async_op_lst_head. New _delete_volatile_connection_do() calls are added: previously it was called when an active connection may have been removed from priv->active_connection_lst_head; now also call it when an active connection may have been removed from priv->async_op_lst_head without being added to priv->active_connection_lst_head. https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/671
* po: update Ukrainian (uk) translationYuri Chornoivan2020-11-111-260/+285
| | | | https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/676
* gitlab-ci: automatically run prep-container to fix hanging testsThomas Haller2020-11-112-23/+23
| | | | | | | | | | | | | | The goal is to run most distros only manually. However, it would be nice to avoid (manually) clicking twice to start the tests for one distro: once for the container preparation, and once for the actual test. Previously, the container prep part was set to manual and the actual test automatic. It worked almost as desired, except that it left the entire gitlab-ci pipeline in running state indefinitely. To fix that, always run the container prep steps. If the container is cached, this is supposed to be fast and cheap. Now only the actual tests are marked as "manual".
* gitlab-ci: add "needs" for pages testThomas Haller2020-11-112-0/+4
| | | | | | It seems the "pages" test does not get properly triggered if only t_fedora:33 completes. It should, because the other distros are optional. Try to set "needs" to fix that.
* device: fix _Generic() types for _NM_DEVICE_CAST() macro (2)Thomas Haller2020-11-101-1/+4
| | | | | | | | | | | | | | | | | | | | | | clang (x86_64, 3.4.2-9.el7) fails: ../src/devices/nm-device.c:957:9: error: controlling expression type 'typeof (*self) *const' (aka 'struct _NMDevice *const') not compatible with any generic association type _LOGT(LOGD_DEVICE, ^~~~~~~~~~~~~~~~~~ ../shared/nm-glib-aux/nm-logging-fwd.h:162:20: note: expanded from macro '_LOGT' #define _LOGT(...) _NMLOG(_LOGL_TRACE, __VA_ARGS__) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../src/devices/nm-device-logging.h:34:81: note: expanded from macro '_NMLOG' const char *const _ifname = _nm_device_get_iface(_NM_DEVICE_CAST(_self)); \ ^~~~~ ../src/devices/nm-device-logging.h:14:63: note: expanded from macro '_NM_DEVICE_CAST' #define _NM_DEVICE_CAST(self) _NM_ENSURE_TYPE(NMDevice *, self) ^ ../shared/nm-glib-aux/nm-macros-internal.h:664:53: note: expanded from macro '_NM_ENSURE_TYPE' #define _NM_ENSURE_TYPE(type, value) (_Generic((value), type : (value))) ^ Fixes: cc35dc3bdf5f ('device: improve "nm-device-logging.h" to support a self pointer of NMDevice type')
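The macro at the heart of the two `_Generic()` fixes in this log can be sketched as a compile-time type assertion. A minimal illustration, not NetworkManager's actual headers — the struct and helper names below are made up:

```c
#include <assert.h>

/* Sketch of the _NM_ENSURE_TYPE pattern: _Generic() makes the
 * expression compile only if `value` has the expected type, so passing
 * the wrong pointer type to the logging macros becomes a build error. */
typedef struct {
    int ifindex;
} NMDevice;

#define NM_ENSURE_TYPE(type, value) (_Generic((value), type: (value)))

static int device_get_ifindex(NMDevice *device)
{
    return device->ifindex;
}

static int log_ifindex(NMDevice *const self)
{
    /* The controlling expression has type `NMDevice *const`. C11's
     * lvalue conversion drops the qualifier, so this matches the
     * `NMDevice *` association on conforming compilers; the old clang
     * quoted in the commit message did not drop it, which is why the
     * real macro had to grow associations for the qualified variants. */
    return device_get_ifindex(NM_ENSURE_TYPE(NMDevice *, self));
}
```

This shows why the error only appeared on clang 3.4: the code is valid C11, but the macro had to be made tolerant of qualified controlling-expression types for that compiler.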
* build: fix handling NMTST_SKIP_PYTHON_BLACK for skipping `make check-python-black` test Thomas Haller 2020-11-10 1 -1/+1
| | | | | | Fixes: c537852231a6 ('build: optionally skip python black check by setting NMTST_SKIP_PYTHON_BLACK=1')
* device: fix _Generic() types for _NM_DEVICE_CAST() macroThomas Haller2020-11-101-4/+6
| | | | | | | | | | | | | | | | | | | clang (x86_64, 3.4.2-9.el7) fails: ../src/devices/nm-device-6lowpan.c:161:9: error: controlling expression type 'typeof (*self) *const' (aka 'struct _NMDevice6Lowpan *const') not compatible with any generic association type _LOGW(LOGD_DEVICE, "could not get 6lowpan properties"); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../shared/nm-glib-aux/nm-logging-fwd.h:165:20: note: expanded from macro '_LOGW' #define _LOGW(...) _NMLOG(_LOGL_WARN, __VA_ARGS__) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../src/devices/nm-device-logging.h:32:81: note: expanded from macro '_NMLOG' const char *const _ifname = _nm_device_get_iface(_NM_DEVICE_CAST(_self)); \ ^~~~~ ../src/devices/nm-device-logging.h:17:19: note: expanded from macro '_NM_DEVICE_CAST' _Generic((self), _NMLOG_DEVICE_TYPE * \ ^ Fixes: cc35dc3bdf5f ('device: improve "nm-device-logging.h" to support a self pointer of NMDevice type')
* gitlab-ci: fix building artifacts (pages) during gitlab-ci testThomas Haller2020-11-103-1/+9
|
* gitlab-ci: skip python black check during `make check` for buildsThomas Haller2020-11-101-0/+6
| | | | | | | | | | | We now install black by default via the REQUIRED_PACKAGES script. Thus, also when we build on Fedora 30, `make check` would run python black. However, the formatting depends on the version of python black, and the one in Fedora 30 is not right. Skip all black tests during `make check`. We have a dedicated gitlab-ci test that runs black already (with the desired version of black).
* contrib/checkpatch: fix shallow repository for checkpatch scriptThomas Haller2020-11-101-0/+2
| | | | | | | | The checkpatch test tests the patches of the merge request, as they branch off from master (or one of the stable branches). It thus needs the full git history, but the git repository might be a shallow clone. Fix it.
* contrib/checkpatch: use random name for git remote and clean up afterwardsThomas Haller2020-11-101-5/+14
|
* gitlab-ci: bump default-tagThomas Haller2020-11-102-5/+5
|
* gitlab-ci: merge "check-ci-script" test with static checksThomas Haller2020-11-103-47/+37
| | | | | | | | | | | | | | | | | | | | | | Certain parts of the code are entirely generated or must follow a certain format that can be enforced by a tool. These invariants must never fail: - ci-fairy generate-template (check-ci-script) - black python formatting - clang-format C formatting - msgfmt -vs On the other hand, we also have a checkpatch script that checks the current patch for common errors. These are heuristics and only depend on the current patch (contrary to the previous type that depend on the entire source tree). Refactor the gitlab-ci tests: - split "checkpatch" into "check-patch" and "check-tree". - merge the "check-ci-script" test into "check-tree".
* gitlab-ci: cleanup ".gitlab-ci/{build,fedora-install,debian-install}.sh"Thomas Haller2020-11-103-45/+104
| | | | | | | Now that the individual steps are no longer in .gitlab.yml but we run a full shell script, clean it up to be more readable. Also, we need to fail the script when any command fails.
* contrib: install "udev" package with "debian/REQUIRED_PACKAGES"Thomas Haller2020-11-101-0/+2
| | | | | | | | | | | "debian/REQUIRED_PACKAGES" is used by gitlab-ci to prepare the image. We require "udev" package, if only to install "/usr/share/pkgconfig/udev.pc" to get the udev directory. Otherwise build fails with: Run-time dependency udev found: NO (tried pkgconfig) meson.build:371:2: ERROR: Dependency "udev" not found, tried pkgconfig
* contrib: install "python{3,}-setuptools" package with "debian/REQUIRED_PACKAGES"Thomas Haller2020-11-101-0/+2
| | | | | | | Odd, sometimes gitlab CI fails to use pip3 install, because setuptools module is not installed. But only sometimes... Explicitly install it.
* ovs: merge branch 'th/ovs-external-ids' (first part of feature)Thomas Haller2020-11-1058-631/+2182
|\ | | | | | | https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/673
| * core/ovs: refactor duplicate code in ovsdb_next_command() (th/ovs-external-ids) Thomas Haller 2020-11-09 1 -52/+39
| |
| * core/ovs: split payload out of OvsdbMethodCall structThomas Haller2020-11-091-98/+125
| | | | | | | | | | | | | | | | Before, ovsdb_call_method() had a long list of arguments to account for all possible commands. That does not scale. Instead, introduce a separate OvsdbMethodPayload type and a macro for passing the right parameters.
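The payload-union idea from the commit above can be sketched like this. A hedged illustration only: the member, macro, and command names below are made up stand-ins for NetworkManager's actual OvsdbMethodPayload definitions.

```c
#include <assert.h>

/* Each ovsdb command gets its own member in a payload union, and a
 * macro builds the right member as a compound literal at the call
 * site, replacing one function with a long argument list covering
 * every command. */
typedef enum {
    OVSDB_ADD_INTERFACE,
    OVSDB_SET_INTERFACE_MTU,
} OvsdbCommand;

typedef union {
    struct {
        const char *ifname;
    } add_interface;
    struct {
        const char *ifname;
        unsigned    mtu;
    } set_interface_mtu;
} OvsdbMethodPayload;

#define OVSDB_PAYLOAD_SET_INTERFACE_MTU(name, _mtu)               \
    (&((const OvsdbMethodPayload) {                               \
        .set_interface_mtu = { .ifname = (name), .mtu = (_mtu) }, \
    }))

/* Stub standing in for ovsdb_call_method(); returns the MTU so the
 * payload plumbing can be exercised. */
static unsigned ovsdb_call_method(OvsdbCommand cmd, const OvsdbMethodPayload *payload)
{
    assert(cmd == OVSDB_SET_INTERFACE_MTU);
    return payload->set_interface_mtu.mtu;
}
```

The design choice: the union keeps the dispatch function's signature fixed while each command's parameters stay strongly typed, and the compound-literal macro keeps call sites readable.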
| * core/ovs: rename logging output for _LOGT_call()Thomas Haller2020-11-091-3/+3
| | | | | | | | | | | | | | | | The text should match the OvsdbCommand enum: if the enum value is named OVSDB_ADD_INTERFACE, then we should print "add-interface". Or alternatively, if you think spelling out "interface" is too long, then the enum should be renamed. I don't care which, but the names should correspond.
| * core/ovs: name union fields in OvsdbMethodCallThomas Haller2020-11-091-33/+41
| | | | | | | | | | | | As we add more command types, the union gets more members. Name each union field explicitly to match the OvsdbCommand type.
| * core/ovs: cleanup debug logging for OVS commandThomas Haller2020-11-091-15/+9
| | | | | | | | | | | | | | - always print the JSON string as last (if present). Previously that didn't happen with OVSDB_SET_INTERFACE_MTU. - introduce _QUOTE_MSG() macro.
| * core/ovs: track external-ids for cached ovsdb objectsThomas Haller2020-11-092-45/+119
| | | | | | | | We will need them later.
| * core/ovs: cleanup logic in update handling of ovsdb_got_update()Thomas Haller2020-11-091-152/+269
| | | | | | | | | | | | | | | | | | | | | | | | | | | | ovsdb sends monitor updates with "new" and "old" values that indicate whether this is an addition, an update, or a removal. Since we also cache the entries, we might not agree with what ovsdb says. E.g. if ovsdb says this is an update but we didn't have the interface in our cache, we should rather pretend that the interface was added. Even if this possibly indicates some inconsistency between what OVS says and what we have cached, we should make the best of it. Rework the code: on update, we compare the result with our cache and care less about the "new"/"old" values.
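The reconciliation rule described above can be sketched as a small decision function. A hedged sketch, not the real ovsdb_got_update() code — the enum and parameter names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* ovsdb's monitor update carries "old"/"new" values hinting at
 * add/update/remove, but our cache is authoritative, so the hint is
 * corrected against what we actually have cached. */
typedef enum {
    CACHE_EVENT_NONE,
    CACHE_EVENT_ADDED,
    CACHE_EVENT_CHANGED,
    CACHE_EVENT_REMOVED,
} CacheEvent;

static CacheEvent
reconcile_update(bool ovsdb_has_old, bool ovsdb_has_new, bool in_cache)
{
    (void) ovsdb_has_old; /* the hint; the cache decides below */

    if (!ovsdb_has_new) {
        /* ovsdb reports a removal; only meaningful if we had it cached */
        return in_cache ? CACHE_EVENT_REMOVED : CACHE_EVENT_NONE;
    }
    if (!in_cache) {
        /* ovsdb may call it an update, but we never saw the entry */
        return CACHE_EVENT_ADDED;
    }
    /* ovsdb may call it an addition, but we already have the entry */
    return CACHE_EVENT_CHANGED;
}
```

This captures the commit's point: signals emitted to the rest of NM follow the cache's view, so a spurious "update" for an unknown interface surfaces as an addition.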
| * core/ovs: change function signature of _free_{bridge,port,interface}Thomas Haller2020-11-091-14/+11
| | | | | | | | | | | | | | | | We will call the function directly as well, so let's aim to get the types right. Also, the compiler would warn if the (GDestroyNotify) cast were to a fundamentally different function signature.
| * core/ovs: use helper functions to emit NM_OVSDB_* signalsThomas Haller2020-11-092-47/+44
| |
| * core/ovs: move code in "nm-ovsdb.c" around to have simple helpers at the topThomas Haller2020-11-091-59/+67
| |
| * core/ovs: track key for OpenvswitchInterface in same structThomas Haller2020-11-091-6/+9
| |
| * core/ovs: track key for OpenvswitchPort in same structThomas Haller2020-11-091-6/+9
| |
| * core/ovs: track key for OpenvswitchBridge in same structThomas Haller2020-11-091-13/+14
| | | | | | | | | | | | | | GHashTable is optimized for data that has no separate value pointer. We can use the OpenvswitchBridge structs as key themselves, by having the id as first field of the structure and only use g_hash_table_add().
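The struct-as-key trick from the commit above can be illustrated in plain C. A hedged sketch: the hash/equal functions below are stand-ins for GHashTable's GHashFunc/GEqualFunc, and the field names are illustrative, not the real OpenvswitchBridge layout.

```c
#include <assert.h>
#include <string.h>

/* With the id string as the *first* member, a pointer to the whole
 * struct can be passed where a "pointer to id" key is expected, so
 * g_hash_table_add() can store the struct itself without a separate
 * key/value pair. */
typedef struct {
    const char *bridge_id; /* must stay the first member */
    int         n_ports;   /* further per-bridge state */
} OpenvswitchBridge;

static unsigned id_hash(const void *key)
{
    /* `key` is either an OpenvswitchBridge * or a pointer to a bare
     * id string; both start with the id pointer. */
    const char *s = *(const char *const *) key;
    unsigned h = 5381;

    while (*s)
        h = h * 33u + (unsigned char) *s++;
    return h;
}

static int id_equal(const void *a, const void *b)
{
    return strcmp(*(const char *const *) a, *(const char *const *) b) == 0;
}
```

Because a pointer to a struct also points at its first member, a lookup by plain id (`&some_id_string`) hashes and compares identically to a stored struct, which is what makes the single-pointer GHashTable work.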
| * core/ovs: minor cleanup of logic in _add_interface()Thomas Haller2020-11-091-6/+8
| |
| * core/ovs: avoid possible crash in _add_interface()Thomas Haller2020-11-091-2/+2
| |
| * core/ovs: use streq() instead of strcmp()Thomas Haller2020-11-091-21/+19
| |
| * core/ovs: cleanup uses of g_slice_*() in "nm-ovsdb.c"Thomas Haller2020-11-091-20/+27
| |
| * core/ovs: fix using unsigned "mtu" value to json_pack()Thomas Haller2020-11-091-2/+2
| | | | | | | | | | | | | | Of course, in practice "mtu" is much smaller than 2^31, and sizeof(int) >= sizeof(uint32_t) (on our systems). Hence, this was correct. Still, it feels ugly to pass an unsigned integer where not the entire range is covered.
| * core/ovs: cleanup handling of call id for OVS commandsThomas Haller2020-11-091-18/+38
| | | | | | | | | | | | | | | | | | | | | | | | | | | | - rename "id" to something more distinct: "call_id". - consistently use the guint64 type. We don't want or need to handle negative values. For CALL_ID_UNSPEC we can use G_MAXUINT64. - don't use the "i" format string for the call id. That expects an "int", so it's not clear how this was working correctly previously. Also, "int" has a smaller range than our 64 bits. Use instead "json_int_t" and cast properly in the variadic arguments of json_pack().
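The call-id scheme above can be sketched in a few lines. A hedged illustration: the helper name is made up, and jansson's json_pack() is only referenced in the comment rather than called, since the sketch is meant to stand alone.

```c
#include <assert.h>
#include <stdint.h>

/* A monotonically increasing unsigned 64-bit id with UINT64_MAX
 * reserved as the "unspecified" sentinel, mirroring the commit's
 * CALL_ID_UNSPEC = G_MAXUINT64. When handing the id to json_pack(),
 * the commit casts it to jansson's json_int_t (a 64-bit signed type)
 * instead of letting it pass through an "i" (int) format specifier. */
#define CALL_ID_UNSPEC UINT64_MAX

static uint64_t next_call_id(uint64_t *counter)
{
    uint64_t call_id = (*counter)++;

    /* skip the reserved sentinel on (theoretical) wrap-around */
    if (call_id == CALL_ID_UNSPEC)
        call_id = (*counter)++;
    return call_id;
}
```

The sentinel lives at the top of the unsigned range precisely so that ordinary ids, allocated from zero upward, can never collide with it in practice.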