| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
Watch NMSettingConnection's changes using the
NM_SETTINGS_CONNECTION_UPDATED_INTERNAL signal and update the IWD
KnownNetwork's AutoConnect property when NMSettingConnection's
autoconnect property changes.
The "notify::" NM_SETTING_CONNECTION_AUTOCONNECT signals don't seem to
be emitted.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Watch the NMDevice::autoconnect property to disable IWD autoconnect if
requested by the user. We have no way to re-enable it when the device
is idle, though.
Make sure to not disable IWD's autoconnect in .deactivate() if not
necessary. There's not much we can do if we have to call
Station.Disconnect() but we can avoid calling it if unnecessary --
a slight optimization regardless of the autoconnect block flags.
Fortunately, in MANAGED mode, NM and IWD block autoconnect on a manual
deactivation in the same way, and unblock it on an activation in the
same way too.
Also, if wifi.iwd.autoconnect is in use, unset
NM_DEVICE_AUTOCONNECT_BLOCKED_MANUAL_DISCONNECT under the same
conditions as IWD normally would. This could be made optional, but with
wifi.iwd.autoconnect enabled by default we should follow IWD's
autoconnect logic.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If this setting is true (or missing) we skip most of the D-Bus
Disconnect() calls whose purpose was to keep IWD's internal autoconnect
mechanism always disabled. We use IWD's Station.State property
updates, and secrets requests through our IWD agent, to find out when
IWD is trying to connect, and create "assumed" activations on the NM
side to mirror the IWD state. This is quite complicated due to the
many possible combinations of NMDevice's state and IWD's state. A lot
of them are "impossible" but we try to be careful to consider all the
different possibilities.
NM has a nice API for "assuming" connections. I'm not sure whether it
was designed only for when NM starts or also for when a connection is
made "under" it dynamically, but in any case it *could* be used for our
use case here. However, there were a few minor reasons I didn't want
to use it, which are listed in the comment in assume_connection().
Those can be fixed or improved in NMManager / NMDevice, and NMDeviceIwd
could then start using that API, though there's really not much
difference for NMDeviceIwd if that API were used. For now I think it's
fine if we create a normal "managed"-type connection.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before we call interface_added for all interfaces and objects returned
from g_dbus_object_manager_get_objects(), order the objects based on the
interfaces present on them. This is to avoid processing
Network.KnownNetwork properties referring to KnownNetwork objects that
we haven't processed yet, and new Station.ConnectedNetwork properties
referring to Network objects we haven't processed yet.
In NMDeviceIwd make sure we don't emit unnecessary re-checks if the
device is not yet enabled, because now we're always going to be adding
the APs (representing IWD Network objects) before the device is
enabled, i.e. before the nm_device_iwd_set_dbus_object() call, when NM
first connects to IWD.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now we'd only create mirror NMSettingsConnection objects for IWD
KnownNetwork objects of the "8021x" type in the NMIwdManager class. Now
create mirror connections, or track existing matching
NMSettingsConnections, for every Known Network, for three reasons:
* to allow NMDeviceIwd to easily look up the NMSettingsConnection
matching an externally-triggered connection, specifically when we let
IWD autoconnect,
* to allow users to "forget" those Known Networks,
* to allow us to synchronize the autoconnectable property between
NM and IWD to later allow users toggling it (not done yet).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
_nm_utils_ssid_to_utf8() can be quite heavy and also has this comment:
* Again, this function should be used for debugging and display purposes
* _only_.
In most places where we used it, we have already validated the
connection's SSID to be valid UTF-8 so we can simply g_strndup() it
now, even in the two places where we actually only needed it for
display purposes. And we definitely don't need or want the
locale-specific conversions done in _nm_utils_ssid_to_utf8() when the
SSID is *not* UTF-8.
In mirror_8021x_connection() we also optimize the lookup loop to avoid
validating and strdup'ing all the SSIDs.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
check_connection_compatible/complete_connection
IWD only supports UTF-8 SSIDs internally; any BSS whose SSID doesn't
validate as UTF-8 is ignored. There's also no way to ask IWD to
connect to such a network / start an AP / Ad-Hoc etc. because SSIDs are
passed as D-Bus strings. So validate that connection SSIDs are UTF-8
early in check_connection_compatible/complete_connection and refactor
check_connection slightly to avoid duplication.
Since NMWifiAPs are created by us, we already know those have valid
SSIDs, so once we've also checked new NMConnections in
check_connection_compatible there should be no possibility that an SSID
anywhere else in the code is not UTF-8. We should be able to treat the
GBytes values as UTF-8 without redundant validation or the complex
locale-dependent conversions in _nm_utils_ssid_to_utf8().
|
|
|
|
|
|
|
| |
The AP BSSIDs created by the iwd backend are made up so never lock the
connections to them. It probably wouldn't matter as long as the iwd
backend is used but the fake BSSID could stay in the connection
properties even if the user switches to wpa_supplicant.
|
|
|
|
|
|
|
|
|
|
|
|
| |
set_current_ap() would always call schedule_periodic_scan() but: first,
it would do nothing when current_ap was non-NULL because
schedule_periodic_scan() makes sure not to auto-scan when connected.
Secondly, state_changed() already calls schedule_periodic_scan()
indirectly through set_can_scan(), so normally when we disconnect and
current_ap becomes NULL we already trigger a scan. The only situation
where we didn't is when a connection is cancelled during NEED_AUTH,
because IWD's state doesn't change, so we add a
schedule_periodic_scan() call in network_connect_cb() on error.
|
|
|
|
|
|
|
|
| |
Rename NMDeviceIwdPrivate.can_connect to .nm_autoconnect in preparation
to also add .iwd_autoconnect.
Rename misnamed local variable iwd_connection to nm_connection, we'll
need a new iwd_connection variable later.
|
|
|
|
|
|
| |
In this state, same as in DISCONNECTED or ACTIVATED, allow scanning if
IWD is in the "connected" or "disconnected" states as there's no reason
not to scan.
|
|
|
|
|
|
|
|
|
|
| |
Implement a Cancel method on our IWD secrets agent D-Bus object. This
results in a call to nm_device_iwd_agent_query() for the device
currently handling the request, with the @invocation parameter NULL to
signal that the current query is being cancelled.
nm_device_iwd_agent_query doesn't do much with this call just yet but
the handling will be necessary when IWD autoconnect is used by NM.
|
|
|
|
|
|
|
| |
In connection_removed() we use the id.name that is g_free'd a few
lines further down.
Fixes: bea6c403677f ('wifi/iwd: handle forgetting connection profiles')
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If an NMSettingsConnection with the VOLATILE or EXTERNAL flags is
created and passed to nm_manager_activate_connection(), it's
immediately scheduled for deletion in an idle callback and will likely
be deleted before the authorization step in
nm_manager_activate_connection() finishes, and the activation will be
aborted. This is because there's no
NMActiveConnection in priv->active_connection_lst_head referencing it
until _internal_activate_device(). Change
active_connection_find_by_connection to also look for connections in
priv->async_op_lst_head.
New _delete_volatile_connection_do() calls are added. Previously it
would be called when an active connection may have been removed from
priv->active_connection_lst_head, now also call it when an active
connection may have been removed from priv->async_op_lst_head without
being added to priv->active_connection_lst_head.
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/671
|
|
|
|
| |
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/676
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The goal is to run most distros only manually. However, it would be nice
to avoid (manually) clicking twice to start the tests for one distro:
once for the container preparation, and once for the actual test.
Previously, the container prep part was set to manual and the actual
test to automatic. It worked almost as desired, except that this leads
to the entire gitlab-ci pipeline staying in the running state
indefinitely.
To fix that, always run the container prep steps. If the container is
cached, this is supposed to be fast and cheap. Now only the actual
tests are marked as "manual".
|
|
|
|
|
|
| |
It seems the "pages" test does not get properly triggered if only
t_fedora:33 completes. It should, because the other distros are
optional. Try to set "needs" to fix that.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
clang (x86_64, 3.4.2-9.el7) fails:
../src/devices/nm-device.c:957:9: error: controlling expression type 'typeof (*self) *const' (aka 'struct _NMDevice *const') not compatible with any generic association type
_LOGT(LOGD_DEVICE,
^~~~~~~~~~~~~~~~~~
../shared/nm-glib-aux/nm-logging-fwd.h:162:20: note: expanded from macro '_LOGT'
#define _LOGT(...) _NMLOG(_LOGL_TRACE, __VA_ARGS__)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/devices/nm-device-logging.h:34:81: note: expanded from macro '_NMLOG'
const char *const _ifname = _nm_device_get_iface(_NM_DEVICE_CAST(_self)); \
^~~~~
../src/devices/nm-device-logging.h:14:63: note: expanded from macro '_NM_DEVICE_CAST'
#define _NM_DEVICE_CAST(self) _NM_ENSURE_TYPE(NMDevice *, self)
^
../shared/nm-glib-aux/nm-macros-internal.h:664:53: note: expanded from macro '_NM_ENSURE_TYPE'
#define _NM_ENSURE_TYPE(type, value) (_Generic((value), type : (value)))
^
Fixes: cc35dc3bdf5f ('device: improve "nm-device-logging.h" to support a self pointer of NMDevice type')
|
|
|
|
|
|
| |
check-python-black` test
Fixes: c537852231a6 ('build: optionally skip python black check by setting NMTST_SKIP_PYTHON_BLACK=1')
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
clang (x86_64, 3.4.2-9.el7) fails:
../src/devices/nm-device-6lowpan.c:161:9: error: controlling expression type 'typeof (*self) *const' (aka 'struct _NMDevice6Lowpan *const') not compatible with any generic association type
_LOGW(LOGD_DEVICE, "could not get 6lowpan properties");
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../shared/nm-glib-aux/nm-logging-fwd.h:165:20: note: expanded from macro '_LOGW'
#define _LOGW(...) _NMLOG(_LOGL_WARN, __VA_ARGS__)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../src/devices/nm-device-logging.h:32:81: note: expanded from macro '_NMLOG'
const char *const _ifname = _nm_device_get_iface(_NM_DEVICE_CAST(_self)); \
^~~~~
../src/devices/nm-device-logging.h:17:19: note: expanded from macro '_NM_DEVICE_CAST'
_Generic((self), _NMLOG_DEVICE_TYPE * \
^
Fixes: cc35dc3bdf5f ('device: improve "nm-device-logging.h" to support a self pointer of NMDevice type')
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
We now install black by default via the REQUIRED_PACKAGES script.
Thus, also when we build on Fedora 30, `make check` would run
python black. However, the formatting depends on the version
of python black, and the one in Fedora 30 is not right.
Skip all black tests during `make check`. We have a dedicated
gitlab-ci test that runs black already (with the desired version
of black).
|
|
|
|
|
|
|
|
| |
The checkpatch test checks the patches of the merge-request, as they
branch off from master (or one of the stable branches).
It thus needs the full git history, but the git repository might be a
shallow clone. Fix it.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Certain parts of the code are entirely generated or must follow
a certain format that can be enforced by a tool. These invariants
must never fail:
- ci-fairy generate-template (check-ci-script)
- black python formatting
- clang-format C formatting
- msgfmt -vs
On the other hand, we also have a checkpatch script that checks
the current patch for common errors. These are heuristics and
only depend on the current patch (contrary to the previous type,
which depends on the entire source tree).
Refactor the gitlab-ci tests:
- split "checkpatch" into "check-patch" and "check-tree".
- merge the "check-ci-script" test into "check-tree".
|
|
|
|
|
|
|
| |
Now that the individual steps are no longer in .gitlab.yml but we
run a full shell script, clean it up to be more readable.
Also, we need to fail the script when any command fails.
|
|
|
|
|
|
|
|
|
|
|
| |
"debian/REQUIRED_PACKAGES" is used by gitlab-ci to prepare the image. We
require the "udev" package, if only to install
"/usr/share/pkgconfig/udev.pc" to get the udev directory.
Otherwise build fails with:
Run-time dependency udev found: NO (tried pkgconfig)
meson.build:371:2: ERROR: Dependency "udev" not found, tried pkgconfig
|
|
|
|
|
|
|
| |
Odd, sometimes gitlab CI fails to use pip3 install, because the
setuptools module is not installed. But only sometimes...
Explicitly install it.
|
|\
| |
| |
| | |
https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/673
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
Before, ovsdb_call_method() had a long list of arguments
to account for all possible commands. That does not scale.
Instead, introduce a separate OvsdbMethodPayload type and
add a macro to allow passing the right parameters.
|
| |
| |
| |
| |
| |
| |
| |
| | |
The text should match the OvsdbCommand enum. If the enum
value is named OVSDB_ADD_INTERFACE, then we should print
"add-interface". Or alternatively, if you think spelling
out "interface" is too long, then the enum should be renamed.
I don't care, but the names should correspond.
|
| |
| |
| |
| |
| |
| | |
As we add more command types, the union gets more members.
Name each union field explicitly to match the OvsdbCommand
type.
|
| |
| |
| |
| |
| |
| |
| | |
- always print the JSON string last (if present). Previously
that didn't happen with OVSDB_SET_INTERFACE_MTU.
- introduce the _QUOTE_MSG() macro.
|
| |
| |
| |
| | |
We will need them later.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
ovsdb sends monitor updates with "new" and "old" values that indicate
whether this is an addition, an update, or a removal.
Since we also cache the entries, we might not agree with what ovsdb
says. E.g. if ovsdb says this is an update but we didn't have the
interface in our cache, we should rather pretend that the interface
was added. Even if this possibly indicates some inconsistency between
what OVS says and what we have cached, we should make the best of it.
Rework the code. On update, we compare the result with our cache
and care less about the "new" / "old" values.
|
| |
| |
| |
| |
| |
| |
| |
| | |
We will call the function directly as well. Let's aim to
get the types right.
Also, the compiler would warn if the cast to (GDestroyNotify)
were to a fundamentally different function signature.
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
GHashTable is optimized for data that has no separate value
pointer. We can use the OpenvswitchBridge structs as key themselves,
by having the id as first field of the structure and only use
g_hash_table_add().
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| | |
Of course, in practice "mtu" is much smaller than 2^31, and
sizeof(int) >= sizeof(uint32_t) also holds (on our systems). Hence,
this was correct. Still, it feels ugly to pass an unsigned integer
where not the entire range is covered.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
- rename "id" to something more distinct: "call_id".
- consistently use the guint64 type. We don't want or need
  to handle negative values. For CALL_ID_UNSPEC we can use
  G_MAXUINT64.
- don't use the "i" format string for the call id. That expects
  an "int", so it's not clear how this was working correctly
  previously. Also, "int" has a smaller range than our 64 bits.
  Use "json_int_t" instead and cast properly in the variadic
  arguments of json_pack().
|