commit 9e224f959c1a7e537783fc606b8d2a548beff8d8 (patch)
tree 36ca5f70bebf27c0b208360bec55c8eeb464335c
parent 18646b226d564a758b8cfe036a38346fb571c6c4 (diff)
author    Brett Holman <brett.holman@canonical.com>  2022-02-16 13:39:11 -0700
committer git-ubuntu importer <ubuntu-devel-discuss@lists.ubuntu.com>  2022-02-17 05:21:09 +0000
download  cloud-init-git-9e224f959c1a7e537783fc606b8d2a548beff8d8.tar.gz

    22.1-1-gb3d9acdd-0ubuntu1~22.04.1 (patches unapplied)

    Imported using git-ubuntu import.

 41 files changed, 1526 insertions, 306 deletions
diff --git a/.travis.yml b/.travis.yml
index 2e79f2b7..f655fa50 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -133,6 +133,8 @@ matrix:
     - python: 3.6
       env: TOXENV=flake8
     - python: 3.6
+      env: TOXENV=mypy
+    - python: 3.6
       env: TOXENV=pylint
     - python: 3.6
       env: TOXENV=black
diff --git a/ChangeLog b/ChangeLog
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,176 @@
+22.1
+ - sources/azure: report ready in local phase (#1265) [Chris Patterson]
+ - sources/azure: validate IMDS network configuration metadata (#1257)
+   [Chris Patterson]
+ - docs: Add more details to runcmd docs (#1266)
+ - use PEP 589 syntax for TypeDict (#1253)
+ - mypy: introduce type checking (#1254) [Chris Patterson]
+ - Fix extra ipv6 issues, code reduction and simplification (#1243) [eb3095]
+ - tests: when generating crypted password, generate in target env (#1252)
+ - sources/azure: address mypy/pyright typing complaints (#1245)
+   [Chris Patterson]
+ - Docs for x-shellscript* userdata (#1260)
+ - test_apt_security: azure platform has specific security URL overrides
+   (#1263)
+ - tests: lsblk --json output changes mountpoint key to mountpoinst []
+   (#1261)
+ - mounts: fix mount opts string for ephemeral disk (#1250)
+   [Chris Patterson]
+ - Shell script handlers by freq (#1166) [Chris Lalos]
+ - minor improvements to documentation (#1259) [Mark Esler]
+ - cloud-id: publish /run/cloud-init/cloud-id-<cloud-type> files (#1244)
+ - add "eslerm" as contributor (#1258) [Mark Esler]
+ - sources/azure: refactor ssh key handling (#1248) [Chris Patterson]
+ - bump pycloudlib (#1256)
+ - sources/hetzner: Use EphemeralDHCPv4 instead of static configuration
+   (#1251) [Markus Schade]
+ - bump pycloudlib version (#1255)
+ - Fix IPv6 netmask format for sysconfig (#1215) [Harald] (LP: #1959148)
+ - sources/azure: drop debug print (#1249) [Chris Patterson]
+ - tests: do not check instance.pull_file().ok() (#1246)
+ - sources/azure: consolidate ephemeral DHCP configuration (#1229)
+   [Chris Patterson]
+ - cc_salt_minion freebsd fix for rc.conf (#1236)
+ - sources/azure: fix metadata check in _check_if_nic_is_primary() (#1232)
+   [Chris Patterson]
+ - Add _netdev option to mount Azure ephemeral disk (#1213) [Eduardo Otubo]
+ - testing: stop universally overwriting /etc/cloud/cloud.cfg.d (#1237)
+ - Integration test changes (#1240)
+ - Fix Gentoo Locales (#1205)
+ - Add "slingamn" as contributor (#1235) [Shivaram Lingamneni]
+ - integration: do not LXD bind mount /etc/cloud/cloud.cfg.d (#1234)
+ - Integration testing docs and refactor (#1231)
+ - vultr: Return metadata immediately when found (#1233) [eb3095]
+ - spell check docs with spellintian (#1223)
+ - docs: include upstream python version info (#1230)
+ - Schema a d (#1211)
+ - Move LXD to end ds-identify DSLIST (#1228) (LP: #1959118)
+ - fix parallel tox execution (#1214)
+ - sources/azure: refactor _report_ready_if_needed and _poll_imds (#1222)
+   [Chris Patterson]
+ - Do not support setting up archive.canonical.com as a source (#1219)
+   [Steve Langasek] (LP: #1959343)
+ - Vultr: Fix lo being used for DHCP, try next on cmd fail (#1208) [eb3095]
+ - sources/azure: refactor _should_reprovision[_after_nic_attach]() logic
+   (#1206) [Chris Patterson]
+ - update ssh logs to show ssh private key gens pub and simplify code
+   (#1221) [Steve Weber]
+ - Remove mitechie from stale PR github action (#1217)
+ - Include POST format in cc_phone_home docs (#1218) (LP: #1959149)
+ - Add json parsing of ip addr show (SC-723) (#1210)
+ - cc_rsyslog: fix typo in docstring (#1207) [Louis Sautier]
+ - Update .github-cla-signers (#1204) [Chris Lalos]
+ - sources/azure: drop unused case in _report_failure() (#1200)
+   [Chris Patterson]
+ - sources/azure: always initialize _ephemeral_dhcp_ctx on unpickle (#1199)
+   [Chris Patterson]
+ - Add support for gentoo templates and cloud.cfg (#1179) [vteratipally]
+ - sources/azure: unpack ret tuple in crawl_metadata() (#1194)
+   [Chris Patterson]
+ - tests: focal caplog has whitespace indentation for multi-line logs
+   (#1201)
+ - Seek interfaces, skip dummy interface, fix region codes (#1192) [eb3095]
+ - integration: test against the Ubuntu daily images (#1198)
+   [Paride Legovini]
+ - cmd: status and cloud-id avoid change in behavior for 'not run' (#1197)
+ - tox: pass PYCLOUDLIB_* env vars into integration tests when present
+   (#1196)
+ - sources/azure: set ovf_is_accessible when OVF is read successfully
+   (#1193) [Chris Patterson]
+ - Enable OVF environment transport via ISO in example (#1195) [Megian]
+ - sources/azure: consolidate DHCP variants to EphemeralDHCPv4WithReporting
+   (#1190) [Chris Patterson]
+ - Single JSON schema validation in early boot (#1175)
+ - Add DatasourceOVF network-config propery to Ubuntu OVF example (#1184)
+   [Megian]
+ - testing: support pycloudlib config file (#1189)
+ - Ensure system_cfg read before ds net config on Oracle (SC-720) (#1174)
+   (LP: #1956788)
+ - Test Optimization Proposal (SC-736) (#1188)
+ - cli: cloud-id report not-run or disabled state as cloud-id (#1162)
+ - Remove distutils usage (#1177) [Shreenidhi Shedi]
+ - add .python-version to gitignore (#1186)
+ - print error if datasource import fails (#1170)
+   [Emanuele Giuseppe Esposito]
+ - Add new config module to set keyboard layout (#1176)
+   [maxnet] (LP: #1951593)
+ - sources/azure: rename metadata_type -> MetadataType (#1181)
+   [Chris Patterson]
+ - Remove 3.5 and xenial support (SC-711) (#1167)
+ - tests: mock LXD datasource detection in ds-identify on LXD containers
+   (#1178)
+ - pylint: silence errors on compat code for old jsonschema (#1172)
+   [Paride Legovini]
+ - testing: Add 3.10 Test Coverage (#1173)
+ - Remove unittests from integration test job in travis (#1141)
+ - Don't throw exceptions for empty cloud config (#1130)
+ - bsd/resolv.d/ avoid duplicated entries (#1163) [Gonéri Le Bouder]
+ - sources/azure: do not persist failed_desired_api_version flag (#1159)
+   [Chris Patterson]
+ - Update cc_ubuntu_advantage calls to assume-yes (#1158)
+   [John Chittum] (LP: #1954842)
+ - openbsd: properly restart the network on 7.0 (#1150) [Gonéri Le Bouder]
+ - Add .git-blame-ignore-revs (#1161)
+ - Adopt Black and isort (SC-700) (#1157)
+ - Include dpkg frontend lock in APT_LOCK_FILES (#1153)
+ - tests/cmd/query: fix test run as root and add coverage for defaults
+   (#1156) [Chris Patterson] (LP: #1825027)
+ - Schema processing changes (SC-676) (#1144)
+ - Add dependency workaround for impish in bddeb (#1148)
+ - netbsd: install new dep packages (#1151) [Gonéri Le Bouder]
+ - find_devs_with_openbsd: ensure we return the last entry (#1149)
+   [Gonéri Le Bouder]
+ - sources/azure: remove unnecessary hostname bounce (#1143)
+   [Chris Patterson]
+ - find_devs/openbsd: accept ISO on disk (#1132)
+   [Gonéri Le Bouder] (GH:
+   https://github.com/ContainerCraft/kmi/issues/12)
+ - Improve error log message when mount failed (#1140) [Ksenija Stanojevic]
+ - add KsenijaS as a contributor (#1145) [Ksenija Stanojevic]
+ - travis - don't run integration tests if no deb (#1139)
+ - factor out function for getting top level directory of cloudinit (#1136)
+ - testing: Add deterministic test id (#1138)
+ - mock sleep() in azure test (#1137)
+ - Add miraclelinux support (#1128) [Haruki TSURUMOTO]
+ - docs: Make MACs lowercase in network config (#1135) (GH: #1876941)
+ - Add Strict Metaschema Validation (#1101)
+ - update dead link (#1133)
+ - cloudinit/net: handle two different routes for the same ip (#1124)
+   [Emanuele Giuseppe Esposito]
+ - docs: pin mistune dependency (#1134)
+ - Reorganize unit test locations under tests/unittests (#1126)
+ - Fix exception when no activator found (#1129) (GH: #1948681)
+ - jinja: provide and document jinja-safe key aliases in instance-data
+   (SC-622) (#1123)
+ - testing: Remove date from final_message test (SC-638) (#1127)
+ - Move GCE metadata fetch to init-local (SC-502) (#1122)
+ - Fix missing metadata routes for vultr (#1125) [eb3095]
+ - cc_ssh_authkey_fingerprints.py: prevent duplicate messages on console
+   (#1081) [dermotbradley]
+ - sources/azure: remove unused remnants related to agent command (#1119)
+   [Chris Patterson]
+ - github: update PR template's contributing URL (#1120) [Chris Patterson]
+ - docs: Rename HACKING.rst to CONTRIBUTING.rst (#1118)
+ - testing: monkeypatch system_info call in unit tests (SC-533) (#1117)
+ - Fix Vultr timeout and wait values (#1113) [eb3095]
+ - lxd: add preference for LXD cloud-init.* config keys over user keys
+   (#1108)
+ - VMware: source /etc/network/interfaces.d/* on Debian
+   [chengcheng-chcheng] (GH: #1950136)
+ - Add cjp256 as contributor (#1109) [Chris Patterson]
+ - integration_tests: Ensure log directory exists before symlinking to it
+   (#1110)
+ - testing: add growpart integration test (#1104)
+ - integration_test: Speed up CI run time (#1111)
+ - Some miscellaneous integration test fixes (SC-606) (#1103)
+ - tests: specialize lxd_discovery test for lxd_vm vendordata (#1106)
+ - Add convenience symlink to integration test output (#1105)
+ - Fix for set-name bug in networkd renderer (#1100)
+   [Andrew Kutz] (GH: #1949407)
+ - Wait for apt lock (#1034) (GH: #1944611)
+ - testing: stop chef test from running on openstack (#1102)
+ - alpine.py: add options to the apk upgrade command (#1089) [dermotbradley]
 21.4
  - Azure: fallback nic needs to be reevaluated during reprovisioning
    (#1094) [Anh Vo]
diff --git a/cloudinit/config/cc_apk_configure.py b/cloudinit/config/cc_apk_configure.py
index 2cb2dad1..0952c971 100644
--- a/cloudinit/config/cc_apk_configure.py
+++ b/cloudinit/config/cc_apk_configure.py
@@ -10,7 +10,7 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import temp_utils, templater, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
@@ -53,7 +53,7 @@ REPOSITORIES_TEMPLATE = """\
 frequency = PER_INSTANCE
 distros = ["alpine"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_apk_configure",
     "name": "APK Configure",
     "title": "Configure apk repositories file",
diff --git a/cloudinit/config/cc_apt_configure.py b/cloudinit/config/cc_apt_configure.py
index 7fe0e343..c558311a 100644
--- a/cloudinit/config/cc_apt_configure.py
+++ b/cloudinit/config/cc_apt_configure.py
@@ -17,7 +17,7 @@ from textwrap import dedent
 from cloudinit import gpg
 from cloudinit import log as logging
 from cloudinit import subp, templater, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
@@ -32,7 +32,7 @@ CLOUD_INIT_GPG_DIR = "/etc/apt/cloud-init.gpg.d/"
 frequency = PER_INSTANCE
 distros = ["ubuntu", "debian"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_apt_configure",
     "name": "Apt Configure",
     "title": "Configure apt for the user",
diff --git a/cloudinit/config/cc_apt_pipelining.py b/cloudinit/config/cc_apt_pipelining.py
index 34b6ac0e..901633d3 100644
--- a/cloudinit/config/cc_apt_pipelining.py
+++ b/cloudinit/config/cc_apt_pipelining.py
@@ -9,7 +9,7 @@ from textwrap import dedent
 
 from cloudinit import util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_INSTANCE
 
 frequency = PER_INSTANCE
@@ -24,7 +24,7 @@ APT_PIPE_TPL = (
 # A value of zero MUST be specified if the remote host does not properly linger
 # on TCP connections - otherwise data corruption will occur.
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_apt_pipelining",
     "name": "Apt Pipelining",
     "title": "Configure apt pipelining",
diff --git a/cloudinit/config/cc_bootcmd.py b/cloudinit/config/cc_bootcmd.py
index 3a239376..bd14aede 100644
--- a/cloudinit/config/cc_bootcmd.py
+++ b/cloudinit/config/cc_bootcmd.py
@@ -13,14 +13,14 @@ import os
 from textwrap import dedent
 
 from cloudinit import subp, temp_utils, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_ALWAYS
 
 frequency = PER_ALWAYS
 distros = ["all"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_bootcmd",
     "name": "Bootcmd",
     "title": "Run arbitrary commands early in the boot process",
diff --git a/cloudinit/config/cc_byobu.py b/cloudinit/config/cc_byobu.py
index b96736a4..fbc20410 100755
--- a/cloudinit/config/cc_byobu.py
+++ b/cloudinit/config/cc_byobu.py
@@ -9,7 +9,7 @@
 """Byobu: Enable/disable byobu system wide and for default user."""
 
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.distros import ug_util
 from cloudinit.settings import PER_INSTANCE
@@ -32,7 +32,7 @@ Valid configuration options for this module are:
 """
 
 distros = ["ubuntu", "debian"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_byobu",
     "name": "Byobu",
     "title": "Enable/disable byobu system wide and for default user",
diff --git a/cloudinit/config/cc_ca_certs.py b/cloudinit/config/cc_ca_certs.py
index c46d0fbe..6084cb4c 100644
--- a/cloudinit/config/cc_ca_certs.py
+++ b/cloudinit/config/cc_ca_certs.py
@@ -8,7 +8,7 @@ import os
 from textwrap import dedent
 
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_INSTANCE
 
 DEFAULT_CONFIG = {
@@ -45,7 +45,7 @@ can be removed from the system with the configuration option
 """
 
 distros = ["alpine", "debian", "ubuntu", "rhel"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_ca_certs",
     "name": "CA Certificates",
     "title": "Add ca certificates",
diff --git a/cloudinit/config/cc_chef.py b/cloudinit/config/cc_chef.py
index aaf7eaf1..fdb3a6e3 100644
--- a/cloudinit/config/cc_chef.py
+++ b/cloudinit/config/cc_chef.py
@@ -14,7 +14,7 @@ import os
 from textwrap import dedent
 
 from cloudinit import subp, temp_utils, templater, url_helper, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_ALWAYS
 
 RUBY_VERSION_DEFAULT = "1.8"
@@ -92,7 +92,7 @@ CHEF_EXEC_DEF_ARGS = tuple(["-d", "-i", "1800", "-s", "20"])
 frequency = PER_ALWAYS
 distros = ["all"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_chef",
     "name": "Chef",
     "title": "module that configures, starts and installs chef",
diff --git a/cloudinit/config/cc_debug.py b/cloudinit/config/cc_debug.py
index 1a3c9346..c51818c3 100644
--- a/cloudinit/config/cc_debug.py
+++ b/cloudinit/config/cc_debug.py
@@ -9,7 +9,7 @@ from io import StringIO
 from textwrap import dedent
 
 from cloudinit import safeyaml, type_utils, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.distros import ALL_DISTROS
 from cloudinit.settings import PER_INSTANCE
@@ -24,7 +24,7 @@ location that this cloud-init has been configured with when running.
 Log configurations are not output.
 """
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_debug",
     "name": "Debug",
     "title": "Helper to debug cloud-init *internal* datastructures",
diff --git a/cloudinit/config/cc_disable_ec2_metadata.py b/cloudinit/config/cc_disable_ec2_metadata.py
index 6a5e7eda..88cc28e2 100644
--- a/cloudinit/config/cc_disable_ec2_metadata.py
+++ b/cloudinit/config/cc_disable_ec2_metadata.py
@@ -11,14 +11,14 @@ from textwrap import dedent
 
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.distros import ALL_DISTROS
 from cloudinit.settings import PER_ALWAYS
 
 REJECT_CMD_IF = ["route", "add", "-host", "169.254.169.254", "reject"]
 REJECT_CMD_IP = ["ip", "route", "add", "prohibit", "169.254.169.254"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_disable_ec2_metadata",
     "name": "Disable EC2 Metadata",
     "title": "Disable AWS EC2 Metadata",
diff --git a/cloudinit/config/cc_disk_setup.py b/cloudinit/config/cc_disk_setup.py
index c59d00cd..ee05ea87 100644
--- a/cloudinit/config/cc_disk_setup.py
+++ b/cloudinit/config/cc_disk_setup.py
@@ -13,7 +13,7 @@ import shlex
 from textwrap import dedent
 
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.distros import ALL_DISTROS
 from cloudinit.settings import PER_INSTANCE
@@ -50,7 +50,7 @@ the ``fs_setup`` directive. This config directive accepts a list of
 filesystem configs.
 """
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_disk_setup",
     "name": "Disk Setup",
     "title": "Configure partitions and filesystems",
diff --git a/cloudinit/config/cc_install_hotplug.py b/cloudinit/config/cc_install_hotplug.py
index 952d9f13..34c4557e 100644
--- a/cloudinit/config/cc_install_hotplug.py
+++ b/cloudinit/config/cc_install_hotplug.py
@@ -4,7 +4,11 @@ import os
 from textwrap import dedent
 
 from cloudinit import stages, subp, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.distros import ALL_DISTROS
 from cloudinit.event import EventScope, EventType
 from cloudinit.settings import PER_INSTANCE
@@ -12,7 +16,7 @@ from cloudinit.settings import PER_INSTANCE
 frequency = PER_INSTANCE
 distros = [ALL_DISTROS]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_install_hotplug",
     "name": "Install Hotplug",
     "title": "Install hotplug if supported and enabled",
diff --git a/cloudinit/config/cc_keyboard.py b/cloudinit/config/cc_keyboard.py
index 17eb9a54..98ef326a 100644
--- a/cloudinit/config/cc_keyboard.py
+++ b/cloudinit/config/cc_keyboard.py
@@ -10,7 +10,11 @@ from textwrap import dedent
 
 from cloudinit import distros
 from cloudinit import log as logging
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 frequency = PER_INSTANCE
@@ -22,7 +26,7 @@ distros = distros.Distro.expand_osfamily(osfamilies)
 
 DEFAULT_KEYBOARD_MODEL = "pc105"
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_keyboard",
     "name": "Keyboard",
     "title": "Set keyboard layout",
diff --git a/cloudinit/config/cc_locale.py b/cloudinit/config/cc_locale.py
index 487f58f7..29f6a9b6 100644
--- a/cloudinit/config/cc_locale.py
+++ b/cloudinit/config/cc_locale.py
@@ -11,12 +11,16 @@ from textwrap import dedent
 
 from cloudinit import util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 frequency = PER_INSTANCE
 distros = ["all"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_locale",
     "name": "Locale",
     "title": "Set system locale",
diff --git a/cloudinit/config/cc_ntp.py b/cloudinit/config/cc_ntp.py
index a31da9bb..25bba764 100644
--- a/cloudinit/config/cc_ntp.py
+++ b/cloudinit/config/cc_ntp.py
@@ -12,7 +12,11 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import subp, temp_utils, templater, type_utils, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
@@ -148,7 +152,7 @@ DISTRO_CLIENT_CONFIG = {
 # configuration options before actually attempting to deploy with said
 # configuration.
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_ntp",
     "name": "NTP",
     "title": "enable and configure ntp",
diff --git a/cloudinit/config/cc_resizefs.py b/cloudinit/config/cc_resizefs.py
index b009c392..19b923a8 100644
--- a/cloudinit/config/cc_resizefs.py
+++ b/cloudinit/config/cc_resizefs.py
@@ -14,7 +14,11 @@ import stat
 from textwrap import dedent
 
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_ALWAYS
 
 NOBLOCK = "noblock"
@@ -22,7 +26,7 @@ NOBLOCK = "noblock"
 frequency = PER_ALWAYS
 distros = ["all"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_resizefs",
     "name": "Resizefs",
     "title": "Resize filesystem",
diff --git a/cloudinit/config/cc_runcmd.py b/cloudinit/config/cc_runcmd.py
index 15cbaf1a..c5206003 100644
--- a/cloudinit/config/cc_runcmd.py
+++ b/cloudinit/config/cc_runcmd.py
@@ -12,7 +12,11 @@ import os
 from textwrap import dedent
 
 from cloudinit import util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.distros import ALL_DISTROS
 from cloudinit.settings import PER_INSTANCE
@@ -24,7 +28,7 @@ from cloudinit.settings import PER_INSTANCE
 
 distros = [ALL_DISTROS]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_runcmd",
     "name": "Runcmd",
     "title": "Run arbitrary commands",
@@ -32,10 +36,13 @@ meta = {
         """\
         Run arbitrary commands at a rc.local like level with output to the
         console. Each item can be either a list or a string. If the item is a
-        list, it will be properly executed as if passed to ``execve()`` (with
-        the first arg as the command). If the item is a string, it will be
-        written to a file and interpreted
-        using ``sh``.
+        list, it will be properly quoted. Each item is written to
+        ``/var/lib/cloud/instance/runcmd`` to be later interpreted using
+        ``sh``.
+
+        Note that the ``runcmd`` module only writes the script to be run
+        later. The module that actually runs the script is ``scripts-user``
+        in the :ref:`Final` boot stage.
 
         .. note::
diff --git a/cloudinit/config/cc_snap.py b/cloudinit/config/cc_snap.py
index 9c38046c..9f343df0 100644
--- a/cloudinit/config/cc_snap.py
+++ b/cloudinit/config/cc_snap.py
@@ -9,7 +9,11 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 from cloudinit.subp import prepend_base_command
@@ -18,7 +22,7 @@ frequency = PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_snap",
     "name": "Snap",
     "title": "Install, configure and manage snapd and snap packages",
diff --git a/cloudinit/config/cc_ubuntu_advantage.py b/cloudinit/config/cc_ubuntu_advantage.py
index 9239f7de..e469bb22 100644
--- a/cloudinit/config/cc_ubuntu_advantage.py
+++ b/cloudinit/config/cc_ubuntu_advantage.py
@@ -6,14 +6,18 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import subp, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 UA_URL = "https://ubuntu.com/advantage"
 
 distros = ["ubuntu"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_ubuntu_advantage",
     "name": "Ubuntu Advantage",
     "title": "Configure Ubuntu Advantage support services",
diff --git a/cloudinit/config/cc_ubuntu_drivers.py b/cloudinit/config/cc_ubuntu_drivers.py
index 6c8494c8..44a3bdb4 100644
--- a/cloudinit/config/cc_ubuntu_drivers.py
+++ b/cloudinit/config/cc_ubuntu_drivers.py
@@ -7,14 +7,18 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import subp, temp_utils, type_utils, util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 LOG = logging.getLogger(__name__)
 
 frequency = PER_INSTANCE
 distros = ["ubuntu"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_ubuntu_drivers",
     "name": "Ubuntu Drivers",
     "title": "Interact with third party drivers in Ubuntu.",
diff --git a/cloudinit/config/cc_write_files.py b/cloudinit/config/cc_write_files.py
index 2c580328..37dae392 100644
--- a/cloudinit/config/cc_write_files.py
+++ b/cloudinit/config/cc_write_files.py
@@ -12,7 +12,11 @@ from textwrap import dedent
 
 from cloudinit import log as logging
 from cloudinit import util
-from cloudinit.config.schema import get_meta_doc, validate_cloudconfig_schema
+from cloudinit.config.schema import (
+    MetaSchema,
+    get_meta_doc,
+    validate_cloudconfig_schema,
+)
 from cloudinit.settings import PER_INSTANCE
 
 frequency = PER_INSTANCE
@@ -43,7 +47,7 @@ supported_encoding_types = [
     "base64",
 ]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_write_files",
     "name": "Write Files",
     "title": "write arbitrary files",
diff --git a/cloudinit/config/cc_zypper_add_repo.py b/cloudinit/config/cc_zypper_add_repo.py
index 41605b97..be444cce 100644
--- a/cloudinit/config/cc_zypper_add_repo.py
+++ b/cloudinit/config/cc_zypper_add_repo.py
@@ -12,12 +12,12 @@ import configobj
 
 from cloudinit import log as logging
 from cloudinit import util
-from cloudinit.config.schema import get_meta_doc
+from cloudinit.config.schema import MetaSchema, get_meta_doc
 from cloudinit.settings import PER_ALWAYS
 
 distros = ["opensuse", "sles"]
 
-meta = {
+meta: MetaSchema = {
     "id": "cc_zypper_add_repo",
     "name": "ZypperAddRepo",
     "title": "Configure zypper behavior and add zypper repositories",
diff --git a/cloudinit/distros/__init__.py b/cloudinit/distros/__init__.py
index a261c16e..76acd6a3 100755
--- a/cloudinit/distros/__init__.py
+++ b/cloudinit/distros/__init__.py
@@ -16,7 +16,7 @@ import stat
 import string
 import urllib.parse
 from io import StringIO
-from typing import Any, Mapping  # noqa: F401
+from typing import Any, Mapping, Type
 
 from cloudinit import importer
 from cloudinit import log as logging
@@ -26,7 +26,7 @@ from cloudinit.features import ALLOW_EC2_MIRRORS_ON_NON_AWS_INSTANCE_TYPES
 from cloudinit.net import activators, eni, network_state, renderers
 from cloudinit.net.network_state import parse_net_config_data
 
-from .networking import LinuxNetworking
+from .networking import LinuxNetworking, Networking
 
 # Used when a cloud-config module can be run on all cloud-init distibutions.
 # The value 'all' is surfaced in module documentation for distro support.
@@ -76,9 +76,9 @@ class Distro(persistence.CloudInitPickleMixin, metaclass=abc.ABCMeta):
     hostname_conf_fn = "/etc/hostname"
     tz_zone_dir = "/usr/share/zoneinfo"
     init_cmd = ["service"]  # systemctl, service etc
-    renderer_configs = {}  # type: Mapping[str, Mapping[str, Any]]
+    renderer_configs: Mapping[str, Mapping[str, Any]] = {}
     _preferred_ntp_clients = None
-    networking_cls = LinuxNetworking
+    networking_cls: Type[Networking] = LinuxNetworking
     # This is used by self.shutdown_command(), and can be overridden in
     # subclasses
     shutdown_options_map = {"halt": "-H", "poweroff": "-P", "reboot": "-r"}
@@ -91,7 +91,7 @@ class Distro(persistence.CloudInitPickleMixin, metaclass=abc.ABCMeta):
         self._paths = paths
         self._cfg = cfg
         self.name = name
-        self.networking = self.networking_cls()
+        self.networking: Networking = self.networking_cls()
 
     def _unpickle(self, ci_pkl_version: int) -> None:
         """Perform deserialization fixes for Distro."""
diff --git a/cloudinit/distros/networking.py b/cloudinit/distros/networking.py
index e18a48ca..b24b6233 100644
--- a/cloudinit/distros/networking.py
+++ b/cloudinit/distros/networking.py
@@ -1,6 +1,7 @@
 import abc
 import logging
 import os
+from typing import List, Optional
 
 from cloudinit import net, subp, util
@@ -22,7 +23,7 @@ class Networking(metaclass=abc.ABCMeta):
     """
 
     def __init__(self):
-        self.blacklist_drivers = None
+        self.blacklist_drivers: Optional[List[str]] = None
 
     def _get_current_rename_info(self) -> dict:
         return net._get_current_rename_info()
diff --git a/cloudinit/importer.py b/cloudinit/importer.py
index f84ff4da..2bc210dd 100644
--- a/cloudinit/importer.py
+++ b/cloudinit/importer.py
@@ -12,21 +12,19 @@ import sys
 import typing
 
 # annotations add value for development, but don't break old versions
-# pyver: 3.5 -> 3.8
+# pyver: 3.6 -> 3.8
 # pylint: disable=E1101
-if sys.version_info >= (3, 8) and hasattr(typing, "TypeDict"):
-    MetaSchema = typing.TypedDict(
-        "MetaSchema",
-        {
-            "name": str,
-            "id": str,
-            "title": str,
-            "description": str,
-            "distros": typing.List[str],
-            "examples": typing.List[str],
-            "frequency": str,
-        },
-    )
+if sys.version_info >= (3, 8):
+
+    class MetaSchema(typing.TypedDict):
+        name: str
+        id: str
+        title: str
+        description: str
+        distros: typing.List[str]
+        examples: typing.List[str]
+        frequency: str
+
 else:
     MetaSchema = dict
 # pylint: enable=E1101
diff --git a/cloudinit/sources/DataSourceAzure.py b/cloudinit/sources/DataSourceAzure.py
index f4be4cda..359dfbde 100755
--- a/cloudinit/sources/DataSourceAzure.py
+++ b/cloudinit/sources/DataSourceAzure.py
@@ -13,7 +13,7 @@ import re
 import xml.etree.ElementTree as ET
 from enum import Enum
 from time import sleep, time
-from typing import List, Optional
+from typing import Any, Dict, List, Optional
 from xml.dom import minidom
 
 import requests
@@ -83,7 +83,7 @@ class PPSType(Enum):
     UNKNOWN = "Unknown"
 
-PLATFORM_ENTROPY_SOURCE = "/sys/firmware/acpi/tables/OEM0"
+PLATFORM_ENTROPY_SOURCE: Optional[str] = "/sys/firmware/acpi/tables/OEM0"
 
 # List of static scripts and network config artifacts created by
 # stock ubuntu suported images.
@@ -155,7 +155,7 @@ def find_busdev_from_disk(camcontrol_out, disk_drv):
     return None
 
-def find_dev_from_busdev(camcontrol_out, busdev):
+def find_dev_from_busdev(camcontrol_out: str, busdev: str) -> Optional[str]:
     # find the daX from 'camcontrol devlist' output
     # if busdev matches the specified value, i.e. 'scbus2'
     """
@@ -172,9 +172,29 @@ def find_dev_from_busdev(camcontrol_out, busdev):
     return None
 
-def execute_or_debug(cmd, fail_ret=None):
+def normalize_mac_address(mac: str) -> str:
+    """Normalize mac address with colons and lower-case."""
+    if len(mac) == 12:
+        mac = ":".join(
+            [mac[0:2], mac[2:4], mac[4:6], mac[6:8], mac[8:10], mac[10:12]]
+        )
+
+    return mac.lower()
+
+
+@azure_ds_telemetry_reporter
+def get_hv_netvsc_macs_normalized() -> List[str]:
+    """Get Hyper-V NICs as normalized MAC addresses."""
+    return [
+        normalize_mac_address(n[1])
+        for n in net.get_interfaces()
+        if n[2] == "hv_netvsc"
+    ]
+
+
+def execute_or_debug(cmd, fail_ret=None) -> str:
     try:
-        return subp.subp(cmd)[0]
+        return subp.subp(cmd)[0]  # type: ignore
     except subp.ProcessExecutionError:
         LOG.debug("Failed to execute: %s", " ".join(cmd))
         return fail_ret
@@ -192,7 +212,7 @@ def get_camcontrol_dev():
     return execute_or_debug(["camcontrol", "devlist"])
 
-def get_resource_disk_on_freebsd(port_id):
+def get_resource_disk_on_freebsd(port_id) -> Optional[str]:
     g0 = "00000000"
     if port_id > 1:
         g0 = "00000001"
@@ -297,17 +317,16 @@ class DataSourceAzure(sources.DataSource):
             [util.get_cfg_by_path(sys_cfg, DS_CFG_PATH, {}), BUILTIN_DS_CONFIG]
         )
         self.dhclient_lease_file = self.ds_cfg.get("dhclient_lease_file")
+        self._iso_dev = None
         self._network_config = None
         self._ephemeral_dhcp_ctx = None
         self._wireserver_endpoint = DEFAULT_WIRESERVER_ENDPOINT
-        self.iso_dev = None
 
     def _unpickle(self, ci_pkl_version: int) -> None:
         super()._unpickle(ci_pkl_version)
         self._ephemeral_dhcp_ctx = None
-        if not hasattr(self, "iso_dev"):
-            self.iso_dev = None
+        self._iso_dev = None
         self._wireserver_endpoint = DEFAULT_WIRESERVER_ENDPOINT
 
     def __str__(self):
@@ -316,7 +335,9 @@ class DataSourceAzure(sources.DataSource):
 
     def _get_subplatform(self):
         """Return the subplatform metadata source details."""
-        if self.seed.startswith("/dev"):
+        if self.seed is None:
+            subplatform_type = "unknown"
+        elif self.seed.startswith("/dev"):
             subplatform_type = "config-disk"
         elif self.seed.lower() == "imds":
             subplatform_type = "imds"
@@ -419,7 +440,6 @@ class DataSourceAzure(sources.DataSource):
         cfg = {}
         files = {}
 
-        iso_dev = None
         if os.path.isfile(REPROVISION_MARKER_FILE):
             metadata_source = "IMDS"
             report_diagnostic_event(
@@ -440,7 +460,7 @@ class DataSourceAzure(sources.DataSource):
                         src, load_azure_ds_dir
                     )
                     # save the device for ejection later
-                    iso_dev = src
+                    self._iso_dev = src
                 else:
                     md, userdata_raw, cfg, files = load_azure_ds_dir(src)
                 ovf_is_accessible = True
@@ -475,7 +495,7 @@ class DataSourceAzure(sources.DataSource):
         # not have UDF support. In either case, require IMDS metadata.
         # If we require IMDS metadata, try harder to obtain networking, waiting
        # for at least 20 minutes. Otherwise only wait 5 minutes.
-        requires_imds_metadata = bool(iso_dev) or not ovf_is_accessible
+        requires_imds_metadata = bool(self._iso_dev) or not ovf_is_accessible
         timeout_minutes = 5 if requires_imds_metadata else 20
         try:
            self._setup_ephemeral_networking(timeout_minutes=timeout_minutes)
@@ -492,8 +512,6 @@ class DataSourceAzure(sources.DataSource):
             report_diagnostic_event(msg)
             raise sources.InvalidMetaDataException(msg)
 
-        self.iso_dev = iso_dev
-
         # Refresh PPS type using metadata.
         pps_type = self._determine_pps_type(cfg, imds_md)
         if pps_type != PPSType.NONE:
@@ -511,6 +529,9 @@ class DataSourceAzure(sources.DataSource):
             # fetch metadata again as it has changed after reprovisioning
             imds_md = self.get_imds_data_with_api_fallback(retries=10)
 
+        # Report errors if IMDS network configuration is missing data.
+        self.validate_imds_network_metadata(imds_md=imds_md)
+
         self.seed = metadata_source
         crawled_data.update(
             {
@@ -541,9 +562,9 @@ class DataSourceAzure(sources.DataSource):
         if metadata_source == "IMDS" and not crawled_data["files"]:
             try:
                 contents = build_minimal_ovf(
-                    username=imds_username,
-                    hostname=imds_hostname,
-                    disableSshPwd=imds_disable_password,
+                    username=imds_username,  # type: ignore
+                    hostname=imds_hostname,  # type: ignore
+                    disableSshPwd=imds_disable_password,  # type: ignore
                 )
                 crawled_data["files"] = {"ovf-env.xml": contents}
             except Exception as e:
@@ -587,9 +608,23 @@ class DataSourceAzure(sources.DataSource):
         crawled_data["metadata"]["random_seed"] = seed
         crawled_data["metadata"]["instance-id"] = self._iid()
 
-        if pps_type != PPSType.NONE:
-            LOG.info("Reporting ready to Azure after getting ReprovisionData")
-            self._report_ready()
+        if self._negotiated is False and self._is_ephemeral_networking_up():
+            # Report ready and fetch public-keys from Wireserver, if required.
+            pubkey_info = self._determine_wireserver_pubkey_info(
+                cfg=cfg, imds_md=imds_md
+            )
+            try:
+                ssh_keys = self._report_ready(pubkey_info=pubkey_info)
+            except Exception:
+                # Failed to report ready, but continue with best effort.
+                pass
+            else:
+                LOG.debug("negotiating returned %s", ssh_keys)
+                if ssh_keys:
+                    crawled_data["metadata"]["public-keys"] = ssh_keys
+
+        self._cleanup_markers()
+        self._negotiated = True
 
         return crawled_data
 
@@ -803,7 +838,11 @@ class DataSourceAzure(sources.DataSource):
         # We don't want Azure to react to an UPPER/lower difference as a new
         # instance id as it rewrites SSH host keys.
# LP: #1835584 - iid = dmi.read_dmi_data("system-uuid").lower() + system_uuid = dmi.read_dmi_data("system-uuid") + if system_uuid is None: + raise RuntimeError("failed to read system-uuid") + + iid = system_uuid.lower() if os.path.exists(prev_iid_path): previous = util.load_file(prev_iid_path).strip() if previous.lower() == iid: @@ -815,24 +854,6 @@ class DataSourceAzure(sources.DataSource): return iid @azure_ds_telemetry_reporter - def setup(self, is_new_instance): - if self._negotiated is False: - LOG.debug( - "negotiating for %s (new_instance=%s)", - self.get_instance_id(), - is_new_instance, - ) - ssh_keys = self._negotiate() - LOG.debug("negotiating returned %s", ssh_keys) - if ssh_keys: - self.metadata["public-keys"] = ssh_keys - self._negotiated = True - else: - LOG.debug( - "negotiating already done for %s", self.get_instance_id() - ) - - @azure_ds_telemetry_reporter def _wait_for_nic_detach(self, nl_sock): """Use the netlink socket provided to wait for nic detach event. NOTE: The function doesn't close the socket. 
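The `_iid` hunk above now guards against `dmi.read_dmi_data()` returning None, and keeps the previously recorded casing when the SMBIOS UUID differs only by case (LP: #1835584). A reduced sketch of that comparison — `stable_instance_id` is a hypothetical helper, not cloud-init's API:

```python
from typing import Optional


def stable_instance_id(system_uuid: Optional[str], previous: Optional[str]) -> str:
    """Return an instance id that ignores UPPER/lower churn in system-uuid.

    Mirrors the guard in the hunk above: a missing system-uuid is a hard
    error, and if the stored id differs only by case, the stored spelling
    wins so consumers do not treat the boot as a new instance (which
    would, e.g., rewrite SSH host keys).
    """
    if system_uuid is None:
        raise RuntimeError("failed to read system-uuid")
    iid = system_uuid.lower()
    if previous is not None and previous.lower() == iid:
        return previous  # keep the previously recorded casing
    return iid


print(stable_instance_id("ABC-123", "abc-123"))  # abc-123
```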
The caller owns closing @@ -866,7 +887,7 @@ class DataSourceAzure(sources.DataSource): path, "{pid}: {time}\n".format(pid=os.getpid(), time=time()) ) except AssertionError as error: - report_diagnostic_event(error, logger_func=LOG.error) + report_diagnostic_event(str(error), logger_func=LOG.error) raise @azure_ds_telemetry_reporter @@ -888,10 +909,10 @@ class DataSourceAzure(sources.DataSource): attempts = 0 LOG.info("Unbinding and binding the interface %s", ifname) while True: - - devicename = net.read_sys_net(ifname, "device/device_id").strip( - "{}" - ) + device_id = net.read_sys_net(ifname, "device/device_id") + if device_id is False or not isinstance(device_id, str): + raise RuntimeError("Unable to read device ID: %s" % device_id) + devicename = device_id.strip("{}") util.write_file( "/sys/bus/vmbus/drivers/hv_netvsc/unbind", devicename ) @@ -954,11 +975,12 @@ class DataSourceAzure(sources.DataSource): :raises sources.InvalidMetaDataException: On error reporting ready. """ - report_ready_succeeded = self._report_ready() - if not report_ready_succeeded: + try: + self._report_ready() + except Exception as error: msg = "Failed reporting ready while in the preprovisioning pool." 
report_diagnostic_event(msg, logger_func=LOG.error) - raise sources.InvalidMetaDataException(msg) + raise sources.InvalidMetaDataException(msg) from error self._create_report_ready_marker() @@ -1118,7 +1140,7 @@ class DataSourceAzure(sources.DataSource): break except AssertionError as error: - report_diagnostic_event(error, logger_func=LOG.error) + report_diagnostic_event(str(error), logger_func=LOG.error) @azure_ds_telemetry_reporter def _wait_for_all_nics_ready(self): @@ -1168,7 +1190,7 @@ class DataSourceAzure(sources.DataSource): logger_func=LOG.info, ) except netlink.NetlinkCreateSocketError as e: - report_diagnostic_event(e, logger_func=LOG.warning) + report_diagnostic_event(str(e), logger_func=LOG.warning) raise finally: if nl_sock: @@ -1234,6 +1256,11 @@ class DataSourceAzure(sources.DataSource): self._setup_ephemeral_networking(timeout_minutes=20) try: + if ( + self._ephemeral_dhcp_ctx is None + or self._ephemeral_dhcp_ctx.iface is None + ): + raise RuntimeError("Missing ephemeral context") iface = self._ephemeral_dhcp_ctx.iface nl_sock = netlink.create_bound_netlink_socket() @@ -1366,25 +1393,36 @@ class DataSourceAzure(sources.DataSource): return False - def _report_ready(self) -> bool: + @azure_ds_telemetry_reporter + def _report_ready( + self, *, pubkey_info: Optional[List[str]] = None + ) -> Optional[List[str]]: """Tells the fabric provisioning has completed. - @return: The success status of sending the ready signal. + :param pubkey_info: Fingerprints of keys to request from Wireserver. + + :raises Exception: if failed to report. + + :returns: List of SSH keys, if requested. 
""" try: - get_metadata_from_fabric( + data = get_metadata_from_fabric( fallback_lease_file=None, dhcp_opts=self._wireserver_endpoint, - iso_dev=self.iso_dev, + iso_dev=self._iso_dev, + pubkey_info=pubkey_info, ) - return True except Exception as e: report_diagnostic_event( "Error communicating with Azure fabric; You may experience " "connectivity issues: %s" % e, logger_func=LOG.warning, ) - return False + raise + + # Reporting ready ejected OVF media, no need to do so again. + self._iso_dev = None + return data def _ppstype_from_imds(self, imds_md: dict) -> Optional[str]: try: @@ -1430,6 +1468,7 @@ class DataSourceAzure(sources.DataSource): "{pid}: {time}\n".format(pid=os.getpid(), time=time()), ) + @azure_ds_telemetry_reporter def _reprovision(self): """Initiate the reprovisioning workflow. @@ -1445,40 +1484,29 @@ class DataSourceAzure(sources.DataSource): return (md, ud, cfg, {"ovf-env.xml": contents}) @azure_ds_telemetry_reporter - def _negotiate(self): - """Negotiate with fabric and return data from it. + def _determine_wireserver_pubkey_info( + self, *, cfg: dict, imds_md: dict + ) -> Optional[List[str]]: + """Determine the fingerprints we need to retrieve from Wireserver. - On success, returns a dictionary including 'public_keys'. - On failure, returns False. + :return: List of keys to request from Wireserver, if any, else None. 
""" - pubkey_info = None + pubkey_info: Optional[List[str]] = None try: - self._get_public_keys_from_imds(self.metadata["imds"]) + self._get_public_keys_from_imds(imds_md) except (KeyError, ValueError): - pubkey_info = self.cfg.get("_pubkeys", None) + pubkey_info = cfg.get("_pubkeys", None) log_msg = "Retrieved {} fingerprints from OVF".format( len(pubkey_info) if pubkey_info is not None else 0 ) report_diagnostic_event(log_msg, logger_func=LOG.debug) + return pubkey_info - LOG.debug("negotiating with fabric") - try: - ssh_keys = get_metadata_from_fabric( - fallback_lease_file=self.dhclient_lease_file, - pubkey_info=pubkey_info, - ) - except Exception as e: - report_diagnostic_event( - "Error communicating with Azure fabric; You may experience " - "connectivity issues: %s" % e, - logger_func=LOG.warning, - ) - return False - + def _cleanup_markers(self): + """Cleanup any marker files.""" util.del_file(REPORTED_READY_MARKER_FILE) util.del_file(REPROVISION_MARKER_FILE) util.del_file(REPROVISION_NIC_DETACHED_MARKER_FILE) - return ssh_keys @azure_ds_telemetry_reporter def activate(self, cfg, is_new_instance): @@ -1521,6 +1549,54 @@ class DataSourceAzure(sources.DataSource): def region(self): return self.metadata.get("imds", {}).get("compute", {}).get("location") + @azure_ds_telemetry_reporter + def validate_imds_network_metadata(self, imds_md: dict) -> bool: + """Validate IMDS network config and report telemetry for errors.""" + local_macs = get_hv_netvsc_macs_normalized() + + try: + network_config = imds_md["network"] + imds_macs = [ + normalize_mac_address(i["macAddress"]) + for i in network_config["interface"] + ] + except KeyError: + report_diagnostic_event( + "IMDS network metadata has incomplete configuration: %r" + % imds_md.get("network"), + logger_func=LOG.warning, + ) + return False + + missing_macs = [m for m in local_macs if m not in imds_macs] + if not missing_macs: + return True + + report_diagnostic_event( + "IMDS network metadata is missing 
configuration for NICs %r: %r" + % (missing_macs, network_config), + logger_func=LOG.warning, + ) + + if not self._ephemeral_dhcp_ctx or not self._ephemeral_dhcp_ctx.iface: + # No primary interface to check against. + return False + + primary_mac = net.get_interface_mac(self._ephemeral_dhcp_ctx.iface) + if not primary_mac or not isinstance(primary_mac, str): + # Unexpected data for primary interface. + return False + + primary_mac = normalize_mac_address(primary_mac) + if primary_mac in missing_macs: + report_diagnostic_event( + "IMDS network metadata is missing primary NIC %r: %r" + % (primary_mac, network_config), + logger_func=LOG.warning, + ) + + return False + def _username_from_imds(imds_data): try: @@ -1873,7 +1949,7 @@ def read_azure_ovf(contents): raise BrokenAzureDataSource("no child nodes of configuration set") md_props = "seedfrom" - md = {"azure_data": {}} + md: Dict[str, Any] = {"azure_data": {}} cfg = {} ud = "" password = None @@ -2084,9 +2160,7 @@ def _get_random_seed(source=PLATFORM_ENTROPY_SOURCE): # string. Same number of bits of entropy, just with 25% more zeroes. # There's no need to undo this base64-encoding when the random seed is # actually used in cc_seed_random.py. - seed = base64.b64encode(seed).decode() - - return seed + return base64.b64encode(seed).decode() # type: ignore @azure_ds_telemetry_reporter @@ -2151,7 +2225,7 @@ def _generate_network_config_from_imds_metadata(imds_metadata) -> dict: @param: imds_metadata: Dict of content read from IMDS network service. @return: Dictionary containing network version 2 standard configuration. """ - netconfig = {"version": 2, "ethernets": {}} + netconfig: Dict[str, Any] = {"version": 2, "ethernets": {}} network_metadata = imds_metadata["network"] for idx, intf in enumerate(network_metadata["interface"]): has_ip_address = False @@ -2160,7 +2234,7 @@ def _generate_network_config_from_imds_metadata(imds_metadata) -> dict: # addresses. 
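At its core, `validate_imds_network_metadata()` above is a set-membership check between the locally visible NIC MACs and the MACs IMDS reports. A stripped-down sketch (the `macAddress` field name follows the hunk; simple lower-casing stands in for full `normalize_mac_address()` handling):

```python
def missing_nic_macs(local_macs, imds_interfaces):
    """Return local NIC MACs absent from IMDS network metadata.

    A reduced sketch of the validation in the hunk above; both sides are
    assumed colon-delimited, with case differences smoothed by lower().
    """
    imds_macs = {i["macAddress"].lower() for i in imds_interfaces}
    return [m for m in local_macs if m not in imds_macs]


local = ["00:11:22:aa:bb:cc", "00:11:22:dd:ee:ff"]
imds = [{"macAddress": "00:11:22:AA:BB:CC"}]
print(missing_nic_macs(local, imds))  # ['00:11:22:dd:ee:ff']
```

The real method goes one step further: a missing MAC is only treated as severe (logged as a missing primary NIC) when it belongs to the interface carrying the ephemeral DHCP lease.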
nicname = "eth{idx}".format(idx=idx) dhcp_override = {"route-metric": (idx + 1) * 100} - dev_config = { + dev_config: Dict[str, Any] = { "dhcp4": True, "dhcp4-overrides": dhcp_override, "dhcp6": False, @@ -2170,6 +2244,7 @@ def _generate_network_config_from_imds_metadata(imds_metadata) -> dict: # If there are no available IP addresses, then we don't # want to add this interface to the generated config. if not addresses: + LOG.debug("No %s addresses found for: %r", addr_type, intf) continue has_ip_address = True if addr_type == "ipv4": @@ -2194,7 +2269,7 @@ def _generate_network_config_from_imds_metadata(imds_metadata) -> dict: "{ip}/{prefix}".format(ip=privateIp, prefix=netPrefix) ) if dev_config and has_ip_address: - mac = ":".join(re.findall(r"..", intf["macAddress"])) + mac = normalize_mac_address(intf["macAddress"]) dev_config.update( {"match": {"macaddress": mac.lower()}, "set-name": nicname} ) @@ -2205,6 +2280,14 @@ def _generate_network_config_from_imds_metadata(imds_metadata) -> dict: if driver and driver == "hv_netvsc": dev_config["match"]["driver"] = driver netconfig["ethernets"][nicname] = dev_config + continue + + LOG.debug( + "No configuration for: %s (dev_config=%r) (has_ip_address=%r)", + nicname, + dev_config, + has_ip_address, + ) return netconfig @@ -2214,9 +2297,12 @@ def _generate_network_config_from_fallback_config() -> dict: @return: Dictionary containing network version 2 standard configuration. 
""" - return net.generate_fallback_config( + cfg = net.generate_fallback_config( blacklist_drivers=BLACKLIST_DRIVERS, config_driver=True ) + if cfg is None: + return {} + return cfg @azure_ds_telemetry_reporter diff --git a/cloudinit/sources/helpers/azure.py b/cloudinit/sources/helpers/azure.py index ec6ab80c..d07dc3c0 100755 --- a/cloudinit/sources/helpers/azure.py +++ b/cloudinit/sources/helpers/azure.py @@ -334,12 +334,10 @@ def _get_dhcp_endpoint_option_name(): @azure_ds_telemetry_reporter -def http_with_retries(url, **kwargs) -> str: +def http_with_retries(url, **kwargs) -> url_helper.UrlResponse: """Wrapper around url_helper.readurl() with custom telemetry logging that url_helper.readurl() does not provide. """ - exc = None - max_readurl_attempts = 240 default_readurl_timeout = 5 sleep_duration_between_retries = 5 @@ -374,16 +372,18 @@ def http_with_retries(url, **kwargs) -> str: return ret except Exception as e: - exc = e if attempt % periodic_logging_attempts == 0: report_diagnostic_event( "Failed HTTP request with Azure endpoint %s during " "attempt %d with exception: %s" % (url, attempt, e), logger_func=LOG.debug, ) - time.sleep(sleep_duration_between_retries) + if attempt == max_readurl_attempts: + raise + + time.sleep(sleep_duration_between_retries) - raise exc + raise RuntimeError("Failed to return in http_with_retries") def build_minimal_ovf( @@ -433,14 +433,16 @@ class AzureEndpointHttpClient: "x-ms-guest-agent-public-x509-cert": certificate, } - def get(self, url, secure=False): + def get(self, url, secure=False) -> url_helper.UrlResponse: headers = self.headers if secure: headers = self.headers.copy() headers.update(self.extra_secure_headers) return http_with_retries(url, headers=headers) - def post(self, url, data=None, extra_headers=None): + def post( + self, url, data=None, extra_headers=None + ) -> url_helper.UrlResponse: headers = self.headers if extra_headers is not None: headers = self.headers.copy() diff --git 
a/cloudinit/sources/helpers/vultr.py b/cloudinit/sources/helpers/vultr.py index 190a5640..88a21034 100644 --- a/cloudinit/sources/helpers/vultr.py +++ b/cloudinit/sources/helpers/vultr.py @@ -7,7 +7,7 @@ from functools import lru_cache from cloudinit import dmi from cloudinit import log as log -from cloudinit import net, netinfo, subp, url_helper, util +from cloudinit import net, subp, url_helper, util from cloudinit.net.dhcp import EphemeralDHCPv4, NoDHCPLeaseError # Get LOG @@ -30,9 +30,6 @@ def get_metadata(url, timeout, retries, sec_between, agent): with EphemeralDHCPv4( iface=iface[0], connectivity_url_data={"url": url} ): - # Set metadata route - set_route(iface[0]) - # Fetch the metadata v1 = read_metadata(url, timeout, retries, sec_between, agent) @@ -43,49 +40,6 @@ def get_metadata(url, timeout, retries, sec_between, agent): raise exception -# Set route for metadata -def set_route(iface): - # Get routes, confirm entry does not exist - routes = netinfo.route_info() - - # If no tools exist and empty dict is returned - if "ipv4" not in routes: - return - - # We only care about IPv4 - routes = routes["ipv4"] - - # Searchable list - dests = [] - - # Parse each route into a more searchable format - for route in routes: - dests.append(route["destination"]) - - gw_present = "100.64.0.0" in dests or "100.64.0.0/10" in dests - dest_present = "169.254.169.254" in dests - - # If not IPv6 only (No link local) - # or the route is already present - if not gw_present or dest_present: - return - - # Set metadata route - if subp.which("ip"): - subp.subp( - [ - "ip", - "route", - "add", - "169.254.169.254/32", - "dev", - iface, - ] - ) - elif subp.which("route"): - subp.subp(["route", "add", "-net", "169.254.169.254/32", "100.64.0.1"]) - - # Read the system information from SMBIOS def get_sysinfo(): return { @@ -165,19 +119,18 @@ def generate_network_config(interfaces): # Prepare interface 0, public if len(interfaces) > 0: - public = 
generate_public_network_interface(interfaces[0]) + public = generate_interface(interfaces[0], primary=True) network["config"].append(public) # Prepare additional interfaces, private for i in range(1, len(interfaces)): - private = generate_private_network_interface(interfaces[i]) + private = generate_interface(interfaces[i]) network["config"].append(private) return network -# Input Metadata and generate public network config part -def generate_public_network_interface(interface): +def generate_interface(interface, primary=False): interface_name = get_interface_name(interface["mac"]) if not interface_name: raise RuntimeError( @@ -188,13 +141,33 @@ def generate_public_network_interface(interface): "name": interface_name, "type": "physical", "mac_address": interface["mac"], - "accept-ra": 1, - "subnets": [ + } + + if primary: + netcfg["accept-ra"] = 1 + netcfg["subnets"] = [ {"type": "dhcp", "control": "auto"}, {"type": "ipv6_slaac", "control": "auto"}, - ], - } + ] + + if not primary: + netcfg["subnets"] = [ + { + "type": "static", + "control": "auto", + "address": interface["ipv4"]["address"], + "netmask": interface["ipv4"]["netmask"], + } + ] + generate_interface_routes(interface, netcfg) + generate_interface_additional_addresses(interface, netcfg) + + # Add config to template + return netcfg + + +def generate_interface_routes(interface, netcfg): # Options that may or may not be used if "mtu" in interface: netcfg["mtu"] = interface["mtu"] @@ -205,6 +178,8 @@ def generate_public_network_interface(interface): if "routes" in interface: netcfg["subnets"][0]["routes"] = interface["routes"] + +def generate_interface_additional_addresses(interface, netcfg): # Check for additional IP's additional_count = len(interface["ipv4"]["additional"]) if "ipv4" in interface and additional_count > 0: @@ -228,8 +203,8 @@ def generate_public_network_interface(interface): add = { "type": "static6", "control": "auto", - "address": additional["address"], - "netmask": additional["netmask"], 
+ "address": "%s/%s" + % (additional["network"], additional["prefix"]), } if "routes" in additional: @@ -237,44 +212,6 @@ def generate_public_network_interface(interface): netcfg["subnets"].append(add) - # Add config to template - return netcfg - - -# Input Metadata and generate private network config part -def generate_private_network_interface(interface): - interface_name = get_interface_name(interface["mac"]) - if not interface_name: - raise RuntimeError( - "Interface: %s could not be found on the system" % interface["mac"] - ) - - netcfg = { - "name": interface_name, - "type": "physical", - "mac_address": interface["mac"], - "subnets": [ - { - "type": "static", - "control": "auto", - "address": interface["ipv4"]["address"], - "netmask": interface["ipv4"]["netmask"], - } - ], - } - - # Options that may or may not be used - if "mtu" in interface: - netcfg["mtu"] = interface["mtu"] - - if "accept-ra" in interface: - netcfg["accept-ra"] = interface["accept-ra"] - - if "routes" in interface: - netcfg["subnets"][0]["routes"] = interface["routes"] - - return netcfg - # Make required adjustments to the network configs provided def add_interface_names(interfaces): diff --git a/cloudinit/url_helper.py b/cloudinit/url_helper.py index 790e2fbf..5b2f2ef9 100644 --- a/cloudinit/url_helper.py +++ b/cloudinit/url_helper.py @@ -188,7 +188,7 @@ def readurl( infinite=False, log_req_resp=True, request_method=None, -): +) -> UrlResponse: """Wrapper around requests.Session to read the url and retry if necessary :param url: Mandatory url to request. @@ -339,9 +339,8 @@ def readurl( sec_between, ) time.sleep(sec_between) - if excps: - raise excps[-1] - return None # Should throw before this... + + raise excps[-1] def wait_for_url( diff --git a/cloudinit/version.py b/cloudinit/version.py index 6ad90bd3..fa51cb9e 100644 --- a/cloudinit/version.py +++ b/cloudinit/version.py @@ -4,7 +4,7 @@ # # This file is part of cloud-init. See LICENSE file for license information. 
-__VERSION__ = "21.4" +__VERSION__ = "22.1" _PACKAGED_VERSION = "@@PACKAGED_VERSION@@" FEATURES = [ diff --git a/debian/changelog b/debian/changelog index bd3e4081..d7f7f572 100644 --- a/debian/changelog +++ b/debian/changelog @@ -1,3 +1,24 @@ +cloud-init (22.1-1-gb3d9acdd-0ubuntu1~22.04.1) jammy; urgency=medium + + * New upstream snapshot. + - integration tests: fix Azure failures (#1269) + - Release 22.1 (#1267) (LP: #1960939) + - sources/azure: report ready in local phase (#1265) [Chris Patterson] + - sources/azure: validate IMDS network configuration metadata (#1257) + [Chris Patterson] + - docs: Add more details to runcmd docs (#1266) + - use PEP 589 syntax for TypeDict (#1253) + - mypy: introduce type checking (#1254) [Chris Patterson] + - Fix extra ipv6 issues, code reduction and simplification (#1243) [eb3095] + - tests: when generating crypted password, generate in target env (#1252) + - sources/azure: address mypy/pyright typing complaints (#1245) + [Chris Patterson] + - Docs for x-shellscript* userdata (#1260) + - test_apt_security: azure platform has specific security URL overrides + (#1263) + + -- Brett Holman <brett.holman@canonical.com> Wed, 16 Feb 2022 13:39:11 -0700 + cloud-init (21.4-119-gdeb3ae82-0ubuntu1~22.04.1) jammy; urgency=medium * d/cloud-init.templates: Move LXD to back of datasource_list diff --git a/doc/rtd/topics/format.rst b/doc/rtd/topics/format.rst index 6d790b05..93ef34f0 100644 --- a/doc/rtd/topics/format.rst +++ b/doc/rtd/topics/format.rst @@ -38,6 +38,9 @@ Supported content-types are listed from the cloud-init subcommand make-mime: x-include-once-url x-include-url x-shellscript + x-shellscript-per-boot + x-shellscript-per-instance + x-shellscript-per-once Helper subcommand to generate mime messages @@ -47,13 +50,26 @@ The cloud-init subcommand can generate MIME multi-part files: `make-mime`_. ``make-mime`` subcommand takes pairs of (filename, "text/" mime subtype) separated by a colon (e.g. 
``config.yaml:cloud-config``) and emits a MIME -multipart message to stdout. An example invocation, assuming you have your -cloud config in ``config.yaml`` and a shell script in ``script.sh`` and want -to store the multipart message in ``user-data``: +multipart message to stdout. + +Examples +-------- +Create userdata containing both a cloud-config (``config.yaml``) +and a shell script (``script.sh``) + +.. code-block:: shell-session + + $ cloud-init devel make-mime -a config.yaml:cloud-config -a script.sh:x-shellscript > userdata + +Create userdata containing 3 shell scripts: + +- ``always.sh`` - Run every boot +- ``instance.sh`` - Run once per instance +- ``once.sh`` - Run once .. code-block:: shell-session - $ cloud-init devel make-mime -a config.yaml:cloud-config -a script.sh:x-shellscript > user-data + $ cloud-init devel make-mime -a always.sh:x-shellscript-per-boot -a instance.sh:x-shellscript-per-instance -a once.sh:x-shellscript-per-once .. _make-mime: https://github.com/canonical/cloud-init/blob/main/cloudinit/cmd/devel/make_mime.py diff --git a/pyproject.toml b/pyproject.toml index 26734e62..324d6f35 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -6,3 +6,97 @@ profile = "black" line_length = 79 # We patch logging in main.py before certain imports skip = ["cloudinit/cmd/main.py", ".tox", "packages", "tools"] + +[tool.mypy] +follow_imports = "silent" +exclude=[ + '^cloudinit/apport\.py$', + '^cloudinit/cmd/query\.py$', + '^cloudinit/config/cc_chef\.py$', + '^cloudinit/config/cc_keyboard\.py$', + '^cloudinit/config/cc_landscape\.py$', + '^cloudinit/config/cc_mcollective\.py$', + '^cloudinit/config/cc_rsyslog\.py$', + '^cloudinit/config/cc_write_files_deferred\.py$', + '^cloudinit/config/cc_zypper_add_repo\.py$', + '^cloudinit/config/schema\.py$', + '^cloudinit/distros/bsd\.py$', + '^cloudinit/distros/freebsd\.py$', + '^cloudinit/distros/parsers/networkmanager_conf\.py$', + '^cloudinit/distros/parsers/resolv_conf\.py$', + 
'^cloudinit/distros/parsers/sys_conf\.py$', + '^cloudinit/dmi\.py$', + '^cloudinit/features\.py$', + '^cloudinit/handlers/cloud_config\.py$', + '^cloudinit/handlers/jinja_template\.py$', + '^cloudinit/net/__init__\.py$', + '^cloudinit/net/dhcp\.py$', + '^cloudinit/net/netplan\.py$', + '^cloudinit/net/sysconfig\.py$', + '^cloudinit/serial\.py$', + '^cloudinit/sources/DataSourceAliYun\.py$', + '^cloudinit/sources/DataSourceLXD\.py$', + '^cloudinit/sources/DataSourceOracle\.py$', + '^cloudinit/sources/DataSourceScaleway\.py$', + '^cloudinit/sources/DataSourceSmartOS\.py$', + '^cloudinit/sources/DataSourceVMware\.py$', + '^cloudinit/sources/__init__\.py$', + '^cloudinit/sources/helpers/vmware/imc/config_file\.py$', + '^cloudinit/stages\.py$', + '^cloudinit/templater\.py$', + '^cloudinit/url_helper\.py$', + '^conftest\.py$', + '^doc/rtd/conf\.py$', + '^setup\.py$', + '^tests/integration_tests/clouds\.py$', + '^tests/integration_tests/conftest\.py$', + '^tests/integration_tests/instances\.py$', + '^tests/integration_tests/integration_settings\.py$', + '^tests/integration_tests/modules/test_disk_setup\.py$', + '^tests/integration_tests/modules/test_growpart\.py$', + '^tests/integration_tests/modules/test_ssh_keysfile\.py$', + '^tests/unittests/__init__\.py$', + '^tests/unittests/cmd/devel/test_render\.py$', + '^tests/unittests/cmd/test_clean\.py$', + '^tests/unittests/cmd/test_cloud_id\.py$', + '^tests/unittests/cmd/test_main\.py$', + '^tests/unittests/cmd/test_query\.py$', + '^tests/unittests/cmd/test_status\.py$', + '^tests/unittests/config/test_cc_chef\.py$', + '^tests/unittests/config/test_cc_landscape\.py$', + '^tests/unittests/config/test_cc_locale\.py$', + '^tests/unittests/config/test_cc_mcollective\.py$', + '^tests/unittests/config/test_cc_rh_subscription\.py$', + '^tests/unittests/config/test_cc_set_hostname\.py$', + '^tests/unittests/config/test_cc_snap\.py$', + '^tests/unittests/config/test_cc_timezone\.py$', + 
'^tests/unittests/config/test_cc_ubuntu_advantage\.py$', + '^tests/unittests/config/test_cc_ubuntu_drivers\.py$', + '^tests/unittests/config/test_schema\.py$', + '^tests/unittests/helpers\.py$', + '^tests/unittests/net/test_dhcp\.py$', + '^tests/unittests/net/test_init\.py$', + '^tests/unittests/sources/test_aliyun\.py$', + '^tests/unittests/sources/test_azure\.py$', + '^tests/unittests/sources/test_ec2\.py$', + '^tests/unittests/sources/test_exoscale\.py$', + '^tests/unittests/sources/test_gce\.py$', + '^tests/unittests/sources/test_lxd\.py$', + '^tests/unittests/sources/test_opennebula\.py$', + '^tests/unittests/sources/test_openstack\.py$', + '^tests/unittests/sources/test_rbx\.py$', + '^tests/unittests/sources/test_scaleway\.py$', + '^tests/unittests/sources/test_smartos\.py$', + '^tests/unittests/test_data\.py$', + '^tests/unittests/test_ds_identify\.py$', + '^tests/unittests/test_ec2_util\.py$', + '^tests/unittests/test_net\.py$', + '^tests/unittests/test_net_activators\.py$', + '^tests/unittests/test_persistence\.py$', + '^tests/unittests/test_sshutil\.py$', + '^tests/unittests/test_subp\.py$', + '^tests/unittests/test_templating\.py$', + '^tests/unittests/test_url_helper\.py$', + '^tests/unittests/test_util\.py$', + '^tools/mock-meta\.py$', +] diff --git a/tests/integration_tests/modules/test_apt.py b/tests/integration_tests/modules/test_apt.py index 48f398d1..6b3a8b7c 100644 --- a/tests/integration_tests/modules/test_apt.py +++ b/tests/integration_tests/modules/test_apt.py @@ -267,10 +267,12 @@ class TestDefaults: sources_list = class_client.read_from_file("/etc/apt/sources.list") # 3 lines from main, universe, and multiverse - assert 3 == sources_list.count("deb http://security.ubuntu.com/ubuntu") - assert 3 == sources_list.count( - "# deb-src http://security.ubuntu.com/ubuntu" - ) + sec_url = "deb http://security.ubuntu.com/ubuntu" + if class_client.settings.PLATFORM == "azure": + sec_url = "deb http://azure.archive.ubuntu.com/ubuntu/" + sec_src_url = 
sec_url.replace("deb ", "# deb-src ") + assert 3 == sources_list.count(sec_url) + assert 3 == sources_list.count(sec_src_url) DEFAULT_DATA_WITH_URI = _DEFAULT_DATA.format( diff --git a/tests/integration_tests/modules/test_set_password.py b/tests/integration_tests/modules/test_set_password.py index e0f8b692..0e35cd26 100644 --- a/tests/integration_tests/modules/test_set_password.py +++ b/tests/integration_tests/modules/test_set_password.py @@ -8,8 +8,6 @@ other tests chpasswd's list being a string. Both expect the same results, so they use a mixin to share their test definitions, because we can (of course) only specify one user-data per instance. """ -import crypt - import pytest import yaml @@ -162,9 +160,13 @@ class Mixin: shadow_users, _ = self._fetch_and_parse_etc_shadow(class_client) fmt_and_salt = shadow_users["tom"].rsplit("$", 1)[0] - expected_value = crypt.crypt("mypassword123!", fmt_and_salt) - - assert expected_value == shadow_users["tom"] + GEN_CRYPT_CONTENT = ( + "import crypt\n" + f"print(crypt.crypt('mypassword123!', '{fmt_and_salt}'))\n" + ) + class_client.write_to_file("/gen_crypt.py", GEN_CRYPT_CONTENT) + result = class_client.execute("python3 /gen_crypt.py") + assert result.stdout == shadow_users["tom"] def test_shadow_expected_users(self, class_client): """Test that the right set of users is in /etc/shadow.""" diff --git a/tests/integration_tests/test_shell_script_by_frequency.py b/tests/integration_tests/test_shell_script_by_frequency.py index 25157722..6df75e2b 100644 --- a/tests/integration_tests/test_shell_script_by_frequency.py +++ b/tests/integration_tests/test_shell_script_by_frequency.py @@ -22,7 +22,7 @@ FILES = [ (PER_ONCE_FILE, "once.sh", "x-shellscript-per-once"), ] -USER_DATA, errors = create_mime_message(FILES) +USER_DATA = create_mime_message(FILES)[0].as_string() @pytest.mark.ci diff --git a/tests/unittests/config/test_schema.py b/tests/unittests/config/test_schema.py index 2f43d9e7..1d48056a 100644 --- 
a/tests/unittests/config/test_schema.py +++ b/tests/unittests/config/test_schema.py @@ -420,20 +420,18 @@ class GetSchemaDocTest(CiTestCase): "frequency": "frequency", "distros": ["debian", "rhel"], } - self.meta = MetaSchema( - { - "title": "title", - "description": "description", - "id": "id", - "name": "name", - "frequency": "frequency", - "distros": ["debian", "rhel"], - "examples": [ - 'ex1:\n [don\'t, expand, "this"]', - "ex2: true", - ], - } - ) + self.meta: MetaSchema = { + "title": "title", + "description": "description", + "id": "id", + "name": "name", + "frequency": "frequency", + "distros": ["debian", "rhel"], + "examples": [ + 'ex1:\n [don\'t, expand, "this"]', + "ex2: true", + ], + } def test_get_meta_doc_returns_restructured_text(self): """get_meta_doc returns restructured text for a cloudinit schema.""" diff --git a/tests/unittests/sources/test_azure.py b/tests/unittests/sources/test_azure.py index a47ed611..5f956a63 100644 --- a/tests/unittests/sources/test_azure.py +++ b/tests/unittests/sources/test_azure.py @@ -37,14 +37,170 @@ from tests.unittests.helpers import ( wrap_and_call, ) +MOCKPATH = "cloudinit.sources.DataSourceAzure." 
+ @pytest.fixture -def azure_ds(request, paths): +def azure_ds(paths): """Provide DataSourceAzure instance with mocks for minimal test case.""" with mock.patch(MOCKPATH + "_is_platform_viable", return_value=True): yield dsaz.DataSourceAzure(sys_cfg={}, distro=mock.Mock(), paths=paths) +@pytest.fixture +def mock_azure_helper_readurl(): + with mock.patch( + "cloudinit.sources.helpers.azure.url_helper.readurl", autospec=True + ) as m: + yield m + + +@pytest.fixture +def mock_azure_get_metadata_from_fabric(): + with mock.patch( + MOCKPATH + "get_metadata_from_fabric", + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_azure_report_failure_to_fabric(): + with mock.patch( + MOCKPATH + "report_failure_to_fabric", + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_dmi_read_dmi_data(): + def fake_read(key: str) -> str: + if key == "system-uuid": + return "fake-system-uuid" + raise RuntimeError() + + with mock.patch( + MOCKPATH + "dmi.read_dmi_data", + side_effect=fake_read, + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_net_dhcp_maybe_perform_dhcp_discovery(): + with mock.patch( + "cloudinit.net.dhcp.maybe_perform_dhcp_discovery", + return_value=[ + { + "unknown-245": "aa:bb:cc:dd", + "interface": "ethBoot0", + "fixed-address": "192.168.2.9", + "routers": "192.168.2.1", + "subnet-mask": "255.255.255.0", + } + ], + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_net_dhcp_EphemeralIPv4Network(): + with mock.patch( + "cloudinit.net.dhcp.EphemeralIPv4Network", + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_get_interfaces(): + with mock.patch(MOCKPATH + "net.get_interfaces", return_value=[]) as m: + yield m + + +@pytest.fixture +def mock_get_interface_mac(): + with mock.patch( + MOCKPATH + "net.get_interface_mac", + return_value="001122334455", + ) as m: + yield m + + +@pytest.fixture +def mock_netlink(): + with mock.patch( + MOCKPATH + "netlink", + autospec=True, + ) as m: + yield m + 
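The new azure fixtures above all follow one shape: enter `mock.patch(...)` as a context manager and `yield` the mock so pytest tears the patch down after each test. The same mechanics can be shown inline, here with `os.path.isfile` as a stand-in target, including the `side_effect` list style that the `TestProvisioning` tests below rely on:

```python
import os.path
from unittest import mock

# Equivalent inline form of the yield-style fixtures above.
# autospec=True makes the mock enforce the real function's signature.
with mock.patch("os.path.isfile", autospec=True) as m_isfile:
    # A side_effect list is consumed one element per call, which is how
    # the tests script a sequence of marker-file checks.
    m_isfile.side_effect = [False, True]
    assert os.path.isfile("/first") is False
    assert os.path.isfile("/second") is True
    # A third call would raise StopIteration: the list is exhausted.

# The mock outlives the patch, so call history can still be asserted,
# just as the tests assert on mock_os_path_isfile.mock_calls.
assert m_isfile.mock_calls == [mock.call("/first"), mock.call("/second")]
```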
+ +@pytest.fixture +def mock_os_path_isfile(): + with mock.patch(MOCKPATH + "os.path.isfile", autospec=True) as m: + yield m + + +@pytest.fixture +def mock_readurl(): + with mock.patch(MOCKPATH + "readurl", autospec=True) as m: + yield m + + +@pytest.fixture +def mock_subp_subp(): + with mock.patch(MOCKPATH + "subp.subp", side_effect=[]) as m: + yield m + + +@pytest.fixture +def mock_util_ensure_dir(): + with mock.patch( + MOCKPATH + "util.ensure_dir", + autospec=True, + ) as m: + yield m + + +@pytest.fixture +def mock_util_find_devs_with(): + with mock.patch(MOCKPATH + "util.find_devs_with", autospec=True) as m: + yield m + + +@pytest.fixture +def mock_util_load_file(): + with mock.patch( + MOCKPATH + "util.load_file", + autospec=True, + return_value=b"", + ) as m: + yield m + + +@pytest.fixture +def mock_util_mount_cb(): + with mock.patch( + MOCKPATH + "util.mount_cb", + autospec=True, + return_value=({}, "", {}, {}), + ) as m: + yield m + + +@pytest.fixture +def mock_util_write_file(): + with mock.patch( + MOCKPATH + "util.write_file", + autospec=True, + ) as m: + yield m + + def construct_valid_ovf_env( data=None, pubkeys=None, userdata=None, platform_settings=None ): @@ -196,7 +352,6 @@ IMDS_NETWORK_METADATA = { ] } -MOCKPATH = "cloudinit.sources.DataSourceAzure." 
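The tests further down feed `mock_readurl` responses built as `mock.MagicMock(contents=...)`; only the attribute the datasource actually reads needs to exist on the stand-in. A small sketch of that duck-typed response pattern, assuming (as the tests do) that a `readurl` result exposes its raw body as bytes via `.contents`:

```python
import json
from unittest import mock

imds_md = {"extended": {"compute": {"ppsType": "None"}}}

# Stand-in for a url_helper.readurl() result: a MagicMock whose only
# meaningful attribute is .contents, the raw response body as bytes.
response = mock.MagicMock(contents=json.dumps(imds_md).encode())

# Code under test can decode it exactly as it would a real response.
decoded = json.loads(response.contents.decode("utf-8"))
assert decoded["extended"]["compute"]["ppsType"] == "None"
```

Queuing several such objects on `mock_readurl.side_effect` is how the provisioning tests script the sequence of IMDS and reprovision-data fetches.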
EXAMPLE_UUID = "d0df4c54-4ecb-4a4b-9954-5bdf3ed5c3b8" @@ -764,6 +919,13 @@ scbus-1 on xpt0 bus 0 self.m_is_platform_viable = mock.MagicMock(autospec=True) self.m_get_metadata_from_fabric = mock.MagicMock(return_value=[]) self.m_report_failure_to_fabric = mock.MagicMock(autospec=True) + self.m_get_interfaces = mock.MagicMock( + return_value=[ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("eth0", "00:15:5d:69:63:ba", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", None, None), + ] + ) self.m_list_possible_azure_ds = mock.MagicMock( side_effect=_load_possible_azure_ds ) @@ -799,6 +961,16 @@ scbus-1 on xpt0 bus 0 ), (dsaz, "get_boot_telemetry", mock.MagicMock()), (dsaz, "get_system_info", mock.MagicMock()), + ( + dsaz.net, + "get_interface_mac", + mock.MagicMock(return_value="00:15:5d:69:63:ba"), + ), + ( + dsaz.net, + "get_interfaces", + self.m_get_interfaces, + ), (dsaz.subp, "which", lambda x: True), ( dsaz.dmi, @@ -1225,7 +1397,10 @@ scbus-1 on xpt0 bus 0 dsrc.crawl_metadata() - assert m_report_ready.mock_calls == [mock.call(), mock.call()] + assert m_report_ready.mock_calls == [ + mock.call(), + mock.call(pubkey_info=None), + ] def test_waagent_d_has_0700_perms(self): # we expect /var/lib/waagent to be created 0700 @@ -1603,12 +1778,23 @@ scbus-1 on xpt0 bus 0 def test_dsaz_report_ready_returns_true_when_report_succeeds(self): dsrc = self._get_ds({"ovfcontent": construct_valid_ovf_env()}) - self.assertTrue(dsrc._report_ready()) + assert dsrc._report_ready() == [] - def test_dsaz_report_ready_returns_false_and_does_not_propagate_exc(self): + @mock.patch(MOCKPATH + "report_diagnostic_event") + def test_dsaz_report_ready_failure_reports_telemetry(self, m_report_diag): dsrc = self._get_ds({"ovfcontent": construct_valid_ovf_env()}) - self.m_get_metadata_from_fabric.side_effect = Exception - self.assertFalse(dsrc._report_ready()) + self.m_get_metadata_from_fabric.side_effect = Exception("foo") + + with pytest.raises(Exception): + dsrc._report_ready() + + assert 
m_report_diag.mock_calls == [ + mock.call( + "Error communicating with Azure fabric; " + "You may experience connectivity issues: foo", + logger_func=dsaz.LOG.warning, + ) + ] def test_dsaz_report_failure_returns_true_when_report_succeeds(self): dsrc = self._get_ds({"ovfcontent": construct_valid_ovf_env()}) @@ -1940,7 +2126,7 @@ scbus-1 on xpt0 bus 0 ) distro.networking.get_interfaces_by_mac() - m_net_get_interfaces.assert_called_with( + self.m_get_interfaces.assert_called_with( blacklist_drivers=dsaz.BLACKLIST_DRIVERS ) @@ -2993,7 +3179,7 @@ class TestPreprovisioningHotAttachNics(CiTestCase): @mock.patch(MOCKPATH + "net.is_up", autospec=True) @mock.patch(MOCKPATH + "util.write_file") - @mock.patch("cloudinit.net.read_sys_net") + @mock.patch("cloudinit.net.read_sys_net", return_value="device-id") @mock.patch("cloudinit.distros.networking.LinuxNetworking.try_set_link_up") def test_wait_for_link_up_checks_link_after_sleep( self, m_try_set_link_up, m_read_sys_net, m_writefile, m_is_up @@ -3023,7 +3209,7 @@ class TestPreprovisioningHotAttachNics(CiTestCase): self.assertEqual(2, m_is_up.call_count) @mock.patch(MOCKPATH + "util.write_file") - @mock.patch("cloudinit.net.read_sys_net") + @mock.patch("cloudinit.net.read_sys_net", return_value="device-id") @mock.patch("cloudinit.distros.networking.LinuxNetworking.try_set_link_up") def test_wait_for_link_up_writes_to_device_file( self, m_is_link_up, m_read_sys_net, m_writefile @@ -3282,7 +3468,7 @@ class TestPreprovisioningPollIMDS(CiTestCase): } ] m_media_switch.return_value = None - m_report_ready.return_value = False + m_report_ready.side_effect = [Exception("fail")] dsa = dsaz.DataSourceAzure({}, distro=None, paths=self.paths) self.assertFalse(os.path.exists(report_file)) with mock.patch(MOCKPATH + "REPORTED_READY_MARKER_FILE", report_file): @@ -3534,4 +3720,587 @@ class TestRandomSeed(CiTestCase): self.assertEqual(deserialized["seed"], result) +class TestProvisioning: + @pytest.fixture(autouse=True) + def 
provisioning_setup( + self, + azure_ds, + mock_azure_get_metadata_from_fabric, + mock_azure_report_failure_to_fabric, + mock_net_dhcp_maybe_perform_dhcp_discovery, + mock_net_dhcp_EphemeralIPv4Network, + mock_dmi_read_dmi_data, + mock_get_interfaces, + mock_get_interface_mac, + mock_netlink, + mock_os_path_isfile, + mock_readurl, + mock_subp_subp, + mock_util_ensure_dir, + mock_util_find_devs_with, + mock_util_load_file, + mock_util_mount_cb, + mock_util_write_file, + ): + self.azure_ds = azure_ds + self.mock_azure_get_metadata_from_fabric = ( + mock_azure_get_metadata_from_fabric + ) + self.mock_azure_report_failure_to_fabric = ( + mock_azure_report_failure_to_fabric + ) + self.mock_net_dhcp_maybe_perform_dhcp_discovery = ( + mock_net_dhcp_maybe_perform_dhcp_discovery + ) + self.mock_net_dhcp_EphemeralIPv4Network = ( + mock_net_dhcp_EphemeralIPv4Network + ) + self.mock_dmi_read_dmi_data = mock_dmi_read_dmi_data + self.mock_get_interfaces = mock_get_interfaces + self.mock_get_interface_mac = mock_get_interface_mac + self.mock_netlink = mock_netlink + self.mock_os_path_isfile = mock_os_path_isfile + self.mock_readurl = mock_readurl + self.mock_subp_subp = mock_subp_subp + self.mock_util_ensure_dir = mock_util_ensure_dir + self.mock_util_find_devs_with = mock_util_find_devs_with + self.mock_util_load_file = mock_util_load_file + self.mock_util_mount_cb = mock_util_mount_cb + self.mock_util_write_file = mock_util_write_file + + self.imds_md = { + "extended": {"compute": {"ppsType": "None"}}, + "network": { + "interface": [ + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "011122334455", + }, + ] + }, + } + + def test_no_pps(self): + self.mock_readurl.side_effect = [ + mock.MagicMock(contents=json.dumps(self.imds_md).encode()), + ] + self.mock_azure_get_metadata_from_fabric.return_value = [] + 
self.mock_os_path_isfile.side_effect = [False, False, False] + + self.azure_ds._get_data() + + assert self.mock_os_path_isfile.mock_calls == [ + mock.call("/var/lib/cloud/data/poll_imds"), + mock.call( + os.path.join( + self.azure_ds.paths.cloud_dir, "seed/azure/ovf-env.xml" + ) + ), + mock.call("/var/lib/cloud/data/poll_imds"), + ] + + assert self.mock_readurl.mock_calls == [ + mock.call( + "http://169.254.169.254/metadata/instance?" + "api-version=2021-08-01&extended=true", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=dsaz.retry_on_url_exc, + infinite=False, + ), + ] + + # Verify DHCP is setup once. + assert self.mock_net_dhcp_maybe_perform_dhcp_discovery.mock_calls == [ + mock.call(None, dsaz.dhcp_log_cb) + ] + assert self.azure_ds._wireserver_endpoint == "aa:bb:cc:dd" + assert self.azure_ds._is_ephemeral_networking_up() is False + + # Verify DMI usage. + assert self.mock_dmi_read_dmi_data.mock_calls == [ + mock.call("system-uuid") + ] + assert self.azure_ds.metadata["instance-id"] == "fake-system-uuid" + + # Verify IMDS metadata. + assert self.azure_ds.metadata["imds"] == self.imds_md + + # Verify reporting ready once. + assert self.mock_azure_get_metadata_from_fabric.mock_calls == [ + mock.call( + fallback_lease_file=None, + dhcp_opts="aa:bb:cc:dd", + iso_dev="/dev/sr0", + pubkey_info=None, + ) + ] + + # Verify netlink. 
+ assert self.mock_netlink.mock_calls == [] + + def test_running_pps(self): + self.imds_md["extended"]["compute"]["ppsType"] = "Running" + ovf_data = {"HostName": "myhost", "UserName": "myuser"} + + nl_sock = mock.MagicMock() + self.mock_netlink.create_bound_netlink_socket.return_value = nl_sock + self.mock_readurl.side_effect = [ + mock.MagicMock(contents=json.dumps(self.imds_md).encode()), + mock.MagicMock( + contents=construct_valid_ovf_env(data=ovf_data).encode() + ), + mock.MagicMock(contents=json.dumps(self.imds_md).encode()), + ] + self.mock_azure_get_metadata_from_fabric.return_value = [] + self.mock_os_path_isfile.side_effect = [False, False, False, False] + + self.azure_ds._get_data() + + assert self.mock_os_path_isfile.mock_calls == [ + mock.call("/var/lib/cloud/data/poll_imds"), + mock.call( + os.path.join( + self.azure_ds.paths.cloud_dir, "seed/azure/ovf-env.xml" + ) + ), + mock.call("/var/lib/cloud/data/poll_imds"), + mock.call("/var/lib/cloud/data/reported_ready"), + ] + + assert self.mock_readurl.mock_calls == [ + mock.call( + "http://169.254.169.254/metadata/instance?" + "api-version=2021-08-01&extended=true", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=dsaz.retry_on_url_exc, + infinite=False, + ), + mock.call( + "http://169.254.169.254/metadata/reprovisiondata?" + "api-version=2019-06-01", + timeout=2, + headers={"Metadata": "true"}, + exception_cb=mock.ANY, + infinite=True, + log_req_resp=False, + ), + mock.call( + "http://169.254.169.254/metadata/instance?" + "api-version=2021-08-01&extended=true", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=dsaz.retry_on_url_exc, + infinite=False, + ), + ] + + # Verify DHCP is setup twice. 
+ assert self.mock_net_dhcp_maybe_perform_dhcp_discovery.mock_calls == [ + mock.call(None, dsaz.dhcp_log_cb), + mock.call(None, dsaz.dhcp_log_cb), + ] + assert self.azure_ds._wireserver_endpoint == "aa:bb:cc:dd" + assert self.azure_ds._is_ephemeral_networking_up() is False + + # Verify DMI usage. + assert self.mock_dmi_read_dmi_data.mock_calls == [ + mock.call("system-uuid") + ] + assert self.azure_ds.metadata["instance-id"] == "fake-system-uuid" + + # Verify IMDS metadata. + assert self.azure_ds.metadata["imds"] == self.imds_md + + # Verify reporting ready twice. + assert self.mock_azure_get_metadata_from_fabric.mock_calls == [ + mock.call( + fallback_lease_file=None, + dhcp_opts="aa:bb:cc:dd", + iso_dev="/dev/sr0", + pubkey_info=None, + ), + mock.call( + fallback_lease_file=None, + dhcp_opts="aa:bb:cc:dd", + iso_dev=None, + pubkey_info=None, + ), + ] + + # Verify netlink operations for Running PPS. + assert self.mock_netlink.mock_calls == [ + mock.call.create_bound_netlink_socket(), + mock.call.wait_for_media_disconnect_connect(mock.ANY, "ethBoot0"), + mock.call.create_bound_netlink_socket().__bool__(), + mock.call.create_bound_netlink_socket().close(), + ] + + def test_savable_pps(self): + self.imds_md["extended"]["compute"]["ppsType"] = "Savable" + ovf_data = {"HostName": "myhost", "UserName": "myuser"} + + nl_sock = mock.MagicMock() + self.mock_netlink.create_bound_netlink_socket.return_value = nl_sock + self.mock_netlink.wait_for_nic_detach_event.return_value = "eth9" + self.mock_netlink.wait_for_nic_attach_event.return_value = ( + "ethAttached1" + ) + self.mock_readurl.side_effect = [ + mock.MagicMock(contents=json.dumps(self.imds_md).encode()), + mock.MagicMock( + contents=json.dumps(self.imds_md["network"]).encode() + ), + mock.MagicMock( + contents=construct_valid_ovf_env(data=ovf_data).encode() + ), + mock.MagicMock(contents=json.dumps(self.imds_md).encode()), + ] + self.mock_azure_get_metadata_from_fabric.return_value = [] + 
self.mock_os_path_isfile.side_effect = [ + False, # /var/lib/cloud/data/poll_imds + False, # seed/azure/ovf-env.xml + False, # /var/lib/cloud/data/poll_imds + False, # /var/lib/cloud/data/reported_ready + False, # /var/lib/cloud/data/reported_ready + False, # /var/lib/cloud/data/nic_detached + True, # /var/lib/cloud/data/reported_ready + ] + self.azure_ds._fallback_interface = False + + self.azure_ds._get_data() + + assert self.mock_os_path_isfile.mock_calls == [ + mock.call("/var/lib/cloud/data/poll_imds"), + mock.call( + os.path.join( + self.azure_ds.paths.cloud_dir, "seed/azure/ovf-env.xml" + ) + ), + mock.call("/var/lib/cloud/data/poll_imds"), + mock.call("/var/lib/cloud/data/reported_ready"), + mock.call("/var/lib/cloud/data/reported_ready"), + mock.call("/var/lib/cloud/data/nic_detached"), + mock.call("/var/lib/cloud/data/reported_ready"), + ] + + assert self.mock_readurl.mock_calls == [ + mock.call( + "http://169.254.169.254/metadata/instance?" + "api-version=2021-08-01&extended=true", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=dsaz.retry_on_url_exc, + infinite=False, + ), + mock.call( + "http://169.254.169.254/metadata/instance/network?" + "api-version=2019-06-01", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=mock.ANY, + infinite=True, + ), + mock.call( + "http://169.254.169.254/metadata/reprovisiondata?" + "api-version=2019-06-01", + timeout=2, + headers={"Metadata": "true"}, + exception_cb=mock.ANY, + infinite=True, + log_req_resp=False, + ), + mock.call( + "http://169.254.169.254/metadata/instance?" + "api-version=2021-08-01&extended=true", + timeout=2, + headers={"Metadata": "true"}, + retries=0, + exception_cb=dsaz.retry_on_url_exc, + infinite=False, + ), + ] + + # Verify DHCP is setup twice. 
+ assert self.mock_net_dhcp_maybe_perform_dhcp_discovery.mock_calls == [ + mock.call(None, dsaz.dhcp_log_cb), + mock.call("ethAttached1", dsaz.dhcp_log_cb), + ] + assert self.azure_ds._wireserver_endpoint == "aa:bb:cc:dd" + assert self.azure_ds._is_ephemeral_networking_up() is False + + # Verify DMI usage. + assert self.mock_dmi_read_dmi_data.mock_calls == [ + mock.call("system-uuid") + ] + assert self.azure_ds.metadata["instance-id"] == "fake-system-uuid" + + # Verify IMDS metadata. + assert self.azure_ds.metadata["imds"] == self.imds_md + + # Verify reporting ready twice. + assert self.mock_azure_get_metadata_from_fabric.mock_calls == [ + mock.call( + fallback_lease_file=None, + dhcp_opts="aa:bb:cc:dd", + iso_dev="/dev/sr0", + pubkey_info=None, + ), + mock.call( + fallback_lease_file=None, + dhcp_opts="aa:bb:cc:dd", + iso_dev=None, + pubkey_info=None, + ), + ] + + # Verify netlink operations for Savable PPS. + assert self.mock_netlink.mock_calls == [ + mock.call.create_bound_netlink_socket(), + mock.call.wait_for_nic_detach_event(nl_sock), + mock.call.wait_for_nic_attach_event(nl_sock, ["ethAttached1"]), + mock.call.create_bound_netlink_socket().__bool__(), + mock.call.create_bound_netlink_socket().close(), + ] + + +class TestValidateIMDSMetadata: + @pytest.mark.parametrize( + "mac,expected", + [ + ("001122aabbcc", "00:11:22:aa:bb:cc"), + ("001122AABBCC", "00:11:22:aa:bb:cc"), + ("00:11:22:aa:bb:cc", "00:11:22:aa:bb:cc"), + ("00:11:22:AA:BB:CC", "00:11:22:aa:bb:cc"), + ("pass-through-the-unexpected", "pass-through-the-unexpected"), + ("", ""), + ], + ) + def test_normalize_scenarios(self, mac, expected): + normalized = dsaz.normalize_mac_address(mac) + assert normalized == expected + + def test_empty( + self, azure_ds, caplog, mock_get_interfaces, mock_get_interface_mac + ): + imds_md = {} + + assert azure_ds.validate_imds_network_metadata(imds_md) is False + assert ( + "cloudinit.sources.DataSourceAzure", + 30, + "IMDS network metadata has incomplete 
configuration: None", + ) in caplog.record_tuples + + def test_validates_one_nic( + self, azure_ds, mock_get_interfaces, mock_get_interface_mac + ): + + mock_get_interfaces.return_value = [ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("test0", "00:11:22:33:44:55", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", None, None), + ] + azure_ds._ephemeral_dhcp_ctx = mock.Mock(iface="test0") + + imds_md = { + "network": { + "interface": [ + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "001122334455", + } + ] + } + } + + assert azure_ds.validate_imds_network_metadata(imds_md) is True + + def test_validates_multiple_nic( + self, azure_ds, mock_get_interfaces, mock_get_interface_mac + ): + + mock_get_interfaces.return_value = [ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("test0", "00:11:22:33:44:55", "hv_netvsc", "0x3"), + ("test1", "01:11:22:33:44:55", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", None, None), + ] + azure_ds._ephemeral_dhcp_ctx = mock.Mock(iface="test0") + + imds_md = { + "network": { + "interface": [ + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "001122334455", + }, + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "011122334455", + }, + ] + } + } + + assert azure_ds.validate_imds_network_metadata(imds_md) is True + + def test_missing_all( + self, azure_ds, caplog, mock_get_interfaces, mock_get_interface_mac + ): + + mock_get_interfaces.return_value = [ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("test0", "00:11:22:33:44:55", 
"hv_netvsc", "0x3"), + ("test1", "01:11:22:33:44:55", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", None, None), + ] + azure_ds._ephemeral_dhcp_ctx = mock.Mock(iface="test0") + + imds_md = {"network": {"interface": []}} + + assert azure_ds.validate_imds_network_metadata(imds_md) is False + assert ( + "cloudinit.sources.DataSourceAzure", + 30, + "IMDS network metadata is missing configuration for NICs " + "['00:11:22:33:44:55', '01:11:22:33:44:55']: " + f"{imds_md['network']!r}", + ) in caplog.record_tuples + + def test_missing_primary( + self, azure_ds, caplog, mock_get_interfaces, mock_get_interface_mac + ): + + mock_get_interfaces.return_value = [ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("test0", "00:11:22:33:44:55", "hv_netvsc", "0x3"), + ("test1", "01:11:22:33:44:55", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", None, None), + ] + azure_ds._ephemeral_dhcp_ctx = mock.Mock(iface="test0") + + imds_md = { + "network": { + "interface": [ + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "011122334455", + }, + ] + } + } + + assert azure_ds.validate_imds_network_metadata(imds_md) is False + assert ( + "cloudinit.sources.DataSourceAzure", + 30, + "IMDS network metadata is missing configuration for NICs " + f"['00:11:22:33:44:55']: {imds_md['network']!r}", + ) in caplog.record_tuples + assert ( + "cloudinit.sources.DataSourceAzure", + 30, + "IMDS network metadata is missing primary NIC " + f"'00:11:22:33:44:55': {imds_md['network']!r}", + ) in caplog.record_tuples + + def test_missing_secondary( + self, azure_ds, mock_get_interfaces, mock_get_interface_mac + ): + + mock_get_interfaces.return_value = [ + ("dummy0", "9e:65:d6:19:19:01", None, None), + ("test0", "00:11:22:33:44:55", "hv_netvsc", "0x3"), + ("test1", "01:11:22:33:44:55", "hv_netvsc", "0x3"), + ("lo", "00:00:00:00:00:00", 
None, None), + ] + azure_ds._ephemeral_dhcp_ctx = mock.Mock(iface="test0") + + imds_md = { + "network": { + "interface": [ + { + "ipv4": { + "ipAddress": [ + { + "privateIpAddress": "10.0.0.22", + "publicIpAddress": "", + } + ], + "subnet": [ + {"address": "10.0.0.0", "prefix": "24"} + ], + }, + "ipv6": {"ipAddress": []}, + "macAddress": "001122334455", + }, + ] + } + } + + assert azure_ds.validate_imds_network_metadata(imds_md) is False + + # vi: ts=4 expandtab diff --git a/tests/unittests/sources/test_vultr.py b/tests/unittests/sources/test_vultr.py index 21d5bc17..18b2c084 100644 --- a/tests/unittests/sources/test_vultr.py +++ b/tests/unittests/sources/test_vultr.py @@ -8,6 +8,7 @@ import json from cloudinit import helpers, settings +from cloudinit.net.dhcp import NoDHCPLeaseError from cloudinit.sources import DataSourceVultr from cloudinit.sources.helpers import vultr from tests.unittests.helpers import CiTestCase, mock @@ -95,7 +96,9 @@ VULTR_V1_2 = { "netmask": "255.255.254.0", }, "ipv6": { - "additional": [], + "additional": [ + {"network": "2002:19f0:5:28a7::", "prefix": "64"} + ], "address": "2001:19f0:5:28a7:5400:03ff:fe1b:4eca", "network": "2001:19f0:5:28a7::", "prefix": "64", @@ -138,6 +141,14 @@ VULTR_V1_2 = { SSH_KEYS_1 = ["ssh-rsa AAAAB3NzaC1y...IQQhv5PAOKaIl+mM3c= test3@key"] +INTERFACES = [ + ["lo", "56:00:03:15:c4:00", "drv", "devid0"], + ["dummy0", "56:00:03:15:c4:01", "drv", "devid1"], + ["eth1", "56:00:03:15:c4:02", "drv", "devid2"], + ["eth0", "56:00:03:15:c4:04", "drv", "devid4"], + ["eth2", "56:00:03:15:c4:03", "drv", "devid3"], +] + # Expected generated objects # Expected config @@ -182,6 +193,11 @@ EXPECTED_VULTR_NETWORK_2 = { "subnets": [ {"type": "dhcp", "control": "auto"}, {"type": "ipv6_slaac", "control": "auto"}, + { + "type": "static6", + "control": "auto", + "address": "2002:19f0:5:28a7::/64", + }, ], }, { @@ -208,6 +224,9 @@ INTERFACE_MAP = { } +EPHERMERAL_USED = "" + + class TestDataSourceVultr(CiTestCase): def setUp(self): 
super(TestDataSourceVultr, self).setUp() @@ -284,5 +303,37 @@ class TestDataSourceVultr(CiTestCase): EXPECTED_VULTR_NETWORK_2, vultr.generate_network_config(interf) ) + def ephemeral_init(self, iface="", connectivity_url_data=None): + global EPHERMERAL_USED + EPHERMERAL_USED = iface + if iface == "eth0": + return + raise NoDHCPLeaseError("Generic for testing") + + # Test interface seeking to ensure we are able to find the correct one + @mock.patch("cloudinit.net.dhcp.EphemeralDHCPv4.__init__", ephemeral_init) + @mock.patch("cloudinit.sources.helpers.vultr.is_vultr") + @mock.patch("cloudinit.sources.helpers.vultr.read_metadata") + @mock.patch("cloudinit.net.get_interfaces") + def test_interface_seek( + self, mock_get_interfaces, mock_read_metadata, mock_isvultr + ): + mock_read_metadata.side_effect = NoDHCPLeaseError( + "Generic for testing" + ) + mock_isvultr.return_value = True + mock_get_interfaces.return_value = INTERFACES + + source = DataSourceVultr.DataSourceVultr( + settings.CFG_BUILTIN, None, helpers.Paths({"run_dir": self.tmp}) + ) + + try: + source._get_data() + except Exception: + pass + + self.assertEqual(EPHERMERAL_USED, INTERFACES[3][0]) + # vi: ts=4 expandtab @@ -1,5 +1,5 @@ [tox] -envlist = py3, lowest-supported-dev, flake8, pylint, black, isort +envlist = py3, lowest-supported-dev, black, flake8, isort, mypy, pylint recreate = True [testenv] @@ -10,10 +10,15 @@ passenv= PYTEST_ADDOPTS [format_deps] -flake8==3.9.2 -pylint==2.11.1 black==21.12b0 +flake8==3.9.2 isort==5.10.1 +mypy==0.931 +pylint==2.11.1 +pytest==7.0.0 +types-PyYAML==6.0.4 +types-requests==2.27.8 +types-setuptools==57.4.9 [testenv:flake8] deps = @@ -37,18 +42,30 @@ deps = isort=={[format_deps]isort} commands = {envpython} -m isort . 
--check-only +[testenv:mypy] +deps = + mypy=={[format_deps]mypy} + types-pyyaml=={[format_deps]types-PyYAML} + types-requests=={[format_deps]types-requests} + types-setuptools=={[format_deps]types-setuptools} + pytest=={[format_deps]pytest} +commands = {envpython} -m mypy . + [testenv:check_format] deps = - flake8=={[format_deps]flake8} - pylint=={[format_deps]pylint} black=={[format_deps]black} + flake8=={[format_deps]flake8} isort=={[format_deps]isort} + mypy=={[format_deps]mypy} + pylint=={[format_deps]pylint} + pytest=={[format_deps]pytest} -r{toxinidir}/test-requirements.txt -r{toxinidir}/integration-requirements.txt commands = {[testenv:black]commands} - {[testenv:isort]commands} {[testenv:flake8]commands} + {[testenv:isort]commands} + {[testenv:mypy]commands} {[testenv:pylint]commands} [testenv:do_format] @@ -122,6 +139,15 @@ commands = commands = {envpython} -m flake8 {posargs:cloudinit/ tests/ tools/ setup.py} deps = flake8 +[testenv:tip-mypy] +commands = {envpython} -m mypy --install-types --non-interactive . +deps = + mypy + pytest + types-PyYAML + types-requests + types-setuptools + [testenv:tip-pylint] commands = {envpython} -m pylint {posargs:cloudinit tests tools} deps = |
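The parametrized cases in `TestValidateIMDSMetadata` above pin down the expected behavior of `dsaz.normalize_mac_address`: bare 12-character MACs gain colons, everything is lowercased, and unexpected strings pass through. A hedged reconstruction consistent with those cases (the real implementation lives in `cloudinit/sources/DataSourceAzure.py` and may differ in detail):

```python
def normalize_mac_address(mac: str) -> str:
    """Normalize a MAC address to lowercase colon-separated form.

    Sketch reconstructed from the test cases above: a bare
    12-character MAC gains colons; any other input is passed
    through with only lowercasing applied.
    """
    if len(mac) == 12:
        mac = ":".join(mac[i : i + 2] for i in range(0, 12, 2))
    return mac.lower()
```

This normalization is what lets `validate_imds_network_metadata` compare IMDS `macAddress` values (reported bare, e.g. `"001122334455"`) against the colon-separated MACs returned by `net.get_interfaces`.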