| Commit message | Author | Age | Files | Lines |
Previously, clients could use XML external entities (XXEs) to read
arbitrary files from proxy-servers and inject the content into the
request. Since many S3 APIs reflect request content back to the user,
this could be used to extract any secrets that the swift user could
read, such as tempauth credentials, keymaster secrets, etc.
Now, disable entity resolution -- any unknown entities will be replaced
with an empty string. Without resolving the entities, the request is
still processed.
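As a generic sketch of the mitigation described above (this is not the actual s3api code, which lives in Swift's middleware): the standard way to stop external general entities from being fetched with Python's stdlib SAX parser is the `feature_external_ges` feature, so an XXE payload resolves to nothing instead of leaking file contents into the request.

```python
import io
import os
import tempfile
import xml.sax
from xml.sax.handler import ContentHandler, feature_external_ges

class TextCollector(ContentHandler):
    """Accumulate character data so we can see what the parser resolved."""
    def __init__(self):
        super().__init__()
        self.text = []

    def characters(self, content):
        self.text.append(content)

def parse_untrusted(xml_bytes):
    parser = xml.sax.make_parser()
    # The XXE mitigation: never fetch external general entities, so a
    # payload like <!ENTITY x SYSTEM "file:///etc/passwd"> is skipped
    # instead of injecting file contents into the parsed document.
    parser.setFeature(feature_external_ges, False)
    handler = TextCollector()
    parser.setContentHandler(handler)
    parser.parse(io.BytesIO(xml_bytes))
    return ''.join(handler.text)

# A classic XXE payload pointing at a local "secret" file.
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('tempauth-secret')
    secret_path = f.name

payload = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY x SYSTEM "file://%s">]>'
    '<r>&x;</r>' % secret_path
).encode('utf-8')

result = parse_untrusted(payload)
assert 'tempauth-secret' not in result
os.unlink(secret_path)
```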
[CVE-2022-47950]
Closes-Bug: #1998625
Co-Authored-By: Romain de Joux <romain.de-joux@ovhcloud.com>
Change-Id: I84494123cfc85e234098c554ecd3e77981f8a096
(cherry picked from commit b8467e190f6fc67fd8fb6a8c5e32b2aa6a10fd8e)
Two things:
* Add attrs to lower-constraints
(cherry picked from commit ee12a11e708fce6c982a85b58cd2c3899f13479e)
* Drop tempest and grenade testing
Recently, these jobs started failing while installing tempest. The
trouble seems to be:
* tempest 26.1.0 depends on paramiko>=2.7.0
* paramiko 3.0.0 is selected which in turn depends on
cryptography >= 3.3 and bcrypt >= 3.2
* cryptography 39.0.0 and bcrypt 4.0.1 are selected, neither of which
get installed from binary wheels and both of which require a Rust
toolchain to compile
For some unknown reason, this used to be OK-ish -- the stable/train
upper-constraints file would be respected, paramiko==2.6.0 would be
installed with cryptography==2.8 and bcrypt==3.1.7, and pip would
simply warn
ERROR: tempest 26.1.0 has requirement paramiko>=2.7.0, but you'll
have paramiko 2.6.0 which is incompatible.
even though everything would still work. It's not at all clear how or
why this stopped working -- there haven't been any recent changes to
tempest's requirements or the stable/train constraints file, and the
version of pip is the same. Given the age of the branch and its
extended-maintenance state, dropping the testing seems best.
Change-Id: I61750a1083a1c97a6222ec9040f90980ee73acc8
Applied deltas:
- Fix http.client references
- Inline HTTPStatus codes
- Address request line splitting (https://bugs.python.org/issue33973)
- Special-case py2 header-parsing
- Address multiple leading slashes in request path
(https://github.com/python/cpython/issues/99220)
Closes-Bug: #1999278
Change-Id: Iae28097668213aa0734837ff21aef83251167d19
(cherry picked from commit 884f5538f8fb187b6ff18316249f1bd4b97b0952)
Change-Id: I35cade2c46eb6acb66c064cde75d78173f46864c
(cherry picked from commit 597887dedcf1f2c855edfd1c591d4f30222be580)
New tox has a bunch of incompatibilities -- on master it's worth fixing
things so we can work in the new world, but on stable branches it seems
simplest to keep ourselves in the old world.
Do it at the project level to ensure the pin applies to the
periodic-stable jobs.
Also, skip installing python-dev on jammy.
Also, pin PasteDeploy to 2.1.1 on CentOS 7. This was the last version to
support py27.
Change-Id: I316170442c67c1b4a5b87f9a1168cc04ca2417b8
Related-Change: If69ae0f8eac8fe8ff7d5e4f4f1bff6d0ea9e7a8b
Co-Authored-By: Matthew Vernon <mvernon@wikimedia.org>
(cherry picked from commit cc033154ad4a4f345258457f3ceed9143fb3d46d)
(cherry picked from commit eb994ea501d96fdf73d4df9479a3b7d51e2d5744)
Change-Id: Ibe514a7ab22d475517b1efc50de676f47d741a4c
(cherry picked from commit 6142ce88cc71037ba0cd23113eb6082fa91346ac)
Following the fix for https://bugs.python.org/issue43882, our py39 unit
tests started failing. This was because swob.Request.blank calls
stdlib's urlparse, which now strips out newlines. Since Request.blank
*also* always unquotes, just make sure we always quote the newlines we
want to use while testing.
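The stdlib behavior change is easy to demonstrate (host name here is illustrative); a percent-encoded newline survives the split, which is why quoting in the tests works:

```python
from urllib.parse import quote, urlsplit

# Since the bpo-43882 fix, stdlib urlsplit silently strips ASCII tab and
# newline characters, so a raw "\n" in a test path just disappears:
assert urlsplit('http://example.com/a\nb').path == '/ab'

# Percent-encoding survives the split; since Request.blank unquotes,
# quoting the newline keeps it in the eventual path:
assert urlsplit('http://example.com/a%0Ab').path == '/a%0Ab'
assert quote('a\nb') == 'a%0Ab'
```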
Change-Id: Ia5857c70e51d8af3e42ecaced95525be578db127
(cherry picked from commit 2b5853f4196e4e3725d1ab55ae7528c41b180a58)
c.badtest.com actually has a CNAME now, and apparently we're doing the
look-up during tests.
Change-Id: I306b7d05740a2b8fcef2f5f432ebf5211bc723cc
(cherry picked from commit 54fc8a7dee4ad7a38944cbd4c2e3b5f2ec393765)
Change-Id: I10f94c03dbc50acd9e4dad853677bd484ca52538
Without this change, self._response_headers gets set to a dict_items
object on PUT, but the rest of the code assumes it is a list.
Bug manifestation:
File "swift/common/middleware/catch_errors.py", line 120,
in handle_request
self._response_headers.append(('X-Trans-Id', trans_id))
AttributeError: 'dict_items' object has no attribute 'append'
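A minimal reproduction of the py3 pitfall (header names here are just for illustration): `dict.items()` returns a view object, and the fix is to materialize a list before treating it as one.

```python
# dict.items() returns a view on py3, not a list -- it has no append():
headers = {'Content-Type': 'text/plain'}.items()
try:
    headers.append(('X-Trans-Id', 'tx123'))
except AttributeError:
    pass  # exactly the failure seen in catch_errors

# The fix: materialize a list before appending to it.
response_headers = list({'Content-Type': 'text/plain'}.items())
response_headers.append(('X-Trans-Id', 'tx123'))
assert response_headers[-1] == ('X-Trans-Id', 'tx123')
```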
Closes-Bug: #1929083
Change-Id: I5c398b6008716b64c668737e4201ba3b6ab3320b
(cherry picked from commit 77530136f13f1fc0d8625d43e1689427d4ee2fad)
Change-Id: Ic2f4487b6caf81fef3455ce03dda2cc144ae24ec
Related-Bug: #1929083
Co-Authored-By: Walter Doekes <walter+github@wjd.nu>
(cherry picked from commit 6bfd93d88644f0a7ff979dc9d6c3d85fff42f632)
Previously the proxy container controller could, in corner cases, get
into a loop while building a listing for a sharded container. For
example, if a root has a single shard then the proxy will be
redirected to that shard, but if that shard has shrunk into the root
then it will redirect the proxy back to the root, and so on until the
root is updated with the shard's shrunken status.
There is already a guard to prevent the proxy fetching shard ranges
again from the same container that it is *currently* querying for
listing parts. That deals with the case when a container fills in gaps
in its listing shard ranges with a reference to itself. This patch
extends that guard to prevent the proxy fetching shard ranges again
from any container that has previously been queried for listing parts.
Cherry-Picked-From: I7dc793f0ec65236c1278fd93d6b1f17c2db98d7b
Change-Id: I3a197fecfd68359a325efdfd4590b7bd48598da8
Shard shrinking can be instigated by a third party modifying shard
ranges, moving one shard to shrinking state and expanding the
namespace of one or more other shard(s) to act as acceptors. These
state and namespace changes must propagate to the shrinking and
acceptor shards. The shrinking shard must also discover the acceptor
shard(s) into which it will shard itself.
The sharder audit function already updates shards with their own state
and namespace changes from the root. However, there is currently no
mechanism for the shrinking shard to learn about the acceptor(s) other
than by a PUT request being made to the shrinking shard container.
This patch modifies the shard container audit function so that other
overlapping shards discovered from the root are merged into the
audited shard's db. In this way, the audited shard will have acceptor
shards to cleave to if shrinking.
This new behavior is restricted to when the shard is shrinking. In
general, a shard is responsible for processing its own sub-shard
ranges (if any) and reporting them to root. Replicas of a shard
container synchronise their sub-shard ranges via replication, and do
not rely on the root to propagate sub-shard ranges between shard
replicas. The exception to this is when a third party (or
auto-sharding) wishes to instigate shrinking by modifying the shard
and other acceptor shards in the root container. In other
circumstances, merging overlapping shard ranges discovered from the
root is undesirable because it risks shards inheriting other unrelated
shard ranges. For example, if the root has become polluted by
split-brain shard range management, a sharding shard may have its
sub-shards polluted by an undesired shard from the root.
During the shrinking process a shard range's own shard range state may
be either shrinking or, prior to this patch, sharded. The sharded
state could occur when one replica of a shrinking shard completed
shrinking and moved the own shard range state to sharded before other
replica(s) had completed shrinking. This makes it impossible to
distinguish a shrinking shard (with sharded state), which we do want
to inherit shard ranges, from a sharding shard (with sharded state),
which we do not want to inherit shard ranges.
This patch therefore introduces a new shard range state, 'SHRUNK', and
applies this state to shard ranges that have completed shrinking.
Shards are now restricted to inherit shard ranges from the root only
when their own shard range state is either SHRINKING or SHRUNK.
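The inheritance guard can be sketched as follows (the enum and function names are illustrative, not Swift's actual ShardRange API): only a shard whose own state is SHRINKING or SHRUNK may merge shard ranges discovered from the root.

```python
from enum import Enum

class ShardRangeState(Enum):
    # Hypothetical subset of shard range states for illustration.
    SHARDING = 1
    SHARDED = 2
    SHRINKING = 3
    SHRUNK = 4

def may_inherit_from_root(own_state):
    # SHARDED alone is ambiguous: it could be a finished shrink or a
    # finished shard. The new SHRUNK state removes that ambiguity.
    return own_state in (ShardRangeState.SHRINKING, ShardRangeState.SHRUNK)

assert may_inherit_from_root(ShardRangeState.SHRUNK)
assert may_inherit_from_root(ShardRangeState.SHRINKING)
assert not may_inherit_from_root(ShardRangeState.SHARDED)
```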
This patch also:
- Stops overlapping shrinking shards from generating audit warnings:
overlaps are cured by shrinking and we therefore expect shrinking
shards to sometimes overlap.
- Extends an existing probe test to verify that overlapping shard
ranges may be resolved by shrinking a subset of the shard ranges.
- Adds a --no-auto-shard option to swift-container-sharder to enable the
probe tests to disable auto-sharding.
- Improves sharder logging, in particular by decrementing ranges_todo
when a shrinking shard is skipped during cleaving.
- Adds a ShardRange.sort_key class method to provide a single definition
of ShardRange sort ordering.
- Improves unit test coverage for sharder shard auditing.
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Cherry-Picked-From: I9034a5715406b310c7282f1bec9625fe7acd57b6
Change-Id: I83441cb0ba600e7c26aa24a80bc3fb85803c90f3
This gets us retries "for free" and should reduce gate flakiness.
Cherry-Picked-From: Ia2e4c94f246230a3e25e4557b4b2c1a3a67df756
Change-Id: Icbe504fd1b7680ab45254fe9acb5010754222dd3
The existing tests cover a lot of behaviors and carry around a lot of
state that makes them hard to extend in a descriptive manner to cover
new or changed behaviors.
Cherry-Picked-From: Ie52932d8d4a66b11c295d5568aa3a60895b84f3b
Change-Id: I23a91efd4b1ef909fff355ca18976b8da331f9bf
When you have overlapping active shard ranges updates will get sent to
"the first" database; but when the proxy queries shards for listings
they get stitched together end-to-end with markers.
This means mostly the second shard range is ignored. But since the
order of shard ranges is not stable (it seems to depend on the database
indexes; which can change when rows are added or removed) you could send
updates to "the wrong" shard.
Using a stable order leads to more correct and robust behavior under
failure, and also reduces cognitive overhead.
Cherry-Picked-From: Ia9d29822bf07757fc1cf58ded90b49f12b7b2c24
Change-Id: I9e3b6391b3ea85ecd147916a2bf0f002c35fe5e3
In the previous patch, we could clean up all container DBs, but only if
the daemons went in a specific order (which cannot be guaranteed in a
production system).
Once a reclaim age passes, there's a race: If the container-replicator
processes the root container before the container-sharder processes the
shards, the deleted shards would get reaped from the root so they won't
be available for the sharder. The shard containers then hang around
indefinitely.
Now, be willing to mark shard DBs as deleted even when we can't find our
own shard range in the root. Fortunately, the shard already knows that
its range has been deleted; we don't need to get that info from the root.
Cherry-Picked-From: If08bccf753490157f27c95b4038f3dd33d3d7f8c
Related-Change: Icba98f1c9e17e8ade3f0e1b9a23360cf5ab8c86b
Change-Id: Ia313663ae43d7041d710f4122d2537e5e601908d
When a DB gets deleted, we clear out its metadata. This included sysmeta
such as that used to tell shards the name of their root DB.
Previously, this would cause deleted shards to pop back to life as roots
that claimed to have objects still sitting in whatever container they
shrank into.
Now, use the metadata if it's available, but when it's not, go by the
state of the DB's "own shard range" -- deleted shards should be marked
deleted, while roots never are.
This allows us to actually clean up the database files; you can test
this by doing something like
* Run `nosetests test/probe/test_sharder.py:TestContainerSharding.test_shrinking`
* Run `find /srv/*/*/containers -name '*.db'` to see how many databases
are left on disk. There should be 15: 3 for the root container, 6 for
the two shards on the first pass, and another 6 for the two shards on
the second pass.
* Edit container configs to decrease reclaim_age -- even 1 should be
fine.
* Run `swift-init main start` to restart the servers.
* Run `swift-init container-sharder once` to have the shards get marked
deleted.
* Run `swift-init container-updater once` to ensure all containers have
reported.
* Run `swift-init container-replicator once` to clean up the
containers.
* Run `find /srv/*/*/containers -name '*.db'` again to verify no
containers remain on disk.
Cherry-Picked-From: Icba98f1c9e17e8ade3f0e1b9a23360cf5ab8c86b
Change-Id: I3560d1352fee825c63cd26556caa806f7424c389
The current behavior is really painful when you've got hundreds of shard
ranges in a DB. The new summary view, grouped by state, is now the
default. Users can add a -v/--verbose flag to see the old full detail
view.
Cherry-Picked-From: I0a7d65f64540f99514c52a70f9157ef060a8a892
Change-Id: Id86f09e20dd2460a3f0baf1aeaf9dd482152a43c
...unless the client requests it specifically using a new flag:
X-Backend-Auto-Create: true
Previously, you could get real jittery listings during a rebalance:
* Partition with a shard DB gets reassigned, so one primary has no DB.
* Proxy makes a listing, gets a 404, tries another node. Likely, one of
the other shard replicas responds. Things are fine.
* Update comes in. Since we use the auto_create_account_prefix
namespace for shards, container DB gets created and we write the row.
* Proxy makes another listing. There's a one-in-three chance that we
claim there's only one object in that whole range.
Note that unsharded databases would respond to the update with a 404 and
wait for one of the other primaries (or the old primary that's now a
hand-off) to rsync a whole DB over, keeping us in the happy state.
Now, if the account is in the shards namespace, 404 the object update if
we have no DB. Wait for replication like in the unsharded case.
Continue to be willing to create the DB when the sharder is seeding all
the CREATED databases before it starts cleaving, though.
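The decision described above can be sketched like this (a simplified illustration; the function and prefix are hypothetical stand-ins, not Swift's actual container-server code):

```python
# Hypothetical stand-in for the shards auto-create namespace prefix.
AUTO_CREATE_PREFIX = '.shards_'

def object_update_status(account, db_exists, headers):
    """Status for an object update hitting a container server."""
    if db_exists:
        return 201  # normal case: DB present, accept the row
    if headers.get('X-Backend-Auto-Create', '').lower() == 'true':
        # The sharder seeding CREATED databases before cleaving
        # explicitly opts in to creating the DB.
        return 201
    # No DB and no explicit opt-in: 404 and wait for replication to
    # bring a whole DB over, just like the unsharded case.
    return 404

assert object_update_status('.shards_AUTH_test', True, {}) == 201
assert object_update_status('.shards_AUTH_test', False, {}) == 404
assert object_update_status(
    '.shards_AUTH_test', False, {'X-Backend-Auto-Create': 'true'}) == 201
```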
Closes-Bug: #1881210
Cherry-Picked-From: I15052f3f17999e6f432951ba7c0731dcdc9475bb
Change-Id: I45fb36602284ca1ec5f6fc73d05c039fd6ca1269
The idea is, if none of
- timestamp,
- object_count,
- bytes_used,
- state, or
- epoch
has changed, we shouldn't need to send an update back to the root
container.
This is more-or-less comparable to what the container-updater does to
avoid unnecessary writes to the account.
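A hypothetical helper mirroring that check (field and function names are illustrative): only report to the root when one of the listed fields has actually changed.

```python
# Fields whose change warrants an update to the root container.
REPORTED_FIELDS = ('timestamp', 'object_count', 'bytes_used',
                   'state', 'epoch')

def needs_root_update(last_reported, current):
    """True if any reported field differs from what was last sent."""
    return any(last_reported.get(f) != current.get(f)
               for f in REPORTED_FIELDS)

old = {'timestamp': 1, 'object_count': 5, 'bytes_used': 100,
       'state': 'active', 'epoch': None}
assert not needs_root_update(old, dict(old))            # nothing changed
assert needs_root_update(old, dict(old, object_count=6))  # count changed
```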
Closes-Bug: #1834097
Cherry-Picked-From: I1ee7ba5eae3c508064714c4deb4f7c6bbbfa32af
Change-Id: If2346d1079b656290ad698d6e4ce276fbd9a5841
Previously, we'd see warnings like
UnicodeWarning: Unicode equal comparison failed to convert both
arguments to Unicode - interpreting them as being unequal
when setting lower/upper bounds with non-ascii byte strings.
Cherry-Picked-From: I328f297a5403d7e59db95bc726428a3f92df88e1
Change-Id: Icfffb3e7f53161a2641e8fd5be55b045d0c3de02
Cherry-Picked-From: Ic7c40589679c290e5565f9581f70b9a1c070f6ab
Change-Id: Ie5bee7eb7d219a6c8cec5fc1c9367793cecc7f4f
When an operator does a `find_and_replace` on a DB that already has
shard ranges, they get a prompt like:
This will delete existing 58 shard ranges.
Do you want to show the existing ranges [s], delete the existing
ranges [yes] or quit without deleting [q]?
Previously, if they selected `q`, we would skip the delete but still do
the merge (!) and immediately warn about how there are now invalid shard
ranges. Now, quit without merging.
Cherry-Picked-From: I7d869b137a6fbade59bb8ba16e4f3e9663e18822
Change-Id: I7ee69f0ca1aeef1004e61c45eeeb98dd35fe212e
Otherwise, we make a bunch of backend requests where we have no
real expectation of finding data.
Cherry-Picked-From: I7eaa012ba938eaa7fc22837c32007d1b7ae99709
Change-Id: I5ede50169603051f6c8ce921c9b6c81ed6f3b36b
...and bump up their timeout, since that seems more likely to happen if
we have to retry.
Change-Id: Ie05521f6cd146234dc5615c96ad19681b43e9110
(cherry picked from commit d4c0a7d3b3106f8b491e78ea21fca36c99ad04d9)
There's been a roughly 5x increase in timeouts in the last three months,
which was itself about a 10x increase from the three months prior. Let's
get timeouts to be the exception, not the norm.
FWIW, some stats over the past two and a half years:
Quarter | Median Pass Time | Timeouts
--------+------------------+---------
2018Q1 | 19.0m | 2
2018Q2 | 24.5m | 19
2018Q3 | 26.8m | 3
2018Q4 | 29.0m | 2
2019Q1 | 31.0m | 0
2019Q2 | 31.7m | 4
2019Q3 | 32.7m | 1
2019Q4 | 33.7m | 2
2020Q1 | 46.4m | 25
2020Q2 | 47.5m | 134
Change-Id: Iab60b952dfd1060f1166e5a07c9298c75e6831f1
(cherry picked from commit 5eb677fe505f41e4d039e3a699270609004261ab)
DSVM recently got a bunch more middlewares enabled, so it's running more
tests than it used to.
I can't think of much that's changed for probe tests, but I feel like
I've seen it popping timeouts more often lately.
Note that the new timeouts are still lower than the typical run-time of
our longest-running jobs (currently grenade / tempest-ipv6-only).
Change-Id: Iffbb567124096df02b04981550faec8204d5d1ec
Related-Change: I3cbbcd2ea9ced0923bee4a6b0783e4cf5e82e95b
(cherry picked from commit 67598b3a4a6f841726b7358245ae18edbfc58250)
Looking at the last few hundred runs, we've had ~8% TIMEOUTs, and the
average time is already 21 minutes. Let's give it an extra 10 minutes,
see if the TIMEOUT rate comes down.
Change-Id: Ifd9fbac032825bae1c8f5edd88c5d692a0b2cef1
(cherry picked from commit cd78ee983d0ae0cebdb27327b6ea7950308d0dbf)
Long term, though, we should look at moving this in-tree if we really care about it.
Change-Id: I0a25a6e395e5cf2bb39fa5b349418384eb513963
(cherry picked from commit fb91993b47aa02209ba39382620789b14e10d6d2)
That way we avoid POST_FAILUREs when the real problem was in run.
Change-Id: I9eb84d1c794d58f0af3b7d78d3bc4660c1823dc8
(cherry picked from commit 73aa48a823a2e602089c3e37d6b9ec4c9e19d35f)
Previously, if you were on Python 2.7.10+ [0], such a newline would cause the
sharder to fail, complaining about invalid header values when trying to create
the shard containers. On older versions of Python, it would most likely cause a
parsing error in the container-server that was trying to handle the PUT.
Now, quote all places that we pass around container paths. This includes:
* The X-Container-Sysmeta-Shard-(Quoted-)Root sent when creating the (empty)
remote shards
* The X-Container-Sysmeta-Shard-(Quoted-)Root included when initializing the
local handoff for cleaving
* The X-Backend-(Quoted-)Container-Path the proxy sends to the object-server
for container updates
* The Location header the container-server sends to the object-updater
Note that a new header was required in requests so that servers would
know whether the value should be unquoted or not. We can get away with
reusing Location in responses by having clients opt-in to quoting with
a new X-Backend-Accept-Quoted-Location header.
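The quoting itself is plain percent-encoding (account/container names here are examples): a newline in a container path would be an invalid header value, but quoted it travels safely and round-trips.

```python
from urllib.parse import quote, unquote

# A container path with a newline in the container name; raw, it would
# be rejected (or mis-parsed) as an HTTP header value.
path = '.shards_AUTH_test/evil\ncontainer'

# quote() keeps '/' by default and turns the newline into %0A.
quoted = quote(path)
assert quoted == '.shards_AUTH_test/evil%0Acontainer'

# The receiving server unquotes to recover the original path.
assert unquote(quoted) == path
```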
During a rolling upgrade,
* old object-servers servicing requests from new proxy-servers will
not know about the container path override and so will try to update
the root container,
* in general, object updates are more likely to land in the root
container; the sharder will deal with them as misplaced objects, and
* shard containers created by new code on servers running old code
will think they are root containers until the server is running new
code, too; during this time they'll fail the sharder audit and report
stats to their account, but both of these should get cleared up upon
upgrade.
Drive-by: fix a "conainer_name" typo that prevented us from testing that
we can shard a container with unicode in its name. Also, add more UTF8
probe tests.
[0] See https://bugs.python.org/issue22928
Closes-Bug: 1856894
Cherry-Picked-From: Ie08f36e31a448a547468dd85911c3a3bc30e89f1
Change-Id: I5268a9282fa3b785427498188aff541679c1f915
This patch specifies a set of configuration options required to build
a TLS context, which is used to wrap the client connection socket.
Closes-Bug: #1906846
Change-Id: I03a92168b90508956f367fbb60b7712f95b97f60
(cherry picked from commit 6930bc24b2f7613bc56bee3d2c34f7bb4890ec39)
Change-Id: I495fb1ec2394130c7274368662b58212ca375854
(cherry picked from commit 976cc8f482723a698160815affabb540e4d766eb)
Previously, passing a relative path would confuse the ContainerBroker about
which DB files are available, leading to an IndexError when none were found.
Just call realpath() on whatever the user provided so we don't have to muck
with any of the broker code.
Cherry-Picked-From: Icdf100dfcd006d975b49d151b99aa9272452d013
Change-Id: I77d3b50b802208732a11c68a2e77ceba7339dda0
The lack of quoting gets extra troublesome with reserved names,
where messages get truncated.
Cherry-Picked-From: I415901d3a8cd24cb3cedc72235292bb9d1705bbc
Change-Id: Idd634d1152395a50caa59f35452f216007e08406
Otherwise, we can 500 with
ValueError: invalid literal for int() with base 10: ''
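A defensive parse along these lines avoids the 500 (the helper name is illustrative, not the actual fix's code):

```python
def int_or_default(value, default=0):
    """Parse an int, tolerating empty or missing values.

    int('') raises ValueError, so an empty value from a header or DB
    row must not be fed to int() unguarded.
    """
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

assert int_or_default('42') == 42
assert int_or_default('') == 0       # the case that used to 500
assert int_or_default(None) == 0
```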
Cherry-Picked-From: I35614aa4b42e61d97929579dcb16f7dfc9fef96f
Change-Id: I239884e0cf3e6438fc791d609fe3acd456e21e94