path: root/chromium/docs/speed
author    Allan Sandfeld Jensen <allan.jensen@qt.io>  2020-11-18 16:35:47 +0100
committer Allan Sandfeld Jensen <allan.jensen@qt.io>  2020-11-18 15:45:54 +0000
commit    32f5a1c56531e4210bc4cf8d8c7825d66e081888 (patch)
tree      eeeec6822f4d738d8454525233fd0e2e3a659e6d /chromium/docs/speed
parent    99677208ff3b216fdfec551fbe548da5520cd6fb (diff)
download  qtwebengine-chromium-32f5a1c56531e4210bc4cf8d8c7825d66e081888.tar.gz
BASELINE: Update Chromium to 87.0.4280.67
Change-Id: Ib157360be8c2ffb2c73125751a89f60e049c1d54
Reviewed-by: Allan Sandfeld Jensen <allan.jensen@qt.io>
Diffstat (limited to 'chromium/docs/speed')
-rw-r--r--  chromium/docs/speed/benchmark/harnesses/blink_perf.md               |  25
-rw-r--r--  chromium/docs/speed/binary_size/android_binary_size_trybot.md       |   1
-rw-r--r--  chromium/docs/speed/binary_size/metrics.md                          |   8
-rw-r--r--  chromium/docs/speed/bot_health_sheriffing/glossary.md               |   2
-rw-r--r--  chromium/docs/speed/bot_health_sheriffing/how_to_disable_a_story.md |   7
-rw-r--r--  chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md   |   2
-rw-r--r--  chromium/docs/speed/metrics_changelog/2020_06_cls.md                |  21
-rw-r--r--  chromium/docs/speed/metrics_changelog/2020_06_fcp.md                |  21
-rw-r--r--  chromium/docs/speed/metrics_changelog/2020_07_fcp.md                |  24
-rw-r--r--  chromium/docs/speed/metrics_changelog/2020_08_lcp.md                |  23
-rw-r--r--  chromium/docs/speed/metrics_changelog/cls.md                        |   2
-rw-r--r--  chromium/docs/speed/metrics_changelog/fcp.md                        |   6
-rw-r--r--  chromium/docs/speed/metrics_changelog/lcp.md                        |   2
-rw-r--r--  chromium/docs/speed/perf_lab_platforms.md                           |   1
-rw-r--r--  chromium/docs/speed/perf_regression_sheriffing.md                   | 163
15 files changed, 177 insertions, 131 deletions
diff --git a/chromium/docs/speed/benchmark/harnesses/blink_perf.md b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
index 54f2d87cc60..39da9e7ab1d 100644
--- a/chromium/docs/speed/benchmark/harnesses/blink_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
@@ -85,7 +85,7 @@ Example tracing synchronous tests:
* [append-child-measure-time.html](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/test_data/append-child-measure-time.html)
-* [simple-html-measure-page-load-time.html](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/test_ata/simple-html-measure-page-load-time.html)
+* [simple-html-measure-page-load-time.html](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/test_data/simple-html-measure-page-load-time.html)
### Asynchronous Perf Tests
@@ -166,29 +166,6 @@ Here is an example for testing Cache Storage API of service workers:
[cache-open-add-delete-10K-service-worker.js](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/service_worker/resources/cache-open-add-delete-10K-service-worker.js)
-## Canvas Tests
-
-The sub-framework [canvas_runner.js](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/canvas/resources/canvas_runner.js) is used for
-tests in the `canvas` directory. This can measure rasterization and GPU time
-using requestAnimationFrame (RAF) and contains a callback framework for video.
-
-Normal tests using `runTest()` work similarly to the asynchronous test above,
-but crucially wait for RAF after completing a single trial of
-`MEASURE_DRAW_TIMES` runs.
-
-RAF tests are triggered by appending the query string `raf` (case insensitive)
-to the test's url. These tests wait for RAF to return before making a
-measurement. This way rasterization and GPU time are included in the
-measurement.
-
-For example:
-
-The test [gpu-bound-shader.html](https://chromium.googlesource.com/chromium/src/+/master/third_party/blink/perf_tests/canvas/gpu-bound-shader.html) is just measuring
-CPU, and thus looks extremely fast as the test is just one slow shader.
-
-The url `gpu-bound-shader.html?raf` will measure rasterization and GPU time as
-well, thus giving a more realistic measurement of performance.
-
## Running Tests
**Running tests directly in browser**
diff --git a/chromium/docs/speed/binary_size/android_binary_size_trybot.md b/chromium/docs/speed/binary_size/android_binary_size_trybot.md
index 59d907ef252..7c813ea43c7 100644
--- a/chromium/docs/speed/binary_size/android_binary_size_trybot.md
+++ b/chromium/docs/speed/binary_size/android_binary_size_trybot.md
@@ -207,3 +207,4 @@ For more information on when to use `const char *` vs `const char[]`, see
- [Link to recipe](https://cs.chromium.org/chromium/build/scripts/slave/recipes/binary_size_trybot.py)
- [Link to src-side checks](/tools/binary_size/trybot_commit_size_checker.py)
+- [Link to Gerrit Plugin](https://chromium.googlesource.com/infra/gerrit-plugins/chromium-binary-size/)
diff --git a/chromium/docs/speed/binary_size/metrics.md b/chromium/docs/speed/binary_size/metrics.md
index c9a1cbadc59..5105f178408 100644
--- a/chromium/docs/speed/binary_size/metrics.md
+++ b/chromium/docs/speed/binary_size/metrics.md
@@ -37,9 +37,15 @@ For Googlers, more information available at [go/chrome-apk-size](https://goto.go
* Computed as:
* The size of an APK
* With all native code as the sum of section sizes (except .bss), uncompressed.
+ * Why: Removes effects of ELF section alignment.
* With all dex code as if it were stored uncompressed.
+ * Why: Dex is stored uncompressed on newer Android versions.
+ * With all zipalign padding removed.
+ * Why: Removes effects of file alignment (esp. relevant because native libraries are 4k-aligned).
+ * With size of apk signature block removed.
+ * Why: Size fluctuates by several KB based on how hash values turn out.
* With all translations as if they were not missing (estimates size of missing translations based on size of english strings).
- * Without translation-normalization, translation dumps cause jumps.
+ * Why: Without translation-normalization, translation dumps cause jumps.
* Translation-normalization applies only to apks (not to Android App Bundles).
### Native Code Size Metrics
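The normalization rules above can be summarized in a short sketch. This is purely illustrative: the function name, parameters, and numbers are hypothetical, and the real accounting is done by Chromium's binary-size tooling. (The native-code section-size and translation details are simplified here.)

```javascript
// Illustrative sketch of the normalized-APK-size adjustments described above.
// All names and numbers are hypothetical, not from Chromium's tooling.
function normalizedApkSize(rawApkSize, adjustments) {
  let size = rawApkSize;
  size -= adjustments.zipalignPadding; // remove file-alignment padding
  size -= adjustments.signatureBlock;  // signing block varies with hash values
  // Count dex as if stored uncompressed, as on newer Android versions.
  size += adjustments.dexUncompressed - adjustments.dexCompressed;
  // Add the estimated size of missing translations.
  size += adjustments.missingTranslationEstimate;
  return size;
}

console.log(normalizedApkSize(1000000, {
  zipalignPadding: 4096,
  signatureBlock: 3000,
  dexCompressed: 150000,
  dexUncompressed: 180000,
  missingTranslationEstimate: 12000,
})); // 1034904
```

Because each adjustment is additive, the normalized metric moves only when the underlying code or resources change, not when alignment padding or signature hashes happen to shift.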
diff --git a/chromium/docs/speed/bot_health_sheriffing/glossary.md b/chromium/docs/speed/bot_health_sheriffing/glossary.md
index 6a2343ead9e..c9bad7d6246 100644
--- a/chromium/docs/speed/bot_health_sheriffing/glossary.md
+++ b/chromium/docs/speed/bot_health_sheriffing/glossary.md
@@ -10,7 +10,7 @@
* **Device: **The physical hardware that we run performance tests on.
-* **Flakiness dashboard: **[A dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html#testType=blink_perf.canvas) that shows the revisions at which a given test failed. Also known as the test results dashboard.
+* **Flakiness dashboard: **[A dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html#testType=blink_perf.layout) that shows the revisions at which a given test failed. Also known as the test results dashboard.
* **Host: **The physical hardware that Telemetry runs on. For desktop testing, this is the same as the *device* on which the testing is done. For mobile testing, the *host* can mean either the Linux desktop or one of the multiple Docker containers within that Linux desktop, each with access to a single attached mobile device.
diff --git a/chromium/docs/speed/bot_health_sheriffing/how_to_disable_a_story.md b/chromium/docs/speed/bot_health_sheriffing/how_to_disable_a_story.md
index 3eb10fc993d..0f8503d8db4 100644
--- a/chromium/docs/speed/bot_health_sheriffing/how_to_disable_a_story.md
+++ b/chromium/docs/speed/bot_health_sheriffing/how_to_disable_a_story.md
@@ -22,7 +22,8 @@ As of January 2020, no benchmarks are named in a similar fashion.
Start a fresh branch in an up-to-date Chromium checkout. If you're unsure of how to do this, [see these instructions](https://www.chromium.org/developers/how-tos/get-the-code).
- In your editor, open up [`tools/perf/expectations.config`](https://cs.chromium.org/chromium/src/tools/perf/expectations.config?q=expectations.config&sq=package:chromium&dr).
+
+In your editor, open up [`tools/perf/expectations.config`](https://cs.chromium.org/chromium/src/tools/perf/expectations.config?q=expectations.config&sq=package:chromium&dr).
You'll see that the file is divided into sections sorted alphabetically by benchmark name. Find the section for the benchmark in question. (If it doesn't exist, add it in the correct alphabetical location.)
@@ -46,12 +47,12 @@ Add a new line for each story that you need to disable, or an asterisk if you're
For example, an entry disabling a particular story might look like:
- crbug.com/738453 [ Nexus_6 ] blink_perf.canvas/putImageData.html [ Skip ]
+ crbug.com/738453 [ Nexus_6 ] blink_perf.layout/subtree-detaching.html [ Skip ]
whereas an entry disabling a benchmark on an entire platform might look like:
- crbug.com/593973 [ Android_Svelte ] blink_perf.canvas/* [ Skip ]
+ crbug.com/593973 [ Android_Svelte ] blink_perf.layout/* [ Skip ]
## Submit changes
diff --git a/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md b/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
index 91b15bece4a..6c65d1ff392 100644
--- a/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
+++ b/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
@@ -2,7 +2,7 @@
The first step in addressing a test failure is to identify what stories are failing.
-The easiest way to identify these is to use the [Flakiness dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html#testType=blink_perf.canvas), which is a high-level dashboard showing test passes and failures. (Sheriff-o-matic tries to automatically identify the failing stories, but is often incorrect and therefore can't be trusted.) Open up the flakiness dashboard and select the benchmark and platform in question (pulled from the SOM alert) from the "Test type" and "Builder" dropdowns. You should see a view like this:
+The easiest way to identify these is to use the [Flakiness dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html#testType=blink_perf.layout), which is a high-level dashboard showing test passes and failures. (Sheriff-o-matic tries to automatically identify the failing stories, but is often incorrect and therefore can't be trusted.) Open up the flakiness dashboard and select the benchmark and platform in question (pulled from the SOM alert) from the "Test type" and "Builder" dropdowns. You should see a view like this:
![The flakiness dashboard](images/flakiness_dashboard.png)
diff --git a/chromium/docs/speed/metrics_changelog/2020_06_cls.md b/chromium/docs/speed/metrics_changelog/2020_06_cls.md
new file mode 100644
index 00000000000..52813fad4fb
--- /dev/null
+++ b/chromium/docs/speed/metrics_changelog/2020_06_cls.md
@@ -0,0 +1,21 @@
+# Cumulative Layout Shift Changes in M85
+
+## Changes in Chrome 85
+Prior to Chrome 85, there was a [bug](https://bugs.chromium.org/p/chromium/issues/detail?id=1088311)
+in Cumulative Layout Shift on pages with video elements. Hovering over the video
+element so that the thumb slider was visible would result in layout shifts when
+it moved. The bug was fixed in Chrome 85. The source code of the change can be
+seen [here](https://chromium-review.googlesource.com/c/chromium/src/+/2233310).
+
+## How does this affect a site's metrics?
+
+This change only affects metrics for a very small number of sites. Desktop sites
+with video elements that users hover their mouse over for an extended period of
+time will have lower CLS values starting in Chrome 85.
+
+We do not see an impact from this change in our overall metrics, so we believe
+the effect on most sites will be minimal.
+
+## When were users affected?
+
+Most users were updated to Chrome 85 the week of August 24, 2020.
diff --git a/chromium/docs/speed/metrics_changelog/2020_06_fcp.md b/chromium/docs/speed/metrics_changelog/2020_06_fcp.md
new file mode 100644
index 00000000000..3ec1798d42e
--- /dev/null
+++ b/chromium/docs/speed/metrics_changelog/2020_06_fcp.md
@@ -0,0 +1,21 @@
+# First Contentful Paint Changes in M84
+
+## Changes in Chrome 84
+Starting in Chrome 84, content with opacity:0 is no longer counted as the
+first contentful paint. This brings behavior in line with the
+[specification](https://www.w3.org/TR/paint-timing/).
+The source code of the change can be seen
+[here](https://chromium-review.googlesource.com/c/chromium/src/+/2145134).
+
+## How does this affect a site's metrics?
+
+This change affects sites whose content is all opacity:0 during the first paint.
+After this change, the first contentful paint will be reported the next time
+visible content paints.
+
+We do not see an impact from this change in our overall metrics, so we believe
+the effect on most sites will be minimal.
+
+## When were users affected?
+
+Most users were updated to Chrome 84 the week of July 13, 2020.
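A site can check where its FCP lands via the Paint Timing API. The helper below is a hypothetical sketch with illustrative entry values: it just picks the `first-contentful-paint` entry out of a list, as a RUM script might; after Chrome 84, an all-opacity:0 first paint means this entry arrives later, when visible content paints.

```javascript
// Hypothetical helper: pick First Contentful Paint out of paint-timing
// entries. Entry shapes mirror the Paint Timing API; values are illustrative.
function firstContentfulPaint(paintEntries) {
  const fcp = paintEntries.find((e) => e.name === 'first-contentful-paint');
  return fcp ? fcp.startTime : undefined;
}

// In a real page, entries come from the Paint Timing API:
//   new PerformanceObserver((list) => { /* ... */ })
//       .observe({type: 'paint', buffered: true});
const entries = [
  {name: 'first-paint', startTime: 120.0},
  {name: 'first-contentful-paint', startTime: 184.5},
];
console.log(firstContentfulPaint(entries)); // 184.5
```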
diff --git a/chromium/docs/speed/metrics_changelog/2020_07_fcp.md b/chromium/docs/speed/metrics_changelog/2020_07_fcp.md
new file mode 100644
index 00000000000..94bdbaff668
--- /dev/null
+++ b/chromium/docs/speed/metrics_changelog/2020_07_fcp.md
@@ -0,0 +1,24 @@
+# First Contentful Paint Changes in M86
+
+## Changes in Chrome 86
+In Chrome 86, some small changes were made to First Contentful Paint to bring
+its implementation in line with the [specification](https://www.w3.org/TR/paint-timing/).
+
+The changes are:
+ * Video elements now trigger FCP when painted. [Source code for this change](https://chromium-review.googlesource.com/c/chromium/src/+/2276244).
+ * Only SVG elements with content now trigger FCP. Previously, empty SVG paints triggered FCP. [Source code for this change](https://chromium-review.googlesource.com/c/chromium/src/+/2285532).
+ * WebGL2 canvases now trigger FCP when painted. [Source code for this change](https://chromium-review.googlesource.com/c/chromium/src/+/2348694).
+
+## How does this affect a site's metrics?
+
+This change affects a small set of sites:
+ * Sites with a first contentful paint which is only a video element without a `poster` attribute will have an earlier FCP time.
+ * Sites with a first contentful paint which is only an empty SVG element will have a later FCP time.
+ * Sites with a first contentful paint which is only a WebGL canvas will have an earlier FCP time.
+
+We do not see an impact from this change in our overall metrics, so we believe
+the effect on most sites will be minimal.
+
+## When were users affected?
+
+Most users were updated to Chrome 86 the week of October 5, 2020.
diff --git a/chromium/docs/speed/metrics_changelog/2020_08_lcp.md b/chromium/docs/speed/metrics_changelog/2020_08_lcp.md
new file mode 100644
index 00000000000..b9170b5e9bb
--- /dev/null
+++ b/chromium/docs/speed/metrics_changelog/2020_08_lcp.md
@@ -0,0 +1,23 @@
+# Largest Contentful Paint Changes in M86
+
+## Changes in Chrome 86
+Prior to Chrome 86, there was a [bug](https://bugs.chromium.org/p/chromium/issues/detail?id=1092473)
+in Largest Contentful Paint on some pages where the largest contentful element
+initially has opacity:0. The bug was fixed in Chrome 86. The source code of the
+change can be seen [here](https://chromium-review.googlesource.com/c/chromium/src/+/2316788).
+
+## How does this affect a site's metrics?
+
+This change only affects metrics for a very small number of sites. Generally,
+sites whose largest contentful element is initially opacity:0 are using A/B
+testing frameworks that clear the page before displaying the correct content.
+Sites using this technique will now see a longer LCP, reflecting the time
+until the largest contentful paint is visible to the user, instead of the time
+at which it is loaded in the DOM but invisible.
+
+We do not see an impact from this change in our overall metrics, so we believe
+the effect on most sites will be minimal.
+
+## When were users affected?
+
+Most users were updated to Chrome 86 the week of October 5, 2020.
diff --git a/chromium/docs/speed/metrics_changelog/cls.md b/chromium/docs/speed/metrics_changelog/cls.md
index b2fd866057f..a2d8fa13cb9 100644
--- a/chromium/docs/speed/metrics_changelog/cls.md
+++ b/chromium/docs/speed/metrics_changelog/cls.md
@@ -8,6 +8,8 @@ This is a list of changes to [Cumulative Layout Shift](https://web.dev/cls).
* an element and its descendants if they move together, and
* inline elements and texts in a block after a shifted text.
These changes will affect layout instability score for the specific cases.
+* Chrome 85
+ * Metric definition improvement: [Cumulative Layout Shift ignores layout shifts from video slider thumb](2020_06_cls.md)
* Chrome 79
* Metric is elevated to stable; changes in metric definition will be reported in this log.
* Chrome 77
diff --git a/chromium/docs/speed/metrics_changelog/fcp.md b/chromium/docs/speed/metrics_changelog/fcp.md
index 1d8545c469a..ef2c9f4798c 100644
--- a/chromium/docs/speed/metrics_changelog/fcp.md
+++ b/chromium/docs/speed/metrics_changelog/fcp.md
@@ -2,8 +2,12 @@
This is a list of changes to [First Contentful Paint](https://web.dev/fcp).
+* Chrome 86
+ * Metric definition improvements: [First Contentful Paint reported for videos, webgl2 canvases, and non-contentful SVG](2020_07_fcp.md)
+* Chrome 84
+ * Metric definition improvement: [First Contentful Paint not reported for opacity:0](2020_06_fcp.md)
* Chrome 77
* Metric definition improvement: [First Contentful Paint ending switches from swap time to presentation time](2019_12_fcp.md)
* Chrome performance regression: [First Contentful Paint regression (recovered in Chrome 78)](2019_12_fcp.md)
* Chrome 60
- * Metric exposed via API: [First Contentful Paint](https://web.dev/first-contentful-paint/) available via [Paint Timing API](https://w3c.github.io/paint-timing/#first-contentful-paint)
\ No newline at end of file
+ * Metric exposed via API: [First Contentful Paint](https://web.dev/first-contentful-paint/) available via [Paint Timing API](https://w3c.github.io/paint-timing/#first-contentful-paint)
diff --git a/chromium/docs/speed/metrics_changelog/lcp.md b/chromium/docs/speed/metrics_changelog/lcp.md
index 80aabcf4335..5d114ddf428 100644
--- a/chromium/docs/speed/metrics_changelog/lcp.md
+++ b/chromium/docs/speed/metrics_changelog/lcp.md
@@ -2,6 +2,8 @@
This is a list of changes to [Largest Contentful Paint](https://web.dev/lcp).
+* Chrome 86
+ * Metric definition improvement: [Largest Contentful Paint ignores paints with opacity 0](2020_08_lcp.md)
* Chrome 83
* Metric definition improvement: [Largest Contentful Paint measurement stops at first input or scroll](2020_05_lcp.md)
* Metric definition improvement: [Largest Contentful Paint properly accounts for visual size of background images](2020_05_lcp.md)
diff --git a/chromium/docs/speed/perf_lab_platforms.md b/chromium/docs/speed/perf_lab_platforms.md
index 4f4f7a1099f..59decbb192a 100644
--- a/chromium/docs/speed/perf_lab_platforms.md
+++ b/chromium/docs/speed/perf_lab_platforms.md
@@ -13,6 +13,7 @@
* [android-pixel2-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2-perf): Android OPM1.171019.021.
* [android-pixel2_weblayer-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2_weblayer-perf): Android OPM1.171019.021.
* [android-pixel2_webview-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2_webview-perf): Android OPM1.171019.021.
+ * [android-pixel4a_power-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel4a_power-perf): Android QD4A.200102.001.A1.
## Linux
diff --git a/chromium/docs/speed/perf_regression_sheriffing.md b/chromium/docs/speed/perf_regression_sheriffing.md
index ef058ffa050..33375a0e350 100644
--- a/chromium/docs/speed/perf_regression_sheriffing.md
+++ b/chromium/docs/speed/perf_regression_sheriffing.md
@@ -1,105 +1,75 @@
# Perf Regression Sheriffing (go/perfregression-sheriff)
The perf regression sheriff tracks performance regressions in Chrome's
-continuous integration tests. Note that a [new rotation](perf_bot_sheriffing.md)
-has been created to ensure the builds and tests stay green, so the perf
-regression sheriff role is now entirely focused on performance.
+continuous integration tests. Note that a [different
+rotation](perf_bot_sheriffing.md) has been created to ensure the builds and
+tests stay green, so the perf regression sheriff role is now entirely focused
+on performance.
**[Rotation calendar](https://calendar.google.com/calendar/embed?src=google.com_2fpmo740pd1unrui9d7cgpbg2k%40group.calendar.google.com)**
## Key Responsibilities
- * [Triage Regressions on the Perf Dashboard](#Triage-Regressions-on-the-Perf-Dashboard)
- * [Follow up on Performance Regressions](#Follow-up-on-Performance-Regressions)
- * [Give Feedback on our Infrastructure](#Give-Feedback-on-our-Infrastructure)
-
-## Triage Regressions on the Perf Dashboard
-
-Open the perf dashboard [alerts page](https://chromeperf.appspot.com/alerts).
-
-In the upper right corner, **sign in with your Chromium account**. Signing in is
-important in order to be able to kick off bisect jobs, and see data from
-internal waterfalls.
-
-Pick up **Chromium Perf Sheriff** from "Select an item ▼" drop down menu.
-table of "Performance Alerts" should be shown. If there are no currently pending
-alerts, then the table won't be shown.
-
-The list can be sorted by clicking on the column header. When you click on the
-checkbox next to an alert, all the other alerts that occurred in the same
-revision range will be highlighted.
-
-Check the boxes next to the alerts you want to take a look at, and click the
-"Graph" button. You'll be taken to a page with a table at the top listing all
-the alerts that have an overlapping revision range with the one you chose, and
-below it the dashboard shows graphs of all the alerts checked in that table.
-
-1. **For alerts related to `resource_sizes`:**
- * Refer to [apk_size_regressions.md](apk_size_regressions.md).
-2. **Look at the graph**.
- * If the alert appears to be **within the noise**, click on the red
- exclamation point icon for it in the graph and hit the "Report Invalid
- Alert" button.
- * If the alert appears to be **reverting a recent improvement**, click on
- the red exclamation point icon for it in the graph and hit the "Ignore
- Valid Alert" button.
- * If the alert is **visibly to the left or the right of the
- actual regression**, click on it and use the "nudge" menu to move it into
- place.
- * If there is a line labeled "ref" on the graph, that is the reference build.
- It's an older version of Chrome, used to help us sort out whether a change
- to the bot or test might have caused the graph to jump, rather than a real
- performance regression. If **the ref build moved at the same time as the
- alert**, click on the alert and hit the "Report Invalid Alert" button.
-3. **Look at the other alerts** in the table to see if any should be grouped together.
- Note that the bisect will automatically dupe bugs if it finds they have the
- same culprit, so you don't need to be too aggressive about grouping alerts
- that might not be related. Some signs alerts should be grouped together:
- * If they're all in the same test suite
- * If they all regressed the same metric (a lot of commonality in the Test
- column)
-4. **Triage the group of alerts**. Check all the alerts you believe are related,
- and press the triage button.
- * If one of the alerts already has a bug id, click "existing bug" and use
- that bug id.
- * Otherwise click "new bug".
- * Only add a description if you have additional context. Otherwise a default
- description will be automatically added when left blank.
-5. **Look at the revision range** for the regression. You can see it in the
- tooltip on the graph. If you see any likely culprits, cc the authors on the
- bug.
-6. **Optionally, kick off more bisects**. The perf dashboard will automatically
- kick off a bisect for each bug you file. But if you think the regression is
- much clearer on one platform, or a specific page of a page set, or you want
- to see a broader revision range feel free to click on the alert on that graph
- and kick off a bisect for it. There should be capacity to kick off as many
- bisects as you feel are necessary to investigate; [give feedback](#feedback)
- below if you feel that is not the case.
-
-### Dashboard UI Tips
-
-* Grouping is done client side today. If you click "Show more" at the bottom
-until you can see all the alerts, the alerts will be grouped together more.
-* You can shift click on the check boxes to select multiple alerts quickly.
+* [Address bugs needing attention](#Address-bugs-needing-attention)
-## Follow up on Performance Regressions
+* [Follow up on Performance Regressions](#Follow-up-on-Performance-Regressions)
+
+* [Give Feedback on our Infrastructure](#Give-Feedback-on-our-Infrastructure)
+
+## Address bugs needing attention
+
+NOTE: Ensure that you're signed into Monorail.
+
+Use [this Monorail query](https://bugs.chromium.org/p/chromium/issues/list?sort=modified&q=label%3AChromeperf-Sheriff-NeedsAttention%2CChromeperf-Auto-NeedsAttention%20-has%3Aowner&can=2)
+to find automatically triaged issues which need attention.
+
+NOTE: If the list of issues that need attention is empty, please jump ahead to
+[Follow up on Performance Regressions](#Follow-up-on-Performance-Regressions).
+
+Issues in the list will include automatically filed and bisected regressions
+that are supported by the Chromium Perf Sheriff rotation. For each of the
+issues:
+
+1. Determine the cause of the failure:
+
+ * If it's Pinpoint failing to find a culprit, consider re-running the
+ failing Pinpoint job.
-During your shift, you should try to follow up on each of the bugs you filed.
-Once you've triaged all the alerts, check to see if the bisects have come back,
-or if they failed. If the results came back, and a culprit was found, follow up
-with the CL author. If the bisects failed to update the bug with results, please
-file a bug on it (see [feedback](#feedback) links below).
+ * If it's the Chromeperf Dashboard failing to start a Pinpoint bisection,
+ consider running a bisection from the grouped alerts. The issue
+ description should have a link to the group of anomalies associated with
+ the issue.
+
+ * If this was a manual escalation (e.g. a suspected culprit author added
+ the `Chromeperf-Sheriff-NeedsAttention` label to seek help), use the
+ tools at your disposal, like:
+
+ * Retry the most recent Pinpoint job, potentially changing the parameters.
+
+ * Inspect the results of the Pinpoint job associated with the issue and
+ decide whether it could be noise.
+
+ * In cases where it's unclear what next should be done, escalate the issue
+ to the Chrome Speed Tooling team by adding the `Speed>Bisection` component
+ and leaving the issue `Untriaged` or `Unconfirmed`.
+
+2. Remove the `Chromeperf-Sheriff-NeedsAttention` or
+ `Chromeperf-Auto-NeedsAttention` label once you've acted on an issue.
+
+**For alerts related to `resource_sizes`:** Refer to
+ [apk_size_regressions.md](apk_size_regressions.md).
+
+## Follow up on Performance Regressions
-Also during your shift, please spend any spare time driving down bugs from the
-[regression backlog](http://go/triage-backlog). Treat these bugs as you would
-your own -- investigate the regressions, find out what the next step should be,
-and then move the bug along. Some possible next steps and questions to answer
-are:
+Please spend any spare time driving down bugs from the [regression
+backlog](http://go/triage-backlog). Treat these bugs as you would your own --
+investigate the regressions, find out what the next step should be, and then
+move the bug along. Some possible next steps and questions to answer are:
-* Should the bug be closed?
-* Are there questions that need to be answered?
-* Are there people that should be added to the CC list?
-* Is the correct owner assigned?
+* Should the bug be closed?
+* Are there questions that need to be answered?
+* Are there people that should be added to the CC list?
+* Is the correct owner assigned?
When a bug does need to be pinged, rather than adding a generic "ping", it's
much more effective to include the username and action item.
@@ -121,16 +91,9 @@ tools are accurate and improving them. Please file bugs and feature requests
as you see them:
* **Perf Dashboard**: Please use the red "Report Issue" link in the navbar.
-* **Perf Bisect/Trybots**: If a bisect is identifying the wrong CL as culprit
+* **Pinpoint**: If Pinpoint is identifying the wrong CL as culprit
or missing a clear culprit, or not reproducing what appears to be a clear
- regression, please link the comment the bisect bot posted on the bug at
- [go/bad-bisects](https://docs.google.com/spreadsheets/d/13PYIlRGE8eZzsrSocA3SR2LEHdzc8n9ORUoOE2vtO6I/edit#gid=0).
- The team triages these regularly. If you spot a really clear bug (bisect
- job red, bugs not being updated with bisect results) please file it in
- crbug with component `Speed>Bisection`. If a bisect problem is blocking a
- perf regression bug triage, **please file a new bug with component
- `Speed>Bisection` and block the regression bug on the bisect bug**. This
- makes it much easier for the team to triage, dupe, and close bugs on the
- infrastructure without affecting the state of the perf regression bugs.
+ regression, please file an issue in crbug with the `Speed>Bisection`
+ component.
* **Noisy Tests**: Please file a bug in crbug with component `Speed>Benchmarks`
and [cc the owner](http://go/perf-owners).