author     Allan Sandfeld Jensen <allan.jensen@qt.io>  2018-01-31 16:33:43 +0100
committer  Allan Sandfeld Jensen <allan.jensen@qt.io>  2018-02-06 16:33:22 +0000
commit     da51f56cc21233c2d30f0fe0d171727c3102b2e0 (patch)
tree       4e579ab70ce4b19bee7984237f3ce05a96d59d83 /chromium/docs/speed
parent     c8c2d1901aec01e934adf561a9fdf0cc776cdef8 (diff)
download   qtwebengine-chromium-da51f56cc21233c2d30f0fe0d171727c3102b2e0.tar.gz

BASELINE: Update Chromium to 65.0.3525.40

Also imports missing submodules.

Change-Id: I36901b7c6a325cda3d2c10cedb2186c25af3b79b
Reviewed-by: Alexandru Croitor <alexandru.croitor@qt.io>
Diffstat (limited to 'chromium/docs/speed')
-rw-r--r--  chromium/docs/speed/README.md                            |   7
-rw-r--r--  chromium/docs/speed/benchmark_harnesses/system_health.md | 200
-rw-r--r--  chromium/docs/speed/images/pinpoint-perf-try-button.png  | bin 0 -> 20354 bytes
-rw-r--r--  chromium/docs/speed/images/pinpoint-perf-try-dialog.png  | bin 0 -> 111446 bytes
-rw-r--r--  chromium/docs/speed/perf_bot_sheriffing.md               | 108
-rw-r--r--  chromium/docs/speed/perf_trybots.md                      |  95
-rw-r--r--  chromium/docs/speed/speed_tracks.md                      |   6
7 files changed, 277 insertions, 139 deletions
diff --git a/chromium/docs/speed/README.md b/chromium/docs/speed/README.md
index ec870817084..c2ccfc8c0f9 100644
--- a/chromium/docs/speed/README.md
+++ b/chromium/docs/speed/README.md
@@ -30,3 +30,10 @@
* **[Chrome Speed Metrics](https://docs.google.com/document/d/1wBT5fauGf8bqW2Wcg2A5Z-3_ZvgPhE8fbp1Xe6xfGRs/edit#heading=h.8ieoiiwdknwt)**: provides a set of high-quality metrics that represent real-world user experience, and exposes these metrics to both Chrome and Web Developers.
* General discussion: progressive-web-metrics@chromium.org
* The actual metrics: [tracking](https://docs.google.com/spreadsheets/d/1gY5hkKPp8RNVqmOw1d-bo-f9EXLqtq4wa3Z7Q8Ek9Tk/edit#gid=0)
+
+## For Googlers
+
+ * We have a bi-weekly meeting to keep everyone in sync. Join chrome-speed@ or
+ contact benhenry@chromium.org for more information.
+ * Have something to include in our Milestone-based Speed report to the Chrome
+ team? Please keep track of it [here](https://goto.google.com/speed-improvement).
diff --git a/chromium/docs/speed/benchmark_harnesses/system_health.md b/chromium/docs/speed/benchmark_harnesses/system_health.md
new file mode 100644
index 00000000000..5ec3d68a78a
--- /dev/null
+++ b/chromium/docs/speed/benchmark_harnesses/system_health.md
@@ -0,0 +1,200 @@
+# System Health tests
+
+[TOC]
+
+## Overview
+
+The Chrome System Health benchmarking effort aims to create a common set of user
+stories on the web that can be used for all Chrome speed projects. Our
+benchmarks mimic average web users’ activities and cover all major web platform
+APIs & browser features.
+
+The web is vast, and the possible combinations of user activities are endless.
+Hence, to give engineers a useful benchmarking tool for preventing regressions,
+during launches, and in day-to-day work, we use data analysis and work with
+teams within Chrome to create a limited set of stories that fits a budget of
+90 minutes of machine time.
+
+These are our key coverage areas for the browser:
+* Different user gestures: swipe, fling, text input, scroll & infinite scroll
+* Video
+* Audio
+* Flash
+* Graphics: css, svg, canvas, webGL
+* Background tabs
+* Multi-tab switching
+* Back button
+* Follow a link
+* Restore tabs
+* Reload a page
+* ... ([Full tracking sheet](https://docs.google.com/spreadsheets/d/1t15Ya5ssYBeXAZhHm3RJqfwBRpgWsxoib8_kwQEHMwI/edit#gid=0))
+
+To us, success means that the System Health benchmarks cast a wide enough net
+to catch major regressions before they make it to users. It also means that
+performance improvements to System Health benchmarks translate to actual wins
+on the web, enabling teams to use these benchmarks for tracking progress with
+the confidence that their improvements on the suite matter to real users.
+
+To achieve this goal, merely simulating users’ activities on the web is not
+enough. We also partner with the
+[chrome-speed-metrics](https://groups.google.com/a/chromium.org/forum/#!forum/progressive-web-metrics)
+team to track key user metrics on our user stories.
+
+
+## Where are the System Health stories?
+
+All the System Health stories are located in
+[tools/perf/page_sets/system_health/](../../../tools/perf/page_sets/system_health/).
+
+There are a few groups of stories:
+1. [Accessibility stories](../../../tools/perf/page_sets/system_health/accessibility_stories.py)
+2. [Background stories](../../../tools/perf/page_sets/system_health/background_stories.py)
+3. [Browsing stories](../../../tools/perf/page_sets/system_health/browsing_stories.py)
+4. [Chrome stories](../../../tools/perf/page_sets/system_health/chrome_stories.py)
+5. [Loading stories](../../../tools/perf/page_sets/system_health/loading_stories.py)
+6. [Multi-tab stories](../../../tools/perf/page_sets/system_health/multi_tab_stories.py)
+7. [Media stories](../../../tools/perf/page_sets/system_health/media_stories.py)
+
+## What is the structure of a System Health story?
+A System Health story is a subclass of
+[SystemHealthStory](https://cs.chromium.org/chromium/src/tools/perf/page_sets/system_health/system_health_story.py?l=44&rcl=d5f1f0821489a8311dc437fc6b70ac0b0d72b28b), for example:
+```
+class NewSystemHealthStory(SystemHealthStory):
+  NAME = 'case:group:page:2018'
+  URL = 'https://the.starting.url'
+  TAGS = [story_tags.JAVASCRIPT_HEAVY, story_tags.INFINITE_SCROLL]
+  SUPPORTED_PLATFORMS = platforms.ALL_PLATFORMS  # Default.
+  # or platforms.DESKTOP_ONLY
+  # or platforms.MOBILE_ONLY
+  # or platforms.NO_PLATFORMS
+
+  def _Login(self, action_runner):
+    # Optional. Called before the starting URL is loaded.
+    pass
+
+  def _DidLoadDocument(self, action_runner):
+    # Optional. Called after the starting URL is loaded
+    # (but before potentially measuring memory).
+    pass
+```
+
+The name must have the following structure:
+1. **Case** (load, browse, search, …). User action/journey that the story
+ simulates (verb). Stories for each case are currently kept in a separate
+ file.
+ Benchmarks using the System Health story set can specify which cases they want to
+ include (see
+ [SystemHealthStorySet](https://cs.chromium.org/chromium/src/tools/perf/page_sets/system_health/system_health_stories.py?l=16&rcl=e3eb21e24dbe0530356003fd9f9a8a94fb91d00b)).
+2. **Group** (social, news, tools, …). General category to which the page
+ (item 3) belongs.
+3. **Page** (google, facebook, nytimes, …). The name of the individual page. If
+   the story covers multiple pages, use a general grouping name like
+   "top_pages" or "typical_pages".
+4. **Year** (2017, 2018, 2018_q3, ...). The year (and quarter if necessary for
+ disambiguating) when the page is added. Note: this rule was added later,
+ so the old System Health stories do not have this field.
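To make the scheme concrete, here is a minimal sketch (a hypothetical helper, not part of the actual harness) that splits and checks a story name against the four-part structure described above:

```python
# Hypothetical helper illustrating the 'case:group:page:year' naming
# scheme described above; it is not part of the actual harness.
def validate_story_name(name):
    parts = name.split(':')
    if len(parts) != 4:
        raise ValueError('expected case:group:page:year, got %r' % name)
    case, group, page, year = parts
    # The year may carry a quarter suffix, e.g. '2018_q3'.
    if not (len(year) >= 4 and year[:4].isdigit()):
        raise ValueError('year must start with four digits, got %r' % year)
    return case, group, page, year

print(validate_story_name('browse:social:facebook:2018'))
# ('browse', 'social', 'facebook', '2018')
```

Note that older stories predate the year rule and would fail such a check, which is exactly the inconsistency mentioned in item 4.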
+
+In addition, each story has accompanying tags that define its important
+characteristics.
+[Tags](../../../tools/perf/page_sets/system_health/story_tags.py) are used as
+the way to track coverage of System Health stories, so they should be as
+detailed as needed to distinguish each System Health story from the others.
+
+## How are System Health stories executed?
+Given a System Health story set with N stories, each story is executed sequentially as
+follows:
+
+1. Launch the browser
+2. Start tracing
+3. Run `story._Login` (no-op by default)
+4. Load `story.URL`
+5. Run `story._DidLoadDocument` (no-op by default)
+6. Measure memory (disabled by default)
+7. Stop tracing
+8. Tear down the browser
+
+All the benchmarks using System Health stories tear down the browser after each
+story. This ensures that every story is completely independent and that
+modifications to the System Health story set won’t cause as many
+regressions/improvements on the perf dashboard.
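The eight steps above can be sketched in plain Python (the class and method names here are illustrative stand-ins, not the real Telemetry API):

```python
# Illustrative sketch of the per-story lifecycle described above.
# FakeBrowser and run_story are stand-ins; the real Telemetry
# classes and method names differ.
class FakeBrowser:
    def __init__(self):
        self.log = []                  # Records what happened, in order.

    def start_tracing(self):
        self.log.append('trace-start')

    def load(self, url):
        self.log.append('load:' + url)

    def stop_tracing(self):
        self.log.append('trace-stop')


def run_story(story, measure_memory=False):
    browser = FakeBrowser()            # 1. Launch the browser.
    browser.start_tracing()            # 2. Start tracing.
    story._Login(browser)              # 3. No-op by default.
    browser.load(story.URL)            # 4. Load story.URL.
    story._DidLoadDocument(browser)    # 5. No-op by default.
    if measure_memory:                 # 6. Memory measurement off by default.
        browser.log.append('measure-memory')
    browser.stop_tracing()             # 7. Stop tracing.
    return browser.log                 # 8. Browser is torn down per story.


class ExampleStory:
    URL = 'https://example.com'

    def _Login(self, action_runner):
        pass

    def _DidLoadDocument(self, action_runner):
        pass


print(run_story(ExampleStory()))
# ['trace-start', 'load:https://example.com', 'trace-stop']
```

Because the browser object is created and discarded inside `run_story`, no state can leak from one story to the next, which is the independence property the text above describes.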
+
+## Should I add new System Health stories and how?
+
+First, check this list of [System Health stories](https://docs.google.com/spreadsheets/d/1t15Ya5ssYBeXAZhHm3RJqfwBRpgWsxoib8_kwQEHMwI/edit#gid=0)
+to see if your intended user stories are already covered by existing ones.
+
+If there is a good reason for your stories to be added, please make one CL for
+each of the new stories so they can be landed (and reverted if needed)
+individually. On each CL, make sure that the perf trybots all pass before
+committing.
+
+Once your patch makes it through the CQ, you’re done… unless your story starts
+failing on some random platform, in which case the perf bot sheriff will very
+likely revert your patch and assign a bug to you. It is then up to you to figure
+out why the story fails, fix it and re-land the patch.
+
+Add new SystemHealthStory subclass(es) to either one of the existing files or a
+new file in [tools/perf/page_sets/system_health/](../../../tools/perf/page_sets/system_health/).
+The new class(es) will automatically be picked up and added to the story set.
+To run the story through the memory benchmark against live sites, use the
+following commands:
+
+```
+$ tools/perf/run_benchmark system_health.memory_desktop \
+ --browser=reference --device=desktop \
+ --story-filter=<NAME-OF-YOUR-STORY> \
+ --use-live-sites
+$ tools/perf/run_benchmark system_health.memory_mobile \
+ --browser=reference --device=android \
+ --story-filter=<NAME-OF-YOUR-STORY> \
+ --also-run-disabled-tests --use-live-sites
+```
+
+Once you’re happy with the stories, record them:
+
+```
+$ tools/perf/record_wpr --story desktop_system_health_story_set \
+ --browser=reference --device=desktop \
+ --story-filter=<NAME-OF-YOUR-STORY>
+$ tools/perf/record_wpr --story mobile_system_health_story_set \
+ --browser=reference --device=android \
+ --story-filter=<NAME-OF-YOUR-STORY>
+```
+
+You can now replay the stories from the recording by omitting the
+`--use-live-sites` flag:
+
+```
+$ tools/perf/run_benchmark system_health.memory_desktop \
+ --browser=reference --device=desktop \
+ --story-filter=<NAME-OF-YOUR-STORY> \
+ --also-run-disabled-tests
+$ tools/perf/run_benchmark system_health.memory_mobile \
+ --browser=reference --device=android \
+ --story-filter=<NAME-OF-YOUR-STORY> \
+ --also-run-disabled-tests
+```
+
+The recordings are stored in `system_health_desktop_MMM.wprgo` and
+`system_health_mobile_NNN.wprgo` files in the
+[tools/perf/page_sets/data](../../../tools/perf/page_sets/data) directory.
+You can find the MMM and NNN values by inspecting the changes to
+`system_health_desktop.json` and `system_health_mobile.json`:
+
+```
+$ git diff tools/perf/page_sets/data/system_health_desktop.json
+$ git diff tools/perf/page_sets/data/system_health_mobile.json
+```
+
+Once you have verified that the replay works as expected, upload the .wprgo
+files to the cloud and include the .wprgo.sha1 files in your patch:
+
+```
+$ upload_to_google_storage.py --bucket chrome-partner-telemetry \
+ system_health_desktop_MMM.wprgo
+$ upload_to_google_storage.py --bucket chrome-partner-telemetry \
+ system_health_mobile_NNN.wprgo
+$ git add tools/perf/page_sets/data/system_health_desktop_MMM.wprgo.sha1
+$ git add tools/perf/page_sets/data/system_health_mobile_NNN.wprgo.sha1
+```
+
+Once the stories work as they should (certain website features don’t work well
+under WPR and may need to be worked around), send the recordings out for review
+in the patch that adds the new story.
diff --git a/chromium/docs/speed/images/pinpoint-perf-try-button.png b/chromium/docs/speed/images/pinpoint-perf-try-button.png
new file mode 100644
index 00000000000..c3cd8de1e6f
--- /dev/null
+++ b/chromium/docs/speed/images/pinpoint-perf-try-button.png
Binary files differ
diff --git a/chromium/docs/speed/images/pinpoint-perf-try-dialog.png b/chromium/docs/speed/images/pinpoint-perf-try-dialog.png
new file mode 100644
index 00000000000..8f2eee37f32
--- /dev/null
+++ b/chromium/docs/speed/images/pinpoint-perf-try-dialog.png
Binary files differ
diff --git a/chromium/docs/speed/perf_bot_sheriffing.md b/chromium/docs/speed/perf_bot_sheriffing.md
index 1660697f785..303f5fc1358 100644
--- a/chromium/docs/speed/perf_bot_sheriffing.md
+++ b/chromium/docs/speed/perf_bot_sheriffing.md
@@ -331,98 +331,66 @@ be investigated. When a test fails:
### Disabling Telemetry Tests
-
-If the test is a telemetry test, its name will have a '.' in it, such as
-`thread_times.key_mobile_sites` or `page_cycler.top_10`. The part before the
-first dot will be a python file in [tools/perf/benchmarks](https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/benchmarks/).
-
If a telemetry test is failing and there is no clear culprit to revert
immediately, disable the story on the failing platforms.
-For example:
-* If a single story is failing on a single platform, disable only that story on that platform.
-* If multiple stories are failing across all platforms, disable those stories on all platforms.
-* If all stories are failing on a single platform, disable all stories on that platform.
+You can do this with [Expectations](https://cs.chromium.org/chromium/src/tools/perf/expectations.config).
+
+Disabling CLs can be TBR-ed to anyone in
+[tools/perf/OWNERS](https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/OWNERS).
+As long as the disabling CL touches only tools/perf/expectations.config, you can
+use TBR and NOTRY=true to submit the CL immediately.
+
+An expectation is a line in the expectations file in the following format:
+```
+reason [ conditions ] benchmark/story [ Skip ]
+```
-You can do this with [StoryExpectations](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/story/expectations.py).
+Reasons must be in the format `crbug.com/#`. If the same test is failing and
+linked to multiple bugs, an entry for each bug is needed.
+
+A list of supported conditions can be found [here](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/story/expectations.py).
+
+Multiple conditions listed in a single expectation are treated as a logical
+_AND_, so a platform must meet all of the conditions to be disabled. Each
+failing platform requires its own expectations entry.
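As an illustration of that line format, here is a small Python sketch (hypothetical; the real parser lives in Telemetry) that pulls the pieces out of an expectation line:

```python
import re

# Hypothetical parser for the 'reason [ conditions ] benchmark/story [ Skip ]'
# expectation-line format described above; the real expectations parser
# lives in Telemetry, this is only a sketch.
EXPECTATION_RE = re.compile(
    r'^(?P<reason>crbug\.com/\d+)\s+'
    r'\[\s*(?P<conditions>[^\]]+?)\s*\]\s+'
    r'(?P<benchmark>[^/\s]+)/(?P<story>\S+)\s+'
    r'\[\s*Skip\s*\]$')


def parse_expectation(line):
    m = EXPECTATION_RE.match(line.strip())
    if not m:
        raise ValueError('not a valid expectation line: %r' % line)
    d = m.groupdict()
    # Multiple conditions are ANDed together, so keep them as a list.
    d['conditions'] = d['conditions'].split()
    return d


print(parse_expectation(
    'crbug.com/123 [ Mac ] memory_benchmark/google_story [ Skip ]'))
```

A wildcard story such as `memory_benchmark/*` (used below for disabling a whole benchmark) parses the same way, with `*` as the story.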
To determine which stories are failing in a given run, go to the buildbot page
-for that run and search for `Unexpected failures`.
+for that run and search for `Unexpected failures` in the failing test entry.
Example:
-On platform P, story foo is failing on benchmark bar. On the same benchmark on
-platform Q, story baz is failing. To disable these stories, go
-to where benchmark bar is declared. Using codesearch, you can look for
-benchmark\_baz which will likely be in bar.py. This is where you can disable the
-story.
-
-Once there, find the benchmark's `GetExpectations()` method. Inside there you
-should see a `SetExpectations()` method. That is where stories are disabled.
+On Mac platforms, google\_story is failing on memory\_benchmark.
-Buildbot output for failing run on platform P:
+Buildbot output for failing run on Mac:
```
-bar.benchmark_baz
+memory_benchmark/google_story
Bot id: 'buildxxx-xx'
...
Unexpected Failures:
-* foo
+* google_story
```
-Buildbot output for failing run on platform Q
-```
-bar.benchmark_baz
-Bot id: 'buildxxx-xx'
-...
-Unexpected Failures:
-* baz
-```
+Go to the [expectations config file](https://cs.chromium.org/chromium/src/tools/perf/expectations.config).
+Look for a comment showing the benchmark's name. If the benchmark is not present
+in the expectations file, you may need to add a new section. Please keep the
+sections in alphabetical order.
-Code snippet from bar.py benchmark:
+It will look similar to this when the above example is done:
```
-class BarBenchmark(perf_benchmark.PerfBenchmark):
- ...
- def Name():
- return 'bar.benchmark_baz'
- ...
- def GetExpectations(self):
- class StoryExpectations(story.expectations.StoryExpectations):
- def SetExpectations(self):
- self.DisableStory(
- 'foo', [story.expectations.PLATFORM_P], 'crbug.com/1234')
- self.DisableStory(
- 'baz', [story.expectations.PLATFORM_Q], 'crbug.com/5678')
-```
-
-If a story is failing on multiple platforms, you can add more platforms to the
-list in the second argument to `DisableStory()`. If the story is failing on
-different platforms for different reasons, you can have multiple `DisableStory()`
-declarations for the same story with different reasons listed.
+# Test Expectation file for telemetry tests.
+# tags: Mac
-If a particular story isn't applicable to a given platform, it should be
-disabled using [CanRunStory](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/page/shared_page_state.py?type=cs&q=CanRunOnBrowser&l=271).
-
-To find the currently supported disabling conditions view the [expectations file](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/story/expectations.py).
-
-In the case that a benchmark is failing in its entirety on a platfrom that it
-should noramally run on, you can temporarily disable it by using
-DisableBenchmark():
+# Benchmark: memory_benchmark
+crbug.com/123 [ Mac ] memory_benchmark/google_story [ Skip ]
+```
+In the case that a benchmark is failing in its entirety on a platform that it
+should normally run on, you can temporarily disable it by using an expectation
+of this format:
```
-class BarBenchmark(perf_benchmark.PerfBenchmark):
- ...
- def GetExpectations(self):
- class StoryExpectations(story.expectations.StoryExpectations):
- def SetExpectations(self):
- self.DisableBenchmark([story.expectation.PLATFORM_Q], 'crbug.com/9876')
+crbug.com/123456 [ CONDITIONS ] memory_benchmark/* [ Skip ]
```
-If for some reason you are unable to disable at the granularity you would like,
-disable the test at the lowest granularity possible and contact rnephew@ to
-suggest new disabling criteria.
-
-Disabling CLs can be TBR-ed to anyone in [tools/perf/OWNERS](https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/OWNERS),
-but please do **not** submit with NOTRY=true.
-
### Disabling Other Tests
Non-telemetry tests are configured in [chromium.perf.json](https://code.google.com/p/chromium/codesearch#chromium/src/testing/buildbot/chromium.perf.json) **But do not manually edit this file.**
diff --git a/chromium/docs/speed/perf_trybots.md b/chromium/docs/speed/perf_trybots.md
index 536c33d284a..365c996fc04 100644
--- a/chromium/docs/speed/perf_trybots.md
+++ b/chromium/docs/speed/perf_trybots.md
@@ -1,64 +1,16 @@
+
# Perf Try Bots
[TOC]
-## What are perf try bots?
+## What is a perf try job?
Chrome has a performance lab with dozens of device and OS configurations. You
-can run performance tests on an unsubmitted CL on these devices using the
-perf try bots.
+can run performance tests on an unsubmitted CL on these devices using Pinpoint. The job runs the benchmark against tip-of-tree Chrome both with and without the CL applied, so the results can be compared directly.
## Supported platforms
-The platforms available in the lab change over time. To find the currently
-available platforms, run `tools/perf/run_benchmark try --help`.
-
-Example output:
-
-```
-> tools/perf/run_benchmark try --help
-usage: Run telemetry benchmarks on trybot. You can add all the benchmark options available except the --browser option
- [-h] [--repo_path <repo path>] [--deps_revision <deps revision>]
- <trybot name> <benchmark name>
-
-positional arguments:
- <trybot name> specify which bots to run telemetry benchmarks on. Allowed values are:
- Mac Builder
- all
- all-android
- all-linux
- all-mac
- all-win
- android-fyi
- android-nexus5
- android-nexus5X
- android-nexus6
- android-nexus7
- android-one
- android-webview-arm64-aosp
- android-webview-nexus6-aosp
- linux
- mac-10-11
- mac-10-12
- mac-10-12-mini-8gb
- mac-air
- mac-pro
- mac-retina
- staging-android-nexus5X
- staging-linux
- staging-mac-10-12
- staging-win
- win
- win-8
- win-x64
- winx64-10
- winx64-high-dpi
- winx64-zen
- winx64ati
- winx64intel
- winx64nvidia
-
-```
+The platforms available in the lab change over time. To see the currently supported platforms, click the "Configuration" dropdown in the dialog.
## Supported benchmarks
@@ -71,29 +23,34 @@ which test Chrome's performance at a high level, and the
[benchmark harnesses](https://docs.google.com/spreadsheets/d/1ZdQ9OHqEjF5v8dqNjd7lGUjJnK6sgi8MiqO7eZVMgD0/edit#gid=0),
which cover more specific areas.
-## Starting a perf try job
-Use this command line:
+## Starting a perf try job
-`tools/perf/run_benchmark try <trybot_name> <benchmark_name>`
+![Pinpoint Perf Try Button](images/pinpoint-perf-try-button.png)
-See above for how to choose a trybot and benchmark.
+Visit [Pinpoint](https://pinpoint-dot-chromeperf.appspot.com) and click the perf try button in the bottom right corner of the screen.
-Run `tools/perf/run_benchmark try --help` for more information about available
-options.
+You should see the following dialog popup:
-## Specifying additional tracing categories
+![Perf Try Dialog](images/pinpoint-perf-try-dialog.png)
-You can add `--extra-chrome-categories` to specify additional tracing
-categories.
-## Interpreting the results
+**Build Arguments**| **Description**
+--- | ---
+Bug Id | (optional) A bug ID.
+Gerrit URL | The patch you want to run the benchmark on.
+Configuration | The configuration to run the test on.
+Browser | (optional) The specific browser to use for the test.
-Perf trybots create a code review under the covers to hold the trybot results.
-The code review will list links to buildbot status pages for the try jobs.
-On each buildbot status page, you will see a "HTML Results" link. You can click
-it to see detailed information about the performance test results with and
-without your patch.
+**Test Arguments**| **Description**
+--- | ---
+Benchmark | A telemetry benchmark, e.g. system_health.common_desktop
+Story | (optional) A specific story from the benchmark to run.
+Extra Test Arguments | (optional) Extra arguments for the test, e.g. --extra-chrome-categories="foo,bar"
-**[Here is the documentation](https://github.com/catapult-project/catapult/blob/master/docs/metrics-results-ui.md)**
-on reading the results.
+**Values Arguments**| **Description**
+--- | ---
+Chart | (optional) Please ignore.
+TIR Label | (optional) Please ignore.
+Trace | (optional) Please ignore.
+Statistic | (optional) Please ignore.
diff --git a/chromium/docs/speed/speed_tracks.md b/chromium/docs/speed/speed_tracks.md
index 3efea410bb4..9e599692daf 100644
--- a/chromium/docs/speed/speed_tracks.md
+++ b/chromium/docs/speed/speed_tracks.md
@@ -33,6 +33,12 @@ good job.
#### Links
+ * Power [Rotation](https://rotation.googleplex.com/#rotation?id=5428142711767040)
+ * Rotation [Documentation](https://docs.google.com/document/d/1YgsRvJOi7eJWCTh2p7dy2Wf4EJtjk_3XU30yp_7mhaM/preview)
+ * Power [Backlog](https://docs.google.com/spreadsheets/d/1VhU1aM6APdUN74NVPW98X3aqpQyJkxg1UJcvBidXaK8/edit)
* Performance-Power [Bug
Queue](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=Performance%3DPower)