path: root/chromium/docs
authorAllan Sandfeld Jensen <allan.jensen@qt.io>2018-05-15 10:20:33 +0200
committerAllan Sandfeld Jensen <allan.jensen@qt.io>2018-05-15 10:28:57 +0000
commitd17ea114e5ef69ad5d5d7413280a13e6428098aa (patch)
tree2c01a75df69f30d27b1432467cfe7c1467a498da /chromium/docs
parent8c5c43c7b138c9b4b0bf56d946e61d3bbc111bec (diff)
downloadqtwebengine-chromium-d17ea114e5ef69ad5d5d7413280a13e6428098aa.tar.gz
BASELINE: Update Chromium to 67.0.3396.47
Change-Id: Idcb1341782e417561a2473eeecc82642dafda5b7
Reviewed-by: Michal Klocek <michal.klocek@qt.io>
Diffstat (limited to 'chromium/docs')
-rw-r--r-- chromium/docs/README.md | 10
-rw-r--r-- chromium/docs/accessibility/chromevox.md | 67
-rw-r--r-- chromium/docs/accessibility/chromevox_on_desktop_linux.md | 6
-rw-r--r-- chromium/docs/accessibility/overview.md | 14
-rw-r--r-- chromium/docs/accessibility/patts.md | 8
-rw-r--r-- chromium/docs/building_old_revisions.md | 85
-rw-r--r-- chromium/docs/callback.md | 6
-rw-r--r-- chromium/docs/chromoting_android_hacking.md | 4
-rw-r--r-- chromium/docs/cipd.md | 216
-rw-r--r-- chromium/docs/clang.md | 67
-rw-r--r-- chromium/docs/code_coverage.md | 151
-rw-r--r-- chromium/docs/code_reviews.md | 27
-rw-r--r-- chromium/docs/fuchsia_sdk_updates.md | 4
-rw-r--r-- chromium/docs/google_play_services.md | 2
-rw-r--r-- chromium/docs/gpu/debugging_gpu_related_code.md | 235
-rw-r--r-- chromium/docs/gpu/gpu_testing.md | 571
-rw-r--r-- chromium/docs/gpu/gpu_testing_bot_details.md | 539
-rw-r--r-- chromium/docs/gpu/images/wrangler.png | bin 0 -> 12567 bytes
-rw-r--r-- chromium/docs/gpu/pixel_wrangling.md | 298
-rw-r--r-- chromium/docs/how_to_add_your_feature_flag.md | 8
-rw-r--r-- chromium/docs/images/code_coverage_component_view.png | bin 0 -> 319721 bytes
-rw-r--r-- chromium/docs/images/code_coverage_directory_view.png | bin 0 -> 60625 bytes
-rw-r--r-- chromium/docs/images/code_coverage_workflow.png | bin 0 -> 60373 bytes
-rw-r--r-- chromium/docs/ios/coverage.md | 78
-rw-r--r-- chromium/docs/ios/images/coverage_xcode.png | bin 335490 -> 0 bytes
-rw-r--r-- chromium/docs/ios/images/llvm-cov_report.png | bin 362896 -> 0 bytes
-rw-r--r-- chromium/docs/ios/images/llvm-cov_report_folder.png | bin 275504 -> 0 bytes
-rw-r--r-- chromium/docs/ios/images/llvm-cov_show.png | bin 232389 -> 0 bytes
-rw-r--r-- chromium/docs/ios/images/llvm-cov_show_file.png | bin 321140 -> 0 bytes
-rw-r--r-- chromium/docs/jumbo.md | 8
-rw-r--r-- chromium/docs/layout_tests_linux.md | 4
-rw-r--r-- chromium/docs/linux_chromium_packages.md | 7
-rw-r--r-- chromium/docs/linux_gtk_theme_integration.md | 135
-rw-r--r-- chromium/docs/memory-infra/README.md | 36
-rw-r--r-- chromium/docs/memory-infra/heap_profiler.md | 3
-rw-r--r-- chromium/docs/memory/README.md | 13
-rw-r--r-- chromium/docs/memory/debugging_memory_issues.md | 131
-rw-r--r-- chromium/docs/memory/key_concepts.md | 90
-rw-r--r-- chromium/docs/memory/tools.md | 11
-rw-r--r-- chromium/docs/network_traffic_annotations.md | 57
-rw-r--r-- chromium/docs/optional.md | 4
-rw-r--r-- chromium/docs/origin_trials_integration.md | 4
-rw-r--r-- chromium/docs/process/merge_request.md | 10
-rw-r--r-- chromium/docs/security/mojo.md | 91
-rw-r--r-- chromium/docs/security/sheriff.md | 22
-rw-r--r-- chromium/docs/servicification.md | 7
-rw-r--r-- chromium/docs/speed/README.md | 6
-rw-r--r-- chromium/docs/speed/benchmark/benchmark_ownership.md | 37
-rw-r--r-- chromium/docs/speed/benchmark/benchmark_short_list.md | 29
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/blink_perf.md (renamed from chromium/docs/speed/benchmark_harnesses/blink_perf.md) | 0
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/power_perf.md (renamed from chromium/docs/speed/benchmark_harnesses/power_perf.md) | 0
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/system_health.md (renamed from chromium/docs/speed/benchmark_harnesses/system_health.md) | 0
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/webrtc_perf.md (renamed from chromium/docs/speed/benchmark_harnesses/webrtc_perf.md) | 0
-rw-r--r-- chromium/docs/speed/benchmark/telemetry_device_setup.md | 71
-rw-r--r-- chromium/docs/speed/how_does_chrome_measure_performance.md | 6
-rw-r--r-- chromium/docs/speed/perf_bot_sheriffing.md | 8
-rw-r--r-- chromium/docs/sublime_ide.md | 55
-rw-r--r-- chromium/docs/sync/model_api.md | 2
-rw-r--r-- chromium/docs/sync/uss/client_tag_based_model_type_processor.md (renamed from chromium/docs/sync/uss/shared_model_type_processor.md) | 6
-rw-r--r-- chromium/docs/testing/writing_layout_tests.md | 2
-rw-r--r-- chromium/docs/threading_and_tasks.md | 22
-rw-r--r-- chromium/docs/user_handle_mapping.md | 1
-rw-r--r-- chromium/docs/vscode.md | 47
-rw-r--r-- chromium/docs/webui_explainer.md | 2
-rw-r--r-- chromium/docs/webui_in_components.md | 4
-rw-r--r-- chromium/docs/win_cross.md | 4
-rw-r--r-- chromium/docs/windows_build_instructions.md | 171
67 files changed, 2927 insertions, 575 deletions
diff --git a/chromium/docs/README.md b/chromium/docs/README.md
index 7f800e44e95..975dcb5b22e 100644
--- a/chromium/docs/README.md
+++ b/chromium/docs/README.md
@@ -127,7 +127,7 @@ used when committed.
* [base::Optional](optional.md) - How to use `base::Optional` in C++ code.
* [Using the Origin Trials Framework](origin_trials_integration.md) - A
framework for conditionally enabling experimental APIs for testing.
-* [`SharedModelTypeProcessor` in Unified Sync and Storage](sync/uss/shared_model_type_processor.md) -
+* [`ClientTagBasedModelTypeProcessor` in Unified Sync and Storage](sync/uss/client_tag_based_model_type_processor.md) -
Notes on the central data structure used in Chrome Sync.
* [Chrome Sync's Model API](sync/model_api.md) - Data models used for syncing
information across devices using Chrome Sync.
@@ -194,7 +194,7 @@ used when committed.
on crash dumping a process running in a seccomp sandbox.
* [Linux Password Storage](linux_password_storage.md) - Keychain integrations
between Chromium and Linux.
-* [Linux Sublime Development](linux_sublime_dev.md) - Using Sublime as an IDE
+* [Linux Sublime Development](sublime_ide.md) - Using Sublime as an IDE
for Chromium development on Linux.
* [Building and Debugging GTK](linux_building_debug_gtk.md) - Building
Chromium against GTK using lower optimization levels and/or more debugging
@@ -246,6 +246,12 @@ used when committed.
`android.util.Log` on Android, and usage guidelines.
* [Chromoting Android Hacking](chromoting_android_hacking.md) - Viewing the
logs and debugging the Chrome Remote Desktop Android client.
+* [Android Java Static Analysis](../build/android/docs/lint.md) - Catching
+ Java-related issues at compile time with the 'lint' tool.
+* [Java Code Coverage](../build/android/docs/coverage.md) - Collecting code
+ coverage data with the EMMA tool.
+* [Android BuildConfig files](../build/android/docs/build_config.md) -
+ What .build_config files are and how they are used.
### Misc iOS-Specific Docs
* [Continuous Build and Test Infrastructure for Chromium for iOS](ios/infra.md)
diff --git a/chromium/docs/accessibility/chromevox.md b/chromium/docs/accessibility/chromevox.md
index 3fb74f11546..e72c2f330e9 100644
--- a/chromium/docs/accessibility/chromevox.md
+++ b/chromium/docs/accessibility/chromevox.md
@@ -4,7 +4,12 @@ ChromeVox is the built-in screen reader on Chrome OS. It was originally
developed as a separate extension but now the code lives inside of the Chromium
tree and it's built as part of Chrome OS.
-To start or stop ChromeVox on Chrome OS, press Ctrl+Alt+Z at any time.
+NOTE: ChromeVox also ships as an extension on the Chrome webstore. That version
+is known as ChromeVox Classic and is only loosely related to ChromeVox on
+Chrome OS. All references to ChromeVox here refer only to ChromeVox on Chrome
+OS.
+
+To start or stop ChromeVox, press Ctrl+Alt+Z at any time.
## Developer Info
@@ -21,32 +26,6 @@ ChromeVox for Chrome OS development is done on Linux.
See [ChromeVox on Desktop Linux](chromevox_on_desktop_linux.md)
for more information.
-## ChromeVox Next
-
-ChromeVox Next is the code name we use for a major new rewrite to ChromeVox that
-uses the automation API instead of content scripts. The code is part of
-ChromeVox (unique ChromeVox Next code is found in
-chrome/browser/resources/chromeos/chromevox/cvox2).
-
-ChromeVox contains all of the classic and next code in the same codebase, it
-switches its behavior dynamically based on the mode:
-
-* Next: as of version 56 of Chrome/Chrome OS, this is default. ChromeVox uses new key/braille bindings, earcons, speech/braille output style, the Next engine (Automation API), and other major/minor improvements
-* Next Compat: in order to maintain compatibility with some clients of the ChromeVox Classic js APIs, some sites have been whitelisted for this mode. ChromeVox will inject classic content scripts, but expose a Next-like user experience (like above)
-* Classic: as of version 56 of Chrome/Chrome OS, this mode gets enabled via a keyboard toggle Search+Q. Once enabled, ChromeVox will behave like it did in the past including keyboard bindings, earcons, speech/braille output style, and the underlying engine (content scripts).
-* Classic compat for some sites that require Next, while running in Classic, ChromeVox will use the Next engine but expose a Classic user experience (like above)
-
-Once it's ready, the plan is to retire everything other than Next mode.
-
-## ChromeVox Next
-
-To test ChromeVox Next, click on the Gear icon in the upper-right of the screen
-to open the ChromeVox options (or press the keyboard shortcut Search+Shift+O, O)
-and then click the box to opt into ChromeVox Next.
-
-If you are running m56 or later, you already have ChromeVox Next on by
-default. To switch back to Classic, press Search+Q.
-
## Debugging ChromeVox
There are options available that may assist in debugging ChromeVox. Here are a
@@ -95,37 +74,3 @@ particular test suite - for example, most of the ChromeVox Next tests
have "E2E" in them (for "end-to-end"), so to only run those:
```out/Release/chromevox_tests --test-launcher-jobs=20 --gtest_filter="*E2E*"```
-
-## ChromeVox for other platforms
-
-ChromeVox can be run as an installable extension, separate from a
-linux Chrome OS build.
-
-### From source
-
-chrome/browser/resources/chromeos/chromevox/tools has the required scripts that pack ChromeVox as an extension and make any necessary manifest changes.
-
-### From Webstore
-
-Alternatively, the webstore has the stable version of ChromeVox.
-
-To install without interacting with the webstore UI, place the
-following json block in
-/opt/google/chrome-unstable/extensions/kgejglhpjiefppelpmljglcjbhoiplfn.json
-
-```
-{
-"external_update_url": "https://clients2.google.com/service/update2/crx"
-}
-```
-
-If you're using the desktop Linux version of Chrome, we recommend you
-use Voxin for speech. Run chrome with: “google-chrome
---enable-speech-dispatcher” and select a voice provided by the speechd
-package from the ChromeVox options page (ChromeVox+o, o). As of the
-latest revision of Chrome 44, speechd support has become stable enough
-to use with ChromeVox, but still requires the flag.
-
-In the ChromeVox options page, select the flat keymap and use sticky
-mode (double press quickly of insert) to emulate a modal screen
-reader.
diff --git a/chromium/docs/accessibility/chromevox_on_desktop_linux.md b/chromium/docs/accessibility/chromevox_on_desktop_linux.md
index 70c73c4edaa..69569e6a077 100644
--- a/chromium/docs/accessibility/chromevox_on_desktop_linux.md
+++ b/chromium/docs/accessibility/chromevox_on_desktop_linux.md
@@ -59,6 +59,12 @@ manager key combo conflicts) by doing something like
startx out/cros/chrome
```
+NOTE: if you decide to run Chrome OS under Linux within a window manager, you
+are subject to its keybindings, which will almost certainly conflict with
+ChromeVox. The Search key (mapped from LWIN/key code 91) is usually assigned
+to numerous shortcut combinations. You can manually disable all such
+combinations, or run under X as described above.
+
## Speech
If you want speech, you just need to copy the speech synthesis data files to
diff --git a/chromium/docs/accessibility/overview.md b/chromium/docs/accessibility/overview.md
index 36e4b14dad1..67cf8e1c675 100644
--- a/chromium/docs/accessibility/overview.md
+++ b/chromium/docs/accessibility/overview.md
@@ -510,23 +510,23 @@ is defined by [automation.idl], which must be kept synchronized with
[AccessibilityHostMsg_EventParams]: https://cs.chromium.org/chromium/src/content/common/accessibility_messages.h?sq=package:chromium&l=75
[AutomationInternalCustomBindings]: https://cs.chromium.org/chromium/src/chrome/renderer/extensions/automation_internal_custom_bindings.h
[AXContentNodeData]: https://cs.chromium.org/chromium/src/content/common/ax_content_node_data.h
-[AXLayoutObject]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/accessibility/AXLayoutObject.h
-[AXNodeObject]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/accessibility/AXNodeObject.h
-[AXObjectImpl]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/accessibility/AXObjectImpl.h
-[AXObjectCacheImpl]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/modules/accessibility/AXObjectCacheImpl.h
+[AXLayoutObject]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/modules/accessibility/ax_layout_object.h
+[AXNodeObject]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/modules/accessibility/ax_node_object.h
+[AXObjectImpl]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/modules/accessibility/ax_object_impl.h
+[AXObjectCacheImpl]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/modules/accessibility/ax_object_cache_impl.h
[AXPlatformNode]: https://cs.chromium.org/chromium/src/ui/accessibility/platform/ax_platform_node.h
[AXTreeSerializer]: https://cs.chromium.org/chromium/src/ui/accessibility/ax_tree_serializer.h
[BlinkAXTreeSource]: https://cs.chromium.org/chromium/src/content/renderer/accessibility/blink_ax_tree_source.h
[BrowserAccessibility]: https://cs.chromium.org/chromium/src/content/browser/accessibility/browser_accessibility.h
[BrowserAccessibilityDelegate]: https://cs.chromium.org/chromium/src/content/browser/accessibility/browser_accessibility_manager.h?sq=package:chromium&l=64
[BrowserAccessibilityManager]: https://cs.chromium.org/chromium/src/content/browser/accessibility/browser_accessibility_manager.h
-[LayoutObject]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/core/layout/LayoutObject.h
+[LayoutObject]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/core/layout/layout_object.h
[NativeViewAccessibility]: https://cs.chromium.org/chromium/src/ui/views/accessibility/native_view_accessibility.h
-[Node]: https://cs.chromium.org/chromium/src/third_party/WebKit/Source/core/dom/Node.h
+[Node]: https://cs.chromium.org/chromium/src/third_party/blink/renderer/core/dom/Node.h
[RenderAccessibilityImpl]: https://cs.chromium.org/chromium/src/content/renderer/accessibility/render_accessibility_impl.h
[RenderFrameHostImpl]: https://cs.chromium.org/chromium/src/content/browser/frame_host/render_frame_host_impl.h
[ui::AXNodeData]: https://cs.chromium.org/chromium/src/ui/accessibility/ax_node_data.h
-[WebAXObject]: https://cs.chromium.org/chromium/src/third_party/WebKit/public/web/WebAXObject.h
+[WebAXObject]: https://cs.chromium.org/chromium/src/third_party/blink/public/web/web_ax_object.h
[automation API]: https://cs.chromium.org/chromium/src/chrome/renderer/resources/extensions/automation
[automation.idl]: https://cs.chromium.org/chromium/src/chrome/common/extensions/api/automation.idl
[ax_enums.idl]: https://cs.chromium.org/chromium/src/ui/accessibility/ax_enums.idl
diff --git a/chromium/docs/accessibility/patts.md b/chromium/docs/accessibility/patts.md
index 65416008069..2270151be89 100644
--- a/chromium/docs/accessibility/patts.md
+++ b/chromium/docs/accessibility/patts.md
@@ -22,11 +22,11 @@ You must hide the existing TTS extension because extension keys must not be
duplicated, and ChromeOS will crash if you try to load the unpacked extension
while the built-in one is already loaded.
-To test, use the [https://chrome.google.com/webstore/detail/tts-demo/chhkejkkcghanjclmhhpncachhgejoel](TTS Demo extension)
+To test, use the [TTS Demo extension](https://chrome.google.com/webstore/detail/tts-demo/chhkejkkcghanjclmhhpncachhgejoel)
in Chromeos. This should automatically recognize the unpacked TTS extension
based on its manifest key. You can also use any site that uses a web speech API
demo. In addition, the Chrome Accessibility team has a
-[https://chrome.google.com/webstore/detail/idllbaaoaldabjncnbfokacibfehkemd](TTS Debug extension)
+[TTS Debug extension](https://chrome.google.com/webstore/detail/idllbaaoaldabjncnbfokacibfehkemd)
which can run several automated tests.
## Updating
@@ -102,9 +102,9 @@ git commit -a
repo upload .
```
-After submitting, inform the [mailto:chrome-a11y-core@google.com](Chrome Accessibility team)
+After submitting, inform the [Chrome Accessibility Team](mailto:chrome-a11y-core@google.com)
so that they can update their local copies of TTS per the
-[https://chromium.googlesource.com/chromium/src/+/lkgr/docs/accessibility/chromevox_on_desktop_linux.md](Chromevox instructions).
+[Chromevox instructions](chromevox_on_desktop_linux.md).
## Ebuild
diff --git a/chromium/docs/building_old_revisions.md b/chromium/docs/building_old_revisions.md
new file mode 100644
index 00000000000..293247f763b
--- /dev/null
+++ b/chromium/docs/building_old_revisions.md
@@ -0,0 +1,85 @@
+# Building old revisions
+
+Occasionally you may want to check out and build old versions of Chromium, such
+as when bisecting a regression or simply building an older release tag. Though
+this is not officially supported, these tips address some common complications.
+
+This process may be easier if you copy your checkout (starting from the
+directory containing `.gclient`) to a new location, so you can just delete the
+checkout when finished instead of having to undo changes to your primary working
+directory.
+
+## Get compatible depot_tools
+
+Check out a version of depot_tools from around the same time as the target
+revision. Since `gclient` auto-updates depot_tools, be sure to
+[disable depot_tools auto-update](https://dev.chromium.org/developers/how-tos/depottools#TOC-Disabling-auto-update)
+before continuing by setting the environment variable `DEPOT_TOOLS_UPDATE=0`.
+
+```shell
+# Get date of current revision:
+~/chrome/src $ COMMIT_DATE=$(git log -n 1 --pretty=format:%ci)
+
+# Check out depot_tools revision from the same time:
+~/depot_tools $ git checkout $(git rev-list -n 1 --before="$COMMIT_DATE" master)
+~/depot_tools $ export DEPOT_TOOLS_UPDATE=0
+```
+
+## Clean your working directory
+
+To avoid unexpected gclient behavior and conflicts between revisions, remove any
+directories that aren't part of the revision you've checked out. By default, Git
+will preserve directories with their own Git repositories; bypass this by
+passing the `--force` option twice to `git clean`.
+
+```shell
+$ git clean -ffd
+```
+
+Repeat this command until it doesn't find anything to remove.
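The repetition can also be scripted: `git clean -ffd` prints a `Removing <path>` line for everything it deletes and prints nothing once the tree is clean. A minimal sketch (the loop is an illustration, demonstrated in a throwaway repo rather than a real checkout):

```shell
#!/bin/sh
# Sketch: repeat `git clean -ffd` until a pass removes nothing.
# Demonstrated in a throwaway repo; paths are illustrative only.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
touch stray.txt                      # plain untracked file
mkdir nested && git init -q nested   # nested repo, needs the double --force
while [ -n "$(git clean -ffd)" ]; do
  : # keep cleaning until git clean reports nothing removed
done
[ ! -e stray.txt ] && [ ! -e nested ] && echo "clean"
```

The nested repository is why the doubled `--force` matters: a single `-f` leaves directories with their own `.git` in place.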
+
+## Sync dependencies
+
+When running `gclient sync`, also remove any dependencies that are no longer
+required:
+
+```shell
+$ gclient sync -D --force --reset
+```
+
+**Warning: `gclient sync` may overwrite the URL of your `origin` remote** if it
+encounters problems. You'll notice this when Git starts thinking everything is
+"untracked" or "deleted". If this happens, fix and fetch the remote before
+continuing:
+
+```shell
+$ git remote get-url origin
+https://chromium.googlesource.com/chromium/deps/opus.git
+$ git remote set-url origin https://chromium.googlesource.com/chromium/src.git
+$ git fetch origin
+```
+
+It may also be necessary to run the revision's version of
+[build/install-build-deps.sh](/build/install-build-deps.sh).
+
+## Build
+
+Since build tools change over time, you may need to build using older versions
+of tools like Visual Studio.
+
+You may also need to disable goma (if enabled).
+
+## Get back to trunk
+
+When returning to a normal checkout, you may need to undo some of the changes
+above:
+
+* Restore `depot_tools` to the `master` branch.
+* Clean up any `_bad_scm/` directories in the directory containing `.gclient`.
+* Revert your `.gclient` file if `gclient` changed it:
+
+ ```
+ WARNING: gclient detected an obsolete setting in your .gclient file. The
+ file has been automagically updated. The previous version is available at
+ .gclient.old.
+ ```
diff --git a/chromium/docs/callback.md b/chromium/docs/callback.md
index 309f5116759..aeff1085960 100644
--- a/chromium/docs/callback.md
+++ b/chromium/docs/callback.md
@@ -546,6 +546,12 @@ void Bar(char* ptr);
base::Bind(&Foo, "test");
base::Bind(&Bar, "test"); // This fails because ptr is not const.
```
+ - When partially binding parameters, it is possible to have unbound
+ parameters before bound parameters. Example:
+```cpp
+void Foo(int x, bool y);
+base::Bind(&Foo, _1, false); // _1 is a placeholder.
+```
If you are thinking of forward declaring `base::Callback` in your own header
file, please include "base/callback_forward.h" instead.
diff --git a/chromium/docs/chromoting_android_hacking.md b/chromium/docs/chromoting_android_hacking.md
index fa3b5fdbd78..44a9c8891ce 100644
--- a/chromium/docs/chromoting_android_hacking.md
+++ b/chromium/docs/chromoting_android_hacking.md
@@ -79,8 +79,8 @@ display log messages to the `LogCat` pane.
<classpathentry kind="src" path="remoting/android/java/src"/>
<classpathentry kind="src" path="remoting/android/apk/src"/>
<classpathentry kind="src" path="remoting/android/javatests/src"/>
-<classpathentry kind="src" path="third_party/WebKit/Source/devtools/scripts/jsdoc-validator/src"/>
-<classpathentry kind="src" path="third_party/WebKit/Source/devtools/scripts/compiler-runner/src"/>
+<classpathentry kind="src" path="third_party/blink/renderer/devtools/scripts/jsdoc-validator/src"/>
+<classpathentry kind="src" path="third_party/blink/renderer/devtools/scripts/compiler-runner/src"/>
<classpathentry kind="src" path="third_party/webrtc/voice_engine/test/android/android_test/src"/>
<classpathentry kind="src" path="third_party/webrtc/modules/video_capture/android/java/src"/>
<classpathentry kind="src" path="third_party/webrtc/modules/video_render/android/java/src"/>
diff --git a/chromium/docs/cipd.md b/chromium/docs/cipd.md
new file mode 100644
index 00000000000..17eb176ff43
--- /dev/null
+++ b/chromium/docs/cipd.md
@@ -0,0 +1,216 @@
+# CIPD for chromium dependencies
+
+This document outlines how to use [CIPD][1] for managing binary dependencies in
+chromium.
+
+[TOC]
+
+## Adding a new CIPD dependency
+
+### 1. Set up a new directory for your dependency
+
+You'll first want somewhere in the repository in which your dependency will
+live. For third-party dependencies, this should typically be a subdirectory
+of `//third_party`. You'll need to add the same set of things to that
+directory that you'd add for a non-CIPD dependency -- OWNERS, README.chromium,
+etc.
+
+For example, if you want to add a package named `sample_cipd_dep`, you might
+create the following:
+
+```
+ third_party/
+ sample_cipd_dep/
+ LICENSE
+ OWNERS
+ README.chromium
+```
+
+For more on third-party dependencies, see [here][2].
+
+### 2. Acquire whatever you want to package
+
+Build it, download it, whatever. Once you've done that, lay it out
+in your local checkout the way you want it to be laid out in a typical
+checkout.
+
+Staying with the example from above, if you want to add a package named
+`sample_cipd_dep` that consists of two JARs, `foo.jar` and `bar.jar`, you might
+lay them out like so:
+
+```
+ third_party/
+ sample_cipd_dep/
+ ...
+ lib/
+ bar.jar
+ foo.jar
+```
+
+### 3. Create a cipd.yaml file
+
+CIPD knows how to create your package based on a .yaml file you provide to it.
+The .yaml file should take the following form:
+
+```
+# Comments are allowed.
+
+# The package name is required. Third-party chromium dependencies should
+# unsurprisingly all be prefixed with chromium/third_party/.
+package: chromium/third_party/sample_cipd_dep
+
+# The description is optional and is solely for the reader's benefit. It
+# isn't used in creating the CIPD package.
+description: A sample CIPD dependency.
+
+# The root is optional and, if unspecified, defaults to ".". It specifies the
+# root directory of the files and directories specified below in "data".
+#
+# You won't typically need to specify this explicitly.
+root: "."
+
+# The install mode is optional. If provided, it specifies how CIPD should
+# install a package: "copy", which will copy the contents of the package
+# to the installation directory; and "symlink", which will create symlinks
+# to the contents of the package in the CIPD root inside the installation
+# directory.
+#
+# You won't typically need to specify this explicitly.
+install_mode: "symlink"
+
+# The data is required and describes what should be included in the CIPD
+# package.
+data:
+ # Data can include directories, files, or a version file.
+
+ - dir: "directory_name"
+
+ # Directories can include an optional "exclude" list of regexes.
+ # Files or directories within the given directory that match any of
+ # the provided regexes will not be included in the CIPD package.
+ exclude:
+ - .*\.pyc
+ - exclude_me
+ - keep_this/but_not_this
+
+ - file: keep_this_file.bin
+
+ # If included, CIPD will dump package version information to this path
+ # at package installation.
+ - version_file: CIPD_VERSION.json
+```
+
+For example, for `sample_cipd_dep`, we might write the following .yaml file:
+
+```
+package: chromium/third_party/sample_cipd_dep
+description: A sample CIPD dependency.
+data:
+ - file: bar.jar
+ - file: foo.jar
+```
+
+For more information about the package definition spec, see [the code][3].
+
+> **Note:** Committing the .yaml file to the repository isn't required,
+> but it is recommended. Doing so has benefits for visibility and ease of
+> future updates.
+
+### 4. Create your CIPD package
+
+To actually create your package, you'll need:
+
+ - the cipd.yaml file (described above)
+ - a package version. For third-party packages, this should typically include
+ the version of the third-party package itself. If you want to support future
+ modifications within a given version of a third-party package (e.g., if you
+ want to make chromium-specific changes), it's best to include a suffix with
+ a numerical component.
+ - [permission](#permissions-in-cipd).
+
+Once you have those, you can create your package like so:
+
+```
+# Assuming that the third-party dependency in question is at version 1.2.3
+# and this is the first chromium revision of that version.
+$ cipd create --pkg-def cipd.yaml -tag version:1.2.3-cr0
+```
+
+### 5. Add your CIPD package to DEPS
+
+You can add your package to DEPS by adding an entry of the following form to
+the `deps` dict:
+
+```
+deps = {
+ # ...
+
+ # This is the installation directory.
+ 'src/third_party/sample_cipd_dep': {
+
+ # In this example, we're only installing one package in this location,
+ # but installing multiple packages in a location is supported.
+ 'packages': [
+ {
+ 'package': 'chromium/third_party/sample_cipd_dep',
+ 'version': 'version:1.2.3-cr0',
+ },
+ ],
+
+ # As with git-based DEPS entries, 'condition' is optional.
+ 'condition': 'checkout_android',
+ 'dep_type': 'cipd',
+ },
+
+ # ...
+}
+```
+
+This will result in CIPD package `chromium/third_party/sample_cipd_dep` at
+`version:1.2.3-cr0` being installed in `src/third_party/sample_cipd_dep`
+(relative to the gclient root directory).
+
+## Updating a CIPD dependency
+
+To modify a CIPD dependency, follow steps 2, 3, and 4 above, then modify the
+version listed in DEPS.
+
+## Miscellaneous
+
+### Permissions in CIPD
+
+You can check a package's ACLs with `cipd acl-list`:
+
+```
+$ cipd acl-list chromium/third_party/sample_cipd_dep
+...
+```
+
+Permissions in CIPD are handled hierarchically. You can check entries higher
+in the package hierarchy with `cipd acl-list`, too:
+
+```
+$ cipd acl-list chromium
+...
+```
+
+By default, [cria/project-chromium-cipd-owners][4] own all CIPD packages
+under `chromium/`. If you're adding a package, talk to one of them.
+
+## Troubleshooting
+
+ - **A file maintained by CIPD is missing, and gclient sync doesn't recreate it.**
+
+CIPD currently caches installation state. Modifying packages managed by CIPD
+will invalidate this cache in a way that CIPD doesn't detect - i.e., CIPD will
+assume that anything it installed is still installed, even if you deleted it.
+To clear the cache and force a full reinstallation, delete your
+`$GCLIENT_ROOT/.cipd` directory.
+
+Note that there is a [bug](https://crbug.com/794764) on file to add a mode to
+CIPD that is not so trusting of its own cache.
+
+[1]: https://chromium.googlesource.com/infra/luci/luci-go/+/master/cipd/
+[2]: /docs/adding_to_third_party.md
+[3]: https://chromium.googlesource.com/infra/luci/luci-go/+/master/cipd/client/cipd/local/pkgdef.go
+[4]: https://chrome-infra-auth.appspot.com/auth/groups/project-chromium-cipd-owners
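The `-crN` suffix convention from step 4 can be bumped mechanically when re-packaging chromium-specific changes to the same upstream version. A minimal sketch; the helper name is hypothetical and not part of the CIPD tooling:

```shell
#!/bin/sh
# Hypothetical helper: bump the chromium-specific -crN suffix of a CIPD
# version tag, e.g. "version:1.2.3-cr0" -> "version:1.2.3-cr1".
bump_cr_suffix() {
  tag=$1
  base=${tag%-cr*}    # strip the "-crN" suffix: "version:1.2.3"
  n=${tag##*-cr}      # keep only the numeric suffix: "0"
  echo "${base}-cr$((n + 1))"
}

bump_cr_suffix "version:1.2.3-cr0"   # prints "version:1.2.3-cr1"
```

The resulting tag is what you would pass to `cipd create -tag` and record in DEPS.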
diff --git a/chromium/docs/clang.md b/chromium/docs/clang.md
index 4ae603a476b..a45d15f32d9 100644
--- a/chromium/docs/clang.md
+++ b/chromium/docs/clang.md
@@ -1,38 +1,30 @@
# Clang
-[Clang](http://clang.llvm.org/) is a compiler with many desirable features
-(outlined on their website).
+[Clang](http://clang.llvm.org/) is the main supported compiler when
+building Chromium on all platforms.
-Chrome can be built with Clang. It is now the default compiler on Android, Mac
-and Linux for building Chrome, and it is currently useful for its warning and
-error messages on Windows.
-
-See
-[the open bugs](http://code.google.com/p/chromium/issues/list?q=label:clang).
+See the known [clang bugs and feature
+requests](http://code.google.com/p/chromium/issues/list?q=label:clang).
[TOC]
-## Build instructions
+## Building with clang
-Get clang (happens automatically during `gclient runhooks` on Mac and Linux):
+This happens by default, with clang binaries being fetched by gclient
+during the `gclient runhooks` phase. To fetch them manually, or build
+a local custom clang, use
tools/clang/scripts/update.py
-Only needs to be run once per checkout, and clang will be automatically updated
-by `gclient runhooks`.
-
-Regenerate the ninja build files with Clang enabled. Again, on Linux and Mac,
-Clang is the default compiler.
-
-Run `gn args` and add `is_clang = true` to your args.gn file.
+Run `gn args` and make sure there is no `is_clang = false` in your args.gn file.
Build: `ninja -C out/gn chrome`
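Since clang is the default, checking whether a build directory opts out amounts to scanning its args.gn for an explicit `is_clang = false`. A small illustrative helper (not part of the Chromium tooling; the function name and paths are assumptions):

```shell
#!/bin/sh
# Illustrative helper: report whether an args.gn file explicitly
# disables clang with `is_clang = false` (clang is otherwise the default).
clang_enabled() {
  args_gn=$1
  if grep -q 'is_clang[[:space:]]*=[[:space:]]*false' "$args_gn" 2>/dev/null; then
    echo "clang disabled"
  else
    echo "clang (default)"
  fi
}

clang_enabled out/gn/args.gn
```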
-## Reverting to gcc on linux
+## Reverting to gcc on Linux or MSVC on Windows
-We don't have bots that test this, but building with gcc4.8+ should still work
-on Linux. If your system gcc is new enough, run `gn args` and add `is_clang =
-false`.
+There are no bots that test this, but `is_clang = false` will revert to
+gcc on Linux and to Visual Studio on Windows. There is no guarantee it
+will work.
## Mailing List
@@ -57,35 +49,33 @@ To test the FindBadConstructs plugin, run:
./test.py ../../../../third_party/llvm-build/Release+Asserts/bin/clang \
../../../../third_party/llvm-build/Release+Asserts/lib/libFindBadConstructs.so)
+Since the plugin is rolled with clang changes, behavior changes to the plugin
+should be guarded by flags to make it easy to roll clang. A general outline:
+1. Implement new plugin behavior behind a flag.
+1. Wait for a compiler roll to bring in the flag.
+1. Start passing the new flag in `GN` and verify the new behavior.
+1. Enable the new plugin behavior unconditionally and update the plugin to
+ ignore the flag.
+1. Wait for another compiler roll.
+1. Stop passing the flag from `GN`.
+1. Remove the flag completely.
+
## Using the clang static analyzer
See [clang_static_analyzer.md](clang_static_analyzer.md).
## Windows
-clang can be used as compiler on Windows. Clang uses Visual Studio's linker and
-SDK, so you still need to have Visual Studio installed.
-
-Things should compile, and all tests should pass. You can check these bots for
-how things are currently looking:
-https://build.chromium.org/p/chromium.fyi/console?category=win%20clang
-
-```shell
-python tools\clang\scripts\update.py
-# run `gn args` and add `is_clang = true` to your args.gn, then...
-ninja -C out\gn chrome
-```
+Since October 2017, clang is the default compiler on Windows. It uses
+MSVC's linker and SDK, so you still need to have Visual Studio with
+C++ support installed.
-The `update.py` script only needs to be run once per checkout. Clang will be
-kept up to date by `gclient runhooks`.
+To use MSVC's compiler (if it still works), use `is_clang = false`.
Current brokenness:
* To get colored diagnostics, you need to be running
[ansicon](https://github.com/adoxa/ansicon/releases).
-* Debug info does now work, but support for it is new. If you see something
- not working right, please file a bug and mark it as blocking the
- [clang/win debug info tracking bug](https://crbug.com/636111).
## Using a custom clang binary
@@ -102,7 +92,6 @@ clang_use_chrome_plugins = false
is_debug = false
symbol_level = 1
is_component_build = true
-is_clang = true # Implicitly set on Mac, Linux, iOS; needed on Win and Android.
```
You can then run `head out/gn/toolchain.ninja` and check that the first to
diff --git a/chromium/docs/code_coverage.md b/chromium/docs/code_coverage.md
new file mode 100644
index 00000000000..bbc7e8b8985
--- /dev/null
+++ b/chromium/docs/code_coverage.md
@@ -0,0 +1,151 @@
+# Code coverage in Chromium
+
+Table of contents:
+- [Coverage Script](#coverage-script)
+- [Workflow](#workflow)
+ * [Step 0 Download Tooling](#step-0-download-tooling)
+ * [Step 1 Build](#step-1-build)
+ * [Step 2 Create Raw Profiles](#step-2-create-raw-profiles)
+ * [Step 3 Create Indexed Profile](#step-3-create-indexed-profile)
+ * [Step 4 Create Coverage Reports](#step-4-create-coverage-reports)
+
+Chromium uses Clang’s source-based code coverage; this [documentation] explains
+how to use Clang’s source-based coverage features in general.
+
+In this doc, we first introduce a code coverage script that can be used to
+generate code coverage reports for Chromium code in one command, and then
+describe the report generation workflow in detail.
+
+## Coverage Script
+The [coverage script] automates the process described below and provides a
+one-stop service to generate code coverage reports in just one command.
+
+This script is currently supported on Linux, Mac, iOS and ChromeOS platforms.
+
+Here is an example usage:
+
+```
+$ gn gen out/coverage \
+ --args='use_clang_coverage=true is_component_build=false'
+$ python tools/code_coverage/coverage.py \
+ crypto_unittests url_unittests \
+ -b out/coverage -o out/report \
+ -c 'out/coverage/crypto_unittests' \
+ -c 'out/coverage/url_unittests --gtest_filter=URLParser.PathURL' \
+ -f url/ -f crypto/
+```
+The command above builds `crypto_unittests` and `url_unittests` targets and then
+runs each with the command and arguments specified by the `-c` flag. For
+`url_unittests`, it only runs the test `URLParser.PathURL`. The coverage report
+is filtered to include only files and sub-directories under `url/` and `crypto/`
+directories.
+
+Aside from automating the process, this script provides additional features to
+view code coverage breakdown by directories and by components, for example:
+
+Directory View:
+
+![code coverage report directory view]
+
+Component View:
+
+![code coverage report component view]
+
+## Workflow
+This section presents the workflow of generating code coverage reports, using
+two unit test targets in the Chromium repo as an example: `crypto_unittests`
+and `url_unittests`. The following diagram shows a step-by-step overview of the
+process.
+
+![code coverage generation workflow](images/code_coverage_workflow.png)
+
+### Step 0 Download Tooling
+Generating code coverage reports requires the llvm-profdata and llvm-cov tools.
+Currently, these two tools are not part of Chromium’s Clang bundle. The
+[coverage script] downloads and updates them automatically; you can also
+download the tools manually ([link]).
+
+### Step 1 Build
+In Chromium, to compile code with coverage enabled, one needs to add
+`use_clang_coverage=true` and `is_component_build=false` GN flags to the args.gn
+file in the build output directory. Under the hood, they ensure
+`-fprofile-instr-generate` and `-fcoverage-mapping` flags are passed to the
+compiler.
+
+```
+$ gn gen out/coverage \
+ --args='use_clang_coverage=true is_component_build=false'
+$ gclient runhooks
+$ ninja -C out/coverage crypto_unittests url_unittests
+```
+
+### Step 2 Create Raw Profiles
+The next step is to run the instrumented binaries; when the program exits, it
+writes a raw profile for each process. Chromium runs tests in multiple
+processes, and the number of processes spawned can reach a few hundred, which
+would produce a few hundred gigabytes of raw profiles. To limit their number,
+the `%Nm` pattern in the `LLVM_PROFILE_FILE` environment variable is used to
+pool the profiles of multiple processes into `N` files. With `N = 4`, the total
+size of the raw profiles is limited to a few gigabytes.
+
+```
+$ export LLVM_PROFILE_FILE="out/report/crypto_unittests.%4m.profraw"
+$ ./out/coverage/crypto_unittests
+$ ls out/report/
+crypto_unittests.3657994905831792357_0.profraw
+...
+crypto_unittests.3657994905831792357_3.profraw
+```
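The pooling pattern can also be computed by a wrapper script. Here is a minimal
Python sketch (the `profile_env` helper name is hypothetical, not part of the
Chromium tooling) of how such a wrapper might construct the environment for an
instrumented test run:

```python
import os

def profile_env(out_dir, test_name, pool_size=4):
    """Return an environment for an instrumented test binary.

    The %Nm pattern in LLVM_PROFILE_FILE asks the profiling runtime to
    pool raw profiles from all spawned processes into N files.
    """
    env = dict(os.environ)
    env["LLVM_PROFILE_FILE"] = os.path.join(
        out_dir, "%s.%%%dm.profraw" % (test_name, pool_size))
    return env

env = profile_env("out/report", "crypto_unittests")
print(env["LLVM_PROFILE_FILE"])  # out/report/crypto_unittests.%4m.profraw
```

A wrapper would then launch the test binary with this environment, e.g. via
`subprocess.run(["./out/coverage/crypto_unittests"], env=env)`.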
+
+### Step 3 Create Indexed Profile
+Raw profiles must be indexed before generating code coverage reports. This is
+done using the `merge` command of the `llvm-profdata` tool, which merges
+multiple raw profiles (.profraw) and indexes them to create a single profile
+(.profdata).
+
+At this point, all the raw profiles can be thrown away because their
+information is already contained in the indexed profile.
+
+```
+$ llvm-profdata merge -o out/report/coverage.profdata \
+ out/report/crypto_unittests.3657994905831792357_0.profraw
+...
+out/report/crypto_unittests.3657994905831792357_3.profraw
+out/report/url_unittests.714228855822523802_0.profraw
+...
+out/report/url_unittests.714228855822523802_3.profraw
+$ ls out/report/coverage.profdata
+out/report/coverage.profdata
+```
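When all raw profiles live in one directory, the merge invocation can be
assembled with a glob. A hedged Python sketch (the `merge_command` helper is
hypothetical) of how a script might build the `llvm-profdata` command line:

```python
import glob
import os

def merge_command(report_dir, output_name="coverage.profdata"):
    # Collect every raw profile written in Step 2 and build the
    # llvm-profdata invocation that indexes them into one .profdata file.
    raw_profiles = sorted(glob.glob(os.path.join(report_dir, "*.profraw")))
    if not raw_profiles:
        raise ValueError("no .profraw files found under " + report_dir)
    output = os.path.join(report_dir, output_name)
    return ["llvm-profdata", "merge", "-o", output] + raw_profiles
```

`subprocess.run(merge_command("out/report"), check=True)` would then perform
the merge, assuming `llvm-profdata` is on `PATH` and the directory contains
only this run's profiles.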
+
+### Step 4 Create Coverage Reports
+Finally, `llvm-cov` is used to render code coverage reports. There are
+different report generation modes, and all of them require the indexed profile,
+all the built binaries, and all the exercised source files to be available.
+
+For example, the following command can be used to generate a per-file,
+line-by-line code coverage report:
+
+```
+$ llvm-cov show -output-dir=out/report -format=html \
+ -instr-profile=out/report/coverage.profdata \
+ -object=out/coverage/url_unittests \
+ out/coverage/crypto_unittests
+```
+
+For more information on how to use llvm-cov, please refer to the [guide].
+
+## Reporting problems
+For any breakage report and feature requests, please [file a bug].
+
+## Mailing list
+For questions and general discussions, please join the
+[chrome-code-coverage group].
+
+[documentation]: https://clang.llvm.org/docs/SourceBasedCodeCoverage.html
+[coverage script]: https://cs.chromium.org/chromium/src/tools/code_coverage/coverage.py
+[code coverage report directory view]: images/code_coverage_directory_view.png
+[code coverage report component view]: images/code_coverage_component_view.png
+[link]: https://storage.googleapis.com/chromium-browser-clang-staging/
+[guide]: http://llvm.org/docs/CommandGuide/llvm-cov.html
+[file a bug]: https://bugs.chromium.org/p/chromium/issues/entry?components=Tools%3ECodeCoverage
+[chrome-code-coverage group]: https://groups.google.com/a/google.com/forum/#!forum/chrome-code-coverage \ No newline at end of file
diff --git a/chromium/docs/code_reviews.md b/chromium/docs/code_reviews.md
index 3be8a9a1de1..08af713c037 100644
--- a/chromium/docs/code_reviews.md
+++ b/chromium/docs/code_reviews.md
@@ -14,14 +14,13 @@ touching. Any committer can review code, but an owner must provide a review
for each directory you are touching. If you have doubts, look at the git blame
for the file and the `OWNERS` files (see below).
-To indicate a positive review, the reviewer chooses "+1" in Code-Review field
-on Gerrit, or types "LGTM" (case insensitive) into a comment on Rietveld. This
-stands for "Looks Good To Me." "-1" in Code-Review field on Gerrit or the text
-"not LGTM" on Rietveld will cancel out a previous positive review.
+To indicate a positive review, the reviewer provides a "Code-Review +1" in
+Gerrit, also known as an LGTM ("Looks Good To Me"). A score of "-1" indicates
+the change should not be submitted as-is.
-If you have multiple reviewers, make it clear in the message you send
-requesting review what you expect from each reviewer. Otherwise people might
-assume their input is not required or waste time with redundant reviews.
+If you have multiple reviewers, provide a message indicating what you expect
+from each reviewer. Otherwise people might assume their input is not required
+or waste time with redundant reviews.
Please also read [Respectful Changes](cl_respect.md) and
[Respectful Code Reviews](cr_respect.md).
@@ -29,15 +28,15 @@ Please also read [Respectful Changes](cl_respect.md) and
#### Expectations for all reviewers
* Aim to provide some kind of actionable response within 24 hours of receipt
- (not counting weekends and holidays). This doesn't mean you have to have
- done a complete review, but you should be able to give some initial
- feedback, request more time, or suggest another reviewer.
+ (not counting weekends and holidays). This doesn't mean you have to do a
+ complete review, but you should be able to give some initial feedback,
+ request more time, or suggest another reviewer.
- * It can be nice to indicate if you're away in your name in the code review
- tool. If you do this, indicate when you'll be back.
+ * Use the status field in Gerrit settings to indicate if you're away and when
+   you'll be back.
* Don't generally discourage people from sending you code reviews. This
- includes writing a blanket ("slow") after your name in the review tool.
+ includes using a blanket "slow" in your status field.
## OWNERS files
@@ -130,8 +129,6 @@ Otherwise the reviewer won't know to review the patch.
reviewer2: Please review changes to bar/
```
- * Push the "send mail" button.
-
### TBR-ing certain types of mechanical changes
Sometimes you might do something that affects many callers in different
diff --git a/chromium/docs/fuchsia_sdk_updates.md b/chromium/docs/fuchsia_sdk_updates.md
index b8962d67e7d..b78abf83460 100644
--- a/chromium/docs/fuchsia_sdk_updates.md
+++ b/chromium/docs/fuchsia_sdk_updates.md
@@ -4,7 +4,7 @@
job](https://luci-scheduler.appspot.com/jobs/fuchsia/sdk-x86_64-linux) for a
recent green archive. On the "SUCCEEDED" link, copy the SHA-1 from the
`gsutil.upload` link of the `upload fuchsia-sdk` step.
-0. Put that into Chromium's src.git `build/fuchsia/update_sdk.py` as `SDK_HASH`.
+0. Put that into Chromium's src.git `build/fuchsia/sdk.sha1`.
0. `gclient sync && ninja ...` and make sure things go OK locally.
0. Upload the roll CL, making sure to include the `fuchsia` trybot. Tag the roll
with `Bug: 707030`.
@@ -31,4 +31,4 @@ Chromium-related projects like Crashpad, instead of directly pulling the
`cipd describe fuchsia/sdk/linux-amd64 -version <CIPD_HASH_HERE>`
This description will show the `jiri_snapshot` "tag" for the CIPD package which
-corresponds to the SDK revision that's specified `update_sdk.py` here.
+corresponds to the SDK revision that's specified in `sdk.sha1` here.
diff --git a/chromium/docs/google_play_services.md b/chromium/docs/google_play_services.md
index b526cb930e3..4d285b554d9 100644
--- a/chromium/docs/google_play_services.md
+++ b/chromium/docs/google_play_services.md
@@ -48,4 +48,4 @@ passes, it might fail on the internal bots and result in the CL getting
reverted, so please make sure the APIs are available to the bots before
submitting.
-[bug_link]:https://bugs.chromium.org/p/chromium/issues/entry?labels=Restrict-View-Google,pri-1,Hotlist-GooglePlayServices&owner=dgn@chromium.org&os=Android
+[bug_link]:https://bugs.chromium.org/p/chromium/issues/entry?labels=Restrict-View-Google,pri-1,Hotlist-GooglePlayServices&owner=agrieve@chromium.org&os=Android
diff --git a/chromium/docs/gpu/debugging_gpu_related_code.md b/chromium/docs/gpu/debugging_gpu_related_code.md
new file mode 100644
index 00000000000..1d5f57e6dda
--- /dev/null
+++ b/chromium/docs/gpu/debugging_gpu_related_code.md
@@ -0,0 +1,235 @@
+# Debugging GPU related code
+
+Chromium's GPU system is multi-process, which can make debugging it rather
+difficult. See [GPU Command Buffer] for some of the nitty-gritty. These are just
+a few notes to help with debugging.
+
+[TOC]
+
+<!-- TODO(kainino): update link if the page moves -->
+[GPU Command Buffer]: https://sites.google.com/a/chromium.org/dev/developers/design-documents/gpu-command-buffer
+
+## Renderer Process Code
+
+### `--enable-gpu-client-logging`
+
+If you are trying to track down a bug in a GPU client process (compositing,
+WebGL, Skia/Ganesh, Aura), then in a debug build you can use the
+`--enable-gpu-client-logging` flag, which will show every GL call sent to the
+GPU service process. (From the point of view of a GPU client, it's calling
+OpenGL ES functions - but the real driver calls are made in the GPU process.)
+
+```
+[4782:4782:1219/141706:INFO:gles2_implementation.cc(1026)] [.WebGLRenderingContext] glUseProgram(3)
+[4782:4782:1219/141706:INFO:gles2_implementation_impl_autogen.h(401)] [.WebGLRenderingContext] glGenBuffers(1, 0x7fffc9e1269c)
+[4782:4782:1219/141706:INFO:gles2_implementation_impl_autogen.h(416)] 0: 1
+[4782:4782:1219/141706:INFO:gles2_implementation_impl_autogen.h(23)] [.WebGLRenderingContext] glBindBuffer(GL_ARRAY_BUFFER, 1)
+[4782:4782:1219/141706:INFO:gles2_implementation.cc(1313)] [.WebGLRenderingContext] glBufferData(GL_ARRAY_BUFFER, 36, 0x7fd268580120, GL_STATIC_DRAW)
+[4782:4782:1219/141706:INFO:gles2_implementation.cc(2480)] [.WebGLRenderingContext] glEnableVertexAttribArray(0)
+[4782:4782:1219/141706:INFO:gles2_implementation.cc(1140)] [.WebGLRenderingContext] glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
+[4782:4782:1219/141706:INFO:gles2_implementation_impl_autogen.h(135)] [.WebGLRenderingContext] glClear(16640)
+[4782:4782:1219/141706:INFO:gles2_implementation.cc(2490)] [.WebGLRenderingContext] glDrawArrays(GL_TRIANGLES, 0, 3)
+```
+
+### Checking about:gpu
+
+The GPU process logs many errors and warnings. You can see these by navigating
+to `about:gpu`. Logs appear at the bottom of the page. You can also see them
+on standard output if Chromium is run from the command line on Linux/Mac.
+On Windows, you need debugging tools (like VS, WinDbg, etc.) to connect to the
+debug output stream.
+
+**Note:** If `about:gpu` is telling you that your GPU is disabled and
+hardware acceleration is unavailable, it might be a problem with your GPU being
+unsupported. To override this and turn on hardware acceleration anyway, you can
+use the `--ignore-gpu-blacklist` command line option when starting Chromium.
+
+### Breaking on GL Error
+
+In <code>[gles2_implementation.h]</code>, there is some code like this:
+
+```cpp
+// Set to 1 to have the client fail when a GL error is generated.
+// This helps find bugs in the renderer since the debugger stops on the error.
+#if DCHECK_IS_ON()
+#if 0
+#define GL_CLIENT_FAIL_GL_ERRORS
+#endif
+#endif
+```
+
+Change that `#if 0` to `#if 1`, build a debug build, then run in a debugger.
+The debugger will break when any renderer code sees a GL error, and you should
+be able to examine the call stack to find the issue.
+
+[gles2_implementation.h]: https://chromium.googlesource.com/chromium/src/+/master/gpu/command_buffer/client/gles2_implementation.h
+
+### Labeling your calls
+
+The output of all of the errors, warnings and debug logs are prefixed. You can
+set this prefix by calling `glPushGroupMarkerEXT`, `glPopGroupMarkerEXT` and
+`glInsertEventMarkerEXT`. `glPushGroupMarkerEXT` appends a string to the end of
+the current log prefix (think namespace in C++). `glPopGroupMarkerEXT` pops off
+the last string appended. `glInsertEventMarkerEXT` sets a suffix for the
+current string. Example:
+
+```cpp
+glPushGroupMarkerEXT(0, "Foo"); // -> log prefix = "Foo"
+glInsertEventMarkerEXT(0, "This"); // -> log prefix = "Foo.This"
+glInsertEventMarkerEXT(0, "That"); // -> log prefix = "Foo.That"
+glPushGroupMarkerEXT(0, "Bar"); // -> log prefix = "Foo.Bar"
+glInsertEventMarkerEXT(0, "Orange"); // -> log prefix = "Foo.Bar.Orange"
+glInsertEventMarkerEXT(0, "Banana"); // -> log prefix = "Foo.Bar.Banana"
+glPopGroupMarkerEXT(); // -> log prefix = "Foo.That"
+```
+
+### Making a reduced test case
+
+You can often make a simple OpenGL-ES-2.0-only C++ reduced test case that is
+relatively quick to compile and test, by adding tests to the `gl_tests` target.
+Those tests exist in `src/gpu/command_buffer/tests` and are made part of the
+build in `src/gpu/gpu.gyp`. Build with `ninja -C out/Debug gl_tests`. All the
+same command line options listed on this page will work with the `gl_tests`,
+plus `--gtest_filter=NameOfTest` to run a specific test. Note the `gl_tests`
+are not multi-process, so they probably won't help with race conditions, but
+they do go through most of the same code and are much easier to debug.
+
+### Debugging the renderer process
+
+Given that Chrome starts many renderer processes, it's easier to either have a
+remote webpage you can access, or to make one locally and serve it with a local
+server such as `python -m SimpleHTTPServer`.
+
+On Linux this works for me:
+
+* `out/Debug/chromium --no-sandbox --renderer-cmd-prefix="xterm -e gdb
+ --args" http://localhost:8000/page-to-repro.html`
+
+On OSX this works for me:
+
+* `out/Debug/Chromium.app/Contents/MacOSX/Chromium --no-sandbox
+ --renderer-cmd-prefix="xterm -e gdb --args"
+ http://localhost:8000/page-to-repro.html`
+
+On Windows I use `--renderer-startup-dialog` and then connect to the listed process.
+
+Note 1: On Linux and OSX I use `cgdb` instead of `gdb`.
+
+Note 2: GDB can take minutes to index symbols. To save time, you can precompute
+the index by running `build/gdb-add-index out/Debug/chrome`.
+
+## GPU Process Code
+
+### `--enable-gpu-service-logging`
+
+In a debug build, this will print all actual calls into the GL driver.
+
+```
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kEnableVertexAttribArray
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(905)] glEnableVertexAttribArray(0)
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kVertexAttribPointer
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(1573)] glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kClear
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(746)] glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(840)] glDepthMask(GL_TRUE)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(900)] glEnable(GL_DEPTH_TEST)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(1371)] glStencilMaskSeparate(GL_FRONT, 4294967295)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(1371)] glStencilMaskSeparate(GL_BACK, 4294967295)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(860)] glDisable(GL_STENCIL_TEST)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(860)] glDisable(GL_CULL_FACE)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(860)] glDisable(GL_SCISSOR_TEST)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(900)] glEnable(GL_BLEND)
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(721)] glClear(16640)
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kDrawArrays
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(870)] glDrawArrays(GL_TRIANGLES, 0, 3)
+```
+
+Note that GL calls into the driver are not currently prefixed (todo?). But the
+commands logged tell you which command, from which context, caused the GL calls
+that follow.
+
+Also note that client resource IDs are virtual IDs, so calls into the real GL
+driver will not match (though some commands print the mapping). Examples:
+
+```
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kBindTexture
+[5497:5497:1219/142413:INFO:gles2_cmd_decoder.cc(837)] [.WebGLRenderingContext] glBindTexture: client_id = 2, service_id = 10
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(662)] glBindTexture(GL_TEXTURE_2D, 10)
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [0052064A367F0000]cmd: kBindBuffer
+[5497:5497:1219/142413:INFO:gles2_cmd_decoder.cc(837)] [0052064A367F0000] glBindBuffer: client_id = 2, service_id = 6
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(637)] glBindBuffer(GL_ARRAY_BUFFER, 6)
+[5497:5497:1219/142413:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kBindFramebuffer
+[5497:5497:1219/142413:INFO:gles2_cmd_decoder.cc(837)] [.WebGLRenderingContext] glBindFramebuffer: client_id = 1, service_id = 3
+[5497:5497:1219/142413:INFO:gl_bindings_autogen_gl.cc(652)] glBindFramebufferEXT(GL_FRAMEBUFFER, 3)
+```
+
+In other words, renderer process code uses the client IDs, whereas the GPU
+process uses the service IDs. This is useful for matching up calls if you're
+dumping both client and service GL logs.
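As a mental model, the mapping can be pictured as a small table kept on the
service side. This toy Python sketch (not Chromium code; names are invented)
mimics how a client ID is translated to a service ID at bind time:

```python
class IdMap:
    """Toy model of the client-to-service resource ID mapping.

    The renderer (client) allocates its own IDs; the GPU (service)
    process maps each one to the ID the real GL driver handed back.
    """

    def __init__(self):
        self._client_to_service = {}
        self._next_service_id = 1

    def gen(self, client_id):
        # In the real decoder the service ID comes from glGen*; here we
        # just hand out increasing integers.
        self._client_to_service[client_id] = self._next_service_id
        self._next_service_id += 1

    def bind(self, client_id):
        # Translate the client's ID into the driver-visible service ID.
        return self._client_to_service[client_id]

ids = IdMap()
ids.gen(2)          # client allocates e.g. texture id 2
print(ids.bind(2))  # service id actually passed to the driver -> 1
```

This is why the IDs in a client-side log and a service-side log for the same
call generally differ.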
+
+### `--enable-gpu-debugging`
+
+In any build, this will call `glGetError` after each command.
+
+### `--enable-gpu-command-logging`
+
+This will print the name of each GPU command before it is executed.
+
+```
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kBindBuffer
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kBufferData
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: SetToken
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kEnableVertexAttribArray
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kVertexAttribPointer
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kClear
+[5234:5234:1219/052139:ERROR:gles2_cmd_decoder.cc(3301)] [.WebGLRenderingContext]cmd: kDrawArrays
+```
+
+### Debugging in the GPU Process
+
+Given the multi-process nature of Chromium, it can be hard to debug both sides.
+Turning on all the logging and having a small test case is useful. One minor
+suggestion: if you have some idea where the bug is happening, a call to some
+obscure GL function like `glHint()` can give you a place to catch a command
+being processed in the GPU process (put a breakpoint on
+`gpu::gles2::GLES2DecoderImpl::HandleHint`). Once there, you can follow the
+commands after that. All of them go through
+`gpu::gles2::GLES2DecoderImpl::DoCommand`.
+
+To actually debug the GPU process:
+
+On Linux this works for me:
+
+* `out/Debug/chromium --no-sandbox --gpu-launcher="xterm -e gdb --args"
+ http://localhost:8000/page-to-repro.html`
+
+On OSX this works for me:
+
+* `out/Debug/Chromium.app/Contents/MacOSX/Chromium --no-sandbox
+ --gpu-launcher="xterm -e gdb --args"
+ http://localhost:8000/page-to-repro.html`
+
+On Windows I use `--gpu-startup-dialog` and then connect to the listed process.
+
+### `GPU PARSE ERROR`
+
+If you see this message in `about:gpu` or your console, you didn't cause it
+directly (by calling `glLoseContextCHROMIUM`), and the value is something other
+than 5, then there's likely a bug. Please file an issue at
+<http://crbug.com/new>.
+
+## Debugging Performance
+
+If you have something to add here please add it. Most perf debugging is done
+using `about:tracing` (see [Trace Event Profiling] for details). Otherwise,
+be aware that, since the system is multi-process, calling:
+
+```
+start = GetTime()
+DoSomething()
+glFinish()
+end = GetTime()
+printf("elapsedTime = %f\n", end - start);
+```
+
+**will not** give you meaningful results.
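To see why, here is a toy Python model (threads standing in for processes; none
of this is Chromium code): the client-side call returns as soon as the command
is queued, so timing around it measures queueing, and even waiting for
completion measures the whole pipeline rather than the command's real cost:

```python
import queue
import threading
import time

command_queue = queue.Queue()

def gpu_process():
    # Stand-in for the GPU process: executes queued commands.
    while True:
        command = command_queue.get()
        if command is None:
            break
        command()
        command_queue.task_done()

worker = threading.Thread(target=gpu_process)
worker.start()

start = time.monotonic()
command_queue.put(lambda: time.sleep(0.2))  # "DoSomething()" only enqueues
client_elapsed = time.monotonic() - start   # measures queueing, ~0 seconds
command_queue.join()                        # "glFinish()" blocks until done
finish_elapsed = time.monotonic() - start

command_queue.put(None)
worker.join()
print(client_elapsed < 0.1, finish_elapsed >= 0.2)  # True True
```

Use `about:tracing` instead; it records events on both sides of the process
boundary.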
+
+[Trace Event Profiling]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/trace-event-profiling-tool
diff --git a/chromium/docs/gpu/gpu_testing.md b/chromium/docs/gpu/gpu_testing.md
new file mode 100644
index 00000000000..71a8545e9ce
--- /dev/null
+++ b/chromium/docs/gpu/gpu_testing.md
@@ -0,0 +1,571 @@
+# GPU Testing
+
+This set of pages documents the setup and operation of the GPU bots and try
+servers, which verify the correctness of Chrome's graphically accelerated
+rendering pipeline.
+
+[TOC]
+
+## Overview
+
+The GPU bots run a different set of tests than the majority of the Chromium
+test machines. The GPU bots specifically focus on tests which exercise the
+graphics processor, and whose results are likely to vary between graphics card
+vendors.
+
+Most of the tests on the GPU bots are run via the [Telemetry framework].
+Telemetry was originally conceived as a performance testing framework, but has
+proven valuable for correctness testing as well. Telemetry directs the browser
+to perform various operations, like page navigation and test execution, from
+external scripts written in Python. The GPU bots launch the full Chromium
+browser via Telemetry for the majority of the tests. Using the full browser to
+execute tests, rather than smaller test harnesses, has yielded several
+advantages: testing what is shipped, improved reliability, and improved
+performance.
+
+[Telemetry framework]: https://github.com/catapult-project/catapult/tree/master/telemetry
+
+A subset of the tests, called "pixel tests", grab screen snapshots of the web
+page in order to validate Chromium's rendering architecture end-to-end. Where
+necessary, GPU-specific results are maintained for these tests. Some of these
+tests verify just a few pixels, using handwritten code, in order to use the
+same validation for all brands of GPUs.
+
+The GPU bots use the Chrome infrastructure team's [recipe framework], and
+specifically the [`chromium`][recipes/chromium] and
+[`chromium_trybot`][recipes/chromium_trybot] recipes, to describe what tests to
+execute. Compared to the legacy master-side buildbot scripts, recipes make it
+easy to add new steps to the bots, change the bots' configuration, and run the
+tests locally in the same way that they are run on the bots. Additionally, the
+`chromium` and `chromium_trybot` recipes make it possible to send try jobs which
+add new steps to the bots. This single capability is a huge step forward from
+the previous configuration where new steps were added blindly, and could cause
+failures on the tryservers. For more details about the configuration of the
+bots, see the [GPU bot details].
+
+[recipe framework]: https://chromium.googlesource.com/external/github.com/luci/recipes-py/+/master/doc/user_guide.md
+[recipes/chromium]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes/chromium.py
+[recipes/chromium_trybot]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes/chromium_trybot.py
+[GPU bot details]: gpu_testing_bot_details.md
+
+The physical hardware for the GPU bots lives in the Swarming pool\*. The
+Swarming infrastructure ([new docs][new-testing-infra], [older but currently
+more complete docs][isolated-testing-infra]) provides many benefits:
+
+* Increased parallelism for the tests; all steps for a given tryjob or
+ waterfall build run in parallel.
+* Simpler scaling: just add more hardware in order to get more capacity. No
+ manual configuration or distribution of hardware needed.
+* Easier to run certain tests only on certain operating systems or types of
+ GPUs.
+* Easier to add new operating systems or types of GPUs.
+* Clearer description of the binary and data dependencies of the tests. If
+ they run successfully locally, they'll run successfully on the bots.
+
+(\* All but a few one-off GPU bots are in the swarming pool. The exceptions to
+the rule are described in the [GPU bot details].)
+
+The bots on the [chromium.gpu.fyi] waterfall are configured to always test
+top-of-tree ANGLE. This setup is done with a few lines of code in the
+[tools/build workspace]; search the code for "angle".
+
+These aspects of the bots are described in more detail below, and in linked
+pages. There is a [presentation][bots-presentation] which gives a brief
+overview of this documentation and links back to various portions.
+
+<!-- XXX: broken link -->
+[new-testing-infra]: https://github.com/luci/luci-py/wiki
+[isolated-testing-infra]: https://www.chromium.org/developers/testing/isolated-testing/infrastructure
+[chromium.gpu]: https://build.chromium.org/p/chromium.gpu/console
+[chromium.gpu.fyi]: https://build.chromium.org/p/chromium.gpu.fyi/console
+[tools/build workspace]: https://code.google.com/p/chromium/codesearch#chromium/build/scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py
+[bots-presentation]: https://docs.google.com/presentation/d/1BC6T7pndSqPFnituR7ceG7fMY7WaGqYHhx5i9ECa8EI/edit?usp=sharing
+
+## Fleet Status
+
+Please see the [GPU Pixel Wrangling instructions] for links to dashboards
+showing the status of various bots in the GPU fleet.
+
+[GPU Pixel Wrangling instructions]: pixel_wrangling.md#Fleet-Status
+
+## Using the GPU Bots
+
+Most Chromium developers interact with the GPU bots in two ways:
+
+1. Observing the bots on the waterfalls.
+2. Sending try jobs to them.
+
+The GPU bots are grouped on the [chromium.gpu] and [chromium.gpu.fyi]
+waterfalls. Their current status can be easily observed there.
+
+To send try jobs, you must first upload your CL to the codereview server. Then
+either click the "CQ dry run" link, or run from the command line:
+
+```sh
+git cl try
+```
+
+This sends your job to the default set of try servers.
+
+The GPU tests are part of the default set for Chromium CLs, and are run as part
+of the following tryservers' jobs:
+
+* [linux_chromium_rel_ng] on the [tryserver.chromium.linux] waterfall
+* [mac_chromium_rel_ng] on the [tryserver.chromium.mac] waterfall
+* [win_chromium_rel_ng] on the [tryserver.chromium.win] waterfall
+
+[linux_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_rel_ng?numbuilds=100
+[mac_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_ng?numbuilds=100
+[win_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_ng?numbuilds=100
+[tryserver.chromium.linux]: http://build.chromium.org/p/tryserver.chromium.linux/waterfall?numbuilds=100
+[tryserver.chromium.mac]: http://build.chromium.org/p/tryserver.chromium.mac/waterfall?numbuilds=100
+[tryserver.chromium.win]: http://build.chromium.org/p/tryserver.chromium.win/waterfall?numbuilds=100
+
+Scan down through the steps looking for the text "GPU"; that identifies those
+tests run on the GPU bots. For each test the "trigger" step can be ignored; the
+step further down for the test of the same name contains the results.
+
+It's usually not necessary to explicitly send try jobs just for verifying GPU
+tests. If you want to, you must invoke "git cl try" separately for each
+tryserver master you want to reference, for example:
+
+```sh
+git cl try -b linux_chromium_rel_ng
+git cl try -b mac_chromium_rel_ng
+git cl try -b win_chromium_rel_ng
+```
+
+Alternatively, the Gerrit UI can be used to send a patch set to these try
+servers.
+
+Three optional tryservers are also available which run additional tests. As of
+this writing, they run longer-running tests that can't run against all Chromium
+CLs due to lack of hardware capacity. They are included automatically for code
+changes to certain sub-directories.
+
+* [linux_optional_gpu_tests_rel] on the luci.chromium.try waterfall
+* [mac_optional_gpu_tests_rel] on the luci.chromium.try waterfall
+* [win_optional_gpu_tests_rel] on the luci.chromium.try waterfall
+
+[linux_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_optional_gpu_tests_rel
+[mac_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac_optional_gpu_tests_rel
+[win_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win_optional_gpu_tests_rel
+
+Tryservers for the [ANGLE project] are also present on the
+[tryserver.chromium.angle] waterfall. These are invoked from the Gerrit user
+interface. They are configured similarly to the tryservers for regular Chromium
+patches, and run the same tests that are run on the [chromium.gpu.fyi]
+waterfall, in the same way (e.g., against ToT ANGLE).
+
+If you find it necessary to try patches against other sub-repositories than
+Chromium (`src/`) and ANGLE (`src/third_party/angle/`), please
+[file a bug](http://crbug.com/new) with component Internals\>GPU\>Testing.
+
+[ANGLE project]: https://chromium.googlesource.com/angle/angle/+/master/README.md
+[tryserver.chromium.angle]: https://build.chromium.org/p/tryserver.chromium.angle/waterfall
+[file a bug]: http://crbug.com/new
+
+## Running the GPU Tests Locally
+
+All of the GPU tests running on the bots can be run locally from a Chromium
+build. Many of the tests are simple executables:
+
+* `angle_unittests`
+* `content_gl_tests`
+* `gl_tests`
+* `gl_unittests`
+* `tab_capture_end2end_tests`
+
+Some run only on the chromium.gpu.fyi waterfall, either because there isn't
+enough machine capacity at the moment, or because they're closed-source tests
+which aren't allowed to run on the regular Chromium waterfalls:
+
+* `angle_deqp_gles2_tests`
+* `angle_deqp_gles3_tests`
+* `angle_end2end_tests`
+* `audio_unittests`
+
+The remaining GPU tests are run via Telemetry. In order to run them, just
+build the `chrome` target and then
+invoke `src/content/test/gpu/run_gpu_integration_test.py` with the appropriate
+argument. The tests this script can invoke are
+in `src/content/test/gpu/gpu_tests/`. For example:
+
+* `run_gpu_integration_test.py context_lost --browser=release`
+* `run_gpu_integration_test.py pixel --browser=release`
+* `run_gpu_integration_test.py webgl_conformance --browser=release --webgl-conformance-version=1.0.2`
+* `run_gpu_integration_test.py maps --browser=release`
+* `run_gpu_integration_test.py screenshot_sync --browser=release`
+* `run_gpu_integration_test.py trace_test --browser=release`
+
+**Note:** If you are on Linux and see this test harness exit immediately with
+`**Non zero exit code**`, it's probably because of some incompatible Python
+packages being installed. Please uninstall the `python-egenix-mxdatetime` and
+`python-logilab-common` packages in this case; see
+[Issue 716241](http://crbug.com/716241).
+
+You can also run a subset of tests with this harness:
+
+* `run_gpu_integration_test.py webgl_conformance --browser=release
+ --test-filter=conformance_attribs`
+
+Figuring out the exact command line that was used to invoke the test on the
+bots can be a little tricky. The bots all\* run their tests via Swarming and
+isolates, meaning that the invocation of a step like `[trigger]
+webgl_conformance_tests on NVIDIA GPU...` will look like:
+
+* `python -u
+ 'E:\b\build\slave\Win7_Release__NVIDIA_\build\src\tools\swarming_client\swarming.py'
+ trigger --swarming https://chromium-swarm.appspot.com
+ --isolate-server https://isolateserver.appspot.com
+ --priority 25 --shards 1 --task-name 'webgl_conformance_tests on NVIDIA GPU...'`
+
+You can figure out the additional command line arguments that were passed to
+each test on the bots by examining the trigger step and searching for the
+argument separator (<code> -- </code>). For a recent invocation of
+`webgl_conformance_tests`, this looked like:
+
+* `webgl_conformance --show-stdout '--browser=release' -v
+ '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc'
+ '--isolated-script-test-output=${ISOLATED_OUTDIR}/output.json'`
+
+You can leave off the `--isolated-script-test-output` argument, leaving a full
+command line of:
+
+* `run_gpu_integration_test.py
+ webgl_conformance --show-stdout '--browser=release' -v
+ '--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc'`
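
The argument extraction described above can be sketched in Python. The trigger command line below is a shortened, illustrative example rather than output copied from a real bot:

```python
import shlex

# Sketch: recover a locally runnable command line from a Swarming trigger
# step. The trigger command below is illustrative, not from a real bot.
trigger_cmd = (
    "python -u swarming.py trigger "
    "--swarming https://chromium-swarm.appspot.com "
    "--priority 25 --shards 1 -- "
    "webgl_conformance --show-stdout --browser=release -v "
    "'--extra-browser-args=--enable-logging=stderr --js-flags=--expose-gc' "
    "'--isolated-script-test-output=${ISOLATED_OUTDIR}/output.json'"
)

# Everything after the " -- " separator belongs to the test itself.
_, _, test_args = trigger_cmd.partition(" -- ")

# shlex honors the quoting around arguments that contain spaces. Drop
# --isolated-script-test-output; it isn't needed for local runs.
local_args = [arg for arg in shlex.split(test_args)
              if not arg.startswith("--isolated-script-test-output")]

print("run_gpu_integration_test.py " + " ".join(local_args))
```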
+
+The Maps test requires you to authenticate to cloud storage in order to access
+the Web Page Replay archive containing the test. See [Cloud Storage Credentials]
+for documentation on setting this up.
+
+[Cloud Storage Credentials]: gpu_testing_bot_details.md#Cloud-storage-credentials
+
+The pixel tests use reference images from cloud storage. The bots pass the
+`--upload-refimg-to-cloud-storage` argument, but to run locally you need to
+pass the `--download-refimg-from-cloud-storage` argument instead, as well as
+the other arguments the bot uses, like `--refimg-cloud-storage-bucket` and
+`--os-type`.
+
+Sample command line for Android:
+
+* `run_gpu_integration_test.py pixel --show-stdout --browser=android-chromium
+ -v --passthrough --extra-browser-args='--enable-logging=stderr
+ --js-flags=--expose-gc' --refimg-cloud-storage-bucket
+ chromium-gpu-archive/reference-images --os-type android
+ --download-refimg-from-cloud-storage`
+
+<!-- XXX: update this section; these isolates don't exist anymore -->
+You can find the isolates for the various tests in
+[src/chrome/](http://src.chromium.org/viewvc/chrome/trunk/src/chrome/):
+
+* [angle_unittests.isolate](https://chromium.googlesource.com/chromium/src/+/master/chrome/angle_unittests.isolate)
+* [content_gl_tests.isolate](https://chromium.googlesource.com/chromium/src/+/master/content/content_gl_tests.isolate)
+* [gl_tests.isolate](https://chromium.googlesource.com/chromium/src/+/master/chrome/gl_tests.isolate)
+* [gles2_conform_test.isolate](https://chromium.googlesource.com/chromium/src/+/master/chrome/gles2_conform_test.isolate)
+* [tab_capture_end2end_tests.isolate](https://chromium.googlesource.com/chromium/src/+/master/chrome/tab_capture_end2end_tests.isolate)
+* [telemetry_gpu_test.isolate](https://chromium.googlesource.com/chromium/src/+/master/chrome/telemetry_gpu_test.isolate)
+
+The isolates contain the full or partial command line for invoking the target.
+The complete command line for any test can be deduced from the contents of the
+isolate plus the stdio output from the test's run on the bot.
+
+Note that for the GN build, the isolates are simply described by build targets,
+and [gn_isolate_map.pyl] describes the mapping between isolate name and build
+target, as well as the command line used to invoke the isolate. Once all
+platforms have switched to GN, the .isolate files will be obsolete and be
+removed.
+
+(\* A few of the one-off GPU configurations on the chromium.gpu.fyi waterfall
+run their tests locally rather than via swarming, in order to decrease the
+number of physical machines needed.)
+
+[gn_isolate_map.pyl]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/gn_isolate_map.pyl
+
+## Running Binaries from the Bots Locally
+
+Any binary run remotely on a bot can also be run locally, assuming the local
+machine loosely matches the architecture and OS of the bot.
+
+The easiest way to do this is to find the ID of the swarming task and use
+"swarming.py reproduce" to re-run it:
+
+* `./src/tools/swarming_client/swarming.py reproduce -S https://chromium-swarm.appspot.com [task ID]`
+
+The task ID can be found in the stdio for the "trigger" step for the test. For
+example, look at a recent build from the [Mac Release (Intel)] bot, and
+look at the `gl_unittests` step. You will see something like:
+
+[Mac Release (Intel)]: https://ci.chromium.org/buildbot/chromium.gpu/Mac%20Release%20%28Intel%29/
+
+```
+Triggered task: gl_unittests on Intel GPU on Mac/Mac-10.12.6/[TRUNCATED_ISOLATE_HASH]/Mac Release (Intel)/83664
+To collect results, use:
+ swarming.py collect -S https://chromium-swarm.appspot.com --json /var/folders/[PATH_TO_TEMP_FILE].json
+Or visit:
+ https://chromium-swarm.appspot.com/user/task/[TASK_ID]
+```
+
+There is a difference between the isolate's hash and Swarming's task ID. Make
+sure you use the task ID and not the isolate's hash.
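
A quick way to avoid that mistake is to parse the task ID out of the trigger step's log programmatically. A sketch (the log text and task ID below are made up, following the format shown above):

```python
import re

# Sketch: extract the Swarming task ID (not the isolate hash) from a
# trigger step's stdio. The log text and task ID here are made up.
stdio = """\
Triggered task: gl_unittests on Intel GPU on Mac/Mac-10.12.6/deadbeef/Mac Release (Intel)/83664
To collect results, use:
  swarming.py collect -S https://chromium-swarm.appspot.com --json /tmp/task.json
Or visit:
  https://chromium-swarm.appspot.com/user/task/3b8f2d1c0a9e7f10
"""

# The task ID is the final path component of the task URL.
task_id = re.search(r"/user/task/([0-9a-f]+)", stdio).group(1)
print(task_id)
```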
+
+As of this writing, there seems to be a
+[bug](https://github.com/luci/luci-py/issues/250)
+when attempting to re-run the Telemetry based GPU tests in this way. For the
+time being, this can be worked around by instead downloading the contents of
+the isolate. To do so, look more deeply into the trigger step's log:
+
+* <code>python -u
+ /b/build/slave/Mac_10_10_Release__Intel_/build/src/tools/swarming_client/swarming.py
+ trigger [...more args...] --tag data:[ISOLATE_HASH] [...more args...]
+ [ISOLATE_HASH] -- **[...TEST_ARGS...]**</code>
+
+As of this writing, the isolate hash appears twice in the command line. To
+download the isolate's contents into directory `foo` (note, this is in the
+"Help" section associated with the page for the isolate's task, but I'm not
+sure whether that's accessible only to Google employees or all members of the
+chromium.org organization):
+
+* `python isolateserver.py download -I https://isolateserver.appspot.com
+ --namespace default-gzip -s [ISOLATE_HASH] --target foo`
+
+`isolateserver.py` will tell you the approximate command line to use. You
+should concatenate the `TEST_ARGS` highlighted in bold above with
+`isolateserver.py`'s recommendation. The `ISOLATED_OUTDIR` variable can be
+safely replaced with `/tmp`.
+
+Note that `isolateserver.py` downloads a large number of files (everything
+needed to run the test) and may take a while. There is a way to use
+`run_isolated.py` to achieve the same result, but as of this writing, there
+were problems doing so, so this procedure is not documented at this time.
+
+Before attempting to download an isolate, you must ensure you have permission
+to access the isolate server. Full instructions can be [found
+here][isolate-server-credentials]. For most cases, you can simply run:
+
+* `./src/tools/swarming_client/auth.py login
+ --service=https://isolateserver.appspot.com`
+
+The above link requires that you log in with your @google.com credentials. It's
+not known at the present time whether this works with @chromium.org accounts.
+Email kbr@ if you try this and find it doesn't work.
+
+[isolate-server-credentials]: gpu_testing_bot_details.md#Isolate-server-credentials
+
+## Running Locally Built Binaries on the GPU Bots
+
+See the [Swarming documentation] for instructions on how to upload your binaries to the isolate server and trigger execution on Swarming.
+
+[Swarming documentation]: https://www.chromium.org/developers/testing/isolated-testing/for-swes#TOC-Run-a-test-built-locally-on-Swarming
+
+## Adding New Tests to the GPU Bots
+
+The goal of the GPU bots is to avoid regressions in Chrome's rendering stack.
+To that end, let's add as many tests as possible that will help catch
+regressions in the product. If you see a crazy bug in Chrome's rendering which
+would be easy to catch with a pixel test running in Chrome and hard to catch in
+any of the other test harnesses, please, invest the time to add a test!
+
+There are a couple of different ways to add new tests to the bots:
+
+1. Adding a new test to one of the existing harnesses.
+2. Adding an entire new test step to the bots.
+
+### Adding a new test to one of the existing test harnesses
+
+Adding new tests to the GTest-based harnesses is straightforward and
+essentially requires no explanation.
+
+As of this writing it isn't as easy as desired to add a new test to one of the
+Telemetry based harnesses. See [Issue 352807](http://crbug.com/352807). Let's
+collectively work to address that issue. It would be great to reduce the number
+of steps on the GPU bots, or at least to avoid significantly increasing the
+number of steps on the bots. The WebGL conformance tests should probably remain
+a separate step, but some of the smaller Telemetry based tests
+(`context_lost_tests`, `memory_test`, etc.) should probably be combined into a
+single step.
+
+If you are adding a new test to one of the existing tests (e.g., `pixel_test`),
+all you need to do is make sure that your new test runs correctly via isolates.
+See the documentation from the GPU bot details on [adding new isolated
+tests][new-isolates] for the `GYP_DEFINES` and authentication needed to upload
+isolates to the isolate server. Most likely the new test will be Telemetry
+based, and included in the `telemetry_gpu_test_run` isolate. You can then
+invoke it via:
+
+* `./src/tools/swarming_client/run_isolated.py -s [HASH]
+ -I https://isolateserver.appspot.com -- [TEST_NAME] [TEST_ARGUMENTS]`
+
+[new-isolates]: gpu_testing_bot_details.md#Adding-a-new-isolated-test-to-the-bots
+
+## Adding new steps to the GPU Bots
+
+The tests that are run by the GPU bots are described by a couple of JSON files
+in the Chromium workspace:
+
+* [`chromium.gpu.json`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.json)
+* [`chromium.gpu.fyi.json`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.fyi.json)
+
+These files are autogenerated by the following script:
+
+* [`generate_buildbot_json.py`](https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/generate_buildbot_json.py)
+
+This script is completely self-contained and should hopefully be
+self-explanatory. The JSON files are parsed by the chromium and chromium_trybot
+recipes, and describe two types of tests:
+
+* GTests: those which use the Googletest and Chromium's `base/test/launcher/`
+ frameworks.
+* Telemetry based tests: those which are built on the Telemetry framework and
+ launch the entire browser.
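
The overall shape of the generated JSON can be sketched as follows. The bot and test names are illustrative; the real schema is defined by the script and the recipes that consume it:

```python
import json

# Sketch of the structure generate_buildbot_json.py emits: a map from bot
# name to its GTest-based and Telemetry-based test lists. Names here are
# illustrative only.
waterfall = {
    "Linux Release (NVIDIA)": {
        "gtest_tests": [
            {"test": "gl_tests",
             "swarming": {"can_use_on_swarming_builders": True}},
        ],
        "isolated_scripts": [
            {"isolate_name": "telemetry_gpu_integration_test",
             "name": "webgl_conformance_tests"},
        ],
    },
}

generated = json.dumps(waterfall, indent=2, sort_keys=True)
print(generated)
```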
+
+A prerequisite of adding a new test to the bots is that that test [run via
+isolates][new-isolates]. Once that is done, modify `generate_buildbot_json.py` to add the
+test to the appropriate set of bots. Be careful when adding large new test
+steps to all of the bots, because the GPU bots are a limited resource and do
+not currently have the capacity to absorb large new test suites. It is safer to
+get new tests running on the chromium.gpu.fyi waterfall first, and expand from
+there to the chromium.gpu waterfall (which will also make them run against
+every Chromium CL by virtue of the `linux_chromium_rel_ng`,
+`mac_chromium_rel_ng` and `win_chromium_rel_ng` tryservers' mirroring of the
+bots on this waterfall – so be careful!).
+
+Tryjobs which add new test steps to the chromium.gpu.json file will run those
+new steps during the tryjob, which helps ensure that the new test won't break
+once it starts running on the waterfall.
+
+Tryjobs which modify chromium.gpu.fyi.json can be sent to the
+`win_optional_gpu_tests_rel`, `mac_optional_gpu_tests_rel` and
+`linux_optional_gpu_tests_rel` tryservers to help ensure that they won't
+break the FYI bots.
+
+## Updating and Adding New Pixel Tests to the GPU Bots
+
+Adding new pixel tests which require reference images is a slightly more
+complex process than adding other kinds of tests which can validate their own
+correctness. There are a few reasons for this.
+
+* Reference image based pixel tests require different golden images for
+ different combinations of operating system, GPU, driver version, OS
+ version, and occasionally other variables.
+* The reference images must be generated by the main waterfall. The try
+ servers are not allowed to produce new reference images, only consume them.
+ The reason for this is that a patch sent to the try servers might cause an
+ incorrect reference image to be generated. For this reason, the main
+ waterfall bots upload reference images to cloud storage, and the try
+ servers download them and verify their results against them.
+* The try servers will fail if they run a pixel test requiring a reference
+ image that doesn't exist in cloud storage. This is deliberate, but needs
+ more thought; see [Issue 349262](http://crbug.com/349262).
+
+If a reference image based pixel test's result is going to change because of a
+change in ANGLE or Blink (for example), updating the reference images is a
+slightly tricky process. Here's how to do it:
+
+* Mark the pixel test as failing in the [pixel tests]' [test expectations]
+* Commit the change to ANGLE, Blink, etc. which will change the test's
+ results
+* Note that without the failure expectation, this commit would turn some bots
+ red; a Blink change will turn the GPU bots on the chromium.webkit waterfall
+ red, and an ANGLE change will turn the chromium.gpu.fyi bots red
+* Wait for Blink/ANGLE/etc. to roll
+* Commit a change incrementing the revision number associated with the test
+ in the [test pages]
+* Commit a second change removing the failure expectation, once all of the
+ bots on the main waterfall have generated new reference images. This change
+ should go through the commit queue cleanly.
+
+[pixel tests]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/pixel_test_pages.py
+[test expectations]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/pixel_expectations.py
+[test pages]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/pixel_test_pages.py
+
+When adding a brand new pixel test that uses a reference image, the steps are
+similar, but simpler:
+
+* Mark the test as failing in the same commit which introduces the new test
+* Wait for the reference images to be produced by all of the GPU bots on the
+  waterfalls (see the [cloud storage bucket])
+* Commit a change un-marking the test as failing
+
+When making a Chromium-side change which changes the pixel tests' results:
+
+* In your CL, both mark the pixel test as failing in the pixel test's test
+ expectations and increment the test's version number in the page set (see
+ above)
+* After your CL lands, land another CL removing the failure expectations. If
+ this second CL goes through the commit queue cleanly, you know reference
+ images were generated properly.
+
+In general, when adding a new pixel test, it's better to spot check a few
+pixels in the rendered image rather than using a reference image per platform.
+The [GPU rasterization test] is a good example of a recently added test which
+performs such spot checks.
+
+[cloud storage bucket]: https://console.developers.google.com/storage/chromium-gpu-archive/reference-images
+<!-- XXX: old link -->
+[GPU rasterization test]: http://src.chromium.org/viewvc/chrome/trunk/src/content/test/gpu/gpu_tests/gpu_rasterization.py
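
The spot-check idea can be sketched as follows; the coordinates, colors, and tolerance are illustrative, not taken from the actual test:

```python
# Sketch of a spot-check pixel test: instead of a per-platform reference
# image, assert on a handful of pixels with a per-channel tolerance.
# Coordinates, colors, and the tolerance value are illustrative.
def spot_check(image, expectations, tolerance=2):
    """image maps (x, y) -> (r, g, b); expectations is a list of
    ((x, y), (r, g, b)) pairs. Returns a list of failure messages."""
    failures = []
    for (x, y), want in expectations:
        got = image[(x, y)]
        if any(abs(g - w) > tolerance for g, w in zip(got, want)):
            failures.append(
                "pixel (%d, %d): got %r, want %r" % (x, y, got, want))
    return failures

# A mostly-red 2x2 "screenshot" with one badly off pixel.
screenshot = {(0, 0): (255, 0, 0), (1, 0): (254, 1, 0),
              (0, 1): (255, 0, 0), (1, 1): (200, 0, 0)}
print(spot_check(screenshot, [((0, 0), (255, 0, 0)),
                              ((1, 1), (255, 0, 0))]))
```

Small per-channel tolerances of this kind absorb harmless driver-to-driver rounding differences while still catching genuinely wrong output.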
+
+## Stamping out Flakiness
+
+It's critically important to aggressively investigate and eliminate the root
+cause of any flakiness seen on the GPU bots. The bots have been known to run
+reliably for days at a time, and any flaky failures that are tolerated on the
+bots translate directly into instability of the browser experienced by
+customers. Critical bugs in subsystems like WebGL, affecting high-profile
+products like Google Maps, have escaped notice in the past because the bots
+were unreliable. After much re-work, the GPU bots are now among the most
+reliable automated test machines in the Chromium project. Let's keep them that
+way.
+
+Flakiness affecting the GPU tests can come in from highly unexpected sources.
+Here are some examples:
+
+* Intermittent pixel_test failures on Linux where the captured pixels were
+ black, caused by the Display Power Management System (DPMS) kicking in.
+ Disabled the X server's built-in screen saver on the GPU bots in response.
+* GNOME dbus-related deadlocks causing intermittent timeouts ([Issue
+ 309093](http://crbug.com/309093) and related bugs).
+* Windows Audio system changes causing intermittent assertion failures in the
+ browser ([Issue 310838](http://crbug.com/310838)).
+* Enabling assertion failures in the C++ standard library on Linux causing
+ random assertion failures ([Issue 328249](http://crbug.com/328249)).
+* V8 bugs causing random crashes of the Maps pixel test (V8 issues
+ [3022](https://code.google.com/p/v8/issues/detail?id=3022),
+ [3174](https://code.google.com/p/v8/issues/detail?id=3174)).
+* TLS changes causing random browser process crashes ([Issue
+ 264406](http://crbug.com/264406)).
+* Isolated test execution flakiness caused by failures to reliably clean up
+ temporary directories ([Issue 340415](http://crbug.com/340415)).
+* The Telemetry-based WebGL conformance suite caught a bug in the memory
+ allocator on Android not caught by any other bot ([Issue
+ 347919](http://crbug.com/347919)).
+* context_lost test failures caused by the compositor's retry logic ([Issue
+ 356453](http://crbug.com/356453)).
+* Multiple bugs in Chromium's support for lost contexts causing flakiness of
+ the context_lost tests ([Issue 365904](http://crbug.com/365904)).
+* Maps test timeouts caused by Content Security Policy changes in Blink
+ ([Issue 395914](http://crbug.com/395914)).
+* Weak pointer assertion failures in various webgl\_conformance\_tests caused
+ by changes to the media pipeline ([Issue 399417](http://crbug.com/399417)).
+* A change to a default WebSocket timeout in Telemetry causing intermittent
+ failures to run all WebGL conformance tests on the Mac bots ([Issue
+ 403981](http://crbug.com/403981)).
+* Chrome leaking suspended sub-processes on Windows, apparently a preexisting
+ race condition that suddenly showed up ([Issue
+ 424024](http://crbug.com/424024)).
+* Changes to Chrome's cross-context synchronization primitives causing the
+ wrong tiles to be rendered ([Issue 584381](http://crbug.com/584381)).
+* A bug in V8's handling of array literals causing flaky failures of
+ texture-related WebGL 2.0 tests ([Issue 606021](http://crbug.com/606021)).
+* Assertion failures in sync point management related to lost contexts that
+ exposed a real correctness bug ([Issue 606112](http://crbug.com/606112)).
+* A bug in glibc's `sem_post`/`sem_wait` primitives breaking V8's parallel
+ garbage collection ([Issue 609249](http://crbug.com/609249)).
+
+If you notice flaky test failures either on the GPU waterfalls or try servers,
+please file bugs right away with the component Internals>GPU>Testing and
+include links to the failing builds and copies of the logs, since the logs
+expire after a few days. [GPU pixel wranglers] should give the highest priority
+to eliminating flakiness on the tree.
+
+[GPU pixel wranglers]: pixel_wrangling.md
diff --git a/chromium/docs/gpu/gpu_testing_bot_details.md b/chromium/docs/gpu/gpu_testing_bot_details.md
new file mode 100644
index 00000000000..a0638514465
--- /dev/null
+++ b/chromium/docs/gpu/gpu_testing_bot_details.md
@@ -0,0 +1,539 @@
+# GPU Bot Details
+
+This page describes in detail how the GPU bots are set up, which files affect
+their configuration, and how to both modify their behavior and add new bots.
+
+[TOC]
+
+## Overview of the GPU bots' setup
+
+Chromium's GPU bots, compared to the majority of the project's test machines,
+are physical pieces of hardware. When end users run the Chrome browser, they
+are almost surely running it on a physical piece of hardware with a real
+graphics processor. There are some portions of the code base which simply can
+not be exercised by running the browser in a virtual machine, or on a software
+implementation of the underlying graphics libraries. The GPU bots were
+developed and deployed in order to cover these code paths, and avoid
+regressions that are otherwise inevitable in a project the size of the Chromium
+browser.
+
+The GPU bots are utilized on the [chromium.gpu] and [chromium.gpu.fyi]
+waterfalls, and various tryservers, as described in [Using the GPU Bots].
+
+[chromium.gpu]: https://build.chromium.org/p/chromium.gpu/console
+[chromium.gpu.fyi]: https://build.chromium.org/p/chromium.gpu.fyi/console
+[Using the GPU Bots]: gpu_testing.md#Using-the-GPU-Bots
+
+The vast majority of the hardware for the bots lives in the Chrome-GPU Swarming
+pool. The waterfall bots are simply virtual machines which spawn Swarming tasks
+with the appropriate tags to get them to run on the desired GPU and operating
+system type. So, for example, the [Win10 Release (NVIDIA)] bot is actually a
+virtual machine which spawns all of its jobs with the Swarming parameters:
+
+[Win10 Release (NVIDIA)]: https://ci.chromium.org/buildbot/chromium.gpu/Win10%20Release%20%28NVIDIA%29/?limit=200
+
+```json
+{
+ "gpu": "10de:1cb3-23.21.13.8816",
+ "os": "Windows-10",
+ "pool": "Chrome-GPU"
+}
+```
+
+Since the GPUs in the Swarming pool are mostly homogeneous, this is sufficient
+to target the pool of Windows 10-like NVIDIA machines. (There are a few Windows
+7-like NVIDIA bots in the pool, which necessitates the OS specifier.)
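
Conceptually, Swarming's scheduling here is a subset match: a bot can run a task if every requested dimension is among the dimensions the bot advertises. A simplified sketch (real Swarming dimensions can be multi-valued; this version assumes single values, and the bot `id` is made up):

```python
# Sketch: a task can run on a bot when every requested dimension matches
# one the bot advertises. Real Swarming allows multiple values per
# dimension; this simplified version assumes single values.
def bot_matches(bot_dims, requested_dims):
    return all(bot_dims.get(key) == value
               for key, value in requested_dims.items())

nvidia_win10_bot = {
    "gpu": "10de:1cb3-23.21.13.8816",
    "os": "Windows-10",
    "pool": "Chrome-GPU",
    "id": "build999-a9",  # extra dimensions on the bot don't matter
}
task_request = {
    "gpu": "10de:1cb3-23.21.13.8816",
    "os": "Windows-10",
    "pool": "Chrome-GPU",
}
print(bot_matches(nvidia_win10_bot, task_request))  # True
win7_bot = dict(nvidia_win10_bot, os="Windows-7")
print(bot_matches(win7_bot, task_request))          # False
```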
+
+Details about the bots can be found on [chromium-swarm.appspot.com] and by
+using `src/tools/swarming_client/swarming.py`, for example `swarming.py bots`.
+If you are authenticated with @google.com credentials you will be able to make
+queries of the bots and see, for example, which GPUs are available.
+
+[chromium-swarm.appspot.com]: https://chromium-swarm.appspot.com/
+
+The waterfall bots run tests on a single GPU type in order to make it easier to
+see regressions or flakiness that affect only a certain type of GPU.
+
+The tryservers like `win_chromium_rel_ng` which include GPU tests, on the other
+hand, run tests on more than one GPU type. As of this writing, the Windows
+tryservers run tests on NVIDIA and AMD GPUs; the Mac tryservers run tests on
+Intel and NVIDIA GPUs. The way these tryservers' tests are specified is simply
+by *mirroring* how one or more waterfall bots work. This is an inherent
+property of the [`chromium_trybot` recipe][chromium_trybot.py], which was designed to eliminate
+differences in behavior between the tryservers and waterfall bots. Since the
+tryservers mirror waterfall bots, if the waterfall bot is working, the
+tryserver must almost inherently be working as well.
+
+[chromium_trybot.py]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes/chromium_trybot.py
+
+There are a few one-off GPU configurations on the waterfall where the tests are
+run locally on physical hardware, rather than via Swarming. A few examples are:
+
+<!-- XXX: update this list -->
+* [Mac Pro Release (AMD)](https://luci-milo.appspot.com/buildbot/chromium.gpu.fyi/Mac%20Pro%20Release%20%28AMD%29/)
+* [Mac Pro Debug (AMD)](https://luci-milo.appspot.com/buildbot/chromium.gpu.fyi/Mac%20Pro%20Debug%20%28AMD%29/)
+* [Linux Release (Intel HD 630)](https://luci-milo.appspot.com/buildbot/chromium.gpu.fyi/Linux%20Release%20%28Intel%20HD%20630%29/)
+* [Linux Release (AMD R7 240)](https://luci-milo.appspot.com/buildbot/chromium.gpu.fyi/Linux%20Release%20%28AMD%20R7%20240%29/)
+
+There are a couple of reasons to continue to support running tests on a
+specific machine: it might be too expensive to deploy the required multiple
+copies of said hardware, or the configuration might not be reliable enough to
+begin scaling it up.
+
+## Adding a new isolated test to the bots
+
+Adding a new test step to the bots requires that the test run via an isolate.
+Isolates describe both the binary and data dependencies of an executable, and
+are the underpinning of how the Swarming system works. See the [LUCI wiki] for
+background on Isolates and Swarming.
+
+<!-- XXX: broken link -->
+[LUCI wiki]: https://github.com/luci/luci-py/wiki
+
+### Adding a new isolate
+
+1. Define your target using the `template("test")` template in
+ [`src/testing/test.gni`][testing/test.gni]. See `test("gl_tests")` in
+ [`src/gpu/BUILD.gn`][gpu/BUILD.gn] for an example. For a more complex
+ example which invokes a series of scripts which finally launches the
+ browser, see [`src/chrome/telemetry_gpu_test.isolate`][telemetry_gpu_test.isolate].
+2. Add an entry to [`src/testing/buildbot/gn_isolate_map.pyl`][gn_isolate_map.pyl] that refers to
+ your target. Find a similar target to yours in order to determine the
+ `type`. The type is referenced in [`src/tools/mb/mb_config.pyl`][mb_config.pyl].
+
+[testing/test.gni]: https://chromium.googlesource.com/chromium/src/+/master/testing/test.gni
+[gpu/BUILD.gn]: https://chromium.googlesource.com/chromium/src/+/master/gpu/BUILD.gn
+<!-- XXX: broken link -->
+[telemetry_gpu_test.isolate]: https://chromium.googlesource.com/chromium/src/+/master/chrome/telemetry_gpu_test.isolate
+[gn_isolate_map.pyl]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/gn_isolate_map.pyl
+[mb_config.pyl]: https://chromium.googlesource.com/chromium/src/+/master/tools/mb/mb_config.pyl
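
A `gn_isolate_map.pyl` entry is a Python literal. A hypothetical entry might look like the following; the target name is made up, and the right `type` value should be copied from a similar existing entry rather than invented:

```python
# Hypothetical gn_isolate_map.pyl entry. The file itself is a Python
# literal dict; the target name here is made up, and the "type" value
# should be copied from a similar existing entry.
entry = {
    "my_new_gl_tests": {
        "label": "//gpu:my_new_gl_tests",
        "type": "console_test_launcher",
    },
}
print(sorted(entry["my_new_gl_tests"].keys()))
```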
+
+At this point you can build and upload your isolate to the isolate server.
+
+See [Isolated Testing for SWEs] for the most up-to-date instructions. These
+instructions are a copy which show how to run an isolate that's been uploaded
+to the isolate server on your local machine rather than on Swarming.
+
+[Isolated Testing for SWEs]: https://www.chromium.org/developers/testing/isolated-testing/for-swes
+
+If `cd`'d into `src/`:
+
+1. `./tools/mb/mb.py isolate //out/Release [target name]`
+ * For example: `./tools/mb/mb.py isolate //out/Release angle_end2end_tests`
+1. `python tools/swarming_client/isolate.py batcharchive -I https://isolateserver.appspot.com out/Release/[target name].isolated.gen.json`
+ * For example: `python tools/swarming_client/isolate.py batcharchive -I https://isolateserver.appspot.com out/Release/angle_end2end_tests.isolated.gen.json`
+1. This will write a hash to stdout. You can run it via:
+ `python tools/swarming_client/run_isolated.py -I https://isolateserver.appspot.com -s [HASH] -- [any additional args for the isolate]`
+
+See the section below on [isolate server credentials](#Isolate-server-credentials).
+
+### Adding your new isolate to the tests that are run on the bots
+
+See [Adding new steps to the GPU bots] for details on this process.
+
+[Adding new steps to the GPU bots]: gpu_testing.md#Adding-new-steps-to-the-GPU-Bots
+
+## Relevant files that control the operation of the GPU bots
+
+In the [tools/build] workspace:
+
+* [masters/master.chromium.gpu] and [masters/master.chromium.gpu.fyi]:
+ * builders.pyl in these two directories defines the bots that show up on
+ the waterfall. If you are adding a new bot, you need to add it to
+ builders.pyl and use go/bug-a-trooper to request a restart of either
+ master.chromium.gpu or master.chromium.gpu.fyi.
+ * Only changes under masters/ require a waterfall restart. All other
+ changes – for example, to scripts/slave/ in this workspace, or the
+ Chromium workspace – do not require a master restart (and go live the
+ minute they are committed).
+* `scripts/slave/recipe_modules/chromium_tests/`:
+ * <code>[chromium_gpu.py]</code> and
+ <code>[chromium_gpu_fyi.py]</code> define the following for
+ each builder and tester:
+ * How the workspace is checked out (e.g., this is where top-of-tree
+ ANGLE is specified)
+ * The build configuration (e.g., this is where 32-bit vs. 64-bit is
+ specified)
+ * Various gclient defines (like compiling in the hardware-accelerated
+ video codecs, and enabling compilation of certain tests, like the
+ dEQP tests, that can't be built on all of the Chromium builders)
+ * Note that the GN configuration of the bots is also controlled by
+ <code>[mb_config.pyl]</code> in the Chromium workspace; see below.
+ * <code>[trybots.py]</code> defines how try bots *mirror* one or more
+ waterfall bots.
+ * The concept of try bots mirroring waterfall bots ensures there are
+ no differences in behavior between the waterfall bots and the try
+ bots. This helps ensure that a CL will not pass the commit queue
+ and then break on the waterfall.
+ * This file defines the behavior of the following GPU-related try
+ bots:
+ * `linux_chromium_rel_ng`, `mac_chromium_rel_ng`, and
+ `win_chromium_rel_ng`, which run against every Chromium CL, and
+ which mirror the behavior of bots on the chromium.gpu
+ waterfall.
+ * The ANGLE try bots, which run against ANGLE CLs, and mirror the
+ behavior of the chromium.gpu.fyi waterfall (including using
+ top-of-tree ANGLE, and running additional tests not run by the
+ regular Chromium try bots)
+ * The optional GPU try servers `linux_optional_gpu_tests_rel`,
+ `mac_optional_gpu_tests_rel` and
+ `win_optional_gpu_tests_rel`, which are triggered manually and
+ run some tests which can't be run on the regular Chromium try
+ servers mainly due to lack of hardware capacity.
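+
+As a sketch of the mirroring concept, an entry in `trybots.py` ties a try bot
+name to an existing waterfall builder/tester pair. The structure and names
+below are illustrative only, not the literal contents of the file:
+
+```python
+# Hypothetical mirror entry: the try bot reuses the configuration of a
+# waterfall builder and runs the tests of its associated tester.
+'win_optional_gpu_tests_rel': {
+    'mastername': 'chromium.gpu.fyi',
+    'buildername': 'GPU Win Builder',
+    'tester': 'Optional Win7 Release (NVIDIA)',
+},
+```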
+
+[tools/build]: https://chromium.googlesource.com/chromium/tools/build/
+[masters/master.chromium.gpu]: https://chromium.googlesource.com/chromium/tools/build/+/master/masters/master.chromium.gpu/
+[masters/master.chromium.gpu.fyi]: https://chromium.googlesource.com/chromium/tools/build/+/master/masters/master.chromium.gpu.fyi/
+[chromium_gpu.py]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests/chromium_gpu.py
+[chromium_gpu_fyi.py]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py
+[trybots.py]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests/trybots.py
+
+In the [chromium/src] workspace:
+
+* [src/testing/buildbot]:
+ * <code>[chromium.gpu.json]</code> and
+ <code>[chromium.gpu.fyi.json]</code> define which steps are run on
+ which bots. These files are autogenerated. Don't modify them directly!
+ * <code>[gn_isolate_map.pyl]</code> defines all of the isolates' behavior in the GN
+ build.
+* [`src/tools/mb/mb_config.pyl`][mb_config.pyl]
+ * Defines the GN arguments for all of the bots.
+* [`src/content/test/gpu/generate_buildbot_json.py`][generate_buildbot_json.py]
+ * The generator script for `chromium.gpu.json` and
+ `chromium.gpu.fyi.json`. It defines on which GPUs various tests run.
+ * It's completely self-contained and should hopefully be fairly
+ comprehensible.
+ * When modifying this script, don't forget to also run it, to regenerate
+ the JSON files.
+ * See [Adding new steps to the GPU bots] for more details.
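+
+For example, after editing the script, the JSON files can be regenerated with
+something along these lines (assuming a Chromium checkout at `src/`):
+
+```shell
+cd src/content/test/gpu
+./generate_buildbot_json.py  # rewrites chromium.gpu.json and chromium.gpu.fyi.json
+```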
+
+[chromium/src]: https://chromium.googlesource.com/chromium/src/
+[src/testing/buildbot]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot
+[chromium.gpu.json]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.json
+[chromium.gpu.fyi.json]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.fyi.json
+[gn_isolate_map.pyl]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/gn_isolate_map.pyl
+[mb_config.pyl]: https://chromium.googlesource.com/chromium/src/+/master/tools/mb/mb_config.pyl
+[generate_buildbot_json.py]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/generate_buildbot_json.py
+
+In the [infradata/config] workspace (Google internal only, sorry):
+
+* [configs/chromium-swarm/bots.cfg]
+ * Defines a `Chrome-GPU` Swarming pool which contains most of the
+ specialized hardware: as of this writing, the Windows and Linux NVIDIA
+ bots, the Windows AMD bots, and the MacBook Pros with NVIDIA and AMD
+ GPUs. New GPU hardware should be added to this pool.
+
+[infradata/config]: https://chrome-internal.googlesource.com/infradata/config
+[configs/chromium-swarm/bots.cfg]: https://chrome-internal.googlesource.com/infradata/config/+/master/configs/chromium-swarm/bots.cfg
+
+## Walkthroughs of various maintenance scenarios
+
+This section describes various common scenarios that might arise when
+maintaining the GPU bots, and how they'd be addressed.
+
+### How to add a new test or an entire new step to the bots
+
+This is described in [Adding new tests to the GPU bots].
+
+[Adding new tests to the GPU bots]: https://www.chromium.org/developers/testing/gpu-testing/#TOC-Adding-New-Tests-to-the-GPU-Bots
+
+### How to add a new bot
+
+The first decision point when adding a new GPU bot is whether it is a one-off
+piece of hardware, or one which is expected to be scaled up at some point. If
+it's a one-off piece of hardware, it can be added to the chromium.gpu.fyi
+waterfall as a non-swarmed test machine. If it's expected to be scaled up at
+some point, the hardware should be added to the swarming pool. These two
+scenarios are described in more detail below.
+
+#### How to add a new, non-swarmed, physical bot to the chromium.gpu.fyi waterfall
+
+1. Work with the Chrome Infrastructure Labs team to get the hardware deployed
+ so it can talk to the chromium.gpu.fyi master.
+1. Create a CL in the build workspace which:
+ 1. Add the new machine to
+ [`masters/master.chromium.gpu.fyi/builders.pyl`][master.chromium.gpu.fyi/builders.pyl].
+ 1. Add the new machine to
+ [`scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py`][chromium_gpu_fyi.py].
+ Set the `enable_swarming` property to `False`.
+ 1. Retrain recipe expectations
+ (`scripts/slave/recipes.py --use-bootstrap test train`) and add the
+ newly created JSON file(s) corresponding to the new machines to your CL.
+1. Create a CL in the Chromium workspace to:
+ 1. Add the new machine to
+ [`src/content/test/gpu/generate_buildbot_json.py`][generate_buildbot_json.py].
+ Make sure to set the `swarming` property to `False`.
+ 1. If the machine runs GN, add a description to
+ [`src/tools/mb/mb_config.pyl`][mb_config.pyl].
+1. Once the build workspace CL lands, use go/bug-a-trooper (or contact kbr@)
+ to schedule a restart of the chromium.gpu.fyi waterfall. This is only
+ necessary when modifying files under the masters/ directory. A reboot of
+ the machine may be needed once the waterfall has been restarted in order to
+ make it connect properly.
+1. The CLs from (2) and (3) can land in either order, though it is preferable
+ to land the Chromium-side CL first so that the machine knows what tests to
+ run the first time it boots up.
+
+[master.chromium.gpu.fyi/builders.pyl]: https://chromium.googlesource.com/chromium/tools/build/+/master/masters/master.chromium.gpu.fyi/builders.pyl
+
+#### How to add a new swarmed bot to the chromium.gpu.fyi waterfall
+
+When deploying a new GPU configuration, it should be added to the
+chromium.gpu.fyi waterfall first. The chromium.gpu waterfall should be reserved
+for those GPUs which are tested on the commit queue. (Some of the bots violate
+this rule – namely, the Debug bots – though we should strive to eliminate these
+differences.) Once the new configuration is ready to be fully deployed on
+tryservers, bots can be added to the chromium.gpu waterfall, and the tryservers
+changed to mirror them.
+
+Experience has shown that at least 4 physical machines are needed in the
+swarming pool in order to add Release and Debug waterfall bots for a new
+configuration. The reason is that the tests all run in parallel on the Swarming
+cluster, so the load induced on the swarming bots is higher than it would be
+for a non-swarmed bot that executes its tests serially.
+
+With these prerequisites met, these are the steps to add a new swarmed bot
+(actually, a pair of bots: Release and Debug).
+
+1. Work with the Chrome Infrastructure Labs team to get the (minimum 4)
+ physical machines added to the Swarming pool. Use
+ [chromium-swarm.appspot.com] or `src/tools/swarming_client/swarming.py bots`
+ to determine the PCI IDs of the GPUs in the bots. (These instructions will
+ need to be updated for Android bots which don't have PCI buses.)
+ 1. Make sure to add these new machines to the Chrome-GPU Swarming pool by
+ creating a CL against [`configs/chromium-swarm/bots.cfg`][bots.cfg] in
+ the [infradata/config] workspace.
+1. File a Chrome Infrastructure Labs ticket requesting 2 virtual machines for
+ the testers. These need to match the OS of the physical machines and
+ builders because of limitations in the scripts which transfer builds from
+ the builder to the tester; see [this feature
+ request](http://crbug.com/581953). For example, if you're adding a "Windows
+ 7 CoolNewGPUType" tester, you'll need 2 Windows VMs.
+1. Once the VMs are ready, create a CL in the build workspace which:
+ 1. Adds the new VMs as the Release and Debug bots in
+ [`master.chromium.gpu.fyi/builders.pyl`][master.chromium.gpu.fyi/builders.pyl].
+ 1. Adds the new VMs to [`chromium_gpu_fyi.py`][chromium_gpu_fyi.py]. Make
+ sure to set the `enable_swarming` and `serialize_tests` properties to
+ `True`. Double-check the `parent_buildername` property for each. It
+ must match the Release/Debug flavor of the builder.
+ 1. Retrain recipe expectations
+ (`scripts/slave/recipes.py --use-bootstrap test train`) and add the
+ newly created JSON file(s) corresponding to the new machines to your CL.
+1. Create a CL in the Chromium workspace which:
+ 1. Adds the new machine to
+ `src/content/test/gpu/generate_buildbot_json.py`.
+ 1. The swarming dimensions are crucial. These must match the GPU and
+ OS type of the physical hardware in the Swarming pool. This is what
+ causes the VMs to spawn their tests on the correct hardware. Make
+ sure to use the Chrome-GPU pool, and that the new machines were
+ specifically added to that pool.
+ 1. Make sure to set the `swarming` property to `True` for both the
+ Release and Debug bots.
+ 1. Make triply sure that there are no collisions between the new
+ hardware you're adding and hardware already in the Swarming pool.
+ For example, it used to be the case that all of the Windows NVIDIA
+ bots ran the same OS version. Later, the Windows 8 flavor bots were
+ added. In order to avoid accidentally running tests on Windows 8
+ when Windows 7 was intended, the OS in the swarming dimensions of
+ the Win7 bots had to be changed from `win` to
+ `Windows-2008ServerR2-SP1` (the Win7-like flavor running in our
+ data center). Similarly, the Win8 bots had to have a very precise
+ OS description (`Windows-2012ServerR2-SP0`).
+ 1. If the machine runs GN, adds a description to
+ [`src/tools/mb/mb_config.pyl`][mb_config.pyl].
+1. Once the tools/build CL lands, use go/bug-a-trooper (or contact kbr@) to
+ schedule a restart of the chromium.gpu.fyi waterfall. This is only
+ necessary when modifying files under the masters/ directory. A reboot of
+ the VMs may be needed once the waterfall has been restarted in order to
+ make them connect properly.
+1. The CLs from (3) and (4) can land in either order, though it is preferable
+ to land the Chromium-side CL first so that the machine knows what tests to
+ run the first time it boots up.
+
+[bots.cfg]: https://chrome-internal.googlesource.com/infradata/config/+/master/configs/chromium-swarm/bots.cfg
+[infradata/config]: https://chrome-internal.googlesource.com/infradata/config/
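+
+As a sketch, the swarming dimensions for a hypothetical "Win7 CoolNewGPUType"
+tester in `generate_buildbot_json.py` would look something like the following.
+The PCI ID, OS string, and exact key names are illustrative; consult the
+existing entries in the script for the precise format:
+
+```python
+# Hypothetical swarming dimensions for a Win7 NVIDIA-like configuration.
+'swarming_dimensions': [{
+    'gpu': '10de:104a',                # PCI vendor:device ID of the physical GPU
+    'os': 'Windows-2008ServerR2-SP1',  # precise OS, so jobs never land on Win8
+    'pool': 'Chrome-GPU',              # the dedicated GPU swarming pool
+}],
+```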
+
+#### How to start running tests on a new GPU type on an existing try bot
+
+Let's say that you want to cause the `win_chromium_rel_ng` try bot to run tests
+on CoolNewGPUType in addition to the types it currently runs (as of this
+writing, NVIDIA and AMD). To do this:
+
+1. Make sure there is enough hardware capacity. Unfortunately, tools to report
+ utilization of the Swarming pool are still being developed, but a
+ back-of-the-envelope estimate is that you will need a minimum of 30
+ machines in the Swarming pool to run the current set of GPU tests on the
+ tryservers. We estimate that 90 machines will be needed in order to
+ additionally run the WebGL 2.0 conformance tests. Plan for the larger
+ capacity, as it's desired to run the larger test suite on as many
+ configurations as possible.
+2. Deploy Release and Debug testers on the chromium.gpu waterfall, following
+ the instructions for the chromium.gpu.fyi waterfall above. You will also
+ need to temporarily add suppressions to
+ [`tests/masters_recipes_test.py`][tests/masters_recipes_test.py] for these
+ new testers since they aren't yet covered by try bots and are going on a
+ non-FYI waterfall. Make sure these run green for a day or two before
+ proceeding.
+3. Create a CL in the tools/build workspace, adding the new Release tester
+ to `win_chromium_rel_ng`'s `bot_ids` list
+ in `scripts/slave/recipe_modules/chromium_tests/trybots.py`. Rerun
+ `scripts/slave/recipes.py --use-bootstrap test train`.
+4. Once the CL in (3) lands, the commit queue will **immediately** start
+ running tests on the CoolNewGPUType configuration. Be vigilant and make
+ sure that tryjobs are green. If they are red for any reason, revert the CL
+ and figure out offline what went wrong.
+
+[tests/masters_recipes_test.py]: https://chromium.googlesource.com/chromium/tools/build/+/master/tests/masters_recipes_test.py
+
+#### How to add a new optional try bot
+
+The "optional" GPU try bots are a concession to the reality that there are some
+long-running GPU test suites that simply cannot run against every Chromium CL.
+They run some additional tests that are usually run only on the
+chromium.gpu.fyi waterfall. Some of these tests, like the WebGL 2.0 conformance
+suite, are intended to be run on the normal try bots once hardware capacity is
+available. Some are not intended to ever run on the normal try bots.
+
+The optional try bots are a little different because they mirror waterfall bots
+that don't actually exist. The waterfall bots' specifications exist only to
+tell the optional try bots which tests to run.
+
+Let's say that you intend to add a new optional try bot on Windows, called,
+for example, `win_new_optional_tests_rel`. If you just wanted to run tests on
+a new GPU type on the existing `win_optional_gpu_tests_rel` try bot, you'd
+follow the instructions above
+([How to start running tests on a new GPU type on an existing try bot](#How-to-start-running-tests-on-a-new-GPU-type-on-an-existing-try-bot)).
+The steps below describe how to spin up an entire new optional try bot.
+
+1. Make sure that you have some swarming capacity for the new GPU type. Since
+ it's not running against all Chromium CLs you don't need the recommended 30
+ minimum bots, though ~10 would be good.
+1. Create a CL in the Chromium workspace:
+ 1. Add your new bot (for example, "Optional Win7 Release
+ (CoolNewGPUType)") to the chromium.gpu.fyi waterfall in
+ [generate_buildbot_json.py]. (Note, this is a bad example: the
+ "optional" bots have special semantics in this script. You'd probably
+ want to define some new category of bot if you didn't intend to add
+ this to `win_optional_gpu_tests_rel`.)
+ 1. Re-run the script to regenerate the JSON files.
+1. Land the above CL.
+1. Create a CL in the tools/build workspace:
+ 1. Modify `masters/master.tryserver.chromium.win`'s [master.cfg] and
+ [slaves.cfg] to add the new tryserver. Follow the pattern for the
+ existing `win_optional_gpu_tests_rel` tryserver. Namely, add the new
+ entry to master.cfg, and add the new tryserver to the
+ `optional_builders` list in `slaves.cfg`.
+ 1. Modify [`chromium_gpu_fyi.py`][chromium_gpu_fyi.py] to add the new
+ "Optional Win7 Release (CoolNewGPUType)" entry.
+ 1. Modify [`trybots.py`][trybots.py] to add
+ the new `win_new_optional_tests_rel` try bot, mirroring "Optional
+ Win7 Release (CoolNewGPUType)".
+1. Land the above CL and request an off-hours restart of the
+ tryserver.chromium.win waterfall.
+1. Now you can send CLs to the new bot with:
+ `git cl try -m tryserver.chromium.win -b win_new_optional_tests_rel`
+
+[master.cfg]: https://chromium.googlesource.com/chromium/tools/build/+/master/masters/master.tryserver.chromium.win/master.cfg
+[slaves.cfg]: https://chromium.googlesource.com/chromium/tools/build/+/master/masters/master.tryserver.chromium.win/slaves.cfg
+
+#### How to test and deploy a driver update
+
+Let's say that you want to roll out an update to the graphics drivers on one of
+the configurations like the Win7 NVIDIA bots. The responsible way to do this is
+to run the new driver on one of the waterfalls for a day or two to make sure
+the tests are running reliably green before rolling out the driver update
+everywhere. To do this:
+
+1. Work with the Chrome Infrastructure Labs team to deploy a single,
+ non-swarmed, physical machine on the chromium.gpu.fyi waterfall running the
+ new driver. The OS and GPU should exactly match the configuration you
+ intend to upgrade. See
+ [How to add a new, non-swarmed, physical bot to the chromium.gpu.fyi waterfall](#How-to-add-a-new_non-swarmed_physical-bot-to-the-chromium_gpu_fyi-waterfall).
+2. Hopefully, the new machine will pass the pixel tests. If it doesn't, then
+ unfortunately, it'll be necessary to follow the instructions on
+ [updating the pixel tests] to temporarily suppress the failures on this
+ particular configuration. Keep the time window for these test suppressions
+ as narrow as possible.
+3. Watch the new machine for a day or two to make sure it's stable.
+4. When it is, ask the Chrome Infrastructure Labs team to roll out the driver
+ update across all of the similarly configured bots in the swarming pool.
+5. If necessary, update pixel test expectations and remove the suppressions
+ added above.
+6. Prepare and land a CL removing the temporary machine from the
+ chromium.gpu.fyi waterfall. Request a waterfall restart.
+7. File a ticket with the Chrome Infrastructure Labs team to reclaim the
+ temporary machine.
+
+Note that with recent improvements to Swarming, in particular [this
+RFE](https://github.com/luci/luci-py/issues/253) and others, these steps are no
+longer strictly necessary – it's possible to target Swarming jobs at a
+particular driver version. If
+[`generate_buildbot_json.py`][generate_buildbot_json.py] were improved to be
+more specific about the driver version on the various bots, then the machines
+with the new drivers could simply be added to the Swarming pool, and this
+process could be a lot simpler. Patches welcome. :)
+
+[updating the pixel tests]: https://www.chromium.org/developers/testing/gpu-testing/#TOC-Updating-and-Adding-New-Pixel-Tests-to-the-GPU-Bots
+
+## Credentials for various servers
+
+Working with the GPU bots requires credentials to various services: the isolate
+server, the swarming server, and cloud storage.
+
+### Isolate server credentials
+
+To upload and download isolates you must first authenticate to the isolate
+server. From a Chromium checkout, run:
+
+* `./src/tools/swarming_client/auth.py login
+ --service=https://isolateserver.appspot.com`
+
+This will open a web browser to complete the authentication flow. A @google.com
+email address is required in order to properly authenticate.
+
+To test your authentication, find a hash for a recent isolate. Consult the
+instructions on [Running Binaries from the Bots Locally] to find a random hash
+from a target like `gl_tests`. Then run the following:
+
+[Running Binaries from the Bots Locally]: https://www.chromium.org/developers/testing/gpu-testing#TOC-Running-Binaries-from-the-Bots-Locally
+
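+A download invocation along the following lines exercises the credentials
+(substitute the hash you found; flag spellings may vary between
+swarming_client versions, so check `isolateserver.py download --help`):
+
+```shell
+./src/tools/swarming_client/isolateserver.py download \
+    -I https://isolateserver.appspot.com \
+    -f <hash> delete_me
+```
+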
+If authentication succeeded, this will silently download a file called
+`delete_me` into the current working directory. If it failed, the script will
+report multiple authentication errors. In this case, use the following command
+to log out and then try again:
+
+* `./src/tools/swarming_client/auth.py logout
+ --service=https://isolateserver.appspot.com`
+
+### Swarming server credentials
+
+The swarming server uses the same `auth.py` script as the isolate server. You
+will need to authenticate if you want to manually download the results of
+previous swarming jobs, trigger your own jobs, or run `swarming.py reproduce`
+to re-run a remote job on your local workstation. Follow the instructions
+above, replacing the service with `https://chromium-swarm.appspot.com`.
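+That is:
+
+* `./src/tools/swarming_client/auth.py login
+  --service=https://chromium-swarm.appspot.com`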
+
+### Cloud storage credentials
+
+Authentication to Google Cloud Storage is needed for a couple of reasons:
+uploading pixel test results to the cloud, and potentially uploading and
+downloading builds as well, at least in Debug mode. Use the copy of gsutil in
+`depot_tools/third_party/gsutil/gsutil`, and follow the [Google Cloud Storage
+instructions] to authenticate. You must use your @google.com email address and
+be a member of the Chrome GPU team in order to receive read-write access to the
+appropriate cloud storage buckets. Roughly:
+
+1. Run `gsutil config`
+2. Copy/paste the URL into your browser
+3. Log in with your @google.com account
+4. Allow the app to access the information it requests
+5. Copy-paste the resulting key back into your Terminal
+6. Press "enter" when prompted for a project-id (i.e., leave it empty)
+
+At this point you should be able to write to the cloud storage bucket.
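+
+A quick way to confirm access is to list the bucket's contents, e.g.:
+
+```shell
+gsutil ls gs://chromium-gpu-archive
+```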
+
+Navigate to
+<https://console.developers.google.com/storage/chromium-gpu-archive> to view
+the contents of the cloud storage bucket.
+
+[Google Cloud Storage instructions]: https://developers.google.com/storage/docs/gsutil
diff --git a/chromium/docs/gpu/images/wrangler.png b/chromium/docs/gpu/images/wrangler.png
new file mode 100644
index 00000000000..ff269644202
--- /dev/null
+++ b/chromium/docs/gpu/images/wrangler.png
Binary files differ
diff --git a/chromium/docs/gpu/pixel_wrangling.md b/chromium/docs/gpu/pixel_wrangling.md
new file mode 100644
index 00000000000..7d1fcda47fb
--- /dev/null
+++ b/chromium/docs/gpu/pixel_wrangling.md
@@ -0,0 +1,298 @@
+# GPU Bots & Pixel Wrangling
+
+![](images/wrangler.png)
+
+(December 2017: presentation on GPU bots and pixel wrangling: see [slides].)
+
+GPU Pixel Wrangling is the process of keeping various GPU bots green. On the
+GPU bots, tests run on physical hardware with real GPUs, not in VMs like the
+majority of the bots on the Chromium waterfall.
+
+[slides]: https://docs.google.com/presentation/d/1sZjyNe2apUhwr5sinRfPs7eTzH-3zO0VQ-Cj-8DlEDQ/edit?usp=sharing
+
+[TOC]
+
+## Fleet Status
+
+The following links (sorry, Google employees only) show the status of various
+GPU bots in the fleet.
+
+Primary configurations:
+
+* [Windows 10 Quadro P400 Pool](http://shortn/_dmtaFfY2Jq)
+* [Windows 10 Intel HD 630 Pool](http://shortn/_QsoGIGIFYd)
+* [Linux Quadro P400 Pool](http://shortn/_fNgNs1uROQ)
+* [Linux Intel HD 630 Pool](http://shortn/_dqEGjCGMHT)
+* [Mac AMD Retina 10.12.6 GPU Pool](http://shortn/_BcrVmfRoSo)
+* [Mac Mini Chrome Pool](http://shortn/_Ru8NESapPM)
+* [Android Nexus 5X Chrome Pool](http://shortn/_G3j7AVmuNR)
+
+Secondary configurations:
+
+* [Windows 7 Quadro P400 Pool](http://shortn/_cuxSKC15UX)
+* [Windows AMD R7 240 GPU Pool](http://shortn/_XET7RTMHQm)
+* [Mac NVIDIA Retina 10.12.6 GPU Pool](http://shortn/_jQWG7W71Ek)
+
+## GPU Bots' Waterfalls
+
+The waterfalls work much like any other; see the [Tour of the Chromium Buildbot
+Waterfall] for a more detailed explanation of how this is laid out. We have
+more subtle configurations because the GPU matters, not just the OS and release
+v. debug. Hence we have Windows Nvidia Release bots, Mac Intel Debug bots, and
+so on. The waterfalls we’re interested in are:
+
+* [Chromium GPU]
+ * Various operating systems, configurations, GPUs, etc.
+* [Chromium GPU FYI]
+ * These bots run less-standard configurations like Windows with AMD GPUs,
+ Linux with Intel GPUs, etc.
+ * These bots build with top of tree ANGLE rather than the `DEPS` version.
+ * The [ANGLE tryservers] help ensure that these bots stay green. However,
+ it is possible that due to ANGLE changes these bots may be red while
+ the chromium.gpu bots are green.
+ * The [ANGLE Wrangler] is on-call to help resolve ANGLE-related breakage
+      on this waterfall.
+ * To determine if a different ANGLE revision was used between two builds,
+ compare the `got_angle_revision` buildbot property on the GPU builders
+ or `parent_got_angle_revision` on the testers. This revision can be
+ used to do a `git log` in the `third_party/angle` repository.
+
+<!-- TODO(kainino): update link when the page is migrated -->
+[Tour of the Chromium Buildbot Waterfall]: http://www.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot
+[Chromium GPU]: https://ci.chromium.org/p/chromium/g/chromium.gpu/console?reload=120
+[Chromium GPU FYI]: https://ci.chromium.org/p/chromium/g/chromium.gpu.fyi/console?reload=120
+[ANGLE tryservers]: https://build.chromium.org/p/tryserver.chromium.angle/waterfall
+<!-- TODO(kainino): update link when the page is migrated -->
+[ANGLE Wrangler]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/angle-wrangling
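+
+For example, given the `got_angle_revision` values from a last-good and a
+first-bad build, the ANGLE changes in between can be listed with something
+like:
+
+```shell
+cd src/third_party/angle
+git log --oneline <last_good_revision>..<first_bad_revision>
+```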
+
+## Test Suites
+
+The bots run several test suites. The majority of them have been migrated to
+the Telemetry harness, and are run within the full browser, in order to better
+test the code that is actually shipped. As of this writing, the tests include:
+
+* Tests using the Telemetry harness:
+ * The WebGL conformance tests: `webgl_conformance_integration_test.py`
+ * A Google Maps test: `maps_integration_test.py`
+ * Context loss tests: `context_lost_integration_test.py`
+ * Depth capture tests: `depth_capture_integration_test.py`
+ * GPU process launch tests: `gpu_process_integration_test.py`
+ * Hardware acceleration validation tests:
+ `hardware_accelerated_feature_integration_test.py`
+ * Pixel tests validating the end-to-end rendering pipeline:
+ `pixel_integration_test.py`
+ * Stress tests of the screenshot functionality other tests use:
+ `screenshot_sync_integration_test.py`
+* `angle_unittests`: see `src/gpu/gpu.gyp`
+* drawElements tests (on the chromium.gpu.fyi waterfall): see
+ `src/third_party/angle/src/tests/BUILD.gn`
+* `gles2_conform_test` (requires internal sources): see
+ `src/gpu/gles2_conform_support/gles2_conform_test.gyp`
+* `gl_tests`: see `src/gpu/BUILD.gn`
+* `gl_unittests`: see `src/ui/gl/BUILD.gn`
+
+And more. See `src/content/test/gpu/generate_buildbot_json.py` for the
+complete description of bots and tests.
+
+Additionally, the Release bots run:
+
+* `tab_capture_end2end_tests`: see
+ `src/chrome/browser/extensions/api/tab_capture/tab_capture_apitest.cc` and
+ `src/chrome/browser/extensions/api/cast_streaming/cast_streaming_apitest.cc`
+
+### More Details
+
+More details about the bots' setup can be found on the [GPU Testing] page.
+
+[GPU Testing]: https://sites.google.com/a/chromium.org/dev/developers/testing/gpu-testing
+
+## Wrangling
+
+### Prerequisites
+
+1. Ideally a wrangler should be a Chromium committer. If you're on the GPU
+pixel wrangling rotation, there will be an email notifying you of the upcoming
+shift, and a calendar appointment.
+ * If you aren't a committer, don't panic. It's still best for everyone on
+ the team to become acquainted with the procedures of maintaining the
+ GPU bots.
+ * In this case you'll upload CLs to Gerrit to perform reverts (optionally
+ using the new "Revert" button in the UI), and might consider using
+ `TBR=` to speed through trivial and urgent CLs. In general, try to send
+ all CLs through the commit queue.
+ * Contact bajones, kainino, kbr, vmiura, zmo, or another member of the
+ Chrome GPU team who's already a committer for help landing patches or
+ reverts during your shift.
+2. Apply for [access to the bots].
+
+[access to the bots]: https://sites.google.com/a/google.com/chrome-infrastructure/golo/remote-access?pli=1
+
+### How to Keep the Bots Green
+
+1. Watch for redness on the tree.
+ 1. [Sheriff-O-Matic now has support for the chromium.gpu.fyi waterfall]!
+ 1. The chromium.gpu bots are covered under Sheriff-O-Matic's [Chromium
+ tab]. As pixel wrangler, ignore any non-GPU test failures in this tab.
+ 1. The bots are expected to be green all the time. Flakiness on these bots
+ is neither expected nor acceptable.
+ 1. If a bot goes consistently red, it's necessary to figure out whether a
+ recent CL caused it, or whether it's a problem with the bot or
+ infrastructure.
+ 1. If it looks like a problem with the bot (deep problems like failing to
+ check out the sources, the isolate server failing, etc.) notify the
+ Chromium troopers and file a P1 bug with labels: Infra\>Labs,
+ Infra\>Troopers and Internals\>GPU\>Testing. See the general [tree
+ sheriffing page] for more details.
+ 1. Otherwise, examine the builds just before and after the redness was
+ introduced. Look at the revisions in the builds before and after the
+ failure was introduced.
+ 1. **File a bug** capturing the regression range and excerpts of any
+ associated logs. Regressions should be marked P1. CC engineers who you
+ think may be able to help triage the issue. Keep in mind that the logs
+ on the bots expire after a few days, so make sure to add copies of
+ relevant logs to the bug report.
+ 1. Use the `Hotlist=PixelWrangler` label to mark bugs that require the
+ pixel wrangler's attention, so it's easy to find relevant bugs when
+ handing off shifts.
+ 1. Study the regression range carefully. Use drover to revert any CLs
+ which break the chromium.gpu bots. Use your judgment about
+ chromium.gpu.fyi, since not all bots are covered by trybots. In the
+ revert message, provide a clear description of what broke, links to
+ failing builds, and excerpts of the failure logs, because the build
+ logs expire after a few days.
+1. Make sure the bots are running jobs.
+ 1. Keep an eye on the console views of the various bots.
+ 1. Make sure the bots are all actively processing jobs. If they go offline
+ for a long period of time, the "summary bubble" at the top may still be
+ green, but the column in the console view will be gray.
+ 1. Email the Chromium troopers if you find a bot that's not processing
+ jobs.
+1. Make sure the GPU try servers are in good health.
+ 1. The GPU try servers are no longer distinct bots on a separate
+ waterfall, but instead run as part of the regular tryjobs on the
+ Chromium waterfalls. The GPU tests run as part of the following
+ tryservers' jobs:
+ 1. <code>[linux_chromium_rel_ng]</code> on the [luci.chromium.try]
+ waterfall
+<!-- TODO(kainino): update link to luci.chromium.try -->
+ 1. <code>[mac_chromium_rel_ng]</code> on the [tryserver.chromium.mac]
+ waterfall
+<!-- TODO(kainino): update link to luci.chromium.try -->
+ 1. <code>[win7_chromium_rel_ng]</code> on the [tryserver.chromium.win]
+ waterfall
+ 1. The best tool to use to quickly find flakiness on the tryservers is the
+ new [Chromium Try Flakes] tool. Look for the names of GPU tests (like
+ maps_pixel_test) as well as the test machines (e.g.
+ mac_chromium_rel_ng). If you see a flaky test, file a bug like [this
+ one](http://crbug.com/444430). Also look for compile flakes that may
+ indicate that a bot needs to be clobbered. Contact the Chromium
+ sheriffs or troopers if so.
+ 1. Glance at these trybots from time to time and see if any GPU tests are
+ failing frequently. **Note** that test failures are **expected** on
+ these bots: individuals' patches may fail to apply, fail to compile, or
+ break various tests. Look specifically for patterns in the failures. It
+ isn't necessary to spend a lot of time investigating each individual
+ failure. (Use the "Show: 200" link at the bottom of the page to see
+ more history.)
+    1. If the same set of tests is failing repeatedly, look at the individual
+ runs. Examine the swarming results and see whether they're all running
+ on the same machine. (This is the "Bot assigned to task" when clicking
+ any of the test's shards in the build logs.) If they are, something
+ might be wrong with the hardware. Use the [Swarming Server Stats] tool
+ to drill down into the specific builder.
+ 1. If you see the same test failing in a flaky manner across multiple
+ machines and multiple CLs, it's crucial to investigate why it's
+ happening. [crbug.com/395914](http://crbug.com/395914) was one example
+ of an innocent-looking Blink change which made it through the commit
+ queue and introduced widespread flakiness in a range of GPU tests. The
+ failures were also most visible on the try servers as opposed to the
+ main waterfalls.
+1. Check if any pixel test failures are actual failures or need to be
+ rebaselined.
+ 1. For a given build failing the pixel tests, click the "stdio" link of
+ the "pixel" step.
+ 1. The output will contain a link of the form
+ <http://chromium-browser-gpu-tests.commondatastorage.googleapis.com/view_test_results.html?242523_Linux_Release_Intel__telemetry>
+ 1. Visit the link to see whether the generated or reference images look
+ incorrect.
+ 1. All of the reference images for all of the bots are stored in cloud
+ storage under [chromium-gpu-archive/reference-images]. They are indexed
+ by version number, OS, GPU vendor, GPU device, and whether or not
+ antialiasing is enabled in that configuration. You can download the
+ reference images individually to examine them in detail.
+1. Rebaseline pixel test reference images if necessary.
+ 1. Follow the [instructions on the GPU testing page].
+ 1. Alternatively, if absolutely necessary, you can use the [Chrome
+ Internal GPU Pixel Wrangling Instructions] to delete just the broken
+ reference images for a particular configuration.
+1. Update Telemetry-based test expectations if necessary.
+ 1. Most of the GPU tests are run inside a full Chromium browser, launched
+ by Telemetry, rather than a Gtest harness. The tests and their
+      expectations are contained in [src/content/test/gpu/gpu_tests/]. See
+ for example <code>[webgl_conformance_expectations.py]</code>,
+ <code>[gpu_process_expectations.py]</code> and
+ <code>[pixel_expectations.py]</code>.
+   1. See the header of the file for a list of modifiers to specify a bot
+ configuration. It is possible to specify OS (down to a specific
+ version, say, Windows 7 or Mountain Lion), GPU vendor
+ (NVIDIA/AMD/Intel), and a specific GPU device.
+   1. The key is to maintain the highest coverage: if you have to disable a
+      test, disable it only on the specific configurations on which it's
+      failing. Note that it is not possible to distinguish between Debug and
+      Release configurations.
+   1. Marking tests as failing or skipped (which will suppress flaky
+      failures) should be a last resort. It is only really necessary to suppress failures that
+ are showing up on the GPU tryservers, since failing tests no longer
+ close the Chromium tree.
+ 1. Please read the section on [stamping out flakiness] for motivation on
+ how important it is to eliminate flakiness rather than hiding it.
+1. For the remaining Gtest-style tests, use the [`DISABLED_`
+ modifier][gtest-DISABLED] to suppress any failures if necessary.
+
+[Sheriff-O-Matic now has support for the chromium.gpu.fyi waterfall]: https://sheriff-o-matic.appspot.com/chromium.gpu.fyi
+[Chromium tab]: https://sheriff-o-matic.appspot.com/chromium
+[tree sheriffing page]: https://sites.google.com/a/chromium.org/dev/developers/tree-sheriffs
+[linux_chromium_rel_ng]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_chromium_rel_ng
+[luci.chromium.try]: https://ci.chromium.org/p/chromium/g/luci.chromium.try/builders
+[mac_chromium_rel_ng]: https://ci.chromium.org/buildbot/tryserver.chromium.mac/mac_chromium_rel_ng/
+[tryserver.chromium.mac]: https://ci.chromium.org/p/chromium/g/tryserver.chromium.mac/builders
+[win7_chromium_rel_ng]: https://ci.chromium.org/buildbot/tryserver.chromium.win/win7_chromium_rel_ng/
+[tryserver.chromium.win]: https://ci.chromium.org/p/chromium/g/tryserver.chromium.win/builders
+[Chromium Try Flakes]: http://chromium-try-flakes.appspot.com/
+<!-- TODO(kainino): link doesn't work, but is still included from chromium-swarm homepage so not removing it now -->
+[Swarming Server Stats]: https://chromium-swarm.appspot.com/stats
+[chromium-gpu-archive/reference-images]: https://console.developers.google.com/storage/chromium-gpu-archive/reference-images
+[instructions on the GPU testing page]: https://sites.google.com/a/chromium.org/dev/developers/testing/gpu-testing#TOC-Updating-and-Adding-New-Pixel-Tests-to-the-GPU-Bots
+[Chrome Internal GPU Pixel Wrangling Instructions]: https://sites.google.com/a/google.com/client3d/documents/chrome-internal-gpu-pixel-wrangling-instructions
+[src/content/test/gpu/gpu_tests/]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/
+[webgl_conformance_expectations.py]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/webgl_conformance_expectations.py
+[gpu_process_expectations.py]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/gpu_process_expectations.py
+[pixel_expectations.py]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/pixel_expectations.py
+[stamping out flakiness]: gpu_testing.md#Stamping-out-Flakiness
+[gtest-DISABLED]: https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md#temporarily-disabling-tests
+
+### When Bots Misbehave (SSHing into a bot)
+
+1. See the [Chrome Internal GPU Pixel Wrangling Instructions] for information
+   on SSHing into the GPU bots.
+
+[Chrome Internal GPU Pixel Wrangling Instructions]: https://sites.google.com/a/google.com/client3d/documents/chrome-internal-gpu-pixel-wrangling-instructions
+
+### Reproducing WebGL conformance test failures locally
+
+1. From the buildbot build output page, click on the failed shard to get to
+ the swarming task page. Scroll to the bottom of the left panel for a
+ command to run the task locally. This will automatically download the build
+ and any other inputs needed.
+2. Alternatively, to run the test on a local build, pass the arguments
+ `--browser=exact --browser-executable=/path/to/binary` to
+ `content/test/gpu/run_gpu_integration_test.py`.
+ Also see the [telemetry documentation].
+
+[telemetry documentation]: https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/docs/run_benchmarks_locally.md
+
+## Extending the GPU Pixel Wrangling Rotation
+
+See the [Chrome Internal GPU Pixel Wrangling Instructions] for information on extending the rotation.
+
+[Chrome Internal GPU Pixel Wrangling Instructions]: https://sites.google.com/a/google.com/client3d/documents/chrome-internal-gpu-pixel-wrangling-instructions
diff --git a/chromium/docs/how_to_add_your_feature_flag.md b/chromium/docs/how_to_add_your_feature_flag.md
index 5707e686f58..c46d165067a 100644
--- a/chromium/docs/how_to_add_your_feature_flag.md
+++ b/chromium/docs/how_to_add_your_feature_flag.md
@@ -22,11 +22,11 @@ to see
[[1](https://chromium-review.googlesource.com/c/554510/8/content/common/service_worker/service_worker_utils.cc#153)]
3. how to wire the base::Feature to WebRuntimeFeatures
[[1](https://chromium-review.googlesource.com/c/554510/8/content/child/runtime_features.cc)]
-[[2](https://chromium-review.googlesource.com/c/554510/8/third_party/WebKit/public/platform/WebRuntimeFeatures.h)]
-[[3](https://chromium-review.googlesource.com/c/554510/third_party/WebKit/Source/platform/exported/WebRuntimeFeatures.cpp)]
-[[4](https://chromium-review.googlesource.com/c/554510/8/third_party/WebKit/Source/platform/runtime_enabled_features.json5)]
+[[2](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/public/platform/web_runtime_features.h)]
+[[3](https://chromium-review.googlesource.com/c/554510/third_party/blink/Source/platform/exported/web_runtime_features.cc)]
+[[4](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/renderer/platform/runtime_enabled_features.json5)]
4. how to use it in blink
-[[1](https://chromium-review.googlesource.com/c/554510/8/third_party/WebKit/Source/core/workers/WorkerThread.cpp)]
+[[1](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/renderer/core/workers/worker_thread.cc)]
Also, this patch added a virtual test for running layout tests with the flag.
When you add a flag, you can consider using it.
diff --git a/chromium/docs/images/code_coverage_component_view.png b/chromium/docs/images/code_coverage_component_view.png
new file mode 100644
index 00000000000..0ee3159baa4
--- /dev/null
+++ b/chromium/docs/images/code_coverage_component_view.png
Binary files differ
diff --git a/chromium/docs/images/code_coverage_directory_view.png b/chromium/docs/images/code_coverage_directory_view.png
new file mode 100644
index 00000000000..50ecd88bb7a
--- /dev/null
+++ b/chromium/docs/images/code_coverage_directory_view.png
Binary files differ
diff --git a/chromium/docs/images/code_coverage_workflow.png b/chromium/docs/images/code_coverage_workflow.png
new file mode 100644
index 00000000000..284be2fe62c
--- /dev/null
+++ b/chromium/docs/images/code_coverage_workflow.png
Binary files differ
diff --git a/chromium/docs/ios/coverage.md b/chromium/docs/ios/coverage.md
deleted file mode 100644
index 72857182fe4..00000000000
--- a/chromium/docs/ios/coverage.md
+++ /dev/null
@@ -1,78 +0,0 @@
-# Generate code coverage data
-
-1. Build a test target for coverage:
-
- ```
- ninja -C out/Coverage-iphonesimulator ios_chrome_unittests
- ```
-
- Targets that support code coverage need to call
- `coverage_util::ConfigureCoverageReportPath()`.
- If you don't use `setup-gn.py`, you can set the gn argument
- `ios_enable_coverage` to `true`.
-
-1. Run the test target. If using Xcode, don't forget to set `Coverage` in the
- target's scheme:
-
- ![](images/coverage_xcode.png)
-
-1. Find the `coverage.profraw` in the `Documents` folder of the app. You can
- look in the console output of the instrumented target. For example:
-
- ```
- Coverage data at /Users/johndoe/Library/Developer/CoreSimulator/Devices/
- 82D642FA-FC18-4EDB-AFE0-A17454804BE4/data/Containers/Data/Application/
- E6B2B898-CE13-4958-93F3-E8B500446381/Documents/coverage.profraw
- ```
-
-1. Create a `coverage.profdata` file out of the `coverage.profraw` file:
-
- ```
- xcrun llvm-profdata merge \
- -o out/Coverage-iphonesimulator/coverage.profdata \
- path/to/coverage.profraw
- ```
-
-1. To see the **line coverage** for *all the instrumented source files*:
-
- ```
- xcrun llvm-cov show \
- out/Coverage-iphonesimulator/ios_chrome_unittests.app/ios_chrome_unittests \
- -instr-profile=out/Coverage-iphonesimulator/coverage.profdata \
- -arch=x86_64
- ```
-
- ![](images/llvm-cov_show.png)
-
- To see the **line coverage** for a *specific instrumented source
- file/folder* (e.g.
- `ios/chrome/browser/ui/coordinators/browser_coordinator.mm`):
-
- ```
- xcrun llvm-cov show \
- out/Coverage-iphonesimulator/ios_chrome_unittests.app/ios_chrome_unittests \
- -instr-profile=out/Coverage-iphonesimulator/coverage.profdata \
- -arch=x86_64 ios/chrome/browser/ui/coordinators/browser_coordinator.mm
- ```
-
- ![](images/llvm-cov_show_file.png)
-
- To see a **complete report**:
-
- ```
- xcrun llvm-cov report \
- out/Coverage-iphonesimulator/ios_chrome_unittests.app/ios_chrome_unittests \
- -instr-profile=path/to/coverage.profdata -arch=x86_64
- ```
-
- ![](images/llvm-cov_report.png)
-
- To see a **report** for a *folder/file* (e.g. `ios/chrome/browser` folder):
-
- ```
- xcrun llvm-cov show \
- out/Coverage-iphonesimulator/ios_chrome_unittests.app/ios_chrome_unittests \
- -instr-profile=path/to/coverage.profdata -arch=x86_64 ios/chrome/browser
- ```
-
- ![](images/llvm-cov_report_folder.png)
diff --git a/chromium/docs/ios/images/coverage_xcode.png b/chromium/docs/ios/images/coverage_xcode.png
deleted file mode 100644
index 73607346962..00000000000
--- a/chromium/docs/ios/images/coverage_xcode.png
+++ /dev/null
Binary files differ
diff --git a/chromium/docs/ios/images/llvm-cov_report.png b/chromium/docs/ios/images/llvm-cov_report.png
deleted file mode 100644
index aac9e13ee14..00000000000
--- a/chromium/docs/ios/images/llvm-cov_report.png
+++ /dev/null
Binary files differ
diff --git a/chromium/docs/ios/images/llvm-cov_report_folder.png b/chromium/docs/ios/images/llvm-cov_report_folder.png
deleted file mode 100644
index 510baddc0cb..00000000000
--- a/chromium/docs/ios/images/llvm-cov_report_folder.png
+++ /dev/null
Binary files differ
diff --git a/chromium/docs/ios/images/llvm-cov_show.png b/chromium/docs/ios/images/llvm-cov_show.png
deleted file mode 100644
index bf5c9c78e21..00000000000
--- a/chromium/docs/ios/images/llvm-cov_show.png
+++ /dev/null
Binary files differ
diff --git a/chromium/docs/ios/images/llvm-cov_show_file.png b/chromium/docs/ios/images/llvm-cov_show_file.png
deleted file mode 100644
index 45a18a351fc..00000000000
--- a/chromium/docs/ios/images/llvm-cov_show_file.png
+++ /dev/null
Binary files differ
diff --git a/chromium/docs/jumbo.md b/chromium/docs/jumbo.md
index 6832d7414c4..e9fed8d3b70 100644
--- a/chromium/docs/jumbo.md
+++ b/chromium/docs/jumbo.md
@@ -51,10 +51,10 @@ source files.
## Tuning
-By default at most `200` files are merged at a time. The more files
-are merged, the less total CPU time is needed, but parallelism is
-reduced. This can be changed by setting `jumbo_file_merge_limit` to
-something else than `200`.
+By default at most `50` files (or `8` when using goma) are merged at a
+time. The more files that are merged, the less total CPU time is
+needed, but parallelism is reduced. This number can be changed by
+setting `jumbo_file_merge_limit`.
## Naming
diff --git a/chromium/docs/layout_tests_linux.md b/chromium/docs/layout_tests_linux.md
index 5b62e043a8a..33a99b781b0 100644
--- a/chromium/docs/layout_tests_linux.md
+++ b/chromium/docs/layout_tests_linux.md
@@ -31,10 +31,10 @@ build/install-build-deps.sh
2. Double check that
```shell
-ls third_party/content_shell_fonts/content_shell_test_fonts/
+ls third_party/test_fonts/test_fonts/
```
-is not empty and lists the fonts downloaded through the `content_shell_fonts`
+is not empty and lists the fonts downloaded through the `test_fonts`
hook in the top level `DEPS` file.
## Plugins
diff --git a/chromium/docs/linux_chromium_packages.md b/chromium/docs/linux_chromium_packages.md
index 8df687398f1..a55c46e4f93 100644
--- a/chromium/docs/linux_chromium_packages.md
+++ b/chromium/docs/linux_chromium_packages.md
@@ -17,6 +17,10 @@ TODO: Move away from tables.
| ALT Linux | Andrey Cherepanov (Андрей Черепанов) `cas@altlinux.org` | http://packages.altlinux.org/en/Sisyphus/srpms/chromium | http://git.altlinux.org/gears/c/chromium.git?a=tree |
| Mageia | Dexter Morgan `dmorgan@mageia.org` | http://svnweb.mageia.org/packages/cauldron/chromium-browser-stable/current/SPECS/ | http://svnweb.mageia.org/packages/cauldron/chromium-browser-stable/current/SOURCES/ |
| NixOS | aszlig `"^[0-9]+$"@regexmail.net` | http://hydra.nixos.org/search?query=pkgs.chromium | https://github.com/NixOS/nixpkgs/tree/master/pkgs/applications/networking/browsers/chromium |
+| OpenMandriva | Bernhard Rosenkraenzer `bero@lindev.ch` | n/a | https://github.com/OpenMandrivaAssociation/chromium-browser-stable https://github.com/OpenMandrivaAssociation/chromium-browser-beta https://github.com/OpenMandrivaAssociation/chromium-browser-dev |
+| Fedora | Tom Callaway `tcallawa@redhat.com` | https://src.fedoraproject.org/rpms/chromium/ | https://src.fedoraproject.org/rpms/chromium/tree/master |
+| Yocto | Raphael Kubo da Costa `raphael.kubo.da.costa@intel.com` | https://github.com/OSSystems/meta-browser | https://github.com/OSSystems/meta-browser/tree/master/recipes-browser/chromium/files |
+| Exherbo | Timo Gurr `tgurr@exherbo.org` | https://git.exherbo.org/summer/packages/net-www/chromium-stable/ | https://git.exherbo.org/desktop.git/tree/packages/net-www/chromium-stable/files |
## Unofficial packages
@@ -24,14 +28,13 @@ Packages in this section are not part of the distro's official repositories.
| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
|:-----------|:------------|:---------------------|:------------------------------------|
-| Fedora | Tom Callaway `tcallawa@redhat.com` | http://repos.fedorapeople.org/repos/spot/chromium/ | ?? |
| Slackware | Eric Hameleers `alien@slackware.com` | http://www.slackware.com/~alien/slackbuilds/chromium/ | http://www.slackware.com/~alien/slackbuilds/chromium/ |
## Other Unixes
| **System** | **Contact** | **URL for packages** | **URL for patches** |
|:-----------|:------------|:---------------------|:--------------------|
-| FreeBSD | http://lists.freebsd.org/mailman/listinfo/freebsd-chromium | http://wiki.freebsd.org/Chromium | http://trillian.chruetertee.ch/chromium |
+| FreeBSD | http://lists.freebsd.org/mailman/listinfo/freebsd-chromium | http://wiki.freebsd.org/Chromium | https://svnweb.freebsd.org/ports/head/www/chromium/files/ |
| OpenBSD | Robert Nagy `robert@openbsd.org` | http://openports.se/www/chromium | http://www.openbsd.org/cgi-bin/cvsweb/ports/www/chromium/patches/ |
## Updating the list
diff --git a/chromium/docs/linux_gtk_theme_integration.md b/chromium/docs/linux_gtk_theme_integration.md
index caf0941993c..0a051da8994 100644
--- a/chromium/docs/linux_gtk_theme_integration.md
+++ b/chromium/docs/linux_gtk_theme_integration.md
@@ -3,11 +3,6 @@
The GTK+ port of Chromium has a mode where we try to match the user's GTK theme
(which can be enabled under Settings -> Appearance -> Use GTK+ theme).
-# GTK3
-
-At some point after version 57, Chromium will switch to using the GTK3 theme by
-default.
-
## How Chromium determines which colors to use
GTK3 added a new CSS theming engine which gives fine-tuned control over how
@@ -55,133 +50,3 @@ For GTK3.20 or later, themes will as usual have to replace ".entry" with
The list of CSS selectors that Chromium uses to determine its colors is in
//src/chrome/browser/ui/libgtkui/native_theme_gtk3.cc.
-
-# GTK2
-
-Chromium's GTK2 theme will soon be deprecated, and this section will be removed.
-
-## Describing the previous heuristics
-
-The heuristics often don't pick good colors due to a lack of information in the
-GTK themes. The frame heuristics were simple. Query the `bg[SELECTED]` and
-`bg[INSENSITIVE]` colors on the `MetaFrames` class and darken them
-slightly. This usually worked OK until the rise of themes that try to make a
-unified titlebar/menubar look. At roughly that time, it seems that people
-stopped specifying color information for the `MetaFrames` class and this has
-lead to the very orange chrome frame on Maverick.
-
-`MetaFrames` is (was?) a class that was used to communicate frame color data to
-the window manager around the Hardy days. (It's still defined in most of
-[XFCE's themes](http://packages.ubuntu.com/maverick/gtk2-engines-xfce)). In
-chrome's implementation, `MetaFrames` derives from `GtkWindow`.
-
-If you are happy with the defaults that chrome has picked, no action is
-necessary on the part of the theme author.
-
-## Introducing `ChromeGtkFrame`
-
-For cases where you want control of the colors chrome uses, Chrome gives you a
-number of style properties for injecting colors and other information about how
-to draw the frame. For example, here's the proposed modifications to Ubuntu's
-Ambiance:
-
-```
-style "chrome-gtk-frame"
-{
- ChromeGtkFrame::frame-color = @fg_color
- ChromeGtkFrame::inactive-frame-color = lighter(@fg_color)
-
- ChromeGtkFrame::frame-gradient-size = 16
- ChromeGtkFrame::frame-gradient-color = "#5c5b56"
-
- ChromeGtkFrame::scrollbar-trough-color = @bg_color
- ChromeGtkFrame::scrollbar-slider-prelight-color = "#F8F6F2"
- ChromeGtkFrame::scrollbar-slider-normal-color = "#E7E0D3"
-}
-
-class "ChromeGtkFrame" style "chrome-gtk-frame"
-```
-
-### Frame color properties
-
-These are the frame's main solid color.
-
-| **Property** | **Type** | **Description** | **If unspecified** |
-|:-------------|:---------|:----------------|:-------------------|
-| `frame-color` | `GdkColor` | The main color of active chrome windows. | Darkens `MetaFrame::bg[SELECTED]` |
-| `inactive-frame-color` | `GdkColor` | The main color of inactive chrome windows. | Darkens `MetaFrame::bg[INSENSITIVE]` |
-| `incognito-frame-color` | `GdkColor` | The main color of active incognito windows. | Tints `frame-color` by the default incognito tint |
-| `incognito-inactive-frame-color` | `GdkColor` | The main color of inactive incognito windows. | Tints `inactive-frame-color` by the default incognito tint |
-
-### Frame gradient properties
-
-Chrome's frame (along with many normal window manager themes) have a slight
-gradient at the top, before filling the rest of the frame background image with
-a solid color. For example, the top `frame-gradient-size` pixels would be a
-gradient starting from `frame-gradient-color` at the top to `frame-color` at the
-bottom, with the rest of the frame being filled with `frame-color`.
-
-| **Property** | **Type** | **Description** | **If unspecified** |
-|:-------------|:---------|:----------------|:-------------------|
-| `frame-gradient-size` | Integers 0 through 128 | How large the gradient should be. Set to zero to disable drawing a gradient | Defaults to 16 pixels tall |
-| `frame-gradient-color` | `GdkColor` | Top color of the gradient | Lightens `frame-color` |
-| `inactive-frame-gradient-color` | `GdkColor` | Top color of the inactive gradient | Lightents `inactive-frame-color` |
-| `incognito-frame-gradient-color` | `GdkColor` | Top color of the incognito gradient | Lightens `incognito-frame-color` |
-| `incognito-inactive-frame-gradient-color` | `GdkColor` | Top color of the incognito inactive gradient. | Lightens `incognito-inactive-frame-color` |
-
-### Scrollbar control
-
-Because widget rendering is done in a separate, sandboxed process that doesn't
-have access to the X server or the filesystem, there's no current way to do
-GTK+ widget rendering. We instead pass WebKit a few colors and let it draw a
-default scrollbar. We have a very
-[complex fallback](http://git.chromium.org/gitweb/?p=chromium.git;a=blob;f=chrome/browser/gtk/gtk_theme_provider.cc;h=a57ab6b182b915192c84177f1a574914c44e2e71;hb=3f873177e192f5c6b66ae591b8b7205d8a707918#l424)
-where we render the widget and then average colors if this information isn't
-provided.
-
-| **Property** | **Type** | **Description** |
-|:-------------|:---------|:----------------|
-| `scrollbar-slider-prelight-color` | `GdkColor` | Color of the slider on mouse hover. |
-| `scrollbar-slider-normal-color` | `GdkColor` | Color of the slider otherwise |
-| `scrollbar-trough-color` | `GdkColor` | Color of the scrollbar trough |
-
-## Anticipated Q&A
-
-### Will you patch themes upstream?
-
-I am at the very least hoping we can get Radiance and Ambiance patches since we
-make very poor frame decisions on those themes, and hopefully a few others.
-
-### How about control over the min/max/close buttons?
-
-I actually tried this locally. There's a sort of uncanny valley effect going on;
-as the frame looks more native, it's more obvious that it isn't behaving like a
-native frame. (Also my implementation added a startup time hit.)
-
-### Why use style properties instead of (i.e.) bg[STATE]?
-
-There's no way to distinguish between colors set on different classes. Using
-style properties allows us to be backwards compatible and maintain the
-heuristics since not everyone is going to modify their themes for chromium (and
-the heuristics do a reasonable job).
-
-### Why now?
-
-* I (erg@) was putting off major changes to the window frame stuff in
- anticipation of finally being able to use GTK+'s theme rendering for the
- window border with client side decorations, but client side decorations
- either isn't happening or isn't happening anytime soon, so there's no
- justification for pushing this task off into the future.
-* Chrome looks pretty bad under Ambiance on Maverick.
-
-### Details about `MetaFrames` and `ChromeGtkFrame` relationship and history?
-
-`MetaFrames` is a class that was used in metacity to communicate color
-information to the window manager. During the Hardy Heron days, we slurped up
-the data and used it as a key part of our heuristics. At least on my Lucid Lynx
-machine, none of the GNOME GTK+ themes have `MetaFrames` styling. (As mentioned
-above, several of the XFCE themes do, though.)
-
-Internally to chrome, our `ChromeGtkFrame` class inherits from `MetaFrames`
-(again, which inherits from `GtkWindow`) so any old themes that style the
-`MetaFrames` class are backwards compatible.
diff --git a/chromium/docs/memory-infra/README.md b/chromium/docs/memory-infra/README.md
index eb3252b077b..84593a8b2bd 100644
--- a/chromium/docs/memory-infra/README.md
+++ b/chromium/docs/memory-infra/README.md
@@ -7,48 +7,38 @@ click of a button you can understand where memory is being used in your system.
[TOC]
-## Getting Started
+## Taking a memory-infra trace
- 1. Get a bleeding-edge or tip-of-tree build of Chrome.
-
- 2. [Record a trace as usual][record-trace]: open [chrome://tracing][tracing]
+ 1. [Record a trace as usual][record-trace]: open [chrome://tracing][tracing]
on Desktop Chrome or [chrome://inspect?tracing][inspect-tracing] to trace
Chrome for Android.
- 3. Make sure to enable the **memory-infra** category on the right.
+ 2. Make sure to enable the **memory-infra** category on the right.
![Tick the memory-infra checkbox when recording a trace.][memory-infra-box]
- 4. For now, some subsystems only work if Chrome is started with the
- `--no-sandbox` flag.
- <!-- TODO(primiano) TODO(ssid): https://crbug.com/461788 -->
[record-trace]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs
[tracing]: chrome://tracing
[inspect-tracing]: chrome://inspect?tracing
[memory-infra-box]: https://storage.googleapis.com/chromium-docs.appspot.com/1c6d1886584e7cc6ffed0d377f32023f8da53e02
-![Timeline View and Analysis View][tracing-views]
-
-After recording a trace, you will see the **timeline view**. Timeline view
-shows:
+## Navigating a memory-infra trace
- * Total resident memory grouped by process (at the top).
- * Total resident memory grouped by subsystem (at the top).
- * Allocated memory per subsystem for every process.
+![Timeline View and Analysis View][tracing-views]
-Click one of the ![M][m-blue] dots to bring up the **analysis view**. Click
-on a cell in analysis view to reveal more information about its subsystem.
-PartitionAlloc for instance, has more details about its partitions.
+After recording a trace, you will see the **timeline view**. The **timeline
+view** is primarily used for other tracing features. Click one of the
+![M][m-purple] dots to bring up the **analysis view**. Click on a cell in
+analysis view to reveal more information about its subsystem. PartitionAlloc,
+for instance, has more details about its partitions.
![Component details for PartitionAlloc][partalloc-details]
-The purple ![M][m-purple] dots represent heavy dumps. In these dumps, components
-can provide more details than in the regular dumps. The full details of the
-MemoryInfra UI are explained in its [design doc][mi-ui-doc].
+The full details of the MemoryInfra UI are explained in its [design
+doc][mi-ui-doc].
[tracing-views]: https://storage.googleapis.com/chromium-docs.appspot.com/db12015bd262385f0f8bd69133330978a99da1ca
-[m-blue]: https://storage.googleapis.com/chromium-docs.appspot.com/b60f342e38ff3a3767bbe4c8640d96a2d8bc864b
[partalloc-details]: https://storage.googleapis.com/chromium-docs.appspot.com/02eade61d57c83f8ef8227965513456555fc3324
[m-purple]: https://storage.googleapis.com/chromium-docs.appspot.com/d7bdf4d16204c293688be2e5a0bcb2bf463dbbc3
[mi-ui-doc]: https://docs.google.com/document/d/1b5BSBEd1oB-3zj_CBAQWiQZ0cmI0HmjmXG-5iNveLqw/edit
@@ -101,7 +91,7 @@ and it is discounted from malloc and the blue columns.
<!-- TODO(primiano): Improve this. https://crbug.com/??? -->
-[oilpan]: /third_party/WebKit/Source/platform/heap/BlinkGCDesign.md
+[oilpan]: /third_party/blink/renderer/platform/heap/BlinkGCDesign.md
[discardable]:base/memory/discardable_memory.h
[cc-memory]: probe-cc.md
[gpu-memory]: probe-gpu.md
diff --git a/chromium/docs/memory-infra/heap_profiler.md b/chromium/docs/memory-infra/heap_profiler.md
index 6a3ad2da531..9cb8bf0dac4 100644
--- a/chromium/docs/memory-infra/heap_profiler.md
+++ b/chromium/docs/memory-infra/heap_profiler.md
@@ -62,7 +62,8 @@ similar effect to the various `memlog` flags.
3. Scroll down all the way to _Heap Details_.
- 4. Pinpoint the memory bug and live happily ever after.
+ 4. To navigate allocations, select a frame in the right-side pane and press
+ Enter/Return. To pop up the stack, press Backspace/Delete.
[memory-infra]: README.md
[m-purple]: https://storage.googleapis.com/chromium-docs.appspot.com/d7bdf4d16204c293688be2e5a0bcb2bf463dbbc3
diff --git a/chromium/docs/memory/README.md b/chromium/docs/memory/README.md
index 1f378f75669..e8ee0c8566c 100644
--- a/chromium/docs/memory/README.md
+++ b/chromium/docs/memory/README.md
@@ -39,17 +39,11 @@ instead.
Follow [these instructions](/docs/memory/filing_memory_bugs.md) to file a high
quality bug.
-## I have a reproducible memory problem, what do I do?
+## I'm a developer trying to investigate a memory issue, what do I do?
-Yay! Please file a [memory
-bug](https://bugs.chromium.org/p/chromium/issues/entry?template=Memory%20usage).
+See [this page](/docs/memory/debugging_memory_issues.md) for further instructions.
-If you are willing to do a bit more, please grab a memory infra trace and upload
-that. Here are [instructions for MacOS](https://docs.google.com/document/d/15mBOu_uZbgP5bpdHZJXEnF9csSRq7phUWXnZcteVr0o/edit).
-(TODO: Add instructions for easily grabbing a trace for all platforms.)
-
-
-## I'm a dev and I want to help. How do I get started?
+## I'm a developer looking for more information. How do I get started?
Great! First, sign up for the mailing lists above and check out the slack channel.
@@ -60,7 +54,6 @@ Second, familiarize yourself with the following:
| [Key Concepts in Chrome Memory](/docs/memory/key_concepts.md) | Primer for memory terminology in Chrome. |
| [memory-infra](/docs/memory-infra/README.md) | The primary tool used for inspecting allocations. |
-
## What are people actively working on?
| Project | Description |
|---------|-------------|
diff --git a/chromium/docs/memory/debugging_memory_issues.md b/chromium/docs/memory/debugging_memory_issues.md
new file mode 100644
index 00000000000..209f1a65ddc
--- /dev/null
+++ b/chromium/docs/memory/debugging_memory_issues.md
@@ -0,0 +1,131 @@
+# Debugging Memory Issues
+
+This page is designed to help Chromium developers debug memory issues.
+
+When in doubt, reach out to memory-dev@chromium.org.
+
+[TOC]
+
+## Investigating Reproducible Memory Issues
+
+Let's say that there's a CL or feature that reproducibly increases memory usage
+when it's landed/enabled, given a particular set of repro steps.
+
+* Take a look at [the documentation](/docs/memory/README.md) for both
+ taking and navigating memory-infra traces.
+* Take two memory-infra traces. One with the reproducible memory regression, and
+ one without.
+* Load the memory-infra traces into two tabs.
+* Compare the memory dump providers and look for the one that shows the
+ regression. Follow the relevant link.
+  * [The regression is in the Malloc MemoryDumpProvider.](#Regression-in-Malloc-MemoryDumpProvider)
+ * [The regression is in a non-Malloc
+ MemoryDumpProvider.](#Regression-in-Non-Malloc-MemoryDumpProvider)
+ * [The regression is only observed in **private
+ footprint**.](#Regression-only-in-Private-Footprint)
+ * [No regression is observed.](#No-observed-regression)
+
+### Regression in Malloc MemoryDumpProvider
+
+Repeat the above steps, but this time also [take a heap
+dump](#Taking-a-Heap-Dump). Confirm that the regression is also visible in the
+heap dump, and then compare the two heap dumps to find the difference. You can
+also use
+[diff_heap_profiler.py](https://cs.chromium.org/chromium/src/third_party/catapult/experimental/tracing/bin/diff_heap_profiler.py)
+to perform the diff.
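Independent of that script's exact interface, the idea behind such a diff is a per-stack subtraction of live bytes between the two dumps; a hedged sketch (the dump representation below is simplified and is not the script's real input format):

```python
# Sketch of diffing two heap dumps, each represented (simplified) as a
# mapping from allocating stack trace to total live bytes. The real
# diff_heap_profiler.py consumes trace files; this only shows the idea.
def diff_heap_dumps(before, after):
    """Return {stack: byte_delta} for stacks whose usage changed."""
    deltas = {}
    for stack in set(before) | set(after):
        delta = after.get(stack, 0) - before.get(stack, 0)
        if delta:
            deltas[stack] = delta
    return deltas


before = {'PartitionAlloc <- Blink': 1000, 'malloc <- V8': 500}
after = {'PartitionAlloc <- Blink': 4000, 'malloc <- V8': 500,
         'malloc <- SuspectFeature': 2500}
# Stacks with a large positive delta are the leak suspects.
suspects = diff_heap_dumps(before, after)
```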
+
+### Regression in Non-Malloc MemoryDumpProvider
+
+Hopefully the MemoryDumpProvider has sufficient information to help diagnose the
+leak. If the leaked object is allocated via malloc or new (it usually should
+be), you can also use the steps for debugging a Malloc MemoryDumpProvider
+regression.
+
+### Regression only in Private Footprint
+
+* Repeat the repro steps, but instead of taking a memory-infra trace, use
+ the following tools to map the process's virtual space:
+ * On macOS, use vmmap
+ * On Windows, use SysInternal VMMap
+ * On other OSes, use /proc/<pid\>/smaps.
+* The results should help diagnose what's happening. Contact the
+ memory-dev@chromium.org mailing list for more help.
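For the Linux `/proc/<pid>/smaps` route, a rough sketch of tallying per-mapping
resident and private-dirty sizes follows (field names per `man 5 proc`; real
smaps output contains many more fields per mapping):

```python
import re

def summarize_smaps(smaps_text):
    """Sum Rss and Private_Dirty (in kB) per mapping name from smaps output."""
    totals = {}   # mapping name -> [rss_kb, private_dirty_kb]
    current = None
    for line in smaps_text.splitlines():
        # Mapping headers look like: "addr-addr perms offset dev inode  name"
        m = re.match(r"^[0-9a-f]+-[0-9a-f]+ \S+ \S+ \S+ \S+\s*(.*)$", line)
        if m:
            current = m.group(1) or "[anonymous]"
            totals.setdefault(current, [0, 0])
        elif current:
            for i, field in enumerate(("Rss:", "Private_Dirty:")):
                if line.startswith(field):
                    totals[current][i] += int(line.split()[1])
    return totals

sample = """\
7f0000000000-7f0000100000 rw-p 00000000 00:00 0 [heap]
Rss:                1024 kB
Private_Dirty:       512 kB
7f0000100000-7f0000200000 r-xp 00000000 08:01 42 /usr/lib/libc.so
Rss:                 256 kB
Private_Dirty:         0 kB
"""
print(summarize_smaps(sample))
```

Mappings with large `Private_Dirty` totals are the ones contributing to
private footprint.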
+
+### No observed regression
+
+* If there isn't a regression in PrivateMemoryFootprint, then this might become
+ a question of semantics for what constitutes a memory regression. Common
+ problems include:
+ * Shared Memory, which is hard to attribute, but is mostly accounted for in
+ the memory-infra trace.
+ * Binary size, which is currently not accounted for anywhere.
+
+## Investigating Heap Dumps From the Wild
+
+For a small set of Chrome users in the wild, Chrome will record and upload
+anonymized heap dumps. This has the benefit of wider coverage for real code
+paths, at the expense of reproducibility.
+
+These heap dumps can take some time to grok, but frequently yield valuable
+insight. At the time of this writing, heap dumps from the wild have resulted in
+real, high impact bugs being found in Chrome code ~90% of the time.
+
+* The first thing to do upon receiving a heap dump is to open it in the [trace
+ viewer](/docs/memory-infra/heap_profiler.md#how-to-manually-browse-a-heap-dump).
+ This will tell us the counts, sizes, and allocating stack traces of the
+ potentially leaked objects. Look for stacks that result in >100 MB of live
+ memory. Frequently, sets of objects will be leaked with similar counts. This
+ can provide insight into the nature of the leak.
+    * Important note: Heap profiling in the field uses
+      [Poisson process sampling](https://bugs.chromium.org/p/chromium/issues/detail?id=810748)
+      with a rate parameter of 10000. This means that for large/frequent
+      allocations [e.g. >100 MB], the noise will be quite small [much less
+      than 1%]. But there is some noise, so counts will not be exact.
+* The stack trace is almost always sufficient to tell the type of object being
+ leaked as well, since most functions in Chrome have a limited number of calls
+ to new and malloc.
+* The next thing to do is to determine whether the memory usage is intentional.
+  Very rarely, components in Chrome legitimately need to use many hundreds of
+  MB of memory. In this case, it's important to create a
+ [MemoryDumpProvider](https://cs.chromium.org/chromium/src/base/trace_event/memory_dump_provider.h)
+ to report this memory usage, so that we have a better understanding of which
+ components are using a lot of memory. For an example, see
+ [Issue 813046](https://bugs.chromium.org/p/chromium/issues/detail?id=813046).
+* Assuming the memory usage is not intentional, the next thing to do is to
+ figure out what is causing the memory leak.
+ * The most common cause is adding elements to a container with no limit.
+ Usually the code makes assumptions about how frequently it will be called
+ in the wild, and something breaks those assumptions. Or sometimes the code
+ to clear the container is not called as frequently as expected [or at
+ all]. [Example
+ 1](https://bugs.chromium.org/p/chromium/issues/detail?id=798012). [Example
+ 2](https://bugs.chromium.org/p/chromium/issues/detail?id=804440).
+ * Retain cycles for ref-counted objects.
+ [Example](https://bugs.chromium.org/p/chromium/issues/detail?id=814334#c23)
+ * Straight up leaks resulting from incorrect use of APIs. [Example
+ 1](https://bugs.chromium.org/p/chromium/issues/detail?id=801702#c31).
+ [Example
+ 2](https://bugs.chromium.org/p/chromium/issues/detail?id=814444#c17).
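The most common cause above, an unbounded container, can be illustrated with a
small hypothetical sketch: a cache that enforces a hard cap cannot grow without
limit even if the code that should clear it is never called.

```python
from collections import OrderedDict

class BoundedCache:
    """A tiny LRU-style cache with a hard cap, unlike an unbounded dict
    that leaks when its clearing code is called less often than expected."""

    def __init__(self, max_entries):
        self._entries = OrderedDict()
        self._max = max_entries

    def put(self, key, value):
        self._entries[key] = value
        self._entries.move_to_end(key)
        while len(self._entries) > self._max:
            self._entries.popitem(last=False)  # evict least recently used

cache = BoundedCache(max_entries=3)
for i in range(10):
    cache.put(i, i * i)
print(len(cache._entries), list(cache._entries))  # 3 [7, 8, 9]
```

The fix for such leaks is usually either a cap like this or making the
clearing path run unconditionally.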
+
+## Taking a Heap Dump
+
+Navigate to chrome://flags and search for **memlog**. There are several options
+that can be used to configure heap dumps. All of these options are also
+available as command line flags, for automated test runs [e.g. telemetry].
+
+* `#memlog` controls which processes are profiled. It's also possible to
+ manually specify the process via the interface at `chrome://memory-internals`.
+* `#memlog-sampling` will greatly reduce the overhead of the heap profiler, at
+ the expense of inaccuracy in small or infrequent allocations. Unless
+ performance is a concern, leave it disabled.
+* `#memlog-stack-mode` describes the type of metadata recorded for each
+ allocation. `native` stacks provide the most utility. The only time the other
+ options should be considered is for Android official builds, most of which do
+ not support `native` stacks.
+* `#memlog-keep-small-allocations` should be enabled, as it prevents the heap
+  dump exporter from pruning small allocations. Pruning yields smaller traces,
+  which is desirable when heap profiling is enabled in the wild, but it hides
+  information when debugging locally.
+
+Once the flags have been set appropriately, restart Chrome and take a
+memory-infra trace. The results will have a heap dump.
+
diff --git a/chromium/docs/memory/key_concepts.md b/chromium/docs/memory/key_concepts.md
index 507a94efc92..3c1bf2703b5 100644
--- a/chromium/docs/memory/key_concepts.md
+++ b/chromium/docs/memory/key_concepts.md
@@ -89,16 +89,80 @@ systems.
## Terms and definitions
-TODO(awong): To through Erik's Consistent Memory Metrics doc and pull out bits
-that reconcile with this.
-
-### Commited Memory
-### Discardable memory
-### Proportional Set Size
-### Image memory
-### Shared Memory.
-
-TODO(awong): Write overview of our platform diversity, windows vs \*nix memory models (eg,
-"committed" memory), what "discardable" memory is, GPU memory, zram, overcommit,
-the various Chrome heaps (pageheap, partitionalloc, oilpan, v8, malloc...per
-platform), etc.
+Each platform exposes a different memory model. This section describes a
+consistent set of terminology that will be used by this document. This
+terminology is intentionally Linux-biased, since that is the platform most
+readers are expected to be familiar with.
+
+### Supported platforms
+* Linux
+* Android
+* ChromeOS
+* Windows [kernel: Windows NT]
+* macOS/iOS [kernel: Darwin/XNU/Mach]
+
+### Terminology
+Warning: This terminology is neither complete, nor precise, when compared to the
+terminology used by any specific platform. Any in-depth discussion should occur
+on a per-platform basis, and use terminology specific to that platform.
+
+* **Virtual memory** - A per-process abstraction layer exposed by the kernel. A
+  contiguous region divided into 4 KiB **virtual pages**.
+* **Physical memory** - A per-machine abstraction layer internal to the kernel.
+  A contiguous region divided into 4 KiB **physical pages**. Each **physical
+  page** represents 4 KiB of physical memory.
+* **Resident** - A virtual page whose contents are backed by a physical
+  page.
+* **Swapped/Compressed** - A virtual page whose contents are backed by
+  something other than a physical page.
+* **Swapping/Compression** - [verb] The process of taking Resident pages and
+ making them Swapped/Compressed pages. This frees up physical pages.
+* **Unlocked Discardable/Reusable** - Android [Ashmem] and Darwin specific. A
+  virtual page whose contents are backed by a physical page, but the kernel is
+  free to reuse the physical page at any point in time.
+* **Private** - A virtual page whose contents will only be modifiable by the
+ current process.
+* **Copy on Write** - A private virtual page owned by the parent process.
+ When either the parent or child process attempts to make a modification, the
+ child is given a private copy of the page.
+* **Shared** - A virtual page whose contents could be shared with other
+ processes.
+* **File-backed** - A virtual page whose contents reflect those of a
+ file.
+* **Anonymous** - A virtual page that is not file-backed.
+
+## Platform Specific Sources of Truth
+Memory is a complex topic, fraught with potential miscommunications. In an
+attempt to forestall disagreement over semantics, these are the sources of truth
+used to determine memory usage for a given process.
+
+* Windows: [SysInternals
+ VMMap](https://docs.microsoft.com/en-us/sysinternals/downloads/vmmap)
+* Darwin:
+ [vmmap](https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man1/vmmap.1.html)
+* Linux/Derivatives:
+ [/proc/<pid\>/smaps](http://man7.org/linux/man-pages/man5/proc.5.html)
+
+## Shared Memory
+
+Accounting for shared memory is poorly defined. If a memory region is mapped
+into multiple processes [possibly multiple times], which ones should it count
+towards?
+
+On Linux, one common solution is to use proportional set size (PSS), which
+counts 1/Nth of the resident size, where N is the number of processes that
+have faulted in the region. This has the nice property of being additive
+across processes. The downside is that it is context dependent, e.g. if a user
+opens more tabs, thus causing a system library to be mapped into more
+processes, the PSS for previous tabs will go down.
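To make the proportional accounting concrete, here is a small sketch: each
resident region contributes 1/N of its size to each of the N processes mapping
it, so per-process PSS values sum back to the total resident size.

```python
def pss_per_process(regions):
    """regions: list of (resident_kb, [pids that mapped the region]).

    Returns {pid: pss_kb}; each region contributes 1/N of its resident
    size to each of the N processes sharing it.
    """
    pss = {}
    for resident_kb, pids in regions:
        share = resident_kb / len(pids)
        for pid in pids:
            pss[pid] = pss.get(pid, 0) + share
    return pss

regions = [
    (300, [1, 2, 3]),  # a library mapped by three processes: 100 kB each
    (50, [1]),         # private memory: charged fully to process 1
]
pss = pss_per_process(regions)
print(pss)  # {1: 150.0, 2: 100.0, 3: 100.0}
assert sum(pss.values()) == 350  # additive: per-process totals sum to 350 kB
```

If a fourth process mapped the shared library, each existing process's share
of it would drop from 100 kB to 75 kB, which is the context dependence
described above.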
+
+File-backed shared memory regions are typically not interesting to report,
+since they usually represent shared system resources, libraries, and the
+browser binary itself, all of which are outside of developers' control. This
+is particularly problematic across different versions of the OS, where the set
+of base libraries linked by default into a process varies widely, again
+outside of Chrome's control.
+
+In Chrome, we have implemented ownership tracking for anonymous shared memory
+regions - each shared memory region counts towards exactly one process, which is
+determined by the type and usage of the shared memory region.
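A sketch of that attribution rule follows (the real logic lives in Chrome's
memory-infra code; the region names here are hypothetical): every anonymous
shared region is charged to exactly one of the processes mapping it, so,
unlike PSS, totals do not shift when more processes map a region.

```python
def attribute_shared_regions(regions):
    """Charge each anonymous shared-memory region to exactly one owner.

    regions: list of (size_kb, owner_pid, set_of_mapping_pids).
    Returns {pid: total_kb}.
    """
    totals = {}
    for size_kb, owner, mapped_by in regions:
        assert owner in mapped_by, "owner must be one of the mappers"
        totals[owner] = totals.get(owner, 0) + size_kb
    return totals

regions = [
    (2048, "renderer", {"browser", "renderer"}),  # e.g. a transfer buffer
    (512, "browser", {"browser", "gpu"}),
]
totals = attribute_shared_regions(regions)
print(totals)  # {'renderer': 2048, 'browser': 512}
```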
diff --git a/chromium/docs/memory/tools.md b/chromium/docs/memory/tools.md
index e1f923aaa41..8b3d2c0c1d5 100644
--- a/chromium/docs/memory/tools.md
+++ b/chromium/docs/memory/tools.md
@@ -123,7 +123,7 @@ TODO(awong): Write about options to script and the flame graph.
Heap dumps provide extremely detailed data about object allocations and is
useful for finding code locations that are generating a large number of live
allocations. Data is tracked and recorded using the [Out-of-process Heap
-Profiler (OOPHP)](../../src/chrome/profiling/README.md).
+Profiler (OOPHP)](../../src/components/services/heap_profiling/README.md).
For the Browser and GPU process, this often quickly finds objects that leak over
time.
@@ -138,10 +138,11 @@ looking similar due to the nature of DOM node allocation.
`VirtualAlloc()`) will not be tracked.
* Utility processes are currently not profiled.
* Allocations are only recorded after the
- [ProfilingService](../../src/chrome/profiling/profiling_service.h) has spun up the
- profiling process and created a connection to the target process. The ProfilingService
- is a mojo service that can be configured to start early in browser startup
- but it still takes time to spin up and early allocations are thus lost.
+ [HeapProfilingService](../../src/components/services/heap_profiling/heap_profiling_service.h)
+ has spun up the profiling process and created a connection to the target
+ process. The HeapProfilingService is a mojo service that can be configured to
+ start early in browser startup but it still takes time to spin up and early
+ allocations are thus lost.
### Instructions
#### <a name="configure-oophp"></a>Configuration and setup
diff --git a/chromium/docs/network_traffic_annotations.md b/chromium/docs/network_traffic_annotations.md
index d682db7e05a..427eb18efc8 100644
--- a/chromium/docs/network_traffic_annotations.md
+++ b/chromium/docs/network_traffic_annotations.md
@@ -13,8 +13,8 @@ provide the following answers:
* What is the intent behind each network request?
* What user data is sent in the request, and where does it go?
-Besides these requirements, the following information helps Enterprise admins
-and help desk:
+Besides these requirements, the following information helps users, admins, and
+help desk:
* How can a network communication be stopped or controlled?
* What are the traces of the communication on the client?
@@ -60,17 +60,19 @@ into one connection, like when a socket merges several data frames and sends
them together, or a device location is requested by different components, and
just one network request is made to fetch it. In these cases, the merge point
can ensure that all received requests are properly annotated and just pass one
-of them to the downstream step.
+of them to the downstream step. It can also pass a local annotation stating that
+it is a merged request on behalf of other requests of type X, which were ensured
+to all have annotations.
This decision is driven from the fact that we do not need to transmit the
annotation metadata in runtime and enforced annotation arguments are just to
ensure that the request is annotated somewhere upstream.
## Coverage
-Network traffic annotations are currently enforced on all url requests in
-Windows and Linux, and are expanding to sockets and native API functions in
-2017,Q4 - 2018,Q1.
-Currently there is no plan to expand the task to other platforms.
+Network traffic annotations are currently enforced on all url requests and
+socket writes, except for the code which is not compiled on Windows or Linux.
+This effort may expand to ChromeOS in the future; currently there is no plan
+to expand it to other platforms.
## Network Traffic Annotation Tag
@@ -97,7 +99,7 @@ Each network traffic annotation should specify the following items:
network request’s content and reason.
* `sender`: What component triggers the request. The components should be
human readable and don’t need to reflect the components/ directory. Avoid
- abbreviations.
+ abbreviations, and use a common value for all annotations in one component.
* `description`: Plaintext description of the network request in language
that is understandable by admins (ideally also users). Please avoid
acronyms and describe the feature and the feature's value proposition as
@@ -284,31 +286,38 @@ change list. These checks include:
* Unique ids are unique, through history (even if an annotation gets deprecated,
its unique id cannot be reused to keep the stats sound).
-To do these tests, traffic_annotation_auditor binary runs over the whole
-repository and using a clang tool, checks if all above items are correct.
-Running the `traffic_annotation_auditor` requires exiting a compiled build
-directory and can be done with the following syntax.
-`tools/traffic_annotation/bin/[linux64/windows32/mac]/traffic_annotation_auditor
+### Presubmit tests
+To perform tests prior to submit, one can use the `traffic_annotation_auditor`
+binary. It runs over the whole repository and, using a clang tool, checks that
+all of the above items are correct.
+Running `traffic_annotation_auditor` requires a COMPLETE compiled build
+directory and can be done with the following syntax.
+`tools/traffic_annotation/bin/[linux64/win32]/traffic_annotation_auditor
--build-path=[out/Default]`
-If you are running the auditor on Windows, please refer to extra instructions in
-`tools/traffic_annotation/auditor/README.md`.
The latest executable of `traffic_annotation_auditor` for supported platforms
can be found in `tools/traffic_annotation/bin/[platform]`.
As this test is slow, it is not a mandatory step of the presubmit checks on
-clients, and one can run it manually. The test is done on trybots as a commit
-queue step.
+clients, and one can run it manually.
+
+### Waterfall tests
+Two commit queue trybots test traffic annotations on changed files using the
+scripts in `tools/traffic_annotation/scripts`. To run these tests faster and to
+avoid spamming the commit queue if an unforeseen error has happened in downstream
+scripts or tools, they are run in error resilient mode, only on changed files,
+and using heuristics to decide which files to process.
+An FYI bot runs more detailed tests on the whole repository and with different
+switches, to make sure that the heuristics that trybot tests use and the limited
+scope of tests have not neglected any issues.
## Annotations Review
-Network traffic annotations require review by privacy, enterprise, and legal
-teams. To shorten the process of review, only privacy review is a blocking step
-and review by the other two teams will be done after code submission.
-Privacy reviews are enforced through keeping a summary of annotations in
-`tools/traffic_annotation/summary/annotations.xml`, which is owned by privacy
-team. Once a new annotation is added, one is updated, or deleted, this file
+Network traffic annotations require review before landing in code and this is
+enforced through keeping a summary of annotations in
+`tools/traffic_annotation/summary/annotations.xml`.
+Once a new annotation is added, one is updated, or deleted, this file
should also be updated. To update the file automatically, one can run
-`traffic_annotation_auditor` as specified in above step. But if it is not
+`traffic_annotation_auditor` as specified in presubmit tests. But if it is not
possible to do so (e.g., if you are changing the code from an unsupported
platform or you don’t have a compiled build directory), the code can be
submitted to the trybot and the test on trybot will tell you the required
diff --git a/chromium/docs/optional.md b/chromium/docs/optional.md
index 14b3ed834d1..fee3a1a4bd9 100644
--- a/chromium/docs/optional.md
+++ b/chromium/docs/optional.md
@@ -109,7 +109,9 @@ undefined value when the expected value can't be negative.
It is recommended to not use `base::Optional<T>` as a function parameter as it
will force the callers to use `base::Optional<T>`. Instead, it is recommended to
keep using `T*` for arguments that can be omitted, with `nullptr` representing
-no value.
+no value. A helper, `base::OptionalOrNullptr`, is available in
+[stl_util.h](https://code.google.com/p/chromium/codesearch#chromium/src/base/stl_util.h)
+and can make it easier to convert `base::Optional<T>` to `T*`.
Furthermore, depending on `T`, MSVC might fail to compile code using
`base::Optional<T>` as a parameter because of memory alignment issues.
diff --git a/chromium/docs/origin_trials_integration.md b/chromium/docs/origin_trials_integration.md
index f5c1fb5f4b1..47f6f20dc78 100644
--- a/chromium/docs/origin_trials_integration.md
+++ b/chromium/docs/origin_trials_integration.md
@@ -134,7 +134,7 @@ as tests for script-added tokens. For examples, refer to the existing tests in
[chrome_origin_trial_policy.cc]: /chrome/common/origin_trials/chrome_origin_trial_policy.cc
[generate_token.py]: /tools/origin_trials/generate_token.py
[Developer Guide]: https://github.com/jpchase/OriginTrials/blob/gh-pages/developer-guide.md
-[OriginTrialEnabled]: /third_party/WebKit/Source/bindings/IDLExtendedAttributes.md#_OriginTrialEnabled_i_m_a_c_
+[OriginTrialEnabled]: /third_party/blink/renderer/bindings/IDLExtendedAttributes.md#_OriginTrialEnabled_i_m_a_c_
[origin_trials/webexposed]: /third_party/WebKit/LayoutTests/http/tests/origin_trials/webexposed/
-[runtime\_enabled\_features.json5]: /third_party/WebKit/Source/platform/runtime_enabled_features.json5
+[runtime\_enabled\_features.json5]: /third_party/blink/renderer/platform/runtime_enabled_features.json5
[trial_token_unittest.cc]: /content/common/origin_trials/trial_token_unittest.cc
diff --git a/chromium/docs/process/merge_request.md b/chromium/docs/process/merge_request.md
index 454322f5662..9ee67df5c15 100644
--- a/chromium/docs/process/merge_request.md
+++ b/chromium/docs/process/merge_request.md
@@ -63,9 +63,13 @@ Chrome TPMs.
**Phase 2: First Four Weeks of Beta Rollout**
-During the first four weeks of Beta, merges should only be requested if
-the bug is considered either release blocking or
-considered a high-impact regression.
+During the first four weeks of Beta, merges should only be requested if:
+
+* The bug is considered either release blocking or
+ considered a high-impact regression
+* The merge is related to a feature which (1) is entirely gated behind
+ a flag and (2) does not change user functionality in a substantial way
+ (e.g. minor tweaks and metrics code are OK, workflow changes are not)
Security bugs should be consulted with
[chrome-security@google.com](chrome-security@google.com) to
diff --git a/chromium/docs/security/mojo.md b/chromium/docs/security/mojo.md
index 01b481c794d..93c0a989e48 100644
--- a/chromium/docs/security/mojo.md
+++ b/chromium/docs/security/mojo.md
@@ -196,8 +196,8 @@ the browser process to:
Mojo interfaces often cross privilege boundaries. Having well-defined interfaces
that don't contain stubbed out methods or unused parameters makes it easier to
-understand and evaluate the implications of crossing these boundaries. Some
-common guidelines to follow are below.
+understand and evaluate the implications of crossing these boundaries. Several
+common areas to watch out for:
#### Do use EnableIf to guard platform-specific constructs
@@ -294,19 +294,74 @@ class SpaceshipPrototype : public mojom::Spaceship {
```
-#### Do not define unused enumerator values
+#### Do not define placeholder enumerator values
-Avoid the pattern of defining a `LAST` or `MAX` value. The `LAST` value is
-typically used in conjunction with legacy IPC macros to validate enums; this is
-not needed with Mojo enums, which are automatically validated.
+Do not define placeholder enumerator values like `kLast`, `kMax`, `kCount`, et
+cetera. Instead, rely on the autogenerated `kMaxValue` enumerator emitted for
+Mojo C++ bindings.
-The `MAX` value is typically used as an invalid sentinel value for UMA
-histograms: unfortunately, simply defining a `MAX` value in a Mojo enum will
-cause Mojo to treat it as valid. This forces all IPC handling to do manual
-checks that the semantically invalid `MAX` value isn't accidentally or
-maliciously passed around.
+For UMA histograms, logging a Mojo enum is simple: use the two-argument
+version of `UMA_HISTOGRAM_ENUMERATION`:
-> Improving UMA logging is tracked in <https://crbug.com/742517>.
+**_Good_**
+```c++
+// mojom definition:
+enum GoatStatus {
+ kHappy,
+ kSad,
+ kHungry,
+ kGoaty,
+};
+
+// C++:
+UMA_HISTOGRAM_ENUMERATION("Goat.Status", status);
+```
+
+Using a `kCount` sentinel complicates `switch` statements and makes it harder to
+enforce invariants: code needs to actively enforce that the otherwise invalid
+`kCount` sentinel value is not incorrectly passed around.
+
+**_Bad_**
+```c++
+// mojom definition:
+enum CatStatus {
+ kAloof,
+ kCount,
+};
+
+// C++
+switch (cat_status) {
+ case CatStatus::kAloof:
+ IgnoreHuman();
+ break;
+ case CatStatus::kCount:
+ // this should never happen
+}
+```
+
+Defining `kLast` manually results in ugly casts to perform arithmetic:
+
+**_Bad_**
+```c++
+// mojom definition:
+enum WhaleStatus {
+ kFail,
+ kNotFail,
+ kLast = kNotFail,
+};
+
+// C++:
+UMA_HISTOGRAM_ENUMERATION("Whale.Status", status,
+ static_cast<int>(WhaleStatus::kLast) + 1);
+```
+
+For interoperation with legacy IPC, also use `kMaxValue` rather than defining a
+custom `kLast`:
+
+**_Good_**
+```c++
+IPC_ENUM_TRAITS_MAX_VALUE(GoatStatus, GoatStatus::kMaxValue);
+```
### Use structured types
@@ -314,17 +369,17 @@ maliciously passed around.
Where possible, use structured types: this allows the type system to help
enforce that the input data is valid. Common ones to watch out for:
-* Files: use `mojo.common.mojom.File`, not raw descriptor types like `HANDLE`
+* Files: use `mojo_base.mojom.File`, not raw descriptor types like `HANDLE`
and `int`.
-* File paths: use `mojo.common.mojom.FilePath`, not `string`.
+* File paths: use `mojo_base.mojom.FilePath`, not `string`.
* JSON: use `mojo.common.mojom.Value`, not `string`.
* Mojo interfaces: use `Interface` or `Interface&`, not `handle` or
`handle<message_pipe>`.
-* Nonces: use `mojo.common.mojom.UnguessableToken`, not `string`.
+* Nonces: use `mojo_base.mojom.UnguessableToken`, not `string`.
* Origins: use `url.mojom.Origin`, not `url.mojom.Url` and certainly not
`string`.
-* Time types: use `mojo.common.mojom.TimeDelta` /
- `mojo.common.mojom.TimeTicks` / `mojo.common.mojom.Time`, not `int64` /
+* Time types: use `mojo_base.mojom.TimeDelta` /
+ `mojo_base.mojom.TimeTicks` / `mojo_base.mojom.Time`, not `int64` /
`uint64` / `double` / et cetera.
* URLs: use `url.mojom.Url`, not `string`.
@@ -332,7 +387,7 @@ enforce that the input data is valid. Common ones to watch out for:
```c++
interface ReportingService {
- ReportDeprecation(mojo.common.mojom.TimeTicks time,
+ ReportDeprecation(mojo_base.mojom.TimeTicks time,
url.mojom.Url resource,
uint32 line_number);
};
diff --git a/chromium/docs/security/sheriff.md b/chromium/docs/security/sheriff.md
index ba5a34c7755..925c1ace2e2 100644
--- a/chromium/docs/security/sheriff.md
+++ b/chromium/docs/security/sheriff.md
@@ -110,6 +110,7 @@ Browsing list:**
**Restrict-View-Google** label
* Change **Type-Bug-Security** label to **Type-Bug**
* Add the **Security** component
+ * See below for reporting URLs to SafeBrowsing
* **If the report is a potentially valid bug but is not a security vulnerability:**
* remove the **Restrict-View-SecurityTeam** label. If necessary, add one of the
other **Restrict-View-?** labels:
@@ -128,8 +129,8 @@ information, add the **Needs-Feedback** label and wait for 24 hours for a respon
#### Step 1. Reproduce legitimate-sounding issues.
-If you can't reproduce the issue, ask for help on IRC (#chrome-security), or
-find an area owner to help.
+If you can't reproduce the issue, ask for help on IRC (#chrome-security) or the
+Chrome Security chat, or find an area owner to help.
Tips for reproducing bugs:
@@ -188,17 +189,26 @@ Generally, see [the Security Labels document](security-labels.md).
**Ensure the comment adequately explains any status changes.** Severity,
milestone, and priority assignment generally require explanatory text.
-* Report suspected malicious URLs to SafeBrowsing.
+* Report suspected malicious URLs to SafeBrowsing:
* Public URL:
- [https://www.google.com/safebrowsing/report_badware/](https://www.google.com/safebrowsing/report_badware/)
- * Googlers: see instructions at [go/report-safe-browsing](go/report-safe-browsing)
-* Report suspected malicious file attachments to SafeBrowsing and VirusTotal.
+ [https://support.google.com/websearch/contact/safe_browsing](https://support.google.com/websearch/contact/safe_browsing)
+ * Googlers: see instructions at [go/safebrowsing-escalation](https://goto.google.com/safebrowsing-escalation)
+ * Report suspected malicious file attachments to SafeBrowsing and VirusTotal.
* Make sure the report is properly forwarded when the vulnerability is in an
upstream project, the OS, or some other dependency.
* For vulnerabilities in services Chrome uses (e.g. Omaha, Chrome Web Store,
SafeBrowsing), make sure the affected team is informed and has access to the
necessary bugs.
+##### Labeling For Chrome On iOS
+
+* Reproduce using an iOS device, desktop Safari, or [Browserstack](http://browserstack.com/)
+* Assign severity, impact, milestone, and component labels
+* CC Apple friends (if you don't know who they are, ping awhalley@)
+* Label **ExternalDependency**
+* File the bug at [bugs.webkit.org](https://bugs.webkit.org) or with
+ product-security@apple.com.
+
### Find An Owner To Fix The Bug
That owner can be you! Otherwise, this is one of the more grey areas of
diff --git a/chromium/docs/servicification.md b/chromium/docs/servicification.md
index 5e9854926f1..e57b35f33b8 100644
--- a/chromium/docs/servicification.md
+++ b/chromium/docs/servicification.md
@@ -73,11 +73,8 @@ question hinges on the nature and location of the code that you are converting:
- If you are looking to convert all or part of a component (i.e., a feature in
//components) into a service, the question arises of whether your new service
is worthy of being in //services (i.e., is it a foundational service?). If
- not, then it can be placed in an appropriate subdirectory of the component
- itself. See this [email
- thread](https://groups.google.com/a/chromium.org/forum/#!topic/services-dev/3AJx3gjHbZE) and its [resulting CL](https://codereview.chromium.org/2832633002)
- for discussion of this point, and if in doubt, start a similar email thread
- discussing your feature.
+ not, then it can be placed in //components/services. See this
+ [document](https://docs.google.com/document/d/1Zati5ZohwjUM0vz5qj6sWg5r-_I0iisUoSoAMNdd7C8/edit#) for discussion of this point.
### If your service is embedded in the browser process, what is its threading model?
diff --git a/chromium/docs/speed/README.md b/chromium/docs/speed/README.md
index c2ccfc8c0f9..5cd2d46141c 100644
--- a/chromium/docs/speed/README.md
+++ b/chromium/docs/speed/README.md
@@ -18,7 +18,7 @@
## Core Teams and Work
- * **[Speed tracks](speed_tracks.md)**: Most of the speed
+ * **[Speed tracks](speed_tracks.md)**: Most of the speed
work on Chrome is organized into these tracks.
* **[Chrome Speed Operations](chrome_speed_operations.md)**: provides the
benchmarks, infrastructure, and releasing oversight to track regressions.
@@ -27,9 +27,9 @@
* Benchmark-specific discussion: benchmarking-dev@chromium.org
<!--- TODO: Requests for new benchmarks: chrome-benchmarking-request mailing list link -->
* Performance dashboard, bisect, try jobs: speed-services-dev@chromium.org
- * **[Chrome Speed Metrics](https://docs.google.com/document/d/1wBT5fauGf8bqW2Wcg2A5Z-3_ZvgPhE8fbp1Xe6xfGRs/edit#heading=h.8ieoiiwdknwt)**: provides a set of high-quality metrics that represent real-world user experience, and exposes these metrics to both Chrome and Web Developers.
+ * **Chrome Speed Metrics**: provides a set of high-quality metrics that represent real-world user experience, and exposes these metrics to both Chrome and Web Developers.
* General discussion: progressive-web-metrics@chromium.org
- * The actual metrics: [tracking](https://docs.google.com/spreadsheets/d/1gY5hkKPp8RNVqmOw1d-bo-f9EXLqtq4wa3Z7Q8Ek9Tk/edit#gid=0)
+ * The actual metrics: [speed launch metrics survey.](https://docs.google.com/document/d/1Ww487ZskJ-xBmJGwPO-XPz_QcJvw-kSNffm0nPhVpj8/edit#heading=h.2uunmi119swk)
## For Googlers
diff --git a/chromium/docs/speed/benchmark/benchmark_ownership.md b/chromium/docs/speed/benchmark/benchmark_ownership.md
new file mode 100644
index 00000000000..1df0fa1bb91
--- /dev/null
+++ b/chromium/docs/speed/benchmark/benchmark_ownership.md
@@ -0,0 +1,37 @@
+# Updating a Benchmark Owner
+
+## Who is a benchmark owner?
+A benchmark owner is the main point of contact for a given benchmark. The owner is responsible for:
+
+- Triaging breakages and ensuring they are fixed
+- Following up when CL authors who regressed performance have questions about the benchmark and metric
+- Escalating when a regression is WontFix-ed and they disagree with the decision.
+
+There can be multiple owners of a benchmark, for example if there are multiple types of metrics, or if one owner drives metric development and another drives performance improvements in an area.
+
+## How do I update the owner on a benchmark?
+
+### Telemetry Benchmarks
+1. Open [`src/tools/perf/benchmarks/benchmark_name.py`](https://cs.chromium.org/chromium/src/tools/perf/benchmarks), where `benchmark_name` is the part of the benchmark before the “.”, like `smoothness` in `smoothness.top_25_smooth`.
+1. Find the class for the benchmark. It has a `Name` method that should match the full name of the benchmark.
+1. Add a `benchmark.Owner` decorator above the class.
+
+ Example:
+
+ ```
+ @benchmark.Owner(
+ emails=['owner1@chromium.org', 'owner2@samsung.com'],
+        component='GoatTeleporter>Performance')
+ ```
+
+ In this example, there are two owners for the benchmark, specified by email, and a bug component (we are working on getting the bug component automatically added to all perf regressions in Q2 2018).
+
+1. Run `tools/perf/generate_perf_data` to update `tools/perf/benchmarks.csv`.
+1. Upload the benchmark python file and `benchmarks.csv` to a CL for review. Please add any previous owners to the review.
+
+### C++ Perf Benchmarks
+1. Open [`src/tools/perf/core/perf_data_generator.py`](https://cs.chromium.org/chromium/src/tools/perf/core/perf_data_generator.py).
+1. Find the BenchmarkMetadata for the benchmark. It will be in a dictionary named `NON_TELEMETRY_BENCHMARKS` or `NON_WATERFALL_BENCHMARKS`.
+1. Update the email (first field of `BenchmarkMetadata`).
+1. Run `tools/perf/generate_perf_data` to update `tools/perf/benchmarks.csv`.
+1. Upload `perf_data_generator.py` and `benchmarks.csv` to a CL for review. Please add any previous owners to the review.
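As a rough illustration of steps 2 and 3, updating the owner amounts to replacing the first field of a metadata tuple. The field names and dictionary contents below are assumptions for this sketch; the real definitions live in `perf_data_generator.py` and may differ:

```python
import collections

# Hypothetical shape of the metadata table; the real definitions are in
# src/tools/perf/core/perf_data_generator.py and may differ.
BenchmarkMetadata = collections.namedtuple('BenchmarkMetadata',
                                           ['emails', 'component'])
NON_TELEMETRY_BENCHMARKS = {
    'components_perftests': BenchmarkMetadata('old_owner@chromium.org', None),
}

# Updating the owner: replace the first field (emails) of the entry.
old = NON_TELEMETRY_BENCHMARKS['components_perftests']
NON_TELEMETRY_BENCHMARKS['components_perftests'] = old._replace(
    emails='new_owner@chromium.org')
print(NON_TELEMETRY_BENCHMARKS['components_perftests'].emails)
```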
diff --git a/chromium/docs/speed/benchmark/benchmark_short_list.md b/chromium/docs/speed/benchmark/benchmark_short_list.md
new file mode 100644
index 00000000000..aeebf030853
--- /dev/null
+++ b/chromium/docs/speed/benchmark/benchmark_short_list.md
@@ -0,0 +1,29 @@
+# Chrome Speed Benchmark Shortlist
+
+Target audience: if you are a Chrome developer who wants to quickly gauge how
+your change would affect Chrome performance, then this doc is for you.
+
+> **Warning:** this doc just gives you a general reduced list of Chrome
+> benchmarks to try out if you don’t know where to start or just want some quick
+> feedback. There is no guarantee that if your change doesn’t regress these
+> benchmarks, it won’t regress any benchmarks or regress UMA metrics. We believe
+> it’s best to rely on [key UMA metrics](https://docs.google.com/document/d/1Ww487ZskJ-xBmJGwPO-XPz_QcJvw-kSNffm0nPhVpj8/edit#heading=h.2uunmi119swk)
+> to evaluate performance effects of any Chrome change.
+
+Here is the list of benchmarks that we recommend:
+
+Android:
+* system_health.common_mobile
+* system_health.memory_mobile
+* experimental.startup.android.coldish
+
+Desktop:
+* system_health.common_desktop
+* system_health.memory_desktop
+
+Both desktop & mobile:
+* speedometer2
+
+
+Instructions for how to run these benchmarks:
+* [Running them locally](https://github.com/catapult-project/catapult/blob/master/telemetry/docs/run_benchmarks_locally.md)
+* [Running on perf trybot](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/perf_trybots.md)
diff --git a/chromium/docs/speed/benchmark_harnesses/blink_perf.md b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
index 94f4fd8890b..94f4fd8890b 100644
--- a/chromium/docs/speed/benchmark_harnesses/blink_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
diff --git a/chromium/docs/speed/benchmark_harnesses/power_perf.md b/chromium/docs/speed/benchmark/harnesses/power_perf.md
index c7c4a172d98..c7c4a172d98 100644
--- a/chromium/docs/speed/benchmark_harnesses/power_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/power_perf.md
diff --git a/chromium/docs/speed/benchmark_harnesses/system_health.md b/chromium/docs/speed/benchmark/harnesses/system_health.md
index 5ec3d68a78a..5ec3d68a78a 100644
--- a/chromium/docs/speed/benchmark_harnesses/system_health.md
+++ b/chromium/docs/speed/benchmark/harnesses/system_health.md
diff --git a/chromium/docs/speed/benchmark_harnesses/webrtc_perf.md b/chromium/docs/speed/benchmark/harnesses/webrtc_perf.md
index e54dbee9037..e54dbee9037 100644
--- a/chromium/docs/speed/benchmark_harnesses/webrtc_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/webrtc_perf.md
diff --git a/chromium/docs/speed/benchmark/telemetry_device_setup.md b/chromium/docs/speed/benchmark/telemetry_device_setup.md
new file mode 100644
index 00000000000..7479ccbc73d
--- /dev/null
+++ b/chromium/docs/speed/benchmark/telemetry_device_setup.md
@@ -0,0 +1,71 @@
+# Setting up devices for Telemetry benchmarks
+
+[TOC]
+
+## Install extra python dependencies
+
+If you only use Telemetry through the `tools/perf/run_benchmark` script,
+`vpython` should automatically install all the required deps for you,
+e.g.:
+
+```
+$ tools/perf/run_benchmark --browser=system dummy_benchmark.noisy_benchmark_1
+```
+
+Otherwise have a look at the required catapult dependencies listed in the
+[.vpython](https://chromium.googlesource.com/chromium/src/+/master/.vpython)
+spec file.
+
+## Desktop benchmarks
+
+### Mac, Windows, Linux
+
+We support the most popular versions of these OSes (Mac 10.9+, Windows 7/8/10,
+Ubuntu Linux). In most cases, your desktop environment should be ready to
+run Telemetry benchmarks. To keep the benchmark results stable, it’s recommended
+that you kill as many background processes (e.g. antivirus software) as
+possible before running the benchmarks.
+
+### ChromeOS
+
+Virtual Machine: see
+[cros_vm.md doc](https://chromium.googlesource.com/chromiumos/docs/+/master/cros_vm.md)
+
+Physical CrOS device: please contact achuith@ or cywang@ from the CrOS team
+for advice.
+
+## Android benchmarks
+
+To run Telemetry Android benchmarks, you need a host machine and an Android
+device attached to the host machine through USB.
+
+> **WARNING:** it’s highly recommended that you don’t use your personal
+> Android device for this. Some of the steps below will wipe the phone
+> completely.
+
+**Host machine:** we only support Ubuntu Linux as the host OS.
+
+**Devices:** we only support rooted userdebug devices running Android
+versions K, L, M, N, or O.
+
+### Setting up the device
+* **Enable USB debugging:** Go to Settings -> About Phone and tap the Build
+number field several times; this enables Developer options. Open Developer
+options and enable USB debugging.
+* For Googler only: follow instructions to
+ [flash your device to a userdebug build.](http://go/flash-device)
+* Although not strictly necessary, to make your device behave similarly to
+  the devices on the bots, it is recommended to run:
+ ```
+ export CATAPULT=$CHROMIUM_SRC/third_party/catapult
+ $CATAPULT/devil/devil/android/tools/provision_devices.py --disable-network --disable-java-debug
+ ```
+ If you are planning to test WebView on Android M or lower also add
+ `--remove-system-webview` to this command, otherwise Telemetry will
+ have trouble installing the required APKs. This should take care of
+ everything, but see [build instructions for WebView](https://www.chromium.org/developers/how-tos/build-instructions-android-webview)
+ if you run into problems.
+* Finally, use one of the supported [`--browser` types](https://github.com/catapult-project/catapult/blob/d5b0db081b74c717effa1080ca06c4f679136b73/telemetry/telemetry/internal/backends/android_browser_backend_settings.py#L150)
+ on your
+ [`run_benchmark`](https://cs.chromium.org/chromium/src/tools/perf/run_benchmark)
+ or [`run_tests`](https://cs.chromium.org/chromium/src/tools/perf/run_tests)
+ command.
diff --git a/chromium/docs/speed/how_does_chrome_measure_performance.md b/chromium/docs/speed/how_does_chrome_measure_performance.md
index 5f93a6e6d96..4d5d925e2a5 100644
--- a/chromium/docs/speed/how_does_chrome_measure_performance.md
+++ b/chromium/docs/speed/how_does_chrome_measure_performance.md
@@ -61,3 +61,9 @@ The **[Speed Launch Metrics](https://docs.google.com/document/d/1Ww487ZskJ-xBmJG
doc explains metrics available in UMA for end user performance. If you want to
test how your change impacts these metrics for end users, you'll probably want
to **[Run a Finch Trial](http://goto.google.com/finch101)**.
+
+The **[UMA Sampling Profiler (Googlers only)](http://goto.google.com/uma-sampling-profiler-overview)**
+measures Chrome execution using statistical profiling, producing aggregate
+execution profiles across the function call tree. The profiler is useful for
+understanding how your code performs for end users, and the precise performance
+impact of code changes.
diff --git a/chromium/docs/speed/perf_bot_sheriffing.md b/chromium/docs/speed/perf_bot_sheriffing.md
index 303f5fc1358..ef703ed9e62 100644
--- a/chromium/docs/speed/perf_bot_sheriffing.md
+++ b/chromium/docs/speed/perf_bot_sheriffing.md
@@ -351,9 +351,11 @@ linked to multiple bugs, an entry for each bug is needed.
A list of supported conditions can be found [here](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/story/expectations.py).
-Multiple conditions when listed in a single expectation are treated as logical
-_AND_ so a platform must meet all conditions to be disabled. Each failing
-platform requires its own expectations entry.
+When multiple conditions are listed in a single expectation they are treated
+as a logical _AND_, so the platform must meet _all_ conditions to be disabled.
+
+If you need to disable a single story on multiple different platforms, you must
+list each platform separately on its own line as a separate entry.
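The AND semantics above can be sketched with a toy model. This is not the real expectations parser in catapult, just an illustration of how conditions within a single entry combine:

```python
# Toy model of expectation matching: within one entry, every condition must
# hold on the current platform for the story to be disabled (logical AND).
def is_disabled(platform_tags, entry_conditions):
    return all(cond in platform_tags for cond in entry_conditions)

# An entry with two conditions only disables the story where both apply.
entry = ['android', 'android-webview']
print(is_disabled({'android', 'android-webview'}, entry))  # True
print(is_disabled({'android'}, entry))                     # False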
To determine which stories are failing in a given run, go to the buildbot page
for that run and search for `Unexpected failures` in the failing test entry.
diff --git a/chromium/docs/sublime_ide.md b/chromium/docs/sublime_ide.md
index 77db8b8a391..1e692d3f150 100644
--- a/chromium/docs/sublime_ide.md
+++ b/chromium/docs/sublime_ide.md
@@ -300,6 +300,8 @@ resource directory instead of that supplied by SublimeClang.
cd SublimeClang
# Copy libclang.so to the internals dir
cp /usr/lib/llvm-3.9/lib/libclang.so.1 internals/libclang.so
+ # Fix src/main.cpp (shared_ptr -> std::shared_ptr)
+ sed -i -- 's/shared_ptr/std::shared_ptr/g' src/main.cpp
# Make the project - should be really quick, since libclang.so is already built
cd src && mkdir build && cd build
cmake ..
@@ -309,7 +311,7 @@ resource directory instead of that supplied by SublimeClang.
1. Edit your project file `Project > Edit Project` to call the script above
(replace `/path/to/depot_tools` with your depot_tools directory):
- ```
+ ```json
{
"folders":
[
@@ -330,6 +332,16 @@ resource directory instead of that supplied by SublimeClang.
true. This way you use the resource directory we set instead of the ancient
ones included in the repository. Without this you won't have C++14 support.
+1. (Optional) To remove errors that sometimes show up from importing out of
+ third_party, edit your SublimeClang settings and set:
+
+ ```json
+ "diagnostic_ignore_dirs":
+ [
+ "${project_path}/src/third_party/"
+ ],
+ ```
+
1. Restart Sublime. Now when you save a file, you should see a "Reparsing…"
message in the footer and errors will show up in the output panel. Also,
variables and function definitions should auto-complete as you type.
@@ -338,6 +350,20 @@ resource directory instead of that supplied by SublimeClang.
your settings file will print more to the console (accessed with ``Ctrl + ` ``)
which can be helpful when debugging.
+**Debugging:** If things don't seem to be working, the console ``Ctrl + ` `` is
+your friend. Here are some basic errors which have workarounds:
+
+1. Bad Libclang args
+ - *problem:* ```tu is None...``` is showing up repeatedly in the console:
+ - *solution:* ninja_options_script.py is generating arguments that libclang
+ can't parse properly. To fix this, make sure to
+ ```export CHROMIUM_OUT_DIR="{Default Out Directory}"```
+ This is because the ninja_options_script.py file will use the most recently
+ modified build directory unless specified to do otherwise. If the chosen
+ build directory has unusual args (say for thread sanitization), libclang may
+ fail.
+
+
### Mac (not working)
1. Install cmake if you don't already have it
@@ -427,27 +453,6 @@ about it with this command: Windows: `git config --global core.excludesfile
%USERPROFILE%\.gitignore` Mac, Linux: `git config --global core.excludesfile
~/.gitignore`
-### Build a single file
-Copy the file `compile_current_file.py` to your Packages directory:
-
-```shell
-cd /path/to/chromium/src
-cp tools/sublime/compile_current_file.py ~/.config/sublime-text-3/Packages/User
-```
-
-This will give you access to a command `"compile_current_file"`, which you can
-then add to your `Preferences > Keybindings - User` file:
-
-```json
-[
- { "keys": ["ctrl+f7"], "command": "compile_current_file", "args": {"target_build": "Debug"} },
- { "keys": ["ctrl+shift+f7"], "command": "compile_current_file", "args": {"target_build": "Release"} },
-]
-```
-
-You can then press those key combinations to compile the current file in the
-given target build.
-
## Building inside Sublime
To build inside Sublime Text, we first have to create a new build system.
@@ -481,7 +486,7 @@ If you're using goma, add the -j parameter (replace out/Debug with your out dire
"cmd": ["ninja", "-j", "1000", "-C", "out/Debug", "chrome"],
```
-**Regex explanation:** Aims to capture these these error formats while respecting
+**Regex explanation:** Aims to capture these error formats while respecting
[Sublime's perl-like group matching](http://docs.sublimetext.info/en/latest/reference/build_systems/configuration.html#build-capture-error-output):
1. `d:\src\chrome\src\base\threading\sequenced_worker_pool.cc(670): error
@@ -523,6 +528,10 @@ build targets with `Ctrl + Shift + B`:
"name": "Browser Tests",
"cmd": ["ninja", "-j", "1000", "-C", "out/Debug", "browser_tests"],
},
+ {
+ "name": "Current file",
+ "cmd": ["compile_single_file", "--build-dir", "out/Debug", "--file-path", "$file"],
+ },
]
```
diff --git a/chromium/docs/sync/model_api.md b/chromium/docs/sync/model_api.md
index 67ff6e3c936..4d193695c5d 100644
--- a/chromium/docs/sync/model_api.md
+++ b/chromium/docs/sync/model_api.md
@@ -193,7 +193,7 @@ base::Optional<ModelError> DeviceInfoSyncBridge::ApplySyncChanges(
}
}
- batch->TransferMetadataChanges(std::move(metadata_change_list));
+ batch->TakeMetadataChangesFrom(std::move(metadata_change_list));
store_->CommitWriteBatch(std::move(batch), base::Bind(...));
NotifyModelOfChanges();
return {};
diff --git a/chromium/docs/sync/uss/shared_model_type_processor.md b/chromium/docs/sync/uss/client_tag_based_model_type_processor.md
index 46bc5a1114e..bcadfc9a40a 100644
--- a/chromium/docs/sync/uss/shared_model_type_processor.md
+++ b/chromium/docs/sync/uss/client_tag_based_model_type_processor.md
@@ -1,6 +1,6 @@
-# SharedModelTypeProcessor
+# ClientTagBasedModelTypeProcessor
-The [`SharedModelTypeProcessor`][SMTP] is a crucial piece of the USS codepath.
+The [`ClientTagBasedModelTypeProcessor`][SMTP] is a crucial piece of the USS codepath.
It lives on the model thread and performs the tracking of sync metadata for the
[`ModelTypeSyncBridge`][MTSB] that owns it by implementing the
[`ModelTypeChangeProcessor`][MTCP] interface, as well as sending commit requests
@@ -8,7 +8,7 @@ to the [`ModelTypeWorker`][MTW] on the sync thread via the [`CommitQueue`][CQ]
interface and receiving updates from the same worker via the
[`ModelTypeProcessor`][MTP] interface.
-[SMTP]: https://cs.chromium.org/chromium/src/components/sync/model_impl/shared_model_type_processor.h
+[SMTP]: https://cs.chromium.org/chromium/src/components/sync/model_impl/client_tag_based_model_type_processor.h
[MTSB]: https://cs.chromium.org/chromium/src/components/sync/model/model_type_sync_bridge.h
[MTCP]: https://cs.chromium.org/chromium/src/components/sync/model/model_type_change_processor.h
[MTW]: https://cs.chromium.org/chromium/src/components/sync/engine_impl/model_type_worker.h
diff --git a/chromium/docs/testing/writing_layout_tests.md b/chromium/docs/testing/writing_layout_tests.md
index d8e6f8ee112..a000b913e7f 100644
--- a/chromium/docs/testing/writing_layout_tests.md
+++ b/chromium/docs/testing/writing_layout_tests.md
@@ -21,7 +21,7 @@ Layout tests should be used to accomplish one of the following goals:
get better. This is very much in line with our goal to move the Web forward.
2. When a Blink feature cannot be tested using the tools provided by WPT, and
cannot be easily covered by
- [C++ unit tests](https://cs.chromium.org/chromium/src/third_party/WebKit/Source/web/tests/?q=webframetest&sq=package:chromium&type=cs),
+ [C++ unit tests](https://cs.chromium.org/chromium/src/third_party/blink/renderer/web/tests/?q=webframetest&sq=package:chromium&type=cs),
the feature must be covered by layout tests, to avoid unexpected regressions.
These tests will use Blink-specific testing APIs that are only available in
[content_shell](./layout_tests_in_content_shell.md).
diff --git a/chromium/docs/threading_and_tasks.md b/chromium/docs/threading_and_tasks.md
index 13d06c9a62b..e1ff61479f8 100644
--- a/chromium/docs/threading_and_tasks.md
+++ b/chromium/docs/threading_and_tasks.md
@@ -596,28 +596,6 @@ TEST(MyTest, MyTest) {
}
```
-## Legacy Post Task APIs
-
-The Chrome browser process has a few legacy named threads (aka
-“BrowserThreads”). Each of these threads runs a specific type of task (e.g. the
-`FILE` thread handles low priority file operations, the `FILE_USER_BLOCKING`
-thread handles high priority file operations, the `CACHE` thread handles cache
-operations…). Usage of these named threads is now discouraged. New code should
-post tasks to task scheduler via
-[`base/task_scheduler/post_task.h`](https://cs.chromium.org/chromium/src/base/task_scheduler/post_task.h)
-instead.
-
-If for some reason you absolutely need to post a task to a legacy named thread
-(e.g. because it needs mutual exclusion with a task running on one of these
-threads), this is how you do it:
-
-```cpp
-content::BrowserThread::GetTaskRunnerForThread(content::BrowserThread::[IDENTIFIER])
- ->PostTask(FROM_HERE, base::BindOnce(&Task));
-```
-
-Where `IDENTIFIER` is one of: `DB`, `FILE`, `FILE_USER_BLOCKING`, `PROCESS_LAUNCHER`, `CACHE`.
-
## Using TaskScheduler in a New Process
TaskScheduler needs to be initialized in a process before the functions in
diff --git a/chromium/docs/user_handle_mapping.md b/chromium/docs/user_handle_mapping.md
index 9c07ca38db7..2918c2d07f5 100644
--- a/chromium/docs/user_handle_mapping.md
+++ b/chromium/docs/user_handle_mapping.md
@@ -63,6 +63,7 @@ For Chromium contributors that have different nicks on other domains.
| levin | dave\_levin | levin |
| lfg | lfg\_ | lfg |
| littledan | littledan | dehrenberg |
+| loonybear | loonybear | lunalu |
| luken | luken_chromium | luken |
| mark | markmentovai | mmentovai |
| mbarbella | mbarbella | mbarbella |
diff --git a/chromium/docs/vscode.md b/chromium/docs/vscode.md
index cda5fcc4753..bc3dc0a24f8 100644
--- a/chromium/docs/vscode.md
+++ b/chromium/docs/vscode.md
@@ -352,6 +352,28 @@ You might have to adjust the commands to your situation and needs.
"file": 1, "severity": 3, "message": 4
}
}]
+ },
+ {
+ "taskName": "6-build_current_file",
+ "command": "compile_single_file --build-dir=out/Debug --file-path=${file}",
+ "isShellCommand": true,
+ "problemMatcher": [
+ {
+ "owner": "cpp",
+ "fileLocation": ["relative", "${workspaceRoot}"],
+ "pattern": {
+ "regexp": "^../../(.*):(\\d+):(\\d+):\\s+(warning|\\w*\\s?error):\\s+(.*)$",
+ "file": 1, "line": 2, "column": 3, "severity": 4, "message": 5
+ }
+ },
+ {
+ "owner": "cpp",
+ "fileLocation": ["relative", "${workspaceRoot}"],
+ "pattern": {
+ "regexp": "^../../(.*?):(.*):\\s+(warning|\\w*\\s?error):\\s+(.*)$",
+ "file": 1, "severity": 3, "message": 4
+ }
+ }]
}]
}
```
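To see what the first problemMatcher pattern above captures, here is essentially the same regex checked in Python against a sample clang-style diagnostic (the sample line is made up for illustration):

```python
import re

# The first problemMatcher regex from the task above; compiler paths are
# emitted relative to the out directory, hence the leading ../../.
pattern = re.compile(
    r'^\.\./\.\./(.*):(\d+):(\d+):\s+(warning|\w*\s?error):\s+(.*)$')

line = '../../base/logging.cc:42:7: error: expected ; after expression'
m = pattern.match(line)
# Groups map to file, line, column, severity, message.
print(m.group(1), m.group(2), m.group(3), m.group(4))
```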
@@ -502,18 +524,19 @@ Here are some key bindings that are likely to be useful for you:
#### The `out` folder
Automatically generated code is put into a subfolder of out/, which means that
these files are ignored by VS Code (see files.exclude above) and cannot be
-opened e.g. from quick-open (`Ctrl+P`). On Linux, you can create a symlink as a
-work-around:
-```
- cd ~/chromium/src
- mkdir _out
- ln -s ../out/Debug/gen _out/gen
-```
-We picked _out since it is already in .gitignore, so it won't show up in git
-status.
-
-Note: As of version 1.9, VS Code does not support negated glob commands, but
-once it does, you can use
+opened e.g. from quick-open (`Ctrl+P`).
+As of version 1.21, VS Code does not support negated glob patterns, but you
+can define a set of exclude patterns that leaves only out/Debug/gen visible:
+
+```json
+"files.exclude": {
+  // Ignore build output folders, except out/Debug/gen/.
+  "out/[^D]*/": true,
+  "out/Debug/[^g]*": true,
+  "out/Debug/g[^e]*": true,
+  "out_*/**": true,
+},
+```
+
+Once negated globs are supported, you can use
```
"!out/Debug/gen/**": true
```
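As a sanity check on those exclude patterns, here is an approximation using Python's `fnmatch`. Note this is only a sketch of the intent: fnmatch spells the negated character class `[!D]` where VS Code uses `[^D]`, and its `*` also crosses `/`:

```python
import fnmatch

# fnmatch-style equivalents of the VS Code exclude patterns above.
excludes = ['out/[!D]*', 'out/Debug/[!g]*', 'out/Debug/g[!e]*', 'out_*/*']

def is_excluded(path):
    return any(fnmatch.fnmatch(path, pat) for pat in excludes)

paths = ['out/Release/chrome', 'out/Debug/obj/foo.o', 'out/Debug/gen/bar.h']
# Only paths under out/Debug/gen/ survive the filter.
print([p for p in paths if not is_excluded(p)])  # -> ['out/Debug/gen/bar.h']
```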
diff --git a/chromium/docs/webui_explainer.md b/chromium/docs/webui_explainer.md
index 1244741f152..b9bb5a7205b 100644
--- a/chromium/docs/webui_explainer.md
+++ b/chromium/docs/webui_explainer.md
@@ -138,7 +138,7 @@ class DonutsUI : public content::WebUIController {
content::WebUIDataSource::Add(source);
// Handles messages from JavaScript to C++ via chrome.send().
- web_ui->AddMessageHandler(base::MakeUnique<OvenHandler>());
+ web_ui->AddMessageHandler(std::make_unique<OvenHandler>());
}
};
```
diff --git a/chromium/docs/webui_in_components.md b/chromium/docs/webui_in_components.md
index b808bd22f2d..b06c0c165ff 100644
--- a/chromium/docs/webui_in_components.md
+++ b/chromium/docs/webui_in_components.md
@@ -241,8 +241,8 @@ You probably want your new WebUI page to be able to do something or get informat
+
+ // Register callback handler.
+ RegisterMessageCallback("addNumbers",
-+ base::Bind(&HelloWorldUI::AddNumbers,
-+ base::Unretained(this)));
++ base::BindRepeating(&HelloWorldUI::AddNumbers,
++ base::Unretained(this)));
// Localized strings.
...
diff --git a/chromium/docs/win_cross.md b/chromium/docs/win_cross.md
index 2ba25add7ba..c2639740675 100644
--- a/chromium/docs/win_cross.md
+++ b/chromium/docs/win_cross.md
@@ -62,9 +62,9 @@ to the Windows box and run it to install the chrome you just built.
You can run the Windows binaries you built on swarming, like so:
- tools/run_swarmed.py -C out/gnwin -t base_unittests [ --gtest_filter=... ]
+ tools/run-swarmed.py -C out/gnwin -t base_unittests [ --gtest_filter=... ]
-See the contents of run_swarmed.py for how to do this manually.
+See the contents of run-swarmed.py for how to do this manually.
There's a bot doing 64-bit release cross builds at
https://ci.chromium.org/buildbot/chromium.clang/linux-win_cross-rel/
diff --git a/chromium/docs/windows_build_instructions.md b/chromium/docs/windows_build_instructions.md
index 1b5c389e474..40a620ffc99 100644
--- a/chromium/docs/windows_build_instructions.md
+++ b/chromium/docs/windows_build_instructions.md
@@ -23,25 +23,28 @@ Are you a Google employee? See
### Visual Studio
-As of September, 2017 (R503915) Chromium requires Visual Studio 2017 update 3.2
-with the 15063 (Creators Update) Windows SDK or later to build. Visual Studio
-Community Edition should work if its license is appropriate for you. You must
-install the "Desktop development with C++" component and the "MFC and ATL
-support" sub-component. This can be done from the command line by passing these
-arguments to the Visual Studio installer that you download:
+As of September, 2017 (R503915) Chromium requires Visual Studio 2017 update 3.x
+to build. The clang-cl compiler is used but Visual Studio's header files,
+libraries, and some tools are required. Visual Studio Community Edition should
+work if its license is appropriate for you. You must install the "Desktop
+development with C++" component and the "MFC and ATL support" sub-component.
+This can be done from the command line by passing these arguments to the Visual
+Studio installer that you download:
```shell
--add Microsoft.VisualStudio.Workload.NativeDesktop
--add Microsoft.VisualStudio.Component.VC.ATLMFC --includeRecommended
```
-You must have the Windows 10 SDK installed, version 10.0.15063 or later.
-The 10.0.15063 SDK initially had errors but the 10.0.15063.468 version works
-well. Most of this will be installed by Visual Studio.
-If the Windows 10 SDK was installed via the Visual Studio installer, the Debugging
-Tools can be installed by going to: Control Panel → Programs →
-Programs and Features → Select the "Windows Software Development Kit" →
-Change → Change → Check "Debugging Tools For Windows" → Change. Or, you can
-download the standalone SDK installer and use it to install the Debugging Tools.
+You must have the version 10.0.15063 Windows 10 SDK installed. This can be
+installed separately or by checking the appropriate box in the Visual Studio
+Installer.
+
+The SDK Debugging Tools must also be installed. If the Windows 10 SDK was
+installed via the Visual Studio installer, then they can be installed by going
+to: Control Panel → Programs → Programs and Features → Select the "Windows
+Software Development Kit" → Change → Change → Check "Debugging Tools For
+Windows" → Change. Or, you can download the standalone SDK installer and use it
+to install the Debugging Tools.
## Install `depot_tools`
@@ -175,17 +178,37 @@ IDE files up to date automatically when you build.
The generated solution will contain several thousand projects and will be very
slow to load. Use the `--filters` argument to restrict generating project files
-for only the code you're interested in, although this will also limit what
-files appear in the project explorer. A minimal solution that will let you
-compile and run Chrome in the IDE but will not show any source files is:
+for only the code you're interested in. Although this will also limit what
+files appear in the project explorer, debugging will still work and you can
+set breakpoints in files that you open manually. A minimal solution that will
+let you compile and run Chrome in the IDE but will not show any source files
+is:
```
-$ gn gen --ide=vs --filters=//chrome out\Default
+$ gn gen --ide=vs --filters=//chrome --no-deps out\Default
```
+You can selectively add other directories you care about to the filter like so:
+`--filters=//chrome;//third_party/WebKit/*;//gpu/*`.
+
There are other options for controlling how the solution is generated, run `gn
help gen` for the current documentation.
+By default when you start debugging in Visual Studio the debugger will only
+attach to the main browser process. To debug all of Chrome, install
+[Microsoft's Child Process Debugging Power Tool](https://blogs.msdn.microsoft.com/devops/2014/11/24/introducing-the-child-process-debugging-power-tool/).
+You will also need to run Visual Studio as administrator, or it will silently
+fail to attach to some of Chrome's child processes.
+
+It is also possible to debug and develop Chrome in Visual Studio without a
+solution file. Simply "open" your chrome.exe binary with
+`File->Open->Project/Solution`, or from a Visual Studio command prompt like
+so: `devenv /debugexe out\Debug\chrome.exe <your arguments>`. Many of Visual
+Studio's code editing features will not work in this configuration, but by
+installing the [VsChromium Visual Studio Extension](https://chromium.github.io/vs-chromium/)
+you can get the source code to appear in the solution explorer window along
+with other useful features such as code search.
+
### Faster builds
* Reduce file system overhead by excluding build directories from
@@ -199,7 +222,7 @@ in the editor that appears when you create your output directory
(`gn args out/Default`) or on the gn gen command line
(`gn gen out/Default --args="is_component_build = true is_debug = true"`).
Some helpful settings to consider using include:
-* `use_jumbo_build = true` - *Experimental* [Jumbo/unity](jumbo.md) builds.
+* `use_jumbo_build = true` - *experimental* [Jumbo/unity](jumbo.md) builds.
* `is_component_build = true` - this uses more, smaller DLLs, and incremental
linking.
* `enable_nacl = false` - this disables Native Client which is usually not
@@ -210,31 +233,113 @@ don't' set enable_nacl = false then build times may get worse.
* `remove_webcore_debug_symbols = true` - turn off source-level debugging for
blink to reduce build times, appropriate if you don't plan to debug blink.
-In addition, Google employees should consider using goma, a distributed
-compilation system. Detailed information is available internally but the
-relevant gn args are:
+In order to ensure that linking is fast enough we recommend that you use one of
+these settings - they all have tradeoffs:
+* `use_lld = true` - this linker is very fast on full links but does not support
+incremental linking.
+* `is_win_fastlink = true` - this option makes the Visual Studio linker run much
+faster, and incremental linking is supported, but it can lead to debugger
+slowdowns or out-of-memory crashes.
+* `symbol_level = 1` - this option reduces the work the linker has to do but
+when this option is set you cannot do source-level debugging.
+
+In addition, Google employees should use goma, a distributed compilation system.
+Detailed information is available internally but the relevant gn arg is:
* `use_goma = true`
-* `symbol_level = 2` - by default goma builds change symbol_level from 2 to 1
-which disables source-level debugging. This turns it back on. This actually
-makes builds slower, but it makes goma more usable.
-* `is_win_fastlink = true` - this is required if you have goma enabled and
-symbol_level set to 2.
-
-Note that debugging of is_win_fastlink built binaries is unreliable prior to
-VS 2017 Update 3 and may crash Visual Studio.
To get any benefit from goma it is important to pass a large -j value to ninja.
-A good default is 10\*numCores to 20\*numCores. If you run autoninja.bat then it
-will pass an appropriate -j value to ninja for goma or not, automatically.
+A good default is 10\*numCores to 20\*numCores. If you run autoninja then it
+will automatically pass an appropriate -j value to ninja for goma or not.
+
+```shell
+$ autoninja -C out\Default chrome
+```
When invoking ninja specify 'chrome' as the target to avoid building all test
binaries as well.
Still, builds will take many hours on many machines.
+### Why is my build slow?
+
+Many things can make builds slow, with Windows Defender slowing process startups
+being a frequent culprit. Have you ensured that the entire Chromium src
+directory is excluded from antivirus scanning (on Google machines this means
+putting it in a ``src`` directory in the root of a drive)? Have you tried the
+different settings listed above, including different link settings and -j
+values? Have you asked on the chromium-dev mailing list to see if your build is
+slower than expected for your machine's specifications?
+
+The next step is to gather some data. There are several options. Setting
+[NINJA_STATUS](https://ninja-build.org/manual.html#_environment_variables) lets
+you configure Ninja's output so that, for instance, you can see how many
+processes are running at any given time, how long the build has been running,
+etc., as shown here:
+
+```shell
+$ set NINJA_STATUS=[%r processes, %f/%t @ %o/s : %es ]
+$ autoninja -C out\Default base
+ninja: Entering directory `out\Default'
+[1 processes, 86/86 @ 2.7/s : 31.785s ] LINK(DLL) base.dll base.dll.lib base.dll.pdb
+```
+
+In addition, if you set the ``NINJA_SUMMARIZE_BUILD`` environment variable to 1 then
+autoninja will print a build performance summary when the build completes,
+showing the slowest build steps and build-step types, as shown here:
+
+```shell
+$ set NINJA_SUMMARIZE_BUILD=1
+$ autoninja -C out\Default base
+ Longest build steps:
+...
+ 1.2 weighted s to build base.dll, base.dll.lib, base.dll.pdb (1.2 s CPU time)
+ 8.5 weighted s to build obj/base/base/base_jumbo_38.obj (30.1 s CPU time)
+ Time by build-step type:
+...
+ 1.2 s weighted time to generate 1 PEFile (linking) files (1.2 s CPU time)
+ 30.3 s weighted time to generate 45 .obj files (688.8 s CPU time)
+ 31.8 s weighted time (693.8 s CPU time, 21.8x parallelism)
+ 86 build steps completed, average of 2.71/s
+```
+
+You can also generate these reports by manually running the script after a build:
+
+```shell
+$ python depot_tools\post_build_ninja_summary.py -C out\Default
+```
+
+You can also get a visual report of the build performance with
+[ninjatracing](https://github.com/nico/ninjatracing). This converts the
+.ninja_log file into a .json file which can be loaded into chrome://tracing:
+
+```shell
+$ python ninjatracing out\Default\.ninja_log >build.json
+```
+
+Finally, Ninja can report on its own overhead which can be helpful if, for
+instance, process creation is making builds slow, perhaps due to antivirus
+interference due to clang-cl not being in an excluded directory:
+
+```shell
+$ autoninja -d stats -C out\Default base
+metric count avg (us) total (ms)
+.ninja parse 3555 1539.4 5472.6
+canonicalize str 1383032 0.0 12.7
+canonicalize path 1402349 0.0 11.2
+lookup node 1398245 0.0 8.1
+.ninja_log load 2 118.0 0.2
+.ninja_deps load 2 67.5 0.1
+node stat 2516 29.6 74.4
+depfile load 2 1132.0 2.3
+StartEdge 88 3508.1 308.7
+FinishCommand 87 1670.9 145.4
+CLParser::Parse 45 1889.1 85.0
+```
+
## Build Chromium
-Build Chromium (the "chrome" target) with Ninja using the command:
+Build Chromium (the "chrome" target) with Ninja (or autoninja) using the
+command:
```shell
$ ninja -C out\Default chrome