author    Allan Sandfeld Jensen <allan.jensen@qt.io> 2018-08-28 15:28:34 +0200
committer Allan Sandfeld Jensen <allan.jensen@qt.io> 2018-08-28 13:54:51 +0000
commit    2a19c63448c84c1805fb1a585c3651318bb86ca7 (patch)
tree      eb17888e8531aa6ee5e85721bd553b832a7e5156 /chromium/docs
parent    b014812705fc80bff0a5c120dfcef88f349816dc (diff)
download  qtwebengine-chromium-2a19c63448c84c1805fb1a585c3651318bb86ca7.tar.gz
BASELINE: Update Chromium to 69.0.3497.70
Change-Id: I2b7b56e4e7a8b26656930def0d4575dc32b900a0
Reviewed-by: Allan Sandfeld Jensen <allan.jensen@qt.io>
Diffstat (limited to 'chromium/docs')
-rw-r--r-- chromium/docs/android_build_instructions.md | 8
-rw-r--r-- chromium/docs/android_emulator.md | 65
-rw-r--r-- chromium/docs/android_studio.md | 34
-rw-r--r-- chromium/docs/android_test_instructions.md | 64
-rw-r--r-- chromium/docs/chromoting_android_hacking.md | 9
-rw-r--r-- chromium/docs/clang_tool_refactoring.md | 6
-rw-r--r-- chromium/docs/clion_dev.md | 118
-rw-r--r-- chromium/docs/closure_compilation.md | 98
-rw-r--r-- chromium/docs/fuchsia_build_instructions.md | 2
-rw-r--r-- chromium/docs/gpu/debugging_gpu_related_code.md | 2
-rw-r--r-- chromium/docs/gpu/gpu_testing.md | 94
-rw-r--r-- chromium/docs/gpu/gpu_testing_bot_details.md | 70
-rw-r--r-- chromium/docs/gpu/pixel_wrangling.md | 8
-rw-r--r-- chromium/docs/gpu/sync_token_internals.md | 132
-rw-r--r-- chromium/docs/infra/new_builder.md | 340
-rw-r--r-- chromium/docs/jumbo.md | 2
-rw-r--r-- chromium/docs/linux_chromium_packages.md | 2
-rw-r--r-- chromium/docs/linux_debugging.md | 2
-rw-r--r-- chromium/docs/linux_minidump_to_core.md | 49
-rw-r--r-- chromium/docs/linux_running_asan_tests.md | 13
-rw-r--r-- chromium/docs/linux_sandbox_ipc.md | 34
-rw-r--r-- chromium/docs/mac_build_instructions.md | 10
-rw-r--r-- chromium/docs/media/profile-screenshot.png | bin 0 -> 228014 bytes
-rw-r--r-- chromium/docs/memory/README.md | 10
-rw-r--r-- chromium/docs/mojo_guide.md | 157
-rw-r--r-- chromium/docs/ozone_drm_for_linux.md | 96
-rw-r--r-- chromium/docs/privacy/OWNERS | 6
-rw-r--r-- chromium/docs/privacy/nonsecure-cookies.md | 6
-rw-r--r-- chromium/docs/profiling.md | 152
-rw-r--r-- chromium/docs/security/OWNERS | 1
-rw-r--r-- chromium/docs/security/faq.md | 20
-rw-r--r-- chromium/docs/security/sheriff.md | 1
-rw-r--r-- chromium/docs/security/side-channel-threat-model.md | 374
-rw-r--r-- chromium/docs/speed/addressing_performance_regressions.md | 4
-rw-r--r-- chromium/docs/speed/apk_size_regressions.md | 26
-rw-r--r-- chromium/docs/speed/benchmark/benchmark_ownership.md | 11
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/blink_perf.md | 4
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/loading.md | 101
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/power_perf.md | 91
-rw-r--r-- chromium/docs/speed/benchmark/harnesses/rendering.md | 94
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/how_to_access_test_logs.md | 24
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/images/flakiness_dashboard_new_recipe.png | bin 0 -> 144128 bytes
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_benchmark_logs_link.png | bin 0 -> 46620 bytes
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_choose_builder.png | bin 0 -> 64114 bytes
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_identify_failed_tests.png | bin 0 -> 74325 bytes
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_story_log.png | bin 0 -> 58220 bytes
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/main.md | 2
-rw-r--r-- chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md | 5
-rw-r--r-- chromium/docs/sublime_ide.md | 30
-rw-r--r-- chromium/docs/testing/json_test_results_format.md | 54
-rw-r--r-- chromium/docs/testing/layout_test_expectations.md | 11
-rw-r--r-- chromium/docs/testing/layout_tests_tips.md | 6
-rw-r--r-- chromium/docs/testing/layout_tests_with_manual_fallback.md | 2
-rw-r--r-- chromium/docs/testing/web_platform_tests.md | 18
-rw-r--r-- chromium/docs/testing/writing_layout_tests.md | 16
-rw-r--r-- chromium/docs/updating_clang.md | 38
-rw-r--r-- chromium/docs/useful_urls.md | 32
-rw-r--r-- chromium/docs/vscode.md | 5
-rw-r--r-- chromium/docs/win_cross.md | 41
-rw-r--r-- chromium/docs/windows_build_instructions.md | 13
60 files changed, 2169 insertions, 444 deletions
diff --git a/chromium/docs/android_build_instructions.md b/chromium/docs/android_build_instructions.md
index b0170f4703d..e5de463787e 100644
--- a/chromium/docs/android_build_instructions.md
+++ b/chromium/docs/android_build_instructions.md
@@ -6,7 +6,8 @@ There are instructions for other platforms linked from the
## Instructions for Google Employees
Are you a Google employee? See
-[go/building-chrome](https://goto.google.com/building-chrome) instead.
+[go/building-android-chrome](https://goto.google.com/building-android-chrome)
+instead.
[TOC]
@@ -352,6 +353,11 @@ incremental_apk_by_default = true
This will make `chrome_public_apk` build in incremental mode.
+## Installing and Running Chromium on an Emulator
+
+Running on an emulator is the same as on a device. Refer to
+[android_emulator.md](android_emulator.md) for setting up emulators.
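+
+For example, once an emulator is running, the generated wrapper script installs
+and launches the build just as it would on a physical device (a sketch; the
+output directory name is an example):
+
+```shell
+# The running emulator shows up in `adb devices` like any other device.
+out/Default/bin/chrome_public_apk install
+out/Default/bin/chrome_public_apk launch
+```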
+
## Tips, tricks, and troubleshooting
diff --git a/chromium/docs/android_emulator.md b/chromium/docs/android_emulator.md
new file mode 100644
index 00000000000..e4de232bf47
--- /dev/null
+++ b/chromium/docs/android_emulator.md
@@ -0,0 +1,65 @@
+# Using an Android Emulator
+Always use x86 emulators. Although arm emulators exist, they are so slow that
+they are not worth your time.
+
+## Building for Emulation
+You need to target the correct architecture via GN args:
+```
+target_cpu = "x86"
+```
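+
+For example, you might keep a separate output directory for emulator builds (a
+sketch; the directory name is an example, and `target_os = "android"` is
+assumed to already be part of your args):
+
+```shell
+gn gen out/Debug_x86 --args='target_os="android" target_cpu="x86"'
+ninja -C out/Debug_x86 chrome_public_apk
+```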
+
+## Creating an Emulator Image
+By far the easiest way to set up emulator images is to use Android Studio.
+If you don't have an [Android Studio project](android_studio.md) already, you
+can create a blank one to be able to reach the Virtual Device Manager screen.
+
+Refer to: https://developer.android.com/studio/run/managing-avds.html
+
+Where files live:
+ * System partition images are stored within the sdk directory.
+ * Emulator configs and data partition images are stored within
+ `~/.android/avd/`.
+
+When creating images:
+ * Choose a skin with a small screen for better performance (unless you care
+ about testing large screens).
+ * Under "Advanced":
+ * Set internal storage to 4000MB (component builds are really big).
+ * Set SD card to 1000MB (our tests push a lot of files to /sdcard).
+
+Known issues:
+ * Our test & installer scripts do not work with pre-MR1 Jelly Bean.
+ * Component builds do not work on pre-KitKat (due to the OS having a max
+ number of shared libraries).
+ * Jelly Bean and KitKat images sometimes forget to mount /sdcard :(.
+ * This causes tests to fail.
+ * To ensure it's there: `adb -s emulator-5554 shell mount` (look for /sdcard)
+ * Can often be fixed by editing `~/.android/avd/YOUR_DEVICE/config.ini`.
+ * Look for `hw.sdCard=no` and set it to `yes`
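+
+A sketch of that check-and-fix (the serial and AVD name are examples):
+
+```shell
+adb -s emulator-5554 shell mount | grep sdcard   # verify /sdcard is mounted
+sed -i 's/^hw.sdCard=no$/hw.sdCard=yes/' ~/.android/avd/YOUR_DEVICE/config.ini
+```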
+
+### Cloning an Image
+Running tests on two emulators is twice as fast as running on one. Rather
+than use the UI to create additional avds, you can clone an existing one via:
+
+```shell
+tools/android/emulator/clone_avd.py \
+ --source-ini ~/.android/avd/EMULATOR_ID.ini \
+ --dest-ini ~/.android/avd/EMULATOR_ID_CLONED.ini \
+ --display-name "Cloned Emulator"
+```
+
+## Starting an Emulator from the Command Line
+Refer to: https://developer.android.com/studio/run/emulator-commandline.html.
+
+Note: Ctrl-C will gracefully close an emulator.
+
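+For example (assuming the SDK lives at `~/Android/Sdk`, as below):
+
+```shell
+~/Android/Sdk/tools/emulator -list-avds    # list the available AVD names
+~/Android/Sdk/tools/emulator @EMULATOR_ID  # start one by name
+```
+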
+If running under remote desktop:
+```
+sudo apt-get install virtualgl
+vglrun ~/Android/Sdk/tools/emulator @EMULATOR_ID
+```
+
+## Using an Emulator
+ * Emulators show up just like devices via `adb devices`
+ * Device serials will look like "emulator-5554", "emulator-5556", etc.
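+
+Tests can then be pointed at a specific instance via its serial, e.g. (a
+sketch; the wrapper script name and the `-d` flag are assumptions, check the
+script's `--help`):
+
+```shell
+out/Debug_x86/bin/run_chrome_public_test_apk -d emulator-5554
+```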
+
diff --git a/chromium/docs/android_studio.md b/chromium/docs/android_studio.md
index 69e143ddb9e..0e5f2f8a0cd 100644
--- a/chromium/docs/android_studio.md
+++ b/chromium/docs/android_studio.md
@@ -11,10 +11,6 @@ Make sure you have followed
build/android/gradle/generate_gradle.py --output-directory out/Debug
```
-Use the flag `--sdk AndroidStudioDefault` to create and use a custom sdk
-directory and avoid issues with `gclient sync` and to use emulators. This will
-become the default soon.
-
```shell
build/android/gradle/generate_gradle.py --output-directory out/Debug --sdk AndroidStudioDefault
```
@@ -26,17 +22,14 @@ To import the project:
* Use "Import Project", and select the directory containing the generated
project, e.g. `out/Debug/gradle`.
-For first-time Android Studio users:
-* Only run the setup wizard if you are planning to use emulators.
- * The wizard will force you to download SDK components that are only needed
- for emulation.
- * To skip it, select "Cancel" when it comes up.
-
See [android_test_instructions.md](android_test_instructions.md#Using-Emulators)
for more information about building and running emulators.
If you're asked to use Studio's Android SDK:
-* No. (Always use your project's SDK configured by generate_gradle.py)
+* No.
+ * Selecting No ensures that the SDK used by Android Studio is the same as
+ the one set by `generate_gradle.py`. If you want a different SDK, pass
+ `--sdk` to `generate_gradle.py`.
If you're asked to use Studio's Gradle wrapper:
* Yes.
@@ -76,16 +69,16 @@ allows imports and refactoring to be across all targets.
### Extracting .srcjars
Most generated .java files in GN are stored as `.srcjars`. Android Studio does
-not support them. It is very slow to build all these generated files and they
-rarely change. The generator script does not do anything with them by default.
-If `--full` is passed then the generator script builds and extracts them all to
-`extracted-srcjars/` subdirectories for each target that contains them. This is
-the reason that the `_all` pseudo module may contain multiple copies of
-generated files.
+not support them. The generator script builds and extracts them to
+`extracted-srcjars/` subdirectories for each target that contains generated
+files. This is the reason that the `_all` pseudo module may contain multiple
+copies of generated files. It can be slow to build all these generated files,
+so if `--fast` is passed then the generator script skips building and
+extracting them.
*** note
-** TLDR:** Re-generate project files with `--full` when generated files change (
-includes `R.java`) and to remove some red underlines in java files.
+**TLDR:** Always re-generate project files when generated files change (this
+includes `R.java`).
***
### Native Files
@@ -181,8 +174,9 @@ resources, native libraries, etc.
* Android Studio v3.0-v3.2.
* Java editing.
+ * Application code in `main` sourceset.
+ * Instrumentation test code in `androidTest` sourceset.
* Native code editing (experimental).
-* Instrumentation tests included as androidTest.
* Symlinks to existing .so files in jniLibs (doesn't generate them).
* Editing resource xml files
* Layout editor (limited functionality).
diff --git a/chromium/docs/android_test_instructions.md b/chromium/docs/android_test_instructions.md
index c065974b092..6fc4649dc58 100644
--- a/chromium/docs/android_test_instructions.md
+++ b/chromium/docs/android_test_instructions.md
@@ -53,68 +53,8 @@ adb shell settings put global package_verifier_enable 0
### Using Emulators
-#### Building for emulation
-
-The fast Android emulators use the X86 instruction set, so anything run on such
-an emulator has to be built for X86. Add
-```
-target_cpu = "x86"
-```
-to your args.gn file. You may want use different out directories for your X86
-and ARM builds.
-
-#### Setting up your workstation
-
-The Android emulators support VM acceleration. This, however, needs to be
-enabled on your workstation, as described in
-https://developer.android.com/studio/run/emulator-acceleration.html#accel-vm.
-
-#### Creating and running emulators from Android Studio
-
-The easiest way to create and run an emulator is to use Android Studio's
-Virtual Device Manager. See
-https://developer.android.com/studio/run/managing-avds.html.
-
-Creating emulators in Android Studio will modify the current SDK. If you are
-using the project's SDK then this can cause problems the next time you sync
-the project, so it is normally better to use a different SDK root when
-creating emulators. You can set this up either by creating the Android Studio
-project using generate_gradle.py's --sdk or --sdk-path options or by
-changing the SDK location within AndroidStudio's settings.
-
-#### Starting an emulator from the command line
-
-Once you have created an emulator (using Android Studio or otherwise) you can
-start it from the command line using the
-[emulator](https://developer.android.com/studio/run/emulator-commandline.html)
-command:
-
-```
-{$ANDROID_SDK_ROOT}/tools/emulator @emulatorName
-```
-
-where emulatorName is the name of the emulator you want to start (e.g.
-Nexus_5X_API_27). The command
-
-```
-{$ANDROID_SDK_ROOT}/tools/emulator -list-avds
-```
-
-will list the available emulators.
-
-#### Creating an emulator from the command line
-
-New emulators can be created from the command line using the
-[avdmanager](https://developer.android.com/studio/command-line/avdmanager.html)
-command. This, however, does not provide any way of creating new device types,
-and provides far fewer options than the Android Studio UI for creating new
-emulators.
-
-The device types are configured through a devices.xml file. The devices.xml
-file for standard device types are within Android Studio's install, and that
-for any additional devices you define are in $ANDROID_EMULATOR_HOME (defaulting
-to ~/.android/). The contents of devices.xml is, however, undocumented (and
-presumably subject to change), so this is best modified using Android Studio.
+Running tests on emulators is the same as on a device. Refer to
+[android_emulator.md](android_emulator.md) for setting up emulators.
## Building Tests
diff --git a/chromium/docs/chromoting_android_hacking.md b/chromium/docs/chromoting_android_hacking.md
index 540444b4011..834b5e2da3e 100644
--- a/chromium/docs/chromoting_android_hacking.md
+++ b/chromium/docs/chromoting_android_hacking.md
@@ -54,7 +54,7 @@ display log messages to the `LogCat` pane.
<classpathentry kind="src" path="components/cronet/android/sample/src"/>
<classpathentry kind="src" path="components/cronet/android/sample/javatests/src"/>
<classpathentry kind="src" path="components/autofill/core/browser/android/java/src"/>
-<classpathentry kind="src" path="components/web_contents_delegate_android/java/src"/>
+<classpathentry kind="src" path="components/embedder_support/android/java/src"/>
<classpathentry kind="src" path="components/dom_distiller/android/java/src"/>
<classpathentry kind="src" path="components/navigation_interception/android/java/src"/>
<classpathentry kind="src" path="ui/android/java/src"/>
@@ -69,9 +69,10 @@ display log messages to the `LogCat` pane.
<classpathentry kind="src" path="sync/test/android/javatests/src"/>
<classpathentry kind="src" path="sync/android/java/src"/>
<classpathentry kind="src" path="sync/android/javatests/src"/>
-<classpathentry kind="src" path="mojo/public/java/src"/>
-<classpathentry kind="src" path="mojo/android/system/src"/>
-<classpathentry kind="src" path="mojo/android/javatests/src"/>
+<classpathentry kind="src" path="mojo/public/java/base/src"/>
+<classpathentry kind="src" path="mojo/public/java/bindings/src"/>
+<classpathentry kind="src" path="mojo/public/java/system/javatests/src"/>
+<classpathentry kind="src" path="mojo/public/java/system/src"/>
<classpathentry kind="src" path="testing/android/java/src"/>
<classpathentry kind="src" path="printing/android/java/src"/>
<classpathentry kind="src" path="tools/binary_size/java/src"/>
diff --git a/chromium/docs/clang_tool_refactoring.md b/chromium/docs/clang_tool_refactoring.md
index b23d8d8556d..107ff8e006d 100644
--- a/chromium/docs/clang_tool_refactoring.md
+++ b/chromium/docs/clang_tool_refactoring.md
@@ -123,9 +123,9 @@ ninja -C out/Debug # For non-Windows
ninja -d keeprsp -C out/Debug # For Windows
# experimental alternative:
-$gen_targets = $(ninja -C out/gn -t targets all \
+gen_targets=$(ninja -C out/Debug -t targets all \
| grep '^gen/[^: ]*\.[ch][pc]*:' \
- | cut -f 1 -d :`)
+ | cut -f 1 -d :)
ninja -C out/Debug $gen_targets
```
@@ -150,7 +150,7 @@ from `//base` that got included by source files in `//cc`).
```shell
tools/clang/scripts/run_tool.py --tool empty_string \
- --generated-compdb \
+ --generate-compdb \
-p out/Debug net >/tmp/list-of-edits.debug
```
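+
+The collected edits can then be applied with the companion scripts (a hedged
+sketch; see `tools/clang/scripts/` for the current entry points):
+
+```shell
+cat /tmp/list-of-edits.debug \
+    | tools/clang/scripts/extract_edits.py \
+    | tools/clang/scripts/apply_edits.py -p out/Debug
+```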
diff --git a/chromium/docs/clion_dev.md b/chromium/docs/clion_dev.md
new file mode 100644
index 00000000000..889cad09e60
--- /dev/null
+++ b/chromium/docs/clion_dev.md
@@ -0,0 +1,118 @@
+# CLion Dev
+
+CLion is a cross-platform C and C++ IDE from JetBrains. This page describes how
+to set it up for Chromium development.
+
+Prerequisite:
+[Checking out and building the chromium code base](README.md#Checking-Out-and-Building)
+
+[TOC]
+
+## Setting up CLion
+
+1. Install CLion
+
+1. Authenticate License
+ - https://g3doc.corp.google.com/devtools/ide/intellij/g3doc/docs/license-server.md?cl=head
+
+1. Run CLion
+
+1. Increase CLion's memory allocation
+ - This step will help performance with large projects
+ 1. Option 1
+ 1. At the startup dialog, in the bottom right corner, click `configure`
+ 1. Setup `Edit Custom VM Options`:
+ ```
+ -Xss2m
+ -Xms1g
+ -Xmx5g
+ ```
+ 1. Setup `Edit Custom Properties`:
+ ```
+ idea.max.intellisense.filesize=12500
+ ```
+ 1. Option 2: 2017 and earlier versions may not include the options to set up your `VM Options` and `Properties` in the `configure` menu. Instead:
+ 1. `Create New Project`
+ 1. `Help` > `Edit Custom VM Options`
+ 1. `Help` > `Edit Custom Properties`
+
+## Chromium in CLion
+
+1. Import project
+ - At the startup dialog, select `Import Project` and select your chromium directory
+
+1. Modify the `CMakeLists.txt` file
+ 1. Open the `CMakeLists.txt` file
+ 1. Add the following to the top
+ ```
+ set(CMAKE_BUILD_TYPE Debug)
+ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)
+ ```
+ 1. Remove any other `include_directories` the file contains
+ - The top of the file should then look like:
+ ```
+ cmake_minimum_required(VERSION 3.10)
+ project(chromium)
+
+ set(CMAKE_CXX_STANDARD 11)
+
+ set(CMAKE_BUILD_TYPE Debug)
+
+ include_directories(${CMAKE_CURRENT_SOURCE_DIR}/src)
+
+ add_executable(chromium
+ ...)
+ ```
+
+## Building, Running, and Debugging within CLion
+
+1. `Run` > `Edit Configurations`
+1. Click `+` in the top left and select `Application`
+1. Setup:
+ ```
+ Target: All targets
+ Executable: src/out/Default/chrome
+ Program arguments (optional): --disable-seccomp-sandbox --single-process
+ Working directory: .../chromium/src/out/Default
+ ```
+1. Click `+` next to the `Before launch` section and select `Run External tool`
+1. In the dialog that appears, click `+` and setup:
+ ```
+ Program: .../depot_tools/ninja
+ Arguments: -C out/Default -j 1000 chrome
+ Working directory: .../chromium/src
+ ```
+1. Click `OK` to close all three dialogs
+1. `Run` > `Run` or `Run` > `Debug`
+
+## Note on installing CLion on Linux
+
+For some reason, installing 2018.1 through a package manager did not create a desktop entry when I tried it. If you experience this as well:
+
+1. Option 1
+ 1. `cd /usr/share/applications/`
+ 1. `touch clion-2018-1.desktop`
+ 1. open `clion-2018-1.desktop` and insert:
+ ```
+ [Desktop Entry]
+ Name=CLion 2018.1
+ Exec=/opt/clion-2018.1/bin/clion.sh %u
+ Icon=/opt/clion-2018.1/bin/clion.svg
+ Terminal=false
+ Type=Application
+ Categories=Development;IDE;Java;
+ StartupWMClass=jetbrains-clion
+ X-Desktop-File-Install-Version=0.23
+ ```
+1. Option 2
+ 1. Run CLion through the terminal `/opt/clion-2018.1/bin/clion.sh`
+ 1. At the startup dialog, in the bottom right corner, click `configure`
+ 1. Click `Create Desktop Entry`
+
+## Optional Performance Steps
+
+### Mark directories as `Library Files`
+
+To speed up CLion, you may optionally mark directories such as `src/third_party` as `Library Files`:
+1. Open the `Project` navigation (default `Alt 1`)
+1. Right click the directory > `Mark directory as` > `Library Files`
+1. See `https://blog.jetbrains.com/clion/2015/12/mark-dir-as/` for more details
\ No newline at end of file
diff --git a/chromium/docs/closure_compilation.md b/chromium/docs/closure_compilation.md
index 81a002be57d..4ae3688d92e 100644
--- a/chromium/docs/closure_compilation.md
+++ b/chromium/docs/closure_compilation.md
@@ -84,31 +84,31 @@ dangerous ways.
To do this, we can create:
- + ui/compiled_resources2.gyp
+ + ui/BUILD.gn
With these contents:
```
-# Copyright 2016 The Chromium Authors. All rights reserved.
+# Copyright 2018 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
-{
- 'targets': [
- {
- # Target names is typically just without ".js"
- 'target_name': 'makes_things_pretty',
-
- 'dependencies': [
- '../lib/compiled_resources2.gyp:does_the_hard_stuff',
-
- # Teaches closure about non-standard environments/APIs, e.g.
- # chrome.send(), chrome.app.window, etc.
- '<(EXTERNS_GYP):extern_name_goes_here'
- ],
-
- 'includes': ['../path/to/third_party/closure_compiler/compile_js2.gypi'],
- },
- ],
+
+import("//third_party/closure_compiler/compile_js.gni")
+
+js_type_check("closure_compile") {
+ deps = [
+ ":make_things_pretty",
+ ]
+}
+
+js_library("make_things_pretty") {
+ deps = [
+ "../lib:does_the_hard_stuff",
+ ]
+
+ externs_list = [
+ "$externs_path/extern_name_goes_here.js"
+ ]
}
```
@@ -120,10 +120,10 @@ You can locally test that your code compiles on Linux or Mac. This requires
python, depot_tools). Note: on Ubuntu, you can probably just run `sudo apt-get
install openjdk-7-jre`.
-Now you should be able to run:
+After you set `closure_compile = true` in your GN args, you should be able to run:
```shell
-third_party/closure_compiler/run_compiler
+ninja -C out/Default webui_closure_compile
```
and should see output like this:
@@ -133,10 +133,10 @@ ninja: Entering directory `out/Default/'
[0/1] ACTION Compiling ui/makes_things_pretty.js
```
-To compile only a specific target, add an argument after the script name:
+To compile only a specific folder, build its `closure_compile` target:
```shell
-third_party/closure_compiler/run_compiler makes_things_pretty
+ninja -C out/Default ui:closure_compile
```
In our example code, this error should appear:
@@ -152,51 +152,41 @@ In our example code, this error should appear:
Hooray! We can catch type errors in JavaScript!
+## Preferred BUILD.gn structure
+* Make each individual JS file target a js\_library.
+* The top level target should be called "closure\_compile".
+* If you have subfolders that need compiling, make "closure\_compile" a group(),
+  and any files in the current directory a js\_type\_check() called
+  "<directory>\_resources".
+* Otherwise, just make "closure\_compile" a js\_type\_check() with all your
+  js\_library targets as deps.
+* Leave all closure targets below other kinds of targets because they're less
+  "important".
+
+See also:
+[Closure Compilation with GN](https://docs.google.com/a/chromium.org/document/d/1Ee9ggmp6U-lM-w9WmxN5cSLkK9B5YAq14939Woo-JY0/edit).
+
## Trying your change
-Closure compilation also has [try
-bots](https://build.chromium.org/p/tryserver.chromium.linux/builders/closure_compilation)
-which can check whether you could *would* break the build if it was committed.
+Closure compilation runs in the compile step of Linux, Android and ChromeOS builds.
From the command line, you try your change with:
```shell
-git cl try -b closure_compilation
-```
-
-To automatically check that your code typechecks cleanly before submitting, you
-can add this line to your CL description:
-
-```
-CQ_INCLUDE_TRYBOTS=tryserver.chromium.linux:closure_compilation
+git cl try -b linux_chromium_rel_ng
```
-Working in common resource directories in Chrome automatically adds this line
-for you.
-
## Integrating with the continuous build
-To compile your code on every commit, add your file to the `'dependencies'` list
-in `src/third_party/closure_compiler/compiled_resources2.gyp`:
+To compile your code on every commit, add your file to the
+`'webui_closure_compile'` target in `src/BUILD.gn`:
```
-{
- 'targets': [
- {
- 'target_name': 'compile_all_resources',
- 'dependencies': [
- # ... other projects ...
-++ '../my_project/compiled_resources2.gyp:*',
- ],
- }
- ]
-}
+ group("webui_closure_compile") {
+ data_deps = [
+ # Other projects
+ "my/project:closure_compile",
+ ]
+ }
```
-This file is used by the
-[Closure compiler bot](https://build.chromium.org/p/chromium.fyi/builders/Closure%20Compilation%20Linux)
-to automatically compile your code on every commit.
-
## Externs
[Externs files](https://github.com/google/closure-compiler/wiki/FAQ#how-do-i-write-an-externs-file)
diff --git a/chromium/docs/fuchsia_build_instructions.md b/chromium/docs/fuchsia_build_instructions.md
index 33103207cb2..aa5af391f63 100644
--- a/chromium/docs/fuchsia_build_instructions.md
+++ b/chromium/docs/fuchsia_build_instructions.md
@@ -125,6 +125,6 @@ The run script also symbolizes backtraces.
A useful alias (for "Build And Run Filtered") is:
```shell
-alias barf='ninja -C out/fuchsia base_unittests -j1000 && out/fuchsia/bin/run_base_unittests --test-launcher-filter-file=../../testing/buildbot/filters/fuchsia.base_unittests.filter'
+alias barf='ninja -C out/fuchsia net_unittests -j1000 && out/fuchsia/bin/run_net_unittests --test-launcher-filter-file=../../testing/buildbot/filters/fuchsia.net_unittests.filter'
```
to build and run only the tests that are not excluded/known-failing on the bot.
diff --git a/chromium/docs/gpu/debugging_gpu_related_code.md b/chromium/docs/gpu/debugging_gpu_related_code.md
index 1d5f57e6dda..765942cba13 100644
--- a/chromium/docs/gpu/debugging_gpu_related_code.md
+++ b/chromium/docs/gpu/debugging_gpu_related_code.md
@@ -188,7 +188,7 @@ This will print the name of each GPU command before it is executed.
### Debugging in the GPU Process
Given the multi-processness of chromium it can be hard to debug both sides.
-Turing on all the logging and having a small test case is useful. One minor
+Turning on all the logging and having a small test case is useful. One minor
suggestion, if you have some idea where the bug is happening a call to some
obscure gl function like `glHint()` can give you a place to catch a command
being processed in the GPU process (put a break point on
diff --git a/chromium/docs/gpu/gpu_testing.md b/chromium/docs/gpu/gpu_testing.md
index 5b05f8aaf31..2dcc5a7b1ff 100644
--- a/chromium/docs/gpu/gpu_testing.md
+++ b/chromium/docs/gpu/gpu_testing.md
@@ -76,8 +76,8 @@ overview of this documentation and links back to various portions.
<!-- XXX: broken link -->
[new-testing-infra]: https://github.com/luci/luci-py/wiki
[isolated-testing-infra]: https://www.chromium.org/developers/testing/isolated-testing/infrastructure
-[chromium.gpu]: https://build.chromium.org/p/chromium.gpu/console
-[chromium.gpu.fyi]: https://build.chromium.org/p/chromium.gpu.fyi/console
+[chromium.gpu]: https://ci.chromium.org/p/chromium/g/chromium.gpu/console
+[chromium.gpu.fyi]: https://ci.chromium.org/p/chromium/g/chromium.gpu.fyi/console
[tools/build workspace]: https://code.google.com/p/chromium/codesearch#chromium/build/scripts/slave/recipe_modules/chromium_tests/chromium_gpu_fyi.py
[bots-presentation]: https://docs.google.com/presentation/d/1BC6T7pndSqPFnituR7ceG7fMY7WaGqYHhx5i9ECa8EI/edit?usp=sharing
@@ -110,16 +110,13 @@ Sends your job to the default set of try servers.
The GPU tests are part of the default set for Chromium CLs, and are run as part
of the following tryservers' jobs:
-* [linux_chromium_rel_ng] on the [tryserver.chromium.linux] waterfall
-* [mac_chromium_rel_ng] on the [tryserver.chromium.mac] waterfall
-* [win_chromium_rel_ng] on the [tryserver.chromium.win] waterfall
+* [linux_chromium_rel_ng], formerly on the `tryserver.chromium.linux` waterfall
+* [mac_chromium_rel_ng], formerly on the `tryserver.chromium.mac` waterfall
+* [win7_chromium_rel_ng], formerly on the `tryserver.chromium.win` waterfall
-[linux_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_rel_ng?numbuilds=100
-[mac_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_ng?numbuilds=100
-[win_chromium_rel_ng]: http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_ng?numbuilds=100
-[tryserver.chromium.linux]: http://build.chromium.org/p/tryserver.chromium.linux/waterfall?numbuilds=100
-[tryserver.chromium.mac]: http://build.chromium.org/p/tryserver.chromium.mac/waterfall?numbuilds=100
-[tryserver.chromium.win]: http://build.chromium.org/p/tryserver.chromium.win/waterfall?numbuilds=100
+[linux_chromium_rel_ng]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_chromium_rel_ng?limit=100
+[mac_chromium_rel_ng]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac_chromium_rel_ng?limit=100
+[win7_chromium_rel_ng]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win7_chromium_rel_ng?limit=100
Scan down through the steps looking for the text "GPU"; that identifies those
tests run on the GPU bots. For each test the "trigger" step can be ignored; the
@@ -132,7 +129,7 @@ tryserver master you want to reference, for example:
```sh
git cl try -b linux_chromium_rel_ng
git cl try -b mac_chromium_rel_ng
-git cl try -b win_chromium_rel_ng
+git cl try -b win7_chromium_rel_ng
```
Alternatively, the Gerrit UI can be used to send a patch set to these try
@@ -150,6 +147,7 @@ tryservers for code changes to certain sub-directories.
[linux_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux_optional_gpu_tests_rel
[mac_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac_optional_gpu_tests_rel
[win_optional_gpu_tests_rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win_optional_gpu_tests_rel
+[luci.chromium.try]: https://ci.chromium.org/p/chromium/g/luci.chromium.try/builders
Tryservers for the [ANGLE project] are also present on the
[tryserver.chromium.angle] waterfall. These are invoked from the Gerrit user
@@ -358,6 +356,42 @@ See the [Swarming documentation] for instructions on how to upload your binaries
[Swarming documentation]: https://www.chromium.org/developers/testing/isolated-testing/for-swes#TOC-Run-a-test-built-locally-on-Swarming
+## Moving Test Binaries from Machine to Machine
+
+To create a zip archive of your personal Chromium build plus all of
+the Telemetry-based GPU tests' dependencies, which you can then move
+to another machine for testing:
+
+1. Build Chrome (into `out/Release` in this example).
+1. `python tools/mb/mb.py zip out/Release/ telemetry_gpu_integration_test out/telemetry_gpu_integration_test.zip`
+
+Then copy telemetry_gpu_integration_test.zip to another machine. Unzip
+it, and cd into the resulting directory. Invoke
+`content/test/gpu/run_gpu_integration_test.py` as above.
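+
+A sketch of the transfer-and-run flow (the host name, paths, and test suite
+name are examples):
+
+```shell
+scp out/telemetry_gpu_integration_test.zip other-machine:
+# Then, on the other machine:
+unzip telemetry_gpu_integration_test.zip -d telemetry_gpu_integration_test
+cd telemetry_gpu_integration_test
+python content/test/gpu/run_gpu_integration_test.py pixel --browser=release
+```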
+
+This workflow has been tested successfully on Windows with a
+statically-linked Release build of Chrome.
+
+Note: on one macOS machine, this command failed because of a broken
+`strip-json-comments` symlink in
+`src/third_party/catapult/common/node_runner/node_runner/node_modules/.bin`. Deleting
+that symlink allowed it to proceed.
+
+Note also: on the same macOS machine, with a component build, this
+command failed to zip up a working Chromium binary. The browser failed
+to start with the following error:
+
+`[0626/180440.571670:FATAL:chrome_main_delegate.cc(1057)] Check failed: service_manifest_data_pack_.`
+
+In a pinch, this command could be used to bundle up everything, but
+the "out" directory could be deleted from the resulting zip archive,
+and the Chromium binaries moved over to the target machine. Then the
+command line arguments `--browser=exact --browser-executable=[path]`
+can be used to launch that specific browser.
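+
+For example (a sketch; the executable path is a placeholder):
+
+```shell
+python content/test/gpu/run_gpu_integration_test.py pixel \
+    --browser=exact --browser-executable=/path/to/chrome
+```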
+
+See the [user guide for mb](../../tools/mb/docs/user_guide.md#mb-zip), the
+meta-build system, for more details.
+
## Adding New Tests to the GPU Bots
The goal of the GPU bots is to avoid regressions in Chrome's rendering stack.
@@ -408,27 +442,33 @@ in the Chromium workspace:
These files are autogenerated by the following script:
-* [`generate_buildbot_json.py`](https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/generate_buildbot_json.py)
+* [`generate_buildbot_json.py`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/generate_buildbot_json.py)
-This script is completely self-contained and should hopefully be
-self-explanatory. The JSON files are parsed by the chromium and chromium_trybot
-recipes, and describe two types of tests:
+This script is documented in
+[`testing/buildbot/README.md`](https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/README.md). The
+JSON files are parsed by the chromium and chromium_trybot recipes, and describe
+two basic types of tests:
* GTests: those which use the Googletest and Chromium's `base/test/launcher/`
frameworks.
-* Telemetry based tests: those which are built on the Telemetry framework and
- launch the entire browser.
+* Isolated scripts: tests whose initial entry point is a Python script which
+ follows a simple convention of command line argument parsing.
+
+Most of the GPU tests, however, are:
+
+* Telemetry based tests: an isolated script test which is built on the
+ Telemetry framework and which launches the entire browser.
A prerequisite of adding a new test to the bots is that that test [run via
-isolates][new-isolates]. Once that is done, modify `generate_buildbot_json.py` to add the
-test to the appropriate set of bots. Be careful when adding large new test
-steps to all of the bots, because the GPU bots are a limited resource and do
-not currently have the capacity to absorb large new test suites. It is safer to
-get new tests running on the chromium.gpu.fyi waterfall first, and expand from
-there to the chromium.gpu waterfall (which will also make them run against
-every Chromium CL by virtue of the `linux_chromium_rel_ng`,
-`mac_chromium_rel_ng` and `win_chromium_rel_ng` tryservers' mirroring of the
-bots on this waterfall – so be careful!).
+isolates][new-isolates]. Once that is done, modify `test_suites.pyl` to add the
+test to the appropriate set of bots. Be careful when adding large new test steps
+to all of the bots, because the GPU bots are a limited resource and do not
+currently have the capacity to absorb large new test suites. It is safer to get
+new tests running on the chromium.gpu.fyi waterfall first, and expand from there
+to the chromium.gpu waterfall (which will also make them run against every
+Chromium CL by virtue of the `linux_chromium_rel_ng`, `mac_chromium_rel_ng`,
+`win7_chromium_rel_ng` and `android-marshmallow-arm64-rel` tryservers' mirroring
+of the bots on this waterfall – so be careful!).
Tryjobs which add new test steps to the chromium.gpu.json file will run those
new steps during the tryjob, which helps ensure that the new test won't break
diff --git a/chromium/docs/gpu/gpu_testing_bot_details.md b/chromium/docs/gpu/gpu_testing_bot_details.md
index 50b5cfe7433..234732255ac 100644
--- a/chromium/docs/gpu/gpu_testing_bot_details.md
+++ b/chromium/docs/gpu/gpu_testing_bot_details.md
@@ -199,13 +199,13 @@ In the [chromium/src] workspace:
build.
* [`src/tools/mb/mb_config.pyl`][mb_config.pyl]
* Defines the GN arguments for all of the bots.
-* [`src/content/test/gpu/generate_buildbot_json.py`][generate_buildbot_json.py]
- * The generator script for `chromium.gpu.json` and
+* [`src/testing/buildbot/generate_buildbot_json.py`][generate_buildbot_json.py]
+ * The generator script for all the waterfalls, including `chromium.gpu.json` and
`chromium.gpu.fyi.json`. It defines on which GPUs various tests run.
- * It's completely self-contained and should hopefully be fairly
- comprehensible.
+ * See the [README for generate_buildbot_json.py] for documentation
+ on this script and the descriptions of the waterfalls and test suites.
* When modifying this script, don't forget to also run it, to regenerate
- the JSON files.
+ the JSON files. Don't worry; the presubmit step will catch this if you forget.
* See [Adding new steps to the GPU bots] for more details.
[chromium/src]: https://chromium.googlesource.com/chromium/src/
@@ -214,7 +214,9 @@ In the [chromium/src] workspace:
[chromium.gpu.fyi.json]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/chromium.gpu.fyi.json
[gn_isolate_map.pyl]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/gn_isolate_map.pyl
[mb_config.pyl]: https://chromium.googlesource.com/chromium/src/+/master/tools/mb/mb_config.pyl
-[generate_buildbot_json.py]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/generate_buildbot_json.py
+[generate_buildbot_json.py]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/generate_buildbot_json.py
+[waterfalls.pyl]: https://chromium.googlesource.com/chromium/src/+/master/testing/buildbot/waterfalls.pyl
+[README for generate_buildbot_json.py]: ../../testing/buildbot/README.md
In the [infradata/config] workspace (Google internal only, sorry):
@@ -292,15 +294,12 @@ Builder].
1. Create a CL in the Chromium workspace which does the following. Here's an
[example CL](https://chromium-review.googlesource.com/1041164).
- 1. Adds the new machines to
- `src/content/test/gpu/generate_buildbot_json.py`.
+ 1. Adds the new machines to [waterfalls.pyl].
1. The swarming dimensions are crucial. These must match the GPU and
OS type of the physical hardware in the Swarming pool. This is what
causes the VMs to spawn their tests on the correct hardware. Make
sure to use the Chrome-GPU pool, and that the new machines were
specifically added to that pool.
- 1. Make sure to set the `swarming` property to `True` for both the
- Release and Debug bots.
1. Make triply sure that there are no collisions between the new
hardware you're adding and hardware already in the Swarming pool.
For example, it used to be the case that all of the Windows NVIDIA
@@ -312,11 +311,12 @@ Builder].
data center). Similarly, the Win8 bots had to have a very precise
OS description (`Windows-2012ServerR2-SP0`).
1. If you're deploying a new bot that's similar to another existing
- configuration, please search around in the file for references to
+ configuration, please search around in
+ `src/testing/buildbot/test_suite_exceptions.pyl` for references to
the other bot's name and see if your new bot needs to be added to
any exclusion lists. For example, some of the tests don't run on
certain Win bots because of missing OpenGL extensions.
- 1. Run this script to regenerate
+ 1. Run [generate_buildbot_json.py] to regenerate
`src/testing/buildbot/chromium.gpu.fyi.json`.
1. Updates [`cr-buildbucket.cfg`][cr-buildbucket.cfg]:
* Add the two new machines (Release and Debug) inside the
@@ -483,34 +483,14 @@ trybot for the Win7 NVIDIA GPUs in Release mode. We will call the new bot
appear to be necessary any more, but it's something to watch out for if
your CL fails presubmit for some reason.
-1. Now we need to add the new trybot to the Gerrit UI. This is most easily done
- using the Gerrit UI itself. (If on any CL you select "Choose Tryjobs", it
- says "Don't see the bots you want? Edit this repo's buildbucket.config to
- add them". That's the file we are going to edit.) Here's an [example
- CL](https://chromium-review.googlesource.com/1044866).
- 1. Go to the [`chromium/src`][chromium/src] repo in the Gerrit UI.
- 1. Click "Repo settings" in the upper-left corner.
- 1. Click "Commands".
- 1. Click the "Edit repo config" button.
- 1. This opens the project config by default. You don't want this, so close
- it using the "CLOSE" link at the upper right.
- 1. Now you're in a CL titled "Edit Repo Config". Click the "OPEN" link.
- 1. It will prompt you to open a file. Begin typing `buildbucket.config` and
- it will auto-complete. Click "Open".
- 1. Add the new trybot, in this case `gpu_manual_try_win7_nvidia_rel`, to
- the `luci.chromium.try` bucket. *BE CAREFUL* to include the leading tab;
- it is semantically important. (Note that this matches the "pool"
- dimension specified in bots.cfg in the infradata/config workspace.)
- 1. Click "Save", and then "Close" (once "Save" is grayed out).
- 1. You're now back at the CL. Click "PUBLISH EDIT" near the top right.
- 1. Now you're in normal CL mode again. You can now click the "Edit" button
- to edit the CL description; please do this.
- 1. Send this out to one of the Git admins; they're listed in the gitadmin
- column in [`go/chromecals`][go/chromecals]. The Git admin has to both +1
- AND land the CL.
-
-At this point the new trybot should show up in the Gerrit UI and it should be
-possible to send a CL to it.
+At this point the new trybot should automatically show up in the
+"Choose tryjobs" pop-up in the Gerrit UI, under the
+`luci.chromium.try` heading, because it was deployed via LUCI. It
+should be possible to send a CL to it.
+
+(It should not be necessary to modify buildbucket.config as is
+mentioned at the bottom of the "Choose tryjobs" pop-up. Contact the
+chrome-infra team if this doesn't work as expected.)
[chromium/src]: https://chromium-review.googlesource.com/q/project:chromium%252Fsrc+status:open
[go/chromecals]: http://go/chromecals
@@ -549,7 +529,7 @@ an entire new optional try bot.
1. Create a CL in the Chromium workspace:
1. Add your new bot (for example, "Optional Win7 Release
(CoolNewGPUType)") to the chromium.gpu.fyi waterfall in
- [generate_buildbot_json.py]. (Note, this is a bad example: the
+ [waterfalls.pyl]. (Note, this is a bad example: the
"optional" bots have special semantics in this script. You'd probably
want to define some new category of bot if you didn't intend to add
this to `win_optional_gpu_tests_rel`.)
@@ -584,7 +564,7 @@ everywhere. To do this:
1. Make sure that all of the current Swarming jobs for this OS and GPU
configuration are targeted at the "stable" version of the driver in
- `src/testing/gpu/generate_buildbot_json.py`.
+ [waterfalls.pyl].
1. File a `Build Infrastructure` bug, component `Infra>Labs`, to have ~4 of the
physical machines already in the Swarming pool upgraded to the new version
of the driver.
@@ -593,14 +573,14 @@ everywhere. To do this:
waterfall](#How-to-add-a-new-tester-bot-to-the-chromium_gpu_fyi-waterfall)
to deploy one.
1. Have this experimental bot target the new version of the driver in
- `src/testing/gpu/generate_buildbot_json.py`.
+ [waterfalls.pyl].
1. Hopefully, the new machine will pass the pixel tests. If it doesn't, then
unfortunately, it'll be necessary to follow the instructions on
[updating the pixel tests] to temporarily suppress the failures on this
particular configuration. Keep the time window for these test suppressions
as narrow as possible.
1. Watch the new machine for a day or two to make sure it's stable.
-1. When it is, update `src/testing/gpu/generate_buildbot_json.py` to use the
+1. When it is, update [waterfalls.pyl] to use the
"gpu trigger script" functionality to select *either* the stable *or* the
new driver version on the stable version of the bot. See [this
CL](https://chromium-review.googlesource.com/882344) for an example, though
@@ -611,7 +591,7 @@ everywhere. To do this:
1. If necessary, update pixel test expectations and remove the suppressions
added above.
1. Remove the alternate swarming dimensions for the stable bot from
- `generate_buildbot_json.py`, locking it to the new driver version.
+ [waterfalls.pyl], locking it to the new driver version.
Note that we leave the experimental bot in place. We could reclaim it, but it
seems worthwhile to continuously test the "next" version of graphics drivers as
diff --git a/chromium/docs/gpu/pixel_wrangling.md b/chromium/docs/gpu/pixel_wrangling.md
index 18afaa3b7c5..d661137038c 100644
--- a/chromium/docs/gpu/pixel_wrangling.md
+++ b/chromium/docs/gpu/pixel_wrangling.md
@@ -23,7 +23,7 @@ Primary configurations:
* [Windows 10 Intel HD 630 Pool](http://shortn/_QsoGIGIFYd)
* [Linux Quadro P400 Pool](http://shortn/_fNgNs1uROQ)
* [Linux Intel HD 630 Pool](http://shortn/_dqEGjCGMHT)
-* [Mac AMD Retina 10.12.6 GPU Pool](http://shortn/_BcrVmfRoSo)
+* [Mac AMD Retina 10.13.5 GPU Pool](http://shortn/_c2CCVyT6Uj)
* [Mac Mini Chrome Pool](http://shortn/_Ru8NESapPM)
* [Android Nexus 5X Chrome Pool](http://shortn/_G3j7AVmuNR)
@@ -31,7 +31,7 @@ Secondary configurations:
* [Windows 7 Quadro P400 Pool](http://shortn/_cuxSKC15UX)
* [Windows AMD R7 240 GPU Pool](http://shortn/_XET7RTMHQm)
-* [Mac NVIDIA Retina 10.12.6 GPU Pool](http://shortn/_jQWG7W71Ek)
+* [Mac NVIDIA Retina 10.13.5 GPU Pool](http://shortn/_sun7ISEg3F)
## GPU Bots' Waterfalls
@@ -91,7 +91,9 @@ test the code that is actually shipped. As of this writing, the tests included:
* `gl_tests`: see `src/gpu/BUILD.gn`
* `gl_unittests`: see `src/ui/gl/BUILD.gn`
-And more. See `src/content/test/gpu/generate_buildbot_json.py` for the
+And more. See
+[`src/testing/buildbot/README.md`](../../testing/buildbot/README.md)
+and the GPU sections of `test_suites.pyl` and `waterfalls.pyl` for the
complete description of bots and tests.
Additionally, the Release bots run:
diff --git a/chromium/docs/gpu/sync_token_internals.md b/chromium/docs/gpu/sync_token_internals.md
new file mode 100644
index 00000000000..4ba2caac26a
--- /dev/null
+++ b/chromium/docs/gpu/sync_token_internals.md
@@ -0,0 +1,132 @@
+# CHROMIUM Sync Token Internals
+
+Chrome uses a mechanism known as "sync tokens" to synchronize different command
+buffers in the GPU process. This document discusses the internals of the sync
+token system.
+
+[TOC]
+
+## Rationale
+
+In Chrome, multiple processes, for example the browser and renderers, submit
+work to the GPU process asynchronously in command buffers. However, there are
+dependencies between the work submitted by different processes, such as
+GLRenderer in the display compositor (in the browser/viz process) rendering a
+tile produced by a raster worker in the renderer process.
+
+Sync tokens are used to synchronize the work contained in command buffers
+without waiting for the work to complete. This improves pipelining, and with the
+introduction of GPU scheduling, allows prioritization of work. Although
+originally built for synchronizing command buffers, they can be used for other
+work in the GPU process.
+
+## Generation
+
+Sync tokens are represented by a namespace, identifier, and the *fence release
+count*. `CommandBufferId` is a 64-bit unsigned integer which is unique within a
+`CommandBufferNamespace`. For example IPC command buffers are in the *GPU_IO*
+CommandBufferNamespace, and are identified by CommandBufferId with process id as
+the MSB and IPC route id as the LSB.
+
+The fence release count marks completion of some work in a command buffer. Note:
+this is CPU-side work that includes command decoding, validation, issuing GL
+calls to the driver, etc., not GPU-side work. See
+[gpu_synchronization.md](/docs/design/gpu_synchronization.md) for more
+information about synchronizing GPU work.
+
+Fences are typically generated or inserted on the client using a sequential
+counter. The corresponding GL API is `GenSyncTokenCHROMIUM` which generates the
+fence using `CommandBufferProxyImpl::GenerateFenceSyncRelease()`, and also adds
+the fence to the command buffer using the internal `InsertFenceSyncCHROMIUM`
+command.
+
+## Verification
+
+Different client processes communicate with the GPU process using *channels*. A
+channel wraps around a message pipe which doesn't provide ordering guarantees
+with respect to other pipes. For example, a message from the browser process
+containing a sync token wait can arrive before the message from the renderer
+process that releases or fulfills the sync token promise.
+
+To prevent the above problem, client processes must verify sync tokens before
+sending to another process. Verification involves a synchronous nop IPC message,
+`GpuChannelMsg_Nop`, to the GPU process which ensures that the GPU process has
+read previous messages from the pipe.
+
+
+Sync tokens used within a process do not need to be verified, and the
+`GenSyncTokenUnverifiedCHROMIUM` GL API serves this common case. If such sync
+tokens are later sent to another process, they must first be verified using
+`VerifySyncTokensCHROMIUM`. Sync tokens generated using `GenSyncTokenCHROMIUM`
+are already verified. `SyncToken` has a `verified_flush` bit that guards
+against accidentally sending unverified sync tokens over IPC.
+
+## Streams
+
+In the GPU process, command buffers are organized into logical streams of
+execution that are called *sequences*. Within a sequence tasks are ordered, but
+are asynchronous with respect to tasks in other sequences. Dependencies between
+tasks are specified as sync tokens. For IPC command buffers, this implies flush
+ordering within a sequence.
+
+A sequence can be created by `Scheduler::CreateSequence` which returns a
+`SequenceId`. Tasks are posted to a sequence using `Scheduler::ScheduleTask`.
+Typically there is one sequence per channel, but sometimes there are more, such
+as the raster, compositor, and media streams in the renderer's channel.
+
+The scheduler also provides a means for co-operative scheduling through
+`Scheduler::ShouldYield` and `Scheduler::ContinueTask`. These allow a task to
+yield and continue once higher priority work is complete. Together with the GPU
+scheduler, multiple sequences provide the means for prioritization of UI work
+over raster prepaint work.
+
+## Waiting and Completion
+
+Sync tokens are managed in the GPU process by `SyncPointManager`, and its helper
+classes `SyncPointOrderData` and `SyncPointClientState`. `SyncPointOrderData`
+holds state for a logical stream of execution, typically containing work of
+multiple command buffers from one process. `SyncPointClientState` holds sync token
+state for a client which generated sync tokens, typically an IPC command buffer.
+
+The GPU scheduler maintains a `SyncPointOrderData` per sequence. Clients must
+create a `SyncPointClientState` using
+`SyncPointManager::CreateSyncPointClientState` and identify their namespace,
+id, and sequence.
+
+Waiting on a sync token is done by calling `SyncPointManager::Wait()` with a
+sync token, an order number for the wait, and a callback. The callbacks are
+enqueued on the target's `SyncPointClientState` along with the release count of
+the sync token. The scheduler does this internally for sync token dependencies
+for scheduled tasks, but the wait can also be performed when running the
+`WaitSyncTokenCHROMIUM` GL command.
+
+Sync tokens are completed when the fence is released in the GPU process by
+calling `SyncPointClientState::ReleaseFenceSync()`. For GL command buffers, the
+`InsertFenceSyncCHROMIUM` command, which contains the release count generated in
+the client, calls this when executed in the service. This issues callbacks and
+allows waiting command buffers to resume their work.
+
+## Correctness
+
+Correctness of waits and releases basically amounts to checking that there are
+no indefinite waits because of broken promises or circular wait chains. This is
+ensured by associating an order number with each wait and release and
+maintaining the invariant that the order number of a release is less than or
+equal to the order number of the corresponding wait.
+
+Each task is assigned a global sequential order number generated by
+`SyncPointOrderData::GenerateUnprocessedOrderNumber`, which is stored in a
+queue of unprocessed order numbers. In `SyncPointManager::Wait()`, the
+callbacks are
+also enqueued with the order number of the waiting task in `SyncPointOrderData`
+in a queue called `OrderFenceQueue`.
+
+`SyncPointOrderData` maintains the invariant that all waiting callbacks must
+have an order number greater than the sequence's next unprocessed order number.
+This invariant is checked when enqueuing a new callback in
+`SyncPointOrderData::ValidateReleaseOrderNumber`, and after completing a task in
+`SyncPointOrderData::FinishProcessingOrderNumber`.
+
+## See Also
+
+* [CHROMIUM_sync_point](/gpu/GLES2/extensions/CHROMIUM/CHROMIUM_sync_point.txt)
+* [gpu_synchronization.md](/docs/design/gpu_synchronization.md)
+* [Lightweight GPU Sync Points](https://docs.google.com/document/d/1XwBYFuTcINI84ShNvqifkPREs3sw5NdaKzKqDDxyeHk/edit)
diff --git a/chromium/docs/infra/new_builder.md b/chromium/docs/infra/new_builder.md
new file mode 100644
index 00000000000..70e250f2f56
--- /dev/null
+++ b/chromium/docs/infra/new_builder.md
@@ -0,0 +1,340 @@
+# Creating a new builder
+
+This doc describes how to set up a new builder on LUCI. It's focused
+on chromium builders, but parts may be applicable to other projects.
+
+[TOC]
+
+## TL;DR
+
+For a typical chromium builder using the chromium recipe,
+you'll need to acquire hardware and then land **three** CLs:
+
+1. in [infradata/config][16], modifying swarming's bots.cfg.
+2. in [chromium/tools/build][17], modifying the chromium\_tests
+ configuration.
+3. in [chromium/src][18], modifying all of the following:
+ 1. LUCI service configurations in `//infra/config/global`
+ 2. Compile configuration in `//tools/mb`
+ 3. Test configuration in `//testing/buildbot`
+
+## Obtain hardware
+
+If you're setting up a new builder, you'll typically need hardware to run it.
+For CI / waterfall builders or manually triggered try builders,
+[file a labs bug][1] (internal).
+For CQ try bots, please file a [capacity bug][2] (internal) first.
+In both cases, note that your builder will be running on swarming
+(not on buildbot) and should be provisioned accordingly.
+
+## Pick a name and a master
+
+Your new builder's name should follow the [chromium builder naming scheme][3].
+
+We still use master names to group builders in a variety of places (even
+though buildbot itself is largely deprecated). FYI builders should use
+`chromium.fyi`, while other builders should mostly use `chromium.$OS`.
+
+> **Note:** If you're creating a try builder, its name should match the
+> name of the CI builder it mirrors.
+
+## Register hardware with swarming
+
+Once you've obtained hardware, you'll need to associate it with your
+new builder in swarming. You can do so by modifying the relevant swarming
+instance's configuration.
+
+Swarming's bots.cfg schema is [here][20].
+chromium-swarm's bots.cfg instance is [here][4].
+
+You'll want to add something like the following:
+
+``` sh
+bot_group {
+ dimensions: "builder:$BUILDER_NAME"
+ # Add a brief comment about hardware, particularly if you're doing
+ # anything unique or atypical.
+ # $COMMENT_ABOUT_HARDWARE
+ bot_id: "$HARDWARE"
+
+ # luci-eng@google.com is typically fine for generic chromium builders.
+ # If you're doing something more specialized, or if you're creating
+ # a non-chromium builder, consider a different list.
+ owners: "$OWNER_EMAIL"
+
+ # See the schema for more information on these options. The values
+ # listed below should be reasonable defaults for chromium builders.
+ auth {
+ require_luci_machine_token: true
+ ip_whitelist: "chromium-swarm-bots"
+ }
+
+ # This is the service account used by the swarming bot to authenticate to
+ # LUCI services for system purposes (i.e., not within tasks).
+ # For chromium builders, the bots-chrome@ account below should be fine.
+ system_service_account: "bots-chrome@chromium-swarm.iam.gserviceaccount.com"
+
+ # POOL_NAME should be:
+ # - luci.chromium.ci for public chromium CI / waterfall builders
+ # - luci.chromium.try for public chromium try builders
+ dimensions: "pool:$POOL_NAME"
+}
+```
+
+## Recipe configuration
+
+Recipes tell your builder what to do. Many require some degree of
+per-builder configuration outside of the chromium repo, though the
+specifics vary. The recipe you use depends on what you want your
+builder to do.
+
+For typical chromium compile and/or test builders, the chromium and
+chromium\_trybot recipes should be sufficient.
+
+To configure a chromium CI builder, you'll want to add a config block
+to the file in [recipe\_modules/chromium\_tests][5] corresponding
+to your new builder's master name. The format is somewhat in flux
+and is not very consistent among the different masters, but something
+like this should suffice:
+
+``` py
+'your-new-builder': {
+ 'chromium_config': 'chromium',
+ 'gclient_config': 'chromium',
+ 'chromium_apply_config': ['mb', 'ninja_confirm_noop'],
+ 'chromium_config_kwargs': {
+ 'BUILD_CONFIG': 'Release', # or 'Debug', as appropriate
+ 'TARGET_BITS': 64, # or 32, for some mobile builders
+ },
+ 'testing': {
+ 'platform': '$PLATFORM', # one of 'mac', 'win', or 'linux'
+ },
+
+ # Optional: where to upload test results. Valid values include:
+ # 'public_server' for test-results.appspot.com
+ # 'staging_server' for test-results-test.appspot.com
+ # 'no_server' to disable upload
+ 'test_results_config': 'public_server',
+
+ # There are a variety of other options; most of them are either
+ # unnecessary in most cases or are deprecated. If you think one
+ # may be applicable, please reach out or ask your reviewer.
+}
+```
+
+For chromium try builders, you'll also want to set up mirroring.
+You can do so by adding your new try builder to [trybots.py][21].
+
+A typical entry will just reference the matching CI builder, e.g.:
+
+``` py
+TRYBOTS = freeze({
+ # ...
+
+ 'tryserver.chromium.example': {
+ 'builders': {
+ # If you want to build and test the same targets as one
+ # CI builder, you can just do this:
+ 'your-new-builder': simple_bot({
+ 'mastername': 'chromium.example',
+ 'buildername': 'your-new-builder'
+ }),
+
+ # If you want to build the same targets as one CI builder
+ # but not test anything, you can do this:
+ 'your-new-compile-builder': simple_bot({
+ 'mastername': 'chromium.example',
+ 'buildername': 'your-new-builder',
+ }, analyze_mode='compile'),
+
+ # If you want to build and test the same targets as a builder/tester
+ # CI pair, you can do this:
+ 'your-new-tester': simple_bot({
+ 'mastername': 'chromium.example',
+ 'buildername': 'your-new-builder',
+ 'tester': 'your-new-tester',
+ }),
+
+ # If you want to mirror multiple try bots, please reach out.
+ },
+ },
+
+ # ...
+})
+```
+
+## Chromium configuration
+
+Lastly, you need to configure a variety of things in the chromium repo.
+It's generally ok to land all of them in a single CL.
+
+### LUCI services
+
+LUCI services used by chromium are configured in [//infra/config/global][6].
+
+#### Buildbucket
+
+Buildbucket is responsible for taking a build scheduled by a user or
+an agent and translating it into a swarming task. Its configuration
+includes things like:
+
+ * ACLs for scheduling and viewing builds
+ * Swarming dimensions
+ * Recipe name and properties
+
+Buildbucket's configuration schema is [here][7].
+Chromium's buildbucket configuration is [here][8].
+
+A typical chromium builder won't need to configure much. Adding a
+`builders` entry to the appropriate bucket
+(`luci.chromium.ci` for CI / waterfall, `luci.chromium.try` for try)
+with the new builder's name, the mixin containing the appropriate
+master name, and perhaps one or two dimensions should be sufficient,
+e.g.:
+
+``` sh
+buckets {
+ name: "luci.chromium.ci"
+ ...
+
+ swarming {
+ ...
+
+ builders {
+ name: "your-new-builder"
+
+ # To determine what you should include here, look for an
+ # existing mixin containing
+ #
+ # recipe {
+ # properties: "mastername:$MASTER_NAME"
+ # }
+ #
+ mixins: "$MASTER_NAME_MIXIN"
+
+ # Add other mixins and dimensions as necessary. You will
+ # usually at least want an os dimension configured, so if
+ # none of your included mixins have one, consider adding one.
+ }
+ }
+}
+```
+
+#### Milo
+
+Milo is responsible for displaying builders and build histories on a
+set of consoles. Its configuration includes the definitions of those
+consoles.
+
+Milo's configuration schema is [here][9].
+Chromium's milo configuration is [here][10].
+
+A typical chromium builder should be added to one or two consoles
+at most: one corresponding to its master, and possibly the main
+console, e.g.:
+
+``` sh
+consoles {
+ ...
+ name: "$MASTER_NAME"
+ ...
+ builders {
+ name: "buildbucket/$BUCKET_NAME/$BUILDER_NAME"
+
+ # A builder's category is a pipe-delimited list of strings
+ # that determines how a builder is grouped on a console page.
+ category: "$LARGE_GROUP|$MEDIUM_GROUP|$SMALL_GROUP"
+
+ # A builder's short name is a string up to three characters
+ # long that lets someone uniquely identify it among builders
+ # in the same category.
+ short_name: "$ID"
+ }
+}
+```
+
+#### Scheduler (CI / waterfall builders only)
+
+The scheduler is responsible for triggering CI / waterfall builders.
+
+Scheduler's configuration schema is [here][11].
+Chromium's scheduler configuration is [here][12].
+
+A typical chromium builder will need a job configuration. A chromium
+builder that's triggering on new commits or on a regular schedule
+(as opposed to being triggered by a parent builder) will also need
+a trigger entry.
+
+``` sh
+trigger {
+ id: "master-gitiles-trigger"
+
+ ...
+
+ # Adding your builder to the master-gitiles-trigger
+ # will cause your builder to be triggered on new commits
+ # to chromium's master branch.
+ triggers: "your-new-ci-builder"
+}
+
+job {
+ id: "your-new-ci-builder"
+
+ # acl_sets should either be
+ # - "default" for builders that are triggered by the scheduler
+ # (i.e. anything triggering on new commits or on a cron)
+ # - "triggered-by-parent-builders" for builders that are
+ # triggered by other builders
+ acl_sets: "default"
+
+ buildbucket: {
+ server: "cr-buildbucket.appspot.com"
+ bucket: "luci.chromium.ci"
+ builder: "your-new-ci-builder"
+ }
+}
+```
+
+### Recipe-specific configurations
+
+#### chromium & chromium\_trybot
+
+The build and test configurations used by the main `chromium` and
+`chromium_trybot` recipes are stored src-side:
+
+* **Build configuration**: the gn configuration used by chromium
+recipe builders is handled by [MB][13]. MB's configuration is documented
+[here][14]. You only need to modify it if your new builder will be
+compiling (a sketch of an MB entry follows below).
+
+* **Test configuration**: the test configuration used by chromium
+recipe builders is in a group of `.pyl` and derived `.json` files
+in `//testing/buildbot`. The format is described [here][15].
+
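+For orientation, here is a rough sketch of what an MB entry for a new
+builder might look like. The config and mixin names here are illustrative,
+not prescriptive; consult the MB user guide and neighboring entries for
+the authoritative format.
+
+``` py
+# //tools/mb/mb_config.pyl (hypothetical fragment)
+{
+  'masters': {
+    'chromium.example': {
+      # Maps the builder to a named build config...
+      'your-new-builder': 'release_bot',
+    },
+  },
+  'configs': {
+    # ...which expands to the mixins that define its gn args.
+    'release_bot': ['release', 'goma'],
+  },
+  # ...
+}
+```
+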
+## Questions? Feedback?
+
+If you're in need of further assistance, if you're not sure about
+one or more steps, or if you found this documentation lacking, please
+reach out to infra-dev@chromium.org or [file a bug][19]!
+
+[1]: http://go/infrasys-bug
+[2]: http://go/cci-capacity-bug
+[3]: https://bit.ly/chromium-build-naming
+[4]: https://luci-config.appspot.com/#/services/chromium-swarm
+[5]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests
+[6]: /infra/config/global
+[7]: https://luci-config.appspot.com/schemas/projects:cr-buildbucket.cfg
+[8]: /infra/config/global/cr-buildbucket.cfg
+[9]: http://luci-config.appspot.com/schemas/projects:luci-milo.cfg
+[10]: /infra/config/global/luci-milo.cfg
+[11]: https://chromium.googlesource.com/infra/luci/luci-go/+/master/scheduler/appengine/messages/config.proto
+[12]: /infra/config/global/luci-scheduler.cfg
+[13]: /tools/mb/README.md
+[14]: /tools/mb/docs/user_guide.md#the-mb_config_pyl-config-file
+[15]: /testing/buildbot/README.md
+[16]: https://chrome-internal.googlesource.com/infradata/config
+[17]: https://chromium.googlesource.com/chromium/tools/build
+[18]: /
+[19]: https://g.co/bugatrooper
+[20]: https://chromium.googlesource.com/infra/luci/luci-py/+/master/appengine/swarming/proto/bots.proto
+[21]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests/trybots.py
diff --git a/chromium/docs/jumbo.md b/chromium/docs/jumbo.md
index e9fed8d3b70..0daa6d502da 100644
--- a/chromium/docs/jumbo.md
+++ b/chromium/docs/jumbo.md
@@ -51,7 +51,7 @@ source files.
## Tuning
-By default at most `50`, or `8` when using goma, files are merged at a
+By default on average `50`, or `8` when using goma, files are merged at a
time. The more files that are merged, the less total CPU time is
needed, but parallelism is reduced. This number can be changed by
setting `jumbo_file_merge_limit`.
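+
+For example, in `args.gn` (the value here is purely illustrative):
+
+```
+jumbo_file_merge_limit = 25
+```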
diff --git a/chromium/docs/linux_chromium_packages.md b/chromium/docs/linux_chromium_packages.md
index a55c46e4f93..375f66e5465 100644
--- a/chromium/docs/linux_chromium_packages.md
+++ b/chromium/docs/linux_chromium_packages.md
@@ -9,7 +9,7 @@ TODO: Move away from tables.
| **Distro** | **Contact** | **URL for packages** | **URL for distro-specific patches** |
|:-----------|:------------|:---------------------|:------------------------------------|
-| Ubuntu | Chad Miller `chad.miller@canonical.com` | https://launchpad.net/ubuntu/+source/chromium-browser | https://code.launchpad.net/ubuntu/+source/chromium-browser |
+| Ubuntu | Olivier Tilloy `olivier.tilloy@canonical.com` | https://launchpad.net/ubuntu/+source/chromium-browser | https://code.launchpad.net/~chromium-team |
| Debian | [see package page](http://packages.debian.org/sid/chromium) | in standard repo | [debian patch tracker](http://patch-tracker.debian.org/package/chromium-browser/) |
| openSUSE | Raymond Wooninck `tittiatcoke@gmail.com` | http://software.opensuse.org/search?baseproject=ALL&p=1&q=chromium | ?? |
| Arch | Evangelos Foutras `evangelos@foutrelis.com` | http://www.archlinux.org/packages/extra/x86_64/chromium/ | [link](http://projects.archlinux.org/svntogit/packages.git/tree/trunk?h=packages/chromium) |
diff --git a/chromium/docs/linux_debugging.md b/chromium/docs/linux_debugging.md
index d2baee663ca..aff0dd77f0a 100644
--- a/chromium/docs/linux_debugging.md
+++ b/chromium/docs/linux_debugging.md
@@ -273,7 +273,7 @@ You can improve GDB load time significantly at the cost of link time by
splitting symbols from the object files. In GN, set `use_debug_fission=false` in
your "gn args".
-### Source level debug with -fdebug-prefix-map
+### Source level debug with -fdebug-compilation-dir
When you enable GN config `strip_absolute_paths_from_debug_symbols`, this is
enabled by default for goma on Linux build, you need to add following command
diff --git a/chromium/docs/linux_minidump_to_core.md b/chromium/docs/linux_minidump_to_core.md
index 7f25c88d449..cb82416caad 100644
--- a/chromium/docs/linux_minidump_to_core.md
+++ b/chromium/docs/linux_minidump_to_core.md
@@ -13,7 +13,10 @@ Use `minidump-2-core` to convert the minidump file to a core file. On Linux, one
can build the minidump-2-core target in a Chromium checkout, or alternatively,
build it in a Google Breakpad checkout.
- $ minidump-2-core foo.dmp > foo.core
+```shell
+$ ninja -C out/Release minidump-2-core
+$ ./out/Release/minidump-2-core foo.dmp > foo.core
+```
## Retrieving Chrome binaries
@@ -24,17 +27,24 @@ _debug link_ method to specify the debugging file. Either way be sure to put
`chrome` and `chrome.debug` (the stripped debug information) in the same
directory as the core file so that the debuggers can find them.
+For Chrome OS release binaries, look for `debug-*.tgz` files on
+GoldenEye.
+
## Loading the core file into gdb/cgdb
The recommended syntax for loading a core file into gdb/cgdb is as follows,
specifying both the executable and the core file:
- $ cgdb chrome foo.core
+```shell
+$ cgdb chrome foo.core
+```
If the executable is not available then the core file can be loaded on its own
but debugging options will be limited:
- $ cgdb -c foo.core
+```shell
+$ cgdb -c foo.core
+```
## Loading the core file into Qtcreator
@@ -53,7 +63,9 @@ approximately) when the Chrome build was created then you can tell
relative to the out/Release directory you just need to add that directory to
your debugger search path, by adding a line similar to this to `~/.gdbinit`:
- (gdb) directory /usr/local/chromium/src/out/Release/
+```
+(gdb) directory /usr/local/chromium/src/out/Release/
+```
## Notes
@@ -79,30 +91,41 @@ figure out the address, look near the end of `foo.dmp`, which contains a copy of
One quick way to do this is with `grep`. For instance, if the executable is
`/path/to/chrome`, one can simply run:
- $ grep -a /path/to/chrome$ foo.dmp
+```shell
+$ grep -a /path/to/chrome$ foo.dmp
+
+7fe749a90000-7fe74d28f000 r-xp 00000000 08:07 289158 /path/to/chrome
+7fe74d290000-7fe74d4b7000 r--p 037ff000 08:07 289158 /path/to/chrome
+7fe74d4b7000-7fe74d4e0000 rw-p 03a26000 08:07 289158 /path/to/chrome
+```
- 7fe749a90000-7fe74d28f000 r-xp 00000000 08:07 289158 /path/to/chrome
- 7fe74d290000-7fe74d4b7000 r--p 037ff000 08:07 289158 /path/to/chrome
- 7fe74d4b7000-7fe74d4e0000 rw-p 03a26000 08:07 289158 /path/to/chrome
In this case, `7fe749a90000` is the base address for `/path/to/chrome`, but gdb
takes the start address of the file's text section. To calculate this, one will
need a copy of `/path/to/chrome`, and run:
- $ objdump -x /path/to/chrome | grep '\.text' | head -n 1 | tr -s ' ' | \
- cut -d' ' -f 7
+```shell
+$ objdump -x /path/to/chrome | grep '\.text' | head -n 1 | tr -s ' ' | \
+ cut -d' ' -f 7
+
+005282c0
+```
- 005282c0
Now add the two addresses: `7fe749a90000 + 005282c0 = 7fe749fb82c0` and in gdb, run:
- (gdb) add-symbol-file /path/to/chrome 0x7fe749fb82c0
+```
+(gdb) add-symbol-file /path/to/chrome 0x7fe749fb82c0
+```
Then use gdb as normal.
## Other resources
For more discussion on this process see
-[Debugging a Minidump](https://www.chromium.org/chromium-os/how-tos-and-troubleshooting/crash-reporting/debugging-a-minidump).
+[Debugging a Minidump].
This page discusses the same process in the context of Chrome OS and many of the
concepts and techniques overlap.
+
+[Debugging a Minidump]: https://www.chromium.org/chromium-os/packages/crash-reporting/debugging-a-minidump
diff --git a/chromium/docs/linux_running_asan_tests.md b/chromium/docs/linux_running_asan_tests.md
index 1b6089db01c..dcf1e1d3604 100644
--- a/chromium/docs/linux_running_asan_tests.md
+++ b/chromium/docs/linux_running_asan_tests.md
@@ -1,7 +1,7 @@
# Running Chrome tests with AddressSanitizer (asan) and LeakSanitizer (lsan)
-It is relatively straightforward to run asan/lsan tests. The only tricky part
-is that some environment variables need to be set.
+Running asan/lsan tests requires changing the build and setting a few
+environment variables.
Changes to args.gn (i.e., `out/Release/args.gn`):
@@ -10,10 +10,17 @@ is_asan = true
is_lsan = true
```
-How to run the test:
+Setting up environment variables and running the test:
```sh
$ export ASAN_OPTIONS="symbolize=1 external_symbolizer_path=./third_party/llvm-build/Release+Asserts/bin/llvm-symbolizer detect_leaks=1 detect_odr_violation=0"
$ export LSAN_OPTIONS=""
$ out/Release/browser_tests
```
+
+Stack traces (such as those emitted by `base::debug::StackTrace().Print()`) may
+not be fully symbolized. The following snippet can symbolize them:
+
+```sh
+$ out/Release/browser_tests 2>&1 | ./tools/valgrind/asan/asan_symbolize.py
+```
diff --git a/chromium/docs/linux_sandbox_ipc.md b/chromium/docs/linux_sandbox_ipc.md
index 5ff709067c6..f0555b8d476 100644
--- a/chromium/docs/linux_sandbox_ipc.md
+++ b/chromium/docs/linux_sandbox_ipc.md
@@ -4,11 +4,13 @@ The Sandbox IPC system is separate from the 'main' IPC system. The sandbox IPC
is a lower level system which deals with cases where we need to route requests
from the bottom of the call stack up into the browser.
-The motivating example is Skia, which uses fontconfig to load fonts. In a
-chrooted renderer we cannot access the user's fontcache, nor the font files
-themselves. However, font loading happens when we have called through WebKit,
-through Skia and into the SkFontHost. At this point, we cannot loop back around
-to use the main IPC system.
+The motivating example used to be Skia, which uses fontconfig to load
+fonts. However, the OOP IPC for fontconfig has since moved to the Font
+Service and the `components/services/font/public/cpp/font_loader.h` interface.
+
+These days, only the out-of-process localtime implementation and an OOP
+call for creating a shared memory segment still use the Sandbox IPC
+file-descriptor-based system. See `sandbox/linux/services/libc_interceptor.cc`.
Thus we define a small IPC system which doesn't depend on anything but `base`
and which can make synchronous requests to the browser process.
@@ -36,22 +38,12 @@ requests so that should be a good starting point.
Here is a (possibly incomplete) list of endpoints in the renderer:
-### fontconfig
-
-As mentioned above, the motivating example of this is dealing with fontconfig
-from a chrooted renderer. We implement our own Skia FontHost, outside of the
-Skia tree, in `skia/ext/SkFontHost_fontconfig**`.
-
-There are two methods used. One for performing a match against the fontconfig
-data and one to return a file descriptor to a font file resulting from one of
-those matches. The only wrinkle is that fontconfig is a single-threaded library
-and it's already used in the browser by GTK itself.
+### localtime
-Thus, we have a couple of options:
+`content/browser/sandbox_ipc_linux.h` defines `HandleLocalTime`, which is
+implemented in `sandbox/linux/services/libc_interceptor.cc`.
-1. Handle the requests on the UI thread in the browser.
-1. Handle the requests in a separate address space.
+### Creating a shared memory segment
-The original implementation did the former (handle on UI thread). This turned
-out to be a terrible idea, performance wise, so we now handle the requests on a
-dedicated process.
+`content/browser/sandbox_ipc_linux.h` defines `HandleMakeSharedMemorySegment`,
+which is implemented in `content/browser/sandbox_ipc_linux.cc`.
diff --git a/chromium/docs/mac_build_instructions.md b/chromium/docs/mac_build_instructions.md
index 782319b5ce6..f4dd826b3c8 100644
--- a/chromium/docs/mac_build_instructions.md
+++ b/chromium/docs/mac_build_instructions.md
@@ -193,6 +193,16 @@ would like to debug in a graphical environment, rather than using `lldb` at the
command line, that is possible without building in Xcode (see
[Debugging in Xcode](https://www.chromium.org/developers/how-tos/debugging-on-os-x/building-with-ninja-debugging-with-xcode)).
+Tips for printing variables from the `lldb` prompt (in Xcode or in the
+terminal):
+* If `uptr` is a `std::unique_ptr`, the address it wraps is accessible as
+  `uptr.__ptr_.__value_` (see the example session below).
+* To pretty-print `base::string16`, ensure you have a `~/.lldbinit` file and
+  add the following line to it (replace {SRC} with the actual path to the
+  root of Chromium's sources):
+```
+command script import {SRC}/tools/lldb/lldb_chrome.py
+```
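+
+A minimal, hypothetical `lldb` session using both tips (`uptr` and `str16`
+are illustrative variable names):
+
+```
+# Print the raw pointer held by a std::unique_ptr:
+(lldb) p uptr.__ptr_.__value_
+# With lldb_chrome.py loaded, base::string16 values pretty-print:
+(lldb) p str16
+```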
+
## Update your checkout
To update an existing checkout, you can run
diff --git a/chromium/docs/media/profile-screenshot.png b/chromium/docs/media/profile-screenshot.png
new file mode 100644
index 00000000000..7b07a468f08
--- /dev/null
+++ b/chromium/docs/media/profile-screenshot.png
Binary files differ
diff --git a/chromium/docs/memory/README.md b/chromium/docs/memory/README.md
index e8ee0c8566c..c64253dbc24 100644
--- a/chromium/docs/memory/README.md
+++ b/chromium/docs/memory/README.md
@@ -68,15 +68,15 @@ Second, familiarize yourself with the following:
## Key knowledge areas and contacts
| Knowledge Area | Contact points |
|----------------|----------------|
-| Chrome on Android | mariakhomenko, dskiba, ssid |
-| Browser Process | mariakhomenko, dskiba, ssid |
+| Chrome on Android | lizeb, pasko, ssid |
+| Browser Process | ssid, erikchen, etienneb |
| GPU/cc | ericrk |
-| Memory metrics | erikchen, primano, ajwong, wez |
-| Native Heap Profiling | primiano, dskiba, ajwong |
+| Memory metrics | ssid, erikchen, primiano, ajwong, wez |
+| Native Heap Profiling | primiano, ajwong |
| Net Stack | mmenke, rsleevi, xunjieli |
| Renderer Process | haraken, tasak, hajimehoshi, keishi, hiroshige |
| V8 | hpayer, ulan, verwaest, mlippautz |
-| Out of Process Heap Profiling | erikchen, ajwong, brettw, etienneb
+| Out of Process Heap Profiling | erikchen, ajwong, brettw, etienneb |
## Other docs
diff --git a/chromium/docs/mojo_guide.md b/chromium/docs/mojo_guide.md
new file mode 100644
index 00000000000..ba38d1848fc
--- /dev/null
+++ b/chromium/docs/mojo_guide.md
@@ -0,0 +1,157 @@
+# Mojo For Chromium Developers
+
+## Overview
+
+This document contains the minimum amount of information needed for a developer
+to start using Mojo in Chromium. For more detailed documentation on the C++
+bindings, see [this link](/mojo/public/cpp/bindings/README.md).
+
+## Terminology
+
+A **message pipe** is a pair of **endpoints**. Each endpoint has a queue of
+incoming messages, and writing a message to one endpoint effectively enqueues
+that message on the other endpoint. Message pipes are thus bidirectional.
+
+A **mojom** file describes **interfaces**, which define strongly typed message
+structures, similar to proto files.
+
+Given a **mojom interface** and a **message pipe**, the two **endpoints**
+can be given the labels **InterfacePtr** and **Binding**. This now describes a
+strongly typed **message pipe** which transports messages described by the
+**mojom interface**. The **InterfacePtr** is the **endpoint** which "sends"
+messages, and the **Binding** "receives" messages. Note that the **message
+pipe** itself is still bidirectional, and it's possible for a message to have a
+response callback, which the **InterfacePtr** would receive.
+
+Another way to think of this is that an **InterfacePtr** is capable of making
+remote calls on an implementation of the mojom interface associated with the
+**Binding**.
+
+The **Binding** itself is just glue that wires the endpoint's message queue up
+to some implementation of the interface provided by the developer.
+
+## Example
+
+Let's apply this to Chrome. Let's say we want to send a "Ping" message from a
+Browser to a Renderer. First we need to define the mojom interface.
+
+```
+module example.mojom;
+interface PingResponder {
+ // Receives a "Ping" and responds with a random integer.
+  Ping() => (int32 random);
+};
+```
+
+Now let's make a MessagePipe.
+```cpp
+example::mojom::PingResponderPtr ping_responder;
+example::mojom::PingResponderRequest request = mojo::MakeRequest(&ping_responder);
+```
+
+In this example, ```ping_responder``` is the **InterfacePtr**, and ```request```
+is an **InterfaceRequest**, which is a **Binding** precursor that will shortly
+be turned into a **Binding**. Now we can send our Ping message.
+
+```cpp
+auto callback = base::Bind(&OnPong);
+ping_responder->Ping(callback);
+```
+
+Important aside: If we want to receive the response, we must keep the object
+```ping_responder``` alive. After all, it's just a wrapper around a **message
+pipe endpoint**; if it were to go away, there'd be nothing left to receive the
+response.
+
+We're done! Of course, if everything were this easy, this document wouldn't need
+to exist. We've taken the hard problem of sending a message from the Browser to
+a Renderer, and transformed it into a problem where we just need to take the
+```request``` object, pass it to the Renderer, turn it into a **Binding**, and
+implement the interface.
+
+In Chrome, processes host services, and the services themselves are connected to
+a Service Manager via **message pipes**. It's easy to pass ```request``` to the
+appropriate Renderer using the Service Manager, but this requires explicitly
+declaring our intentions via manifest files. For this example, we'll use the
+content_browser service [manifest
+file](https://cs.chromium.org/chromium/src/content/public/app/mojo/content_browser_manifest.json)
+and the content_renderer service [manifest
+file](https://cs.chromium.org/chromium/src/content/public/app/mojo/content_renderer_manifest.json).
+
+```js
+content_renderer_manifest.json:
+...
+ "interface_provider_specs": {
+ "service_manager:connector": {
+ "provides": {
+ "cool_ping_feature": [
+ "example::mojom::PingResponder"
+ ]
+ },
+ },
+...
+```
+
+```js
+content_browser_manifest.json:
+...
+ "interface_provider_specs": {
+ "service_manager:connector": {
+ "requires": {
+ "content_renderer": [ "cool_ping_feature" ],
+ },
+ },
+ },
+...
+```
+
+These changes indicate that the content_renderer service provides the interface
+PingResponder, under the **capability** named "cool_ping_feature". And the
+content_browser service intends to use this feature.
+```content::BindInterface``` is a helper function that takes ```request``` and
+sends it to the renderer process via the Service Manager.
+
+```cpp
+content::RenderProcessHost* host = GetRenderProcessHost();
+content::BindInterface(host, std::move(request));
+```
+
+Putting this all together for the browser process:
+```cpp
+example::mojom::PingResponderPtr ping_responder; // Make sure to keep this alive! Otherwise the response will never be received.
+example::mojom::PingResponderRequest request = mojo::MakeRequest(&ping_responder);
+ping_responder->Ping(base::BindOnce(&OnPong));
+content::RenderProcessHost* host = GetRenderProcessHost();
+content::BindInterface(host, std::move(request));
+```
+
+In the Renderer process, we need to write an implementation for PingResponder,
+and ensure that a **Binding** is created using the transported ```request```. In a
+standalone Mojo service, this would require us to implement
+```service_manager::Service::OnBindInterface()```. In Chrome, this is abstracted
+behind ```content::ConnectionFilters``` and
+```service_manager::BinderRegistry```. This is typically done in
+```RenderThreadImpl::Init```.
+
+```cpp
+class PingResponderImpl : public example::mojom::PingResponder {
+ public:
+  void BindToInterface(example::mojom::PingResponderRequest request) {
+    binding_ = std::make_unique<mojo::Binding<example::mojom::PingResponder>>(
+        this, std::move(request));
+  }
+  void Ping(PingCallback callback) override { std::move(callback).Run(4); }
+
+  std::unique_ptr<mojo::Binding<example::mojom::PingResponder>> binding_;
+};
+
+RenderThreadImpl::Init() {
+...
+ this->ping_responder = std::make_unique<PingResponderImpl>();
+ auto registry = base::MakeUnique<service_manager::BinderRegistry>();
+
+  // This assumes that |this->ping_responder| will outlive |registry|.
+  registry->AddInterface(base::Bind(&PingResponderImpl::BindToInterface,
+                                    base::Unretained(this->ping_responder.get())));
+
+ GetServiceManagerConnection()->AddConnectionFilter(
+ base::MakeUnique<SimpleConnectionFilter>(std::move(registry)));
+...
+```
diff --git a/chromium/docs/ozone_drm_for_linux.md b/chromium/docs/ozone_drm_for_linux.md
new file mode 100644
index 00000000000..7e48abc86ea
--- /dev/null
+++ b/chromium/docs/ozone_drm_for_linux.md
@@ -0,0 +1,96 @@
+# Running ChromeOS UI on Linux
+Note that these instructions may not work for you. They have been
+verified to work as of 2018/06/06 on standard Google engineering
+workstations as issued to engineers on the Chrome team. Please
+submit patches describing the steps needed for other machines or distributions.
+
+## Nouveau
+If you have an NVidia card, you probably have the binary drivers installed.
+These install a blacklist for the nouveau kernel modules. It is best to remove
+the nvidia driver and switch to nouveau completely:
+
+```
+$ sudo apt-get remove --purge "nvidia*"
+$ sudo apt-get install xserver-xorg-input-evdev xserver-xorg-input-mouse xserver-xorg-input-kbd xserver-xorg-input-libinput xserver-xorg-video-nouveau
+$ sudo dpkg-reconfigure xserver-xorg
+$ # If you are using a Google development machine:
+$ sudo goobuntu-config set custom_video_driver custom
+```
+
+The default version of the nouveau xorg driver is too old for the NV117
+chipset in Z840 machines. Install a newer version:
+
+```
+$ cd /tmp
+$ wget http://http.us.debian.org/debian/pool/main/x/xserver-xorg-video-nouveau/xserver-xorg-video-nouveau_1.0.15-2_amd64.deb
+$ sudo apt-get install ./xserver-xorg-video-nouveau_1.0.15-2_amd64.deb
+```
+
+At this point you *must reboot*. If you run into issues loading video at
+boot, disable `load_video` and `gfx_mode` in `/boot/grub/grub.cfg`.
+
+## Building Chrome
+Check out chromium as per your usual workflow. See [Get the Code:
+Checkout, Build, & Run
+Chromium](https://www.chromium.org/developers/how-tos/get-the-code).
+Googlers should check out chromium source code as described here:
+[Building Chromium on a corporate Linux
+workstation](https://companydoc.corp.google.com/company/teams/chrome/linux_build_instructions.md?cl=head)
+
+We want to build on Linux on top of Ozone with the gbm platform. The
+following instructions build chromium targets along with the minigbm
+that lives in the chromium tree `src/third_party/minigbm`. Currently,
+there is no builder for this configuration, so while this worked
+(mostly) when this document was written, some experimentation may
+be necessary.
+
+Set the gn args for your output dir target `out/Nouveau` with:
+
+```
+$ gn args out/Nouveau
+Add the following arguments:
+dcheck_always_on = true
+use_ozone = true
+target_os = "chromeos"
+ozone_platform_gbm = true
+ozone_platform = "gbm"
+use_system_minigbm = false
+target_sysroot = "//build/linux/debian_jessie_amd64-sysroot"
+is_debug = false
+use_goma = true
+use_xkbcommon = true
+#use_evdev_gestures = true
+#use_system_libevdev = false
+#use_system_gestures = false
+
+# Non-Googlers should set the next two flags to false
+is_chrome_branded = true
+is_official_build = true
+use_pulseaudio = false
+```
+
+Build official release build of chrome:
+
+```
+$ ninja -j768 -l24 -C out/Nouveau chrome chrome_sandbox nacl_helper
+$ # Give user access to dri, input and audio device nodes:
+$ sudo sh -c "echo 'KERNEL==\"event*\", NAME=\"input/%k\", MODE=\"660\", GROUP=\"plugdev\"' > /etc/udev/rules.d/90-input.rules"
+$ sudo sh -c "echo 'KERNEL==\"card[0-9]*\", NAME=\"dri/%k\", GROUP=\"video\"' > /etc/udev/rules.d/90-dri.rules"
+$ sudo udevadm control --reload
+$ sudo udevadm trigger --action=add
+$ sudo usermod -a -G plugdev $USER
+$ sudo usermod -a -G video $USER
+$ sudo usermod -a -G audio $USER
+$ newgrp video
+$ newgrp plugdev
+$ newgrp audio
+$ # Stop pulseaudio if running:
+$ pactl exit
+```
+
+Run chrome: (Set `CHROMIUM_SRC` to the directory containing your Chrome checkout.)
+
+```
+$ sudo chvt 8; EGL_PLATFORM=surfaceless $CHROMIUM_SRC/out/Nouveau/chrome --ozone-platform=gbm --force-system-compositor-mode --login-profile=user --user-data-dir=$HOME/.config/google-chrome-gbm --ui-prioritize-in-gpu-process --use-gl=egl --enable-wayland-server --login-manager --ash-constrain-pointer-to-root --default-tile-width=512 --default-tile-height=512 --system-developer-mode --crosh-command=/bin/bash
+```
+
+Log in to Chrome; your settings should synchronize.
+
+If not already installed, install Secure Shell from [the web store](https://chrome.google.com/webstore/detail/secure-shell/pnhechapfaindjhompbnflcldabbghjo?hl=en).
+
diff --git a/chromium/docs/privacy/OWNERS b/chromium/docs/privacy/OWNERS
new file mode 100644
index 00000000000..19809047e57
--- /dev/null
+++ b/chromium/docs/privacy/OWNERS
@@ -0,0 +1,6 @@
+dullweber@chromium.org
+glevin@chromium.org
+mkwst@chromium.org
+msramek@chromium.org
+rhalavati@chromium.org
+tnagel@chromium.org
diff --git a/chromium/docs/privacy/nonsecure-cookies.md b/chromium/docs/privacy/nonsecure-cookies.md
new file mode 100644
index 00000000000..6e401544413
--- /dev/null
+++ b/chromium/docs/privacy/nonsecure-cookies.md
@@ -0,0 +1,6 @@
+# Nonsecurely delivered cookies
+
+This page is a stub.
+
+Please refer to the [Intent to Deprecate](https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/r0UBdUAyrLk/E_iCCBCRBAAJ)
+in the meantime.
diff --git a/chromium/docs/profiling.md b/chromium/docs/profiling.md
new file mode 100644
index 00000000000..72c6abad869
--- /dev/null
+++ b/chromium/docs/profiling.md
@@ -0,0 +1,152 @@
+# CPU Profiling Chrome
+
+[TOC]
+
+## Introduction
+
+These are instructions for collecting a CPU profile of chromium. All of the profiling methods described here produce output that can be viewed using the `pprof` tool. `pprof` is highly customizable; here's a screenshot of some example `pprof` output:
+
+![pprof output screenshot](./media/profile-screenshot.png)
+
+This doc is intended to be an authoritative one-stop resource for profiling chromium. At the time of writing, there are a number of existing docs with profiling instructions, in varying states of obsolescence:
+
+* [./linux_profiling.md](./linux_profiling.md)
+* [./profiling_content_shell_on_android.md](./profiling_content_shell_on_android.md)
+* https://www.chromium.org/developers/profiling-chromium-and-webkit
+* https://www.chromium.org/developers/telemetry/profiling
+
+***promo
+CPU profiling is not to be confused with tracing or task profiling:
+
+* https://www.chromium.org/developers/how-tos/trace-event-profiling-tool
+* https://www.chromium.org/developers/threaded-task-tracking
+***
+
+## Profiling on Linux
+
+Profiling support is built into tcmalloc and exposed in chromium, so any platform that uses tcmalloc should be able to generate profiling data without using external tools.
+
+### Preparing your checkout
+
+Profiling should always be done on a Release build, which has very similar performance characteristics to an official build. Make sure the following appears in your `args.gn` file:
+
+ is_debug = false
+ enable_profiling = true
+ enable_callgrind = true
+
+### Preparing your environment
+
+By default, the profiler will take a sample 100 times per second. You can adjust this rate by setting the `CPUPROFILE_FREQUENCY` environment variable before launching chromium:
+
+ $ export CPUPROFILE_FREQUENCY=1000
+
+The maximum supported rate is 4000 samples per second.
+
+### Profiling a process over its entire lifetime
+
+To profile the main browser process, add the following argument to your chrome invocation:
+
+ --enable-profiling --profiling-at-start
+
+To profile, e.g., every renderer process, add the following argument to your chrome invocation:
+
+ --enable-profiling --profiling-at-start=renderer --no-sandbox
+
+*** promo
+The --no-sandbox argument is required to allow the renderer process to write the profiling output to the file system.
+***
+
+When the process being profiled ends, you should see one or more `chrome-profile-{process type}-{process ID}` files in your `$PWD`. Run `pprof` to view the results, e.g.:
+
+ $ pprof -web chrome-profile-renderer-12345
+
+*** promo
+`pprof` is packed with useful features for visualizing profiling data. Try `pprof --help` for more info.
+***
+
+*** promo
+Tip for Googlers: running `prodaccess` first will make `pprof` run faster, and eliminate some useless spew to the terminal.
+***
+
+### Profiling a process or thread for a defined period of time using perf
+
+First, make sure you have the `linux-perf` package installed:
+
+ $ sudo apt-get install linux-perf
+
+After starting up the browser and loading the page you want to profile, press 'Shift-Escape' to bring up the task manager, and get the Process ID of the process you want to profile.
+
+Run the perf tool like this:
+
+ $ perf record -g -p <Process ID> -o <output file>
+
+*** promo
+`perf` does not honor the `CPUPROFILE_FREQUENCY` env var. To adjust the sampling frequency, use the `-F` argument, e.g., `-F 1000`.
+***
+
+To stop profiling, press `Control-c` in the terminal window where `perf` is running. Run `pprof` to view the results, providing the path to the browser executable; e.g.:
+
+ $ pprof -web src/out/Release/chrome <perf output file>
+
+*** promo
+`pprof` is packed with useful features for visualizing profiling data. Try `pprof --help` for more info.
+***
+
+If you want to limit the profile to a single thread, run:
+
+ $ ps -T -p <Process ID>
+
+From the output, find the Thread ID (column header "SPID") of the thread you want. Now run perf:
+
+ $ perf record -g -t <Thread ID> -o <output file>
+
+Use the same `pprof` command as above to view the single-thread results.
+
+### Profiling the renderer process for a period defined in javascript
+
+You can generate a highly-focused profile for any period that can be defined in javascript using the `chrome.gpuBenchmarking` javascript interface. First, add the following command-line flags when you start chrome:
+
+ $ chrome --enable-gpu-benchmarking --no-sandbox [...]
+
+Open devtools, and in the console, use `chrome.gpuBenchmarking.startProfiling` and `chrome.gpuBenchmarking.stopProfiling` to define a profiling period. e.g.:
+
+ > chrome.gpuBenchmarking.startProfiling('perf.data'); doSomething(); chrome.gpuBenchmarking.stopProfiling()
+
+`chrome.gpuBenchmarking` has a number of useful methods for simulating user-gesture-initiated actions; for example, to profile scrolling:
+
+ > chrome.gpuBenchmarking.startProfiling('perf.data'); chrome.gpuBenchmarking.smoothScrollBy(1000, () => { chrome.gpuBenchmarking.stopProfiling() });
+
+## Profiling on Android
+
+Android (Nougat and later) supports profiling using the [simpleperf](https://developer.android.com/ndk/guides/simpleperf) tool.
+
+Follow the [instructions](./android_build_instructions.md) for building and installing chromium on android. With chromium running on the device, run the following command to start profiling on the browser process (assuming your build is in `src/out/Release`):
+
+ $ src/out/Release/bin/chrome_public_apk profile
+ Profiler is running; press Enter to stop...
+
+Once you stop the profiler, the profiling data will be copied off the device to the host machine and post-processed so it can be viewed in `pprof`, as described above.
+
+To profile the renderer process, you must have just one tab open in chromium, and use a command like this:
+
+ $ src/out/Release/bin/chrome_public_apk profile --profile-process=renderer
+
+To limit the profile to a single thread, use a command like this:
+
+ $ src/out/Release/bin/chrome_public_apk profile --profile-process=renderer --profile-thread=main
+
+The `--profile-process` and `--profile-thread` arguments support most of the common process names ('browser', 'gpu', 'renderer') and thread names ('main', 'io', 'compositor', etc.). However, if you need finer control of the process and/or thread to profile, you can specify an explicit Process ID or Thread ID. Check out the usage message for more info:
+
+ $ src/out/Release/bin/chrome_public_apk help profile
+
+## Profiling during a perf benchmark run
+
+The perf benchmark runner can generate a CPU profile over the course of running a perf test. Currently, this is supported only on Linux and Android. To get info about the relevant options, run:
+
+ $ src/tools/perf/run_benchmark help run
+
+... and look for the `--interval-profiling-*` options. For example, to generate a profile of the main thread of the renderer process during the "page interactions" phase of a perf benchmark, you might run:
+
+ $ src/tools/perf/run_benchmark run <benchmark name> --interval-profiling-target=renderer:main --interval-profiling-period=interactions --interval-profiling-frequency=2000
+
+The profiling data will be written into the `artifacts/` sub-directory of your perf benchmark output directory (default is `src/tools/perf`), to files with the naming pattern `*.profile.pb`. You can use `pprof` to view the results, as described above.
diff --git a/chromium/docs/security/OWNERS b/chromium/docs/security/OWNERS
index d3eae96f029..99b41c84c8b 100644
--- a/chromium/docs/security/OWNERS
+++ b/chromium/docs/security/OWNERS
@@ -1,6 +1,5 @@
awhalley@chromium.org
dcheng@chromium.org
-elawrence@chromium.org
estark@chromium.org
felt@chromium.org
mmoroz@chromium.org
diff --git a/chromium/docs/security/faq.md b/chromium/docs/security/faq.md
index efc562df7d6..44d5843293f 100644
--- a/chromium/docs/security/faq.md
+++ b/chromium/docs/security/faq.md
@@ -144,13 +144,21 @@ are considered security vulnerabilities in more detail.
No. Chromium contains a reflected XSS filter (called XSSAuditor) that is a
best-effort second line of defense against reflected XSS flaws found in web
-sites. We do not treat these bypasses as security bugs in Chromium because the
-underlying issue is in the web site itself. We treat them as functional bugs,
-and we do appreciate such reports.
+sites. We do not treat these bypasses as security bugs in Chromium because the
+underlying security issue is in the web site itself. Instead, we treat them as
+functional bugs in Chromium.
-The XSSAuditor is not able to defend against persistent XSS or DOM-based XSS.
-There will also be a number of infrequently occurring reflected XSS corner
-cases, however, that it will never be able to cover. Among these are:
+We do appreciate reports of XSSAuditor bypasses, and endeavor to close them.
+When reporting an XSSAuditor bypass, two pieces of information are essential:
+* The exact URL (and for POSTs, the request body) triggering the reflection.
+* The view-source: of the page showing the reflection in the page text.
+
+Please do not provide links to vulnerable production sites seen in the wild,
+as that forces us to embargo the information in the bug.
+
+Note that the XSSAuditor is not able to defend against persistent XSS or
+DOM-based XSS. There will also be a number of infrequently occurring reflected
+XSS corner cases that it will never be able to cover. Among these are:
* Multiple unsanitized variables injected into the page.
* Unexpected server side transformation or decoding of the payload.
diff --git a/chromium/docs/security/sheriff.md b/chromium/docs/security/sheriff.md
index 925c1ace2e2..8dd807e032e 100644
--- a/chromium/docs/security/sheriff.md
+++ b/chromium/docs/security/sheriff.md
@@ -67,6 +67,7 @@ transitions to **Fixed**.
close. Make sure to read bug comments where a developer might point out that it
needs more CLs, etc. Wait 24 hours before closing ClusterFuzz bugs, to give
ClusterFuzz a chance to close it automatically.
+ * [Starting point](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=Type%3D%22Bug-Security%22+%22Change-Id:%22)
* Look at open security bug reports and check that progress is occurring.
* Generally keep an eye on all bug traffic in case anything needs action or
replying to.
diff --git a/chromium/docs/security/side-channel-threat-model.md b/chromium/docs/security/side-channel-threat-model.md
new file mode 100644
index 00000000000..8a845b89c95
--- /dev/null
+++ b/chromium/docs/security/side-channel-threat-model.md
@@ -0,0 +1,374 @@
+# Post-Spectre Threat Model Re-Think
+
+Contributors: awhalley, creis, dcheng, jschuh, jyasskin, lukasza, mkwst, nasko,
+palmer, tsepez. Patches and corrections welcome!
+
+Last Updated: 29 May 2018
+
+[TOC]
+
+## Introduction
+
+In light of [Spectre/Meltdown](https://spectreattack.com/), we needed to
+re-think our threat model and defenses for Chrome renderer processes. Spectre is
+a new class of hardware side-channel attack that affects (among many other
+targets) web browsers. This document describes the impact of these side-channel
+attacks and our approach to mitigating them.
+
+> The upshot of the latest developments is that the folks working on this from
+> the V8 side are increasingly convinced that there is no viable alternative to
+> Site Isolation as a systematic mitigation to SSCAs [speculative side-channel
+> attacks]. In this new mental model, we have to assume that user code can
+> reliably gain access to all data within a renderer process through
+> speculation. This means that we definitely need some sort of ‘privileged/PII
+> data isolation’ guarantees as well, for example ensuring that password and
+> credit card info are not speculatively loaded into a renderer process without
+> user consent. — Daniel Clifford, in private email
+
+In fact, any software that both (a) runs (native or interpreted) code from more
+than one source; and (b) attempts to create a security boundary inside a single
+address space, is potentially affected. For example, software that processes
+document formats with scripting capabilities, and which loads multiple documents
+from different sources into the same process, may need to take defense measures
+similar to those described here.
+
+### Problem Statement
+
+#### Active Web Content: Renderer Processes
+
+We must assume that *active web content* (JavaScript, WebAssembly, Native
+Client, Flash, PDFium, …) will be able to read any and all data in the address
+space of the process that hosts it. Multiple independent parties have developed
+proof-of-concept exploits that illustrate the effectiveness and reliability of
+Spectre-style attacks. The loss of cross-origin confidentiality inside a single
+process is thus not merely theoretical.
+
+The implications of this are far-reaching:
+
+* An attacker that can exploit Spectre can bypass certain native code exploit
+ mitigations, even without an infoleak bug in software.
+ * ASLR
+ * Stack canaries
+ * Heap metadata canaries
+ * Potentially certain forms of control-flow integrity
+* We must consider any data that gets into a renderer process to have no
+ confidentiality from any origins running in that process, regardless of the
+ same origin policy.
+
+Additionally, attackers may develop ways to read memory from other userland
+processes (e.g. a renderer reading the browser’s memory). We do not include
+those attacks in our threat model. The hardware, microcode, and OS must
+re-establish the process boundary and the userland/kernel boundary. If the
+underlying platform does not enforce those boundaries, there’s nothing an
+application (like a web browser) can do.
+
+#### GPU Process
+
+Chrome’s GPU process handles data from all origins in a single process. It is
+not currently practical to isolate different sites or origins into their own GPU
+processes. (At a minimum, there are time and space efficiency concerns; we are
+still trying to get Site Isolation shipped and are actively resolving issues
+there.)
+
+However, WebGL exposed high-resolution clocks that are useful for exploiting
+Spectre. It was possible to temporarily remove some of them, and to coarsen
+another, with minimal breakage of web compatibility, and so [that has been
+done](https://bugs.chromium.org/p/chromium/issues/detail?id=808744). However, we
+expect to reinstate the clocks on platforms where Site Isolation is on by
+default. (See [Attenuating Clocks, below](#attenuating-clocks).)
+
+We do not currently believe that, short of full code execution, an attacker can
+control speculative execution inside the GPU process to the extent necessary to
+exploit Spectre-like vulnerabilities. [As always, evidence to the contrary is
+welcome!](https://www.google.com/about/appsecurity/chrome-rewards/index.html)
+
+#### Nastier Threat Models
+
+It is generally safest to assume that an arbitrary read-write primitive in the
+renderer process will be available to the attacker. The richness of the
+attack/API surface available in a rendering engine makes this plausible.
+However, this capability is not a freebie the way Spectre is — the attacker must
+actually find 1 or more bugs that enable the RW primitive.
+
+Site Isolation (SI) gets us closer to a place where origins face in-process
+attacks only from other origins in their `SiteInstance`, and not from any
+arbitrary origin. (Origins that include script from hostile origins will still
+be vulnerable, of course.) However, [there may be hostile origins in the same
+process](#multiple-origins-within-a-siteinstance).
+
+Strict origin isolation is not yet being worked on; we must first ship SI on by
+default. It is an open question whether strict origin isolation will turn out to
+be feasible.
+
+## Defensive Approaches
+
+These are presented in no particular order, with the exception that Site
+Isolation is currently the best and most direct solution.
+
+### Site Isolation
+
+The first order solution is to simply get cross-origin data out of the Spectre
+attacker’s address space. [Site
+Isolation](https://www.chromium.org/Home/chromium-security/site-isolation) (SI)
+more closely aligns the web security model (the same-origin policy) with the
+underlying platform’s security model (separate address spaces and privilege
+reduction).
+
+SI still has some bugs that need to be ironed out before we can turn it on by
+default, both on Desktop and on Android. As of May 2018 we believe we can turn
+it on by default, on Desktop (but not Android yet) in M67 or M68.
+
+On iOS, where Chrome is a WKWebView embedder, we must rely on [the mitigations
+that Apple is
+developing](https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-for-webkit/).
+
+All major browsers are working on some form of site isolation, and [we are
+collaborating publicly on a way for sites to opt in to
+isolation](https://groups.google.com/a/chromium.org/forum/#!forum/isolation-policy),
+to potentially make implementing and deploying site isolation easier. (Chrome
+Desktop’s Site Isolation will be on by default, regardless, in the M67 – M68
+timeframe.)
+
+#### Limitations
+
+##### Incompleteness of CORB
+
+Site Isolation depends on [cross-origin read
+blocking](https://chromium.googlesource.com/chromium/src/+/master/content/browser/loader/cross_origin_read_blocking_explainer.md)
+(CORB; formerly known as cross-site document blocking or XSDB) to prevent a
+malicious website from pulling in sensitive cross-origin data. Otherwise, an
+attacker could use markup like `<img src="http://example.com/secret.json">` to
+get cross-origin data within reach of Spectre or other OOB-read exploits.
+
+As of M63, CORB protects:
+
+* HTML, JSON, and XML responses.
+ * Protection requires the resource to be served with the correct
+ `Content-Type` header. [We recommend using `X-Content-Type-Options:
+    nosniff`](https://www.chromium.org/Home/chromium-security/ssca); see the
+    sketch below.
+ * In M65 we broadened which content types are considered JSON and XML. (E.g.
+ M63 didn’t consider `*+xml`.)
+* text/plain responses which sniff as HTML, XML, or JSON.
+
+Today, CORB doesn’t protect:
+
+* Responses without a `Content-Type` header.
+* Particular content types:
+ * `image/*`
+ * `video/*`
+ * `audio/*`
+ * `text/css`
+ * `font/*`
+ * `application/javascript`
+ * PDFs, ZIPs, and other unrecognized MIME types
+
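+As a concrete sketch, serving sensitive resources with an explicit MIME type
+and `nosniff`, as recommended above, keeps them within CORB's protection
+(the headers below are illustrative):
+
+```
+HTTP/1.1 200 OK
+Content-Type: application/json; charset=utf-8
+X-Content-Type-Options: nosniff
+```
+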
+Site operators should read and follow, where applicable, [our guidance for
+maximizing CORB and other defensive
+features](https://developers.google.com/web/updates/2018/02/meltdown-spectre).
+(There is [an open bug to add a CORB evaluator to
+Lighthouse](https://bugs.chromium.org/p/chromium/issues/detail?id=806070).)
+
+<a name="multiple-origins-within-a-siteinstance"></a>
+##### Multiple Origins Within A `SiteInstance`
+
+A *site* is defined as the effective TLD + 1 DNS label (“eTLD+1”) and the URL
+scheme. This is a broader category than the origin, which is the scheme, entire
+hostname, and port number. All of these origins belong to the same site:
+
+* https, www.example.com, 443
+* https, www.example.com, 8443
+* https, goaty-desktop.internal.example.com, 443
+* https, compromised-and-hostile.unmaintained.example.com, 8443
+
+Therefore, even once we have shipped SI on all platforms and have shaken out all
+the bugs, renderers will still not be perfect compartments for origins. So we
+will still need to take a multi-faceted approach to UXSS, memory corruption, and
+OOB-read attacks like Spectre.
+
+Note that we are looking into the possibility of disabling assignments to
+`document.domain` (via [origin-wide](https://wicg.github.io/origin-policy)
+application of [Feature Policy](https://wicg.github.io/feature-policy/) or the
+like). This would open the possibility that we could isolate at the origin
+level.
+
+##### Memory Cost
+
+With SI, Chrome tends to spawn more renderer processes, which tends to lead to
+greater overall memory usage (conservative estimates seem to be about 10%). On
+many Android devices, it is more than 10%, and this additional cost can be
+prohibitive. However, each renderer is smaller and shorter-lived under Site
+Isolation.
+
+##### Plug-Ins
+
+###### PDFium
+
+Chrome uses different PPAPI processes per origin, for secure origins. (We
+tracked this as [Issue
+809614](https://bugs.chromium.org/p/chromium/issues/detail?id=809614).)
+
+###### Flash
+
+Click To Play greatly reduces the risk that Flash-borne Spectre (and other)
+exploits will be effective at scale. Even so, [we might want to consider SI for
+Flash](https://bugs.chromium.org/p/chromium/issues/detail?id=816318).
+
+##### All Frames In A `<webview>` Run In The Same Process
+
+[`<webview>`s run in a separate renderer
+process](https://developer.chrome.com/apps/tags/webview), but that single
+process hosts all frames in the `<webview>` (even with Strict Site Isolation
+enabled elsewhere in Chrome). Extra work is needed to fix this.
+
+Mitigating factors:
+
+* `<webview>` is available only to Web UI and Chrome Apps (which are deprecated
+ outside of Chrome OS).
+* `<webview>` contents are in a separate storage partition (separate from the
+ normal profile and from the Chrome App using the `<webview>` tag). The Chrome
+ App is also in an additional separate storage partition.
+
+Chrome WebUI pages must not, and Chrome Apps should not, use `<webview>` for
+hosting arbitrary web pages. They must only allow a single trustworthy page or
+set of pages. The user already has to trust the Chrome App to do the right thing
+(there is no Omnibox, for example) and only take the user to safe sites. If we
+can’t enforce this programmatically, we may consider enforcing it through code
+review.
+
+##### Android `WebView`
+
+Android `WebView`s run in their own process as of Android O, so the hosting
+application gets protection from malicious web content. However, all origins are
+run in the same `WebView` process.
+
+### Ensure User Intent When Sending Data To A Renderer
+
+Before copying sensitive data into a renderer process, we should somehow get the
+person’s affirmative knowledge and consent. This has implications for all types
+of form auto-filling: normal form data, passwords, payment instruments, and any
+others. It seems like we are [currently in a pretty good place on that
+front](https://bugs.chromium.org/p/chromium/issues/detail?id=802993), with one
+exception: usernames and passwords get auto-filled into the shadow DOM, and then
+revealed to the real DOM on a (potentially forged?) user gesture. These
+credentials are origin-bound, however.
+
+The [Credential Management
+API](https://developer.mozilla.org/en-US/docs/Web/API/Credential_Management_API)
+still poses a risk, exposing usernames/passwords without a gesture for the
+subset of users who've accepted the auto-sign-in mechanism.
+
+What should count as a secure gesture is a gesture on relevant, well-labeled
+browser chrome, handled in the browser process. Tracking the gesture in the
+renderer, that can be forged by web content that compromises the renderer, does
+not suffice.
+
+#### Challenge
+
+We must enable a good user experience with autofill, payments, and passwords,
+while also not ending up with a browser that leaks these super-important classes
+of data. (A good password management experience is itself a key security goal,
+after all.)
+
+### Reducing Or Eliminating Speculation Gadgets
+
+Exploiting Spectre requires that the attacker can find (in V8, Blink, or Blink
+bindings), generate, or cause to be generated code ‘gadgets’ that will read out
+of bounds when speculatively executed. By exerting more control over how we
+generate machine code from JavaScript, and over where we place objects in memory
+relative to each other, we can reduce the prevalence and utility of these
+gadgets. The V8 team has been [landing such code generation
+changes](https://bugs.chromium.org/p/chromium/issues/detail?id=798964)
+continually since January 2018.
+
+Of the known attacks, we believe it’s currently only feasible to try to mitigate
+variant 1 with code changes in C++. We will need the toolchain and/or platform
+support to mitigate other types of speculation attacks. We could experiment with
+inserting `LFENCE` instructions or using
+[Retpoline](https://support.google.com/faqs/answer/7625886) before calling into
+Blink.
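+
+To make the variant 1 case concrete, here is a minimal sketch (using the
+array names from the original Spectre paper; this is an illustration, not
+Chrome code) of a speculation gadget and an `LFENCE`-style barrier:
+
+```cpp
+#include <emmintrin.h>  // for _mm_lfence (x86-specific)
+#include <cstddef>
+#include <cstdint>
+
+extern uint8_t array1[];    // attacker-indexed array
+extern size_t array1_size;
+extern uint8_t array2[];    // probe array used as a cache side channel
+
+uint8_t MitigatedGadget(size_t x) {
+  if (x < array1_size) {
+    // Without this fence, the CPU may speculatively execute the read below
+    // even when |x| is out of bounds, leaving a secret-dependent footprint
+    // in the cache that the attacker can later measure.
+    _mm_lfence();
+    return array2[array1[x] * 4096];
+  }
+  return 0;
+}
+```
+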
+
+PDFium uses V8 for its JavaScript support. To the extent that we rely on V8
+mitigations for Spectre defense, we need to be sure that PDFium uses the latest
+V8, so that it gets the latest mitigations. In shipping Chrome/ium products,
+PDFium uses the V8 that is in Chrome/ium.
+
+#### Limitations
+
+We don’t consider this approach to be a true solution; it’s only a mitigation.
+We think we can eliminate many of the most obvious gadgets and can buy some time
+for better defense mechanisms to be developed and deployed (primarily, Site
+Isolation).
+
+It is very likely impossible to eliminate all gadgets. As with [return-oriented
+programming](https://en.wikipedia.org/wiki/Return-oriented_programming), a large
+body of object code (like a Chrome renderer) is likely to contain so many
+gadgets that the attacker has a good probability to craft a working exploit. At
+some point, we may decide that we can’t stay ahead of attack research, and will
+stop trying to eliminate gadgets.
+
+Additionally, the mitigations typically come with a performance cost, and we may
+ultimately roll some or all of them back. Some potential mitigations are so
+expensive that it is impractical to deploy them.
+
+<a name="attenuating-clocks"></a>
+### Attenuating Clocks
+
+Exploiting Spectre requires a clock. We don’t believe it’s possible to
+eliminate, coarsen, or jitter all explicit and implicit clocks in the Open Web
+Platform (OWP) in a way that is sufficient to fully resolve Spectre. ([Merely
+enumerating all the
+clocks](https://bugs.chromium.org/p/chromium/issues/detail?id=798795) is
+difficult.) Surprisingly coarse clocks are still useful for exploitation.
+
+While it sometimes makes sense to deprecate, remove, coarsen, or jitter clocks,
+we don’t expect that we can get much long-term defensive value from doing so,
+for several reasons:
+
+* There are [many explicit and implicit clocks in the
+ platform](https://bugs.chromium.org/p/chromium/issues/detail?id=798795)
+* It is not always possible to coarsen or jitter them enough to slow or stop
+ exploitation…
+* …while also maintaining web platform compatibility and utility
+
+In particular, [clock jitter is of extremely limited
+utility](https://rdist.root.org/2009/05/28/timing-attack-in-google-keyczar-library/#comment-5485)
+when defending against side channel attacks.
+
+Many useful and legitimate web applications need access to high-precision
+clocks, and we want the OWP to be able to support them.
+
+### Gating Access To APIs That Enable Exploitation
+
+**Note:** This section explores ideas, but we are not currently planning to
+implement anything along these lines.
+
+Although we want to support applications that necessarily need access to
+features that enable exploitation, such as `SharedArrayBuffer`, we don’t
+necessarily need to make the features available unconditionally. For example, a
+third-party `iframe` that is trying to exploit Spectre is very different from a
+WebAssembly game, in the top-level frame, that the person is actively playing
+(and issuing many gestures to). We could programmatically detect engagement and
+establish policies for when certain APIs and features will be available to web
+content. (See e.g. [Feature Policy](https://wicg.github.io/feature-policy/).)
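+
+For illustration, a [Feature Policy](https://wicg.github.io/feature-policy/)
+response header already lets a site restrict features per frame; one could
+imagine a similar directive gating an exploit-enabling API. In the sketch
+below, `sync-xhr` is a real policy-controlled feature, while the
+`shared-array-buffer` directive is invented for illustration and does not
+exist:
+
+```
+Feature-Policy: sync-xhr 'none'; shared-array-buffer 'self'
+```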
+
+*Engagement* could be defined in a variety of complementary ways:
+
+* High [site engagement
+ score](https://www.chromium.org/developers/design-documents/site-engagement)
+* High site popularity, search rank, or similar
+* Frequent gestures on/interactions with the document
+* Document is the top-level document
+* Document is the currently-focused tab
+* Site is bookmarked or added to the Home screen or Desktop
+
+Additionally, we have considered the possibility of prompting the user for
+permission to run certain exploit-enabling APIs, although there are problems:
+warning fatigue, and the difficulty of communicating something accurate yet
+comprehensible to people.
+
+## Conclusion
+
+For the reasons above, we now assume any active code can read any data in the
+same address space. The plan going forward must be to keep sensitive
+cross-origin data out of address spaces that run untrustworthy code, rather than
+relying on in-process checks.
diff --git a/chromium/docs/speed/addressing_performance_regressions.md b/chromium/docs/speed/addressing_performance_regressions.md
index 4512a8910c6..4b5247f3cb3 100644
--- a/chromium/docs/speed/addressing_performance_regressions.md
+++ b/chromium/docs/speed/addressing_performance_regressions.md
@@ -124,6 +124,10 @@ to learn how to use traces to debug performance issues.
* [Memory](https://chromium.googlesource.com/chromium/src/+/master/docs/memory-infra/memory_benchmarks.md)
* [Android binary size](apk_size_regressions.md)
+### How do I profile?
+
+See the [documentation on CPU Profiling Chrome](https://chromium.googlesource.com/chromium/src/+/master/docs/profiling.md).
+
## If you don't believe your CL could be the cause
> Please remember that our performance tests exist to catch unexpected
diff --git a/chromium/docs/speed/apk_size_regressions.md b/chromium/docs/speed/apk_size_regressions.md
index 3a97c4fed27..af1f8c35620 100644
--- a/chromium/docs/speed/apk_size_regressions.md
+++ b/chromium/docs/speed/apk_size_regressions.md
@@ -29,8 +29,13 @@
tools/binary_size/diagnose_bloat.py AFTER_GIT_REV --reference-rev BEFORE_GIT_REV --subrepo v8 --all
-You can usually find the before and after revs in the roll commit message
+ * You can usually find the before and after revs in the roll commit message
([example](https://chromium.googlesource.com/chromium/src/+/10c40fd863f4ae106650bba93b845f25c9b733b1))
+ * Note that you may need to click through the link to the list of changes
+   to find the actual first commit hash and use that one instead, since some
+   rollers (including v8) add extra tagging commits that are not on master.
+   In the linked example, `BEFORE_GIT_REV` would actually be `876f37c`, not
+   `c1dec05f`.
### Monochrome.apk Alerts
@@ -153,6 +158,11 @@ to show a diff of ELF symbols.
* Use [//tools/binary_size/diagnose_bloat.py](https://chromium.googlesource.com/chromium/src/+/master/tools/binary_size/README.md)
to show a diff of Java symbols.
* Ensure any new Java deps are as specific as possible.
+  * If the change doesn't look suspect, check whether the regression still
+    exists when internal proguard is used (see
+    [downstream graphs](https://chromeperf.appspot.com/report?sid=83bf643964a326648325f7eb6767d8adb85d67db8306dd94aa7476ed70d7dace)
+    or use `diagnose_bloat.py -v --enable-chrome-android-internal REV`
+    to build locally).
### Growth is from "other lib size" or "Unknown files size"
@@ -168,13 +178,21 @@ to show a diff of ELF symbols.
## Step 1: Check work queue daily
- * Bugs requiring sheriffs to take a look at are labeled `Performance-Sheriff` and `Performance-Size`.
+ * Bugs that need a sheriff's attention are labeled `Performance-Sheriff` and `Performance-Size`; see [this list](https://bugs.chromium.org/p/chromium/issues/list?q=label:Performance-Sheriff%20label:Performance-Size&sort=-modified).
* After resolving the bug by finding an owner or debugging or commenting, remove the `Performance-Sheriff` label.
## Step 2: Check alerts regularly
- * **IMPORTANT**: Check the [perf bot page](https://ci.chromium.org/buildbot/chromium.perf/Android%20Builder%20Perf/)
- several times a day to make sure it isn't broken (and ping/file a bug if it is).
+ * **IMPORTANT: Check the [perf bot page](https://ci.chromium.org/buildbot/chromium.perf/Android%20Builder%20Perf/)
+ several times a day to make sure it isn't broken (and ping/file a bug if it is).**
+ * At the very least you need to check this once in the morning and once in
+ the afternoon.
+   * If you don't and the builder is broken, either you or the next sheriff
+     will have to manually build and diff the broken range (via
+     `diagnose_bloat.py`) to see if any regressions were missed.
+ * This is necessary even if the next passing build doesn't create an alert
+ because the range could contain a large regression with multiple offsetting
+ decreases.
* Check [alert page](https://chromeperf.appspot.com/alerts?sheriff=Binary%20Size%20Sheriff) regularly for new alerts.
* Join [binary-size-alerts@chromium.org](https://groups.google.com/a/chromium.org/forum/#!forum/binary-size-alerts). Eventually it will be all set up.
* Deal with alerts as outlined above.
diff --git a/chromium/docs/speed/benchmark/benchmark_ownership.md b/chromium/docs/speed/benchmark/benchmark_ownership.md
index f94ebfe65de..b6a1892bcca 100644
--- a/chromium/docs/speed/benchmark/benchmark_ownership.md
+++ b/chromium/docs/speed/benchmark/benchmark_ownership.md
@@ -14,17 +14,20 @@ There can be multiple owners of a benchmark, for example if there are multiple t
### Telemetry Benchmarks
1. Open [`src/tools/perf/benchmarks/benchmark_name.py`](https://cs.chromium.org/chromium/src/tools/perf/benchmarks/), where `benchmark_name` is the part of the benchmark before the “.”, like `smoothness` in `smoothness.top_25_smooth`.
1. Find the class for the benchmark. It has a `Name` method that should match the full name of the benchmark.
-1. Add a `benchmark.Owner` decorator above the class.
+1. Add a `benchmark.Info` decorator above the class.
Example:
```
- @benchmark.Owner(
+ @benchmark.Info(
emails=['owner1@chromium.org', 'owner2@samsung.com'],
- component=’GoatTeleporter>Performance’)
+      component='GoatTeleporter>Performance',
+ documentation_url='http://link.to/your_benchmark_documentation')
```
- In this example, there are two owners for the benchmark, specified by email, and a bug component (we are working on getting the bug component automatically added to all perf regressions in Q2 2018).
+ In this example, there are two owners for the benchmark, specified by email; a bug component,
+ which will be automatically added to the bug by the perf dashboard; and a link
+ to documentation (which will be added to regression bugs in Q3 2018).
1. Run `tools/perf/generate_perf_data` to update `tools/perf/benchmarks.csv`.
1. Upload the benchmark python file and `benchmarks.csv` to a CL for review. Please add any previous owners to the review.
diff --git a/chromium/docs/speed/benchmark/harnesses/blink_perf.md b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
index 1f6e781b250..3eff372e8b3 100644
--- a/chromium/docs/speed/benchmark/harnesses/blink_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
@@ -160,8 +160,8 @@ viewer won't be supported.
Assuming your current directory is `chromium/src/`, you can run tests with:
-`./tools/perf/run_benchmark blink_perf [--test-path=<path to your tests>]`
+`./tools/perf/run_benchmark run blink_perf [--test-path=<path to your tests>]`
For information about all supported options, run:
-`./tools/perf/run_benchmark blink_perf --help`
+`./tools/perf/run_benchmark run blink_perf --help`
diff --git a/chromium/docs/speed/benchmark/harnesses/loading.md b/chromium/docs/speed/benchmark/harnesses/loading.md
new file mode 100644
index 00000000000..2c24b04e698
--- /dev/null
+++ b/chromium/docs/speed/benchmark/harnesses/loading.md
@@ -0,0 +1,101 @@
+# Loading benchmarks
+
+[TOC]
+
+## Overview
+
+The Telemetry loading benchmarks measure Chrome's loading performance under
+different network and caching conditions.
+
+There are currently three loading benchmarks:
+
+- **`loading.desktop`**: A desktop-only benchmark in which each test case
+  measures the performance of loading a real-world website (e.g. facebook,
+  cnn, alibaba).
+- **`loading.mobile`**: A mobile-only benchmark that parallels `loading.desktop`
+- **`loading.cluster_telemetry`**: A Cluster Telemetry benchmark that uses a
+corpus of the top 10,000 URLs from Alexa. Unlike the other two loading
+benchmarks, which run continuously on the perf waterfall, this benchmark is
+triggered on demand only.
+
+## Running the tests remotely
+
+If you're just trying to gauge whether your change has caused a loading
+regression, you can either run `loading.desktop` and `loading.mobile` through a
+[perf try job](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/perf_trybots.md), or run `loading.cluster_telemetry` through the
+[Cluster Telemetry service](https://ct.skia.org/) (Cluster Telemetry is
+available to Googlers only).
+
+## Running the tests locally
+
+For more in-depth analysis and shorter cycle times, it can be helpful to run the tests locally.
+
+First, [prepare your test device for
+Telemetry](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/benchmark/telemetry_device_setup.md).
+
+Once you've done this, you can start the Telemetry benchmark with:
+
+```
+./tools/perf/run_benchmark <benchmark_name> --browser=<browser>
+```
+
+where `benchmark_name` can be `loading.desktop` or `loading.mobile`.
+
+## Understanding the loading test cases
+
+The loading test cases are divided into groups based on their network traffic
+settings and cache conditions.
+
+All available traffic settings can be found in [traffic_setting.py](https://chromium.googlesource.com/catapult/+/master/telemetry/telemetry/page/traffic_setting.py).
+
+All available caching conditions can be found in [cache_temperature.py](https://chromium.googlesource.com/catapult/+/master/telemetry/telemetry/page/cache_temperature.py).
+
+Test cases of `loading.desktop` and `loading.mobile` are named after their
+corresponding settings. For example, the `DevOpera_cold_3g` test case loads
+`https://dev.opera.com/` with a cold cache and the 3G network setting.
+
+In addition, the pages are also tagged with labels describing their content,
+e.g. 'global' and 'pwa'.
+
+To run only the pages with a given tag, add the `--story-tag-filter=<tag name>`
+flag to the run benchmark command, as in the example below.
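+
+For example, to run only the PWA pages against a local release build (an
+illustrative invocation; adjust `--browser` to your setup):
+
+```
+./tools/perf/run_benchmark loading.desktop --browser=release \
+    --story-tag-filter=pwa
+```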
+
+## Understanding the loading metrics
+The benchmarks output several different loading metrics. The key ones are:
+ * [Time To First Contentful Paint](https://docs.google.com/document/d/1kKGZO3qlBBVOSZTf-T8BOMETzk3bY15SC-jsMJWv4IE/edit#heading=h.27igk2kctj7o)
+ * [Time To First Meaningful Paint](https://docs.google.com/document/d/1BR94tJdZLsin5poeet0XoTW60M0SjvOJQttKT-JK8HI/edit)
+ * [Time to First CPU
+ Idle](https://docs.google.com/document/d/12UHgAW2r7nWo3R6FBerpYuz9EVOdG1OpPm8YmY4yD0c/edit#)
+
+Besides those key metrics, there are also breakdown metrics that are meant
+to make debugging regressions simpler. These metrics are updated often; for the
+most up-to-date information, you can email progressive-web-metrics@chromium.org
+or chrome-speed-metrics@google.com (Googlers only).
+
+## Adding new loading test cases
+New test cases can be added by modifying
+[loading_desktop.py](https://chromium.googlesource.com/chromium/src/+/master/tools/perf/page_sets/loading_desktop.py)
+or [loading_mobile.py](https://chromium.googlesource.com/chromium/src/+/master/tools/perf/page_sets/loading_mobile.py) page sets.
+
+For example, to add a new case to the `loading.desktop` benchmark that loads
+`https://en.wikipedia.org/wiki/Cats_and_the_Internet` on 2G and 3G networks
+with a warm cache, tagged as part of the `news` group, you would write:
+
+```
+self.AddStories(
+ tags=['news'],
+ urls=[('https://en.wikipedia.org/wiki/Cats_and_the_Internet', 'wiki_cats')],
+ cache_temperatures=[cache_temperature_module.WARM],
+  # `2G`/`3G` are not valid Python attribute names; the constants defined in
+  # traffic_setting.py (e.g. REGULAR_2G, REGULAR_3G) are used instead.
+  traffic_settings=[traffic_setting_module.REGULAR_2G,
+                    traffic_setting_module.REGULAR_3G])
+```
+
+After adding the new page, record it and upload the page archive to cloud
+storage with:
+
+```
+$ ./tools/perf/record_wpr loading_desktop --browser=system \
+ --story-filter=wiki_cats --upload
+```
+
+If the extra story was added to `loading.mobile`, replace `loading_desktop` in
+the command above with `loading_mobile`.
diff --git a/chromium/docs/speed/benchmark/harnesses/power_perf.md b/chromium/docs/speed/benchmark/harnesses/power_perf.md
index c7c4a172d98..04372512840 100644
--- a/chromium/docs/speed/benchmark/harnesses/power_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/power_perf.md
@@ -4,80 +4,45 @@
## Overview
-The Telemetry power benchmarks use BattOr, a small external power monitor, to collect power measurements while Chrome performs various tasks (a.k.a. user stories).
+The Telemetry power benchmarks measure power indirectly by measuring the CPU time used by Chrome while it performs various tasks (a.k.a. user stories).
-There are currently seven benchmarks that collect power data, grouped together by the type of task during which the power data is collected:
+## List of power metrics
-- **`system_health.common_desktop`**: A desktop-only benchmark in which each page focuses on a single, common way in which users use Chrome (e.g. browsing Facebook photos, shopping on Amazon, searching Google)
-- **`system_health.common_mobile`**: A mobile-only benchmark that parallels `system_health.common_desktop`
-- **`battor.trivial_pages`**: A Mac-only benchmark in which each page focuses on a single, extremely simple behavior (e.g. a blinking cursor, a CSS blur animation)
-- **`battor.steady_state`**: A Mac-only benchmark in which each page focuses on a website that Chrome has exhibited pathological idle behavior in the past
-- **`media.tough_video_cases_tbmv2`**: A desktop-only benchmark in which each page tests a particular media-related scenario (e.g. playing a 1080p, H264 video with sound)
-- **`media.android.tough_video_cases_tbmv2`**: A mobile-only benchmark that parallels `media.tough_video_cases_tbmv2`
-- **`power.idle_platform`**: A benchmark that sits idle without starting Chrome for various lengths of time. Used as a debugging benchmark to monitor machine background noise.
-
-Note that these benchmarks are in the process of being consolidated and that there will likely be fewer, larger power benchmarks in the near future.
-
-The legacy power benchmarks consist of:
-
-- **`power.typical_10_mobile`**, which visits ten popular sites and uses Android-specific APIs to measure approximately how much power was consumed. This can't be deleted because it's still used by the Android System Health Council to assess whether Chrome Android is fit for release on hardware configurations for which BattOrs are not yet available.
-
-## Running the tests remotely
-
-If you're just trying to gauge whether your change has caused a power regression, you can do so by [running a benchmark remotely via a perf try job](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/perf_trybots.md).
-
-When you do this, be sure to use a configuration that's equipped with BattOrs:
-
-- `android_nexus5X`
-- `android-webview-arm64-aosp`
-- `mac-retina`
-- `mac-10-11`
-- `winx64-high-dpi`
-
-If you're unsure of which benchmark to choose, `system_health.common_[desktop/mobile]` is a safe, broad choice.
+### `cpu_time_percentage_avg`
+This metric measures the average number of cores that Chrome used over the
+duration of the trace. For example, a story during which Chrome used 7.5
+seconds of CPU time over a 5-second trace would report an average of 1.5 cores.
-## Running the tests locally
-
-For more in-depth analysis and shorter cycle times, it can be helpful to run the tests locally. Because the power benchmarks rely on having a BattOr, you'll need to get one before you can do so. If you're a Googler, you can ask around (many Chrome offices already have a BattOr in them) or request one at [go/battor-request-form](http://go/battor-request-form). If you're external to Google, you can contact the BattOr's manufacturer at <sales@mellowlabs.com>.
-
-Once you have a BattOr, follow the instructions in the [BattOr laptop setup guide](https://docs.google.com/document/d/1UsHc990NRO2MEm5A3b9oRk9o7j7KZ1qftOrJyV1Tr2c/edit) to hook it up to your laptop. If you're using a phone with a BattOr, you'll need to run one USB to micro-USB cable from the host computer triggering the Telemetry tests to the BattOr and another from the host computer to the phone.
-
-Once you've done this, you can start the Telemetry benchmark with:
+This metric is enabled by adding `'cpuTimeMetric'` to the list of TBM2 metrics in the benchmark's Python class:
+```python
+options.SetTimelineBasedMetrics(['cpuTimeMetric', 'memoryMetric'])
```
-./tools/perf/run_benchmark <benchmark_name> --browser=<browser>
-```
-
-where `benchmark_name` is one of the above benchmark names.
-## Understanding power metrics
+Additionally, the `toplevel` trace category must be enabled for this metric to function correctly because it ensures that a trace span is active whenever Chrome is doing work:
-To understand our power metrics, it's important to understand the distinction between power and energy. *Energy* is what makes computers run and is measured in Joules, whereas *power* is the rate at which that energy is used and is measured in Joules per second.
+```python
+category_filter = chrome_trace_category_filter.ChromeTraceCategoryFilter(filter_string='toplevel')
+```
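+
+Putting these two pieces together, here is a minimal sketch of a benchmark
+class that enables the metric. The class name is illustrative, and
+`CreateCoreTimelineBasedMeasurementOptions` is assumed to be the hook that
+tools/perf benchmark classes override:
+
+```python
+from core import perf_benchmark
+from telemetry.timeline import chrome_trace_category_filter
+from telemetry.web_perf import timeline_based_measurement
+
+
+class ExampleCpuTimeBenchmark(perf_benchmark.PerfBenchmark):
+  """Illustrative benchmark that reports cpu_time_percentage_avg."""
+
+  def CreateCoreTimelineBasedMeasurementOptions(self):
+    # The toplevel category keeps a trace span active whenever Chrome is
+    # doing work, which cpuTimeMetric relies on.
+    category_filter = chrome_trace_category_filter.ChromeTraceCategoryFilter(
+        filter_string='toplevel')
+    options = timeline_based_measurement.Options(category_filter)
+    options.SetTimelineBasedMetrics(['cpuTimeMetric'])
+    return options
+```
+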
-Some of our power metrics measure energy, whereas others measure power. Specifically:
+## List of power benchmarks
-- We measure *energy* when the user cares about whether the task is completed (e.g. "energy required to load a page", "energy required to responsd to a mouse click").
-- We measure *power* when the user cares about the power required to continue performing an action (e.g. "power while scrolling", "power while playing a video animation").
+The primary power benchmarks are:
-The full list of our metrics is as follows:
+- **`system_health.common_desktop`**: A desktop-only benchmark in which each page focuses on a single, common way in which users use Chrome (e.g. browsing Facebook photos, shopping on Amazon, searching Google)
+- **`system_health.common_mobile`**: A mobile-only benchmark that parallels `system_health.common_desktop`
+- **`power.desktop`**: A desktop-only benchmark made up of two types of pages:
+ - Pages focusing on a single, extremely simple behavior (e.g. a blinking cursor, a CSS blur animation)
+ - Pages on which Chrome has exhibited pathological idle behavior in the past
+- **`power.typical_10_mobile`**: A mobile-only benchmark which visits ten popular sites and uses Android-specific APIs to measure approximately how much power is consumed. This benchmark is necessary to provide data to the Android System Health Council to assess whether Chrome Android is fit for release.
+- **`media.desktop`**: A desktop-only benchmark in which each page tests a particular media-related scenario (e.g. playing a 1080p, H264 video with sound)
+- **`media.mobile`**: A mobile-only benchmark that parallels `media.desktop`
-### Energy metrics
-- **`load:energy_sum`**: Total energy used in between page navigations and first meaningful paint on all navigations in the story.
-- **`scroll_response:energy_sum`**: Total energy used to respond to all scroll requests in the story.
-- **`tap_response:energy_sum`**: Total energy used to respond to all taps in the story.
-- **`keyboard_response:energy_sum`**: Total energy used to respond to all key entries in the story.
+[This spreadsheet](https://docs.google.com/spreadsheets/d/1xaAo0_SU3iDfGdqDJZX_jRV0QtkufwHUKH3kQKF3YQs/edit#gid=0) lists the owner for each benchmark.
-### Power metrics
-- **`story:power_avg`**: Average power over the entire story.
-- **`css_animation:power_avg`**: Average power over all CSS animations in the story.
-- **`scroll_animation:power_avg`**: Average power over all scroll animations in the story.
-- **`touch_animation:power_avg`**: Average power over all touch animations (e.g. finger drags) in the story.
-- **`video_animation:power_avg`**: Average power over all videos played in the story.
-- **`webgl_animation:power_avg`**: Average power over all WebGL animations in the story.
-- **`idle:power_avg`**: Average power over all idle periods in the story.
+## Adding new power test cases
+To add a new test case to a power benchmark, contact the owner of whichever benchmark above sounds like the best fit.
-### Other metrics
-- **`cpu_time_percentage_avg`**: Average CPU load over the entire story.
+## Running the benchmarks locally
+See [this page](https://github.com/catapult-project/catapult/blob/master/telemetry/docs/run_benchmarks_locally.md) for instructions on how to run the benchmarks locally.
-## Adding new power test cases
-We're not currently accepting new power stories until we've consolidated the existing ones.
+## Seeing power benchmark results
+Enter the platform, benchmark, and metric you care about on [this page](https://chromeperf.appspot.com/report) to see how the power metrics have moved over time.
diff --git a/chromium/docs/speed/benchmark/harnesses/rendering.md b/chromium/docs/speed/benchmark/harnesses/rendering.md
new file mode 100644
index 00000000000..a2317ae57e4
--- /dev/null
+++ b/chromium/docs/speed/benchmark/harnesses/rendering.md
@@ -0,0 +1,94 @@
+# Rendering Benchmarks
+
+This document provides an overview of the benchmarks used to monitor Chrome’s graphics performance. It includes information on what benchmarks are available, how to run them, how to interpret their results, and how to add more tests to the benchmarks.
+
+[TOC]
+
+## Glossary
+
+- **Page** (or story): A recording of a website, which is associated with a set of actions (ex. scrolling)
+- **Page Set** (or story set): A collection of different pages, organized by some shared characteristic (ex. top real world mobile sites)
+- **Metric**: A process that describes how to collect meaningful data from a Chrome trace and calculate results (ex. frame time)
+- **Benchmark**: A combination of a page set and multiple metrics
+- **Telemetry**: The [framework](https://github.com/catapult-project/catapult/blob/master/telemetry/README.md) used for Chrome performance testing, which allows benchmarks to be run and metrics to be collected
+
+## Overview
+
+The Telemetry rendering benchmarks measure Chrome’s rendering performance in different scenarios.
+
+There are currently two rendering benchmarks:
+
+- `rendering.desktop`: A desktop-only benchmark that measures performance on both real world websites and special cases (ex. pages that are difficult to zoom)
+- `rendering.mobile`: A mobile-only equivalent of `rendering.desktop`
+
+Note: Some pages are used for rendering.desktop but not rendering.mobile, and vice versa. This is because some pages are only meant to measure behavior on one platform, for instance dragging on desktop. This is indicated with the `SUPPORTED_PLATFORMS` attribute in the page class.
+
+These benchmarks are run on the [Chromium Perf Waterfall](https://ci.chromium.org/p/chromium/g/chromium.perf/console), with results reported on the [Chrome Performance Dashboard](https://chromeperf.appspot.com/report).
+
+## What are the rendering metrics
+
+Some rendering metrics are [written in Python](https://cs.chromium.org/chromium/src/third_party/catapult/telemetry/telemetry/web_perf/metrics/smoothness.py) and others are written [in JavaScript](https://github.com/catapult-project/catapult/blob/master/tracing/tracing/metrics/rendering_metric.html). The list of all metrics and their meanings should be documented in the files they are defined in. We are in the process of porting all metrics to JavaScript, which means [rendering_metric.html](https://github.com/catapult-project/catapult/blob/master/tracing/tracing/metrics/rendering_metric.html) will eventually contain all metrics.
+
+Important rendering metrics include:
+- `mean_frame_time`: the amount of time it takes for a frame to be rendered
+- `mean_input_event_latency`: the time from when an input event is created to when the resulting page update is swap-buffered (from [here](https://github.com/catapult-project/catapult/blob/master/telemetry/telemetry/web_perf/metrics/smoothness.py))
+
+## How to run rendering benchmarks on local devices
+
+First, set up your device by following the instructions [here](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/benchmark/telemetry_device_setup.md). You can then run telemetry benchmarks locally using:
+
+`./tools/perf/run_benchmark <benchmark_name> --browser=<browser>`
+
+For `benchmark_name`, use either `rendering.desktop` or `rendering.mobile`
+
+As the pages in the rendering page sets were merged from a variety of previous page sets, they have corresponding tags. To run the benchmark only for pages of a certain tag, add this flag:
+
+`--story-tag-filter=<tag name>`
+
+For example, if the old benchmark was `smoothness.tough_scrolling_cases`, you would now use `--story-tag-filter=tough_scrolling` for the rendering benchmarks. A list of all rendering tags can be found in [story_tags.py](https://cs.chromium.org/chromium/src/tools/perf/page_sets/rendering/story_tags.py?dr&g=0). You can also find out which tags a page uses by looking at the `TAGS` attribute of its class. Additionally, these same tags can be used to filter the metric results in the generated results.html file.
+
+Other useful options for the command are as follows (an example invocation appears after the list):
+
+- `--pageset-repeat [n]`: override the default number of repetitions
+- `--reset-results`: clear results from any previous benchmark runs in the results.html file.
+- `--results-label [label]`: give meaningful names to your benchmark runs, to make it easier to compare them
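+
+For example, a local run limited to the tough scrolling pages, repeated three
+times with a label (an illustrative invocation; adjust `--browser` to your
+setup):
+
+`./tools/perf/run_benchmark rendering.desktop --browser=release --story-tag-filter=tough_scrolling --pageset-repeat 3 --results-label my_change`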
+
+## How to run rendering benchmarks on try bots
+
+For more consistent results and to identify whether your change has resulted in a rendering regression, you can run the rendering benchmarks using a [perf try job](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/perf_trybots.md). In order to do this, you need to first upload a CL, which allows results to be generated with and without your patch.
+
+## How to handle regressions
+
+If your changes have resulted in a regression in a metric that is monitored by [perf alerts](https://chromeperf.appspot.com/alerts?sortby=end_revision&sortdirection=down), you will be assigned a bug. This will contain information about the specific metric and how much it regressed, as well as a Pinpoint link that will help you investigate further. For instance, you will be able to obtain traces from the try bot runs. This [link](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/addressing_performance_regressions.md) contains detailed steps on how to deal with regressions. Rendering metrics use trace events logged under the `benchmark` and `toplevel` trace categories.
+
+If you already have a trace and want to debug the metric computation part, you can just run the metric:
+`tracing/bin/run_metric <path-to-trace-file> renderingMetric`
+
+## How to add more pages
+
+New rendering pages should be added to the [./tools/perf/page_sets/rendering](https://cs.chromium.org/chromium/src/tools/perf/page_sets/rendering/?dr&g=0) folder.
+
+Pages inherit from the [RenderingStory](https://cs.chromium.org/chromium/src/tools/perf/page_sets/rendering/rendering_story.py?dr&g=0) class. If adding a group of new pages, create an abstract class with the following attributes:
+
+- `ABSTRACT_STORY = True`
+- `TAGS`: a list of tags, which can be added to [story_tags.py](https://cs.chromium.org/chromium/src/tools/perf/page_sets/rendering/story_tags.py?dr&g=0) if necessary
+- `SUPPORTED_PLATFORMS` (optional): if the page should only be mobile or desktop
+
+Child classes should specify these attributes:
+- `BASE_NAME`: name of the page
+ - Use the “new_page_name” format
+ - If the page is a real-world website and should be periodically refreshed, add “_year” to the end of the page name and update the value when a new recording is uploaded
+ - Ex. google_web_search_2018
+- `URL`: url of the page
+
+All pages in the rendering benchmark need to use [RenderingSharedState](https://cs.chromium.org/chromium/src/tools/perf/page_sets/rendering/rendering_shared_state.py?dr&g=0) as the shared_page_state_class, since this has to be consistent across pages in a page set. Individual pages can also specify `extra_browser_args`, in order to set specific flags.
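+
+As a concrete sketch, here is a hypothetical page class (the names and the tag
+constant are illustrative, not an actual story in the tree):
+
+```python
+from page_sets.rendering import rendering_story
+from page_sets.rendering import story_tags
+
+
+class ExampleScrollingPage(rendering_story.RenderingStory):
+  """Hypothetical real-world scrolling page."""
+  BASE_NAME = 'example_scrolling_page_2018'
+  URL = 'https://example.com/'
+  TAGS = [story_tags.TOUGH_SCROLLING]  # Assumed constant name in story_tags.py.
+```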
+
+After adding the page, record it and upload it to cloud storage using:
+
+`./tools/perf/record_wpr rendering_desktop --browser=system --story-tag-filter=<tag name> --upload`
+
+This will modify the [data/rendering_desktop.json](https://cs.chromium.org/chromium/src/tools/perf/page_sets/data/rendering_desktop.json?type=cs&q=rendering_deskt&g=0&l=1) or [data/rendering_mobile.json](https://cs.chromium.org/chromium/src/tools/perf/page_sets/data/rendering_mobile.json?type=cs&g=0) files and generate .sha1 files, which should be included in the CL.
+
+### Merging existing pages
+
+If more pages need to be merged into the rendering page sets, please see [this guide](https://docs.google.com/document/d/19vUZCnJ0_5pfcwotl0ABTFGFIBc_CckNIyfE7Cs7I3o/edit#bookmark=id.w3jf2ip73aat) on how to do so.
diff --git a/chromium/docs/speed/bot_health_sheriffing/how_to_access_test_logs.md b/chromium/docs/speed/bot_health_sheriffing/how_to_access_test_logs.md
index 5cb817c89f9..89ee025c3ba 100644
--- a/chromium/docs/speed/bot_health_sheriffing/how_to_access_test_logs.md
+++ b/chromium/docs/speed/bot_health_sheriffing/how_to_access_test_logs.md
@@ -34,6 +34,30 @@ After doing this, search for your benchmark's name (in this case, "v8.browsing_d
![Sheriff-o-matic choose shard #0 failed link from test steps](images/som_test_steps_shard_0.png)
+### Accessing the log for the new perf recipe
+
+Currently, linux-perf and mac-10_12_laptop_low_end-perf run the new perf recipe, and their logs are accessed slightly differently.
+
+#### Failing Story Logs
+Sheriff-o-matic now links to failing story logs when present. Click on the logs
+link to download the failing story log.
+![Sheriff-o-matic click on builder](images/som_new_recipe_story_log.png)
+
+
+#### Failing Benchmark Logs
+First, navigate to the failing build from the Sheriff-o-matic entry by clicking on the builder the step failed on.
+
+![Sheriff-o-matic click on builder](images/som_new_recipe_choose_builder.png)
+
+This screen lists the most recent builds for the builder. To find the build you are interested in, drill into each build, starting with the most recent, until you locate the specific failing test. Ctrl-F for “performance_test_suite” or scroll down to the test entry. The list of failed tests appears right on the performance_test_suite step:
+
+![Sheriff-o-matic identify the list of failing tests](images/som_new_recipe_identify_failed_tests.png)
+
+Once you have identified the build that has your failing test, click on the “Benchmark logs” link and Ctrl-F for your failing benchmark. This link provides a logdog stream with all of the logs for that particular benchmark.
+
+![Sheriff-o-matic find benchmark logs](images/som_new_recipe_benchmark_logs_link.png)
+
+
## Navigating log files
### Identifying why a story failed
diff --git a/chromium/docs/speed/bot_health_sheriffing/images/flakiness_dashboard_new_recipe.png b/chromium/docs/speed/bot_health_sheriffing/images/flakiness_dashboard_new_recipe.png
new file mode 100644
index 00000000000..6e2985e1e8f
--- /dev/null
+++ b/chromium/docs/speed/bot_health_sheriffing/images/flakiness_dashboard_new_recipe.png
Binary files differ
diff --git a/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_benchmark_logs_link.png b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_benchmark_logs_link.png
new file mode 100644
index 00000000000..a7b0997ab46
--- /dev/null
+++ b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_benchmark_logs_link.png
Binary files differ
diff --git a/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_choose_builder.png b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_choose_builder.png
new file mode 100644
index 00000000000..f8417c9fdd5
--- /dev/null
+++ b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_choose_builder.png
Binary files differ
diff --git a/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_identify_failed_tests.png b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_identify_failed_tests.png
new file mode 100644
index 00000000000..580e13f77fe
--- /dev/null
+++ b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_identify_failed_tests.png
Binary files differ
diff --git a/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_story_log.png b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_story_log.png
new file mode 100644
index 00000000000..575a426743e
--- /dev/null
+++ b/chromium/docs/speed/bot_health_sheriffing/images/som_new_recipe_story_log.png
Binary files differ
diff --git a/chromium/docs/speed/bot_health_sheriffing/main.md b/chromium/docs/speed/bot_health_sheriffing/main.md
index 624e151d881..f327ea85e2a 100644
--- a/chromium/docs/speed/bot_health_sheriffing/main.md
+++ b/chromium/docs/speed/bot_health_sheriffing/main.md
@@ -32,7 +32,7 @@ The sheriff should *not* feel like responsible for investigating hard problems.
Incoming failures are shown in [Sheriff-o-matic](https://sheriff-o-matic.appspot.com/chromium.perf), which acts as a task management system for bot health sheriffs. Failures are divided into three groups on the dashboard:
-* **Infra failures** show general infrastructure problems that are affecting benchmarks.
+* **Infra failures** show general infrastructure problems that are affecting benchmarks. Besides surfacing in Sheriff-o-matic, we also need to check for down bots in the lame duck pool. Please file a ticket for any bots you see in [this list](https://chrome-swarming.appspot.com/botlist?c=id&c=os&c=task&c=status&c=os&c=task&c=status&c=pool&f=status%3Adead&f=pool%3Achrome.tests.perf&l=100&q=pool%3Achrome.tests.perf&s=id%3Aasc) or [this list for webview](https://chrome-swarming.appspot.com/botlist?c=id&c=os&c=task&c=status&c=os&c=task&c=status&c=pool&f=status%3Adead&f=pool%3Achrome.tests.perf-webview&l=100&q=pool%3Achrome.tests.perf&s=id%3Aasc) as they will not show up in Sheriff-o-matic.
* **Consistent failures** show benchmarks that have been failing for a while.
diff --git a/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md b/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
index 9eb5b192722..91b15bece4a 100644
--- a/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
+++ b/chromium/docs/speed/bot_health_sheriffing/what_test_is_failing.md
@@ -6,6 +6,11 @@ The easiest way to identify these is to use the [Flakiness dashboard](https://te
![The flakiness dashboard](images/flakiness_dashboard.png)
+If the bot is running the new performance_test_suite, then all stories will be
+listed under the test type 'performance_test_suite' and the associated builder.
+
+![The flakiness dashboard new recipe](images/flakiness_dashboard_new_recipe.png)
+
Each row represents a particular story and each column represents a recent run, listed with the most recent run on the left. If the cell is green, then the story passed; if it's red, then it failed. Only stories that have failed at least once will be listed. You can click on a particular cell to see more information like revision ranges (useful for launching bisects) and logs.
With this view, you can easily see how often a given story is failing. Usually, any story that appears to be failing in over 20% of recent runs should be disabled.
diff --git a/chromium/docs/sublime_ide.md b/chromium/docs/sublime_ide.md
index 1e692d3f150..b0b904ae3e8 100644
--- a/chromium/docs/sublime_ide.md
+++ b/chromium/docs/sublime_ide.md
@@ -553,6 +553,36 @@ shortcut to run it after building:
]
```
+### More detailed stack traces
+
+Chrome's default stack traces don't have full file paths, so Sublime can't
+parse them. You can enable more detailed stack traces and use F4 to step right
+to the crashing line of code.
+
+First, add `print_unsymbolized_stack_traces = true` to your gn args, and make
+sure you have debug symbols enabled too (`symbol_level = 2`). Then, pipe
+Chrome's stderr through the `asan_symbolize.py` script. Here's a suitable build
+variant for Linux (with a tweaked `file_regex`):
+
+```json
+{
+ "name": "Build and run with asan_symbolize",
+ "cmd": "ninja -j 1000 -C out/Debug chrome && out/Debug/chrome 2>&1 | ./tools/valgrind/asan/asan_symbolize.py",
+ "shell": true,
+ "file_regex": "(?:^|[)] )[.\\\\/]*([a-z]?:?[\\w.\\\\/]+)[(:]([0-9]+)[,:]?([0-9]+)?[)]?:?(.*)$"
+}
+```
+
+You can test it by visiting chrome://crash. You should be able to step through
+each line in the resulting stacktrace with F4. You can also get a stack trace
+without crashing like so:
+
+```c++
+#include "base/debug/stack_trace.h"
+[...]
+base::debug::StackTrace().Print();
+```
+
### Assigning builds to keyboard shortcuts
To assign a build to a keyboard shortcut, select `Preferences > Key Bindings -
diff --git a/chromium/docs/testing/json_test_results_format.md b/chromium/docs/testing/json_test_results_format.md
index 5b8261451b3..4eae9beda1f 100644
--- a/chromium/docs/testing/json_test_results_format.md
+++ b/chromium/docs/testing/json_test_results_format.md
@@ -69,8 +69,8 @@ contains the results of every test run, structured in a hierarchical trie format
to reduce duplication of test suite names (as you can see from the deeply
hierarchical Python test name).
-The file is strictly JSON-compliant. As a part of this, the order the name
-appear in each object is unimportant.
+The file is strictly JSON-compliant. As a part of this, the fields in each
+object may appear in any order.
## Top-level field names
@@ -83,7 +83,7 @@ appear in each object is unimportant.
| `tests` | dict | **Required.** The actual trie of test results. Each directory or module component in the test name is a node in the trie, and the leaf contains the dict of per-test fields as described below. |
| `version` | integer | **Required.** Version of the file format. Current version is 3. |
| `artifact_types` | dict | **Optional. Required if any artifacts are present for any tests.** MIME Type information for artifacts in this json file. All artifacts with the same name must share the same MIME type. |
-| `artifact_permament_location` | string | **Optional.** The URI of the root location where the artifacts are stored. If present, any artifact locations are taken to be relative to this location. Currently only the `gs://` scheme is supported. |
+| `artifact_permanent_location` | string | **Optional.** The URI of the root location where the artifacts are stored. If present, any artifact locations are taken to be relative to this location. Currently only the `gs://` scheme is supported. |
| `build_number` | string | **Optional.** If this test run was produced on a bot, this should be the build number of the run, e.g., "1234". |
| `builder_name` | string | **Optional.** If this test run was produced on a bot, this should be the builder name of the bot, e.g., "Linux Tests". |
| `chromium_revision` | string | **Optional.** The revision of the current Chromium checkout, if relevant, e.g. "356123". |
@@ -95,13 +95,16 @@ appear in each object is unimportant.
| `num_flaky` | integer | **Optional, deprecated.** The number of tests that were run more than once and produced different results each time. |
| `num_passes` | integer | **Optional, deprecated.** The number of successful tests; equivalent to `num_failures_by_type["Pass"]` |
| `num_regressions` | integer | **Optional, deprecated.** The number of tests that produced results that were unexpected failures. |
-| `skips` | integer | **Optional, deprecated.** The number of tests that were found but not run (tests should be listed in the trie with "expected" and "actual" values of `SKIP`. |
+| `skips` | integer | **Optional, deprecated.** The number of tests that were found but not run (tests should be listed in the trie with "expected" and "actual" values of `SKIP`). |
## Per-test fields
Each leaf of the `tests` trie contains a dict containing the results of a
particular test name. If a test is run multiple times, the dict contains the
-results for each invocation in the `actual` field.
+results for each invocation in the `actual` field. Unless otherwise noted,
+if the test is run multiple times, all of the other fields represent the
+overall / final / last value. For example, if a test unexpectedly fails and
+then is retried and passes, both `is_regression` and `is_unexpected` will be
+false.
| Field Name | Data Type | Description |
|-------------|-----------|-------------|
@@ -109,7 +112,9 @@ results for each invocation in the `actual` field.
| `expected` | string | **Required.** An unordered space-separated list of the result types expected for the test, e.g. `FAIL PASS` means that a test is expected to either pass or fail. A test that contains multiple values is expected to be flaky. |
| `artifacts` | dict | **Optional.** A dictionary describing test artifacts generated by the execution of the test. The dictionary maps the name of the artifact (`screenshot`, `crash_log`) to a list of relative locations of the artifact (`screenshot/page.png`, `logs/crash.txt`). Any '/' characters in the file paths are meant to be platform agnostic; tools will replace them with the appropriate per platform path separators. There is one entry in the list per test execution. If `artifact_permanent_location` is specified, then this location is relative to that path. Otherwise, the path is assumed to be relative to the location of the json file which contains this. |
| `bugs` | string | **Optional.** A comma-separated list of URLs to bug database entries associated with each test. |
-| `is_unexpected` | bool | **Optional.** If present and true, the failure was unexpected (a regression). If false (or if the key is not present at all), the failure was expected and will be ignored. |
+| `is_flaky` | bool | **Optional.** If present and true, the test was run multiple times and produced more than one kind of result. If false (or if the key is not present at all), the test either only ran once or produced the same result every time. |
+| `is_regression` | bool | **Optional.** If present and true, the test failed unexpectedly. If false (or if the key is not present at all), the test either ran as expected or passed unexpectedly. |
+| `is_unexpected` | bool | **Optional.** If present and true, the test result was unexpected. This might include an unexpected pass, i.e., it is not necessarily a regression. If false (or if the key is not present at all), the test produced the expected result. |
| `time` | float | **Optional.** If present, the time it took in seconds to execute the first invocation of the test. |
| `times` | array of floats | **Optional.** If present, the times in seconds of each invocation of the test. |
| `has_repaint_overlay` | bool | **Optional, layout test specific.** If present and true, indicates that the test output contains the data needed to draw repaint overlays to help explain the results (only used in layout tests). |
@@ -127,23 +132,34 @@ failure types.
| Result type | Description |
|--------------|-------------|
-| `SKIP` | The test was not run. |
-| `PASS` | The test ran as expected. |
-| `FAIL` | The test did not run as expected. |
| `CRASH` | The test runner crashed during the test. |
+| `FAIL` | The test did not run as expected. |
+| `PASS` | The test ran as expected. |
+| `SKIP` | The test was not run. |
| `TIMEOUT` | The test hung (did not complete) and was aborted. |
-| `MISSING` | **Layout test specific.** The test completed but we could not find an expected baseline to compare against |
-| `LEAK` | **Layout test specific.** Memory leaks were detected during the test execution. |
-| `SLOW` | **Layout test specific.** The test is expected to take longer than normal to run. |
-| `TEXT` | **Layout test specific, deprecated.** The test is expected to produce a text-only failure (the image, if present, will match). Normally you will see `FAIL` instead. |
| `AUDIO` | **Layout test specific, deprecated.** The test is expected to produce audio output that doesn't match the expected result. Normally you will see `FAIL` instead. |
-| `IMAGE` | **Layout test specific.** The test produces image (and possibly text output). The image output doesn't match what we'd expect, but the text output, if present, does. |
+| `IMAGE` | **Layout test specific, deprecated.** The test produces image (and possibly text output). The image output doesn't match what we'd expect, but the text output, if present, does. Normally you will see `FAIL` instead. |
| `IMAGE+TEXT` | **Layout test specific, deprecated.** The test produces image and text output, both of which fail to match what we expect. Normally you will see `FAIL` instead. |
-| `REBASELINE` | **Layout test specific.** The expected test result is out of date and will be ignored (any result other than a crash or timeout will be considered as passing). This test result should only ever show up on local test runs, not on bots (it is forbidden to check in a TestExpectations file with this expectation). This should never show up as an "actual" result. |
-| `NEEDSREBASELINE` | **Layout test specific.** The expected test result is out of date and will be ignored (as above); the auto-rebaseline-bot will look for tests of this type and automatically update them. This should never show up as an "actual" result. |
-| `NEEDSMANUALREBASELINE` | **Layout test specific.** The expected test result is out of date and will be ignored (as above). This result may be checked in to the TestExpectations file, but the auto-rebasline-bot will ignore these entries. This should never show up as an "actual" result. |
+| `LEAK` | **Layout test specific, deprecated.** Memory leaks were detected during the test execution. |
+| `MISSING` | **Layout test specific, deprecated.** The test completed but we could not find an expected baseline to compare against. |
+| `NEEDSREBASELINE` | **Layout test specific, deprecated.** The expected test result is out of date and will be ignored (as above); the auto-rebaseline-bot will look for tests of this type and automatically update them. This should never show up as an `actual` result. |
+| `REBASELINE` | **Layout test specific, deprecated.** The expected test result is out of date and will be ignored (any result other than a crash or timeout will be considered as passing). This test result should only ever show up on local test runs, not on bots (it is forbidden to check in a TestExpectations file with this expectation). This should never show up as an "actual" result. |
+| `SLOW` | **Layout test specific, deprecated.** The test is expected to take longer than normal to run. This should never appear as an `actual` result, but may (incorrectly) appear in the expected fields. |
+| `TEXT` | **Layout test specific, deprecated.** The test is expected to produce a text-only failure (the image, if present, will match). Normally you will see `FAIL` instead. |
+
+Unexpected results, failures, and regressions are different things.
+
+An unexpected result is simply a result that didn't appear in the `expected`
+field. It may be used for tests that _pass_ unexpectedly, i.e. tests that
+were expected to fail but passed. Such results should _not_ be considered
+failures.
+
+Anything other than `PASS`, `SKIP`, `SLOW`, or one of the REBASELINE types is
+considered a failure.
+
+A regression is a result that is both unexpected and a failure.
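+
+For example, a test that was expected to pass, failed once, and then passed on
+retry might produce a leaf like this (an illustrative fragment; the test name
+and times are invented):
+
+```json
+"flaky_test.html": {
+  "expected": "PASS",
+  "actual": "FAIL PASS",
+  "is_flaky": true,
+  "is_regression": false,
+  "is_unexpected": false,
+  "times": [31.5, 0.1]
+}
+```
+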
-## "full_results.json" and "failing_results.json"
+## `full_results.json` and `failing_results.json`
The layout tests produce two different variants of the above file. The
`full_results.json` file matches the above definition and contains every test
@@ -153,5 +169,5 @@ data. The `failing_results.json` file is also in the JSONP format, so it can be
read via as a `<script>` tag from an html file run from the local filesystem
without falling prey to the same-origin restrictions for local files. The
`failing_results.json` file is converted into JSONP by containing the JSON data
-preceded by the string `ADD_RESULTS(` and followed by the string `);`, so you
+preceded by the string "ADD_RESULTS(" and followed by the string ");", so you
can extract the JSON data by stripping off that prefix and suffix.
diff --git a/chromium/docs/testing/layout_test_expectations.md b/chromium/docs/testing/layout_test_expectations.md
index 5fbc462063c..937127511b0 100644
--- a/chromium/docs/testing/layout_test_expectations.md
+++ b/chromium/docs/testing/layout_test_expectations.md
@@ -131,15 +131,6 @@ depends on its arguments.
assuming that there are no platform-specific results for those platforms,
you can add the flag `--fill-missing`.
-### Rebaselining manually
-
-1. If the tests is already listed in TestExpectations as flaky, mark the test
- `NeedsManualRebaseline` and comment out the flaky line so that your patch can
- land without turning the tree red. If the test is not in TestExpectations,
- you can add a `[ Rebaseline ]` line to TestExpectations.
-2. Run `third_party/blink/tools/blink_tool.py rebaseline-expectations`
-3. Post the patch created in step 2 for review.
-
## Kinds of expectations files
* [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations): The
@@ -212,7 +203,7 @@ The syntax of a line is roughly:
[third_party/blink/tools/blinkpy/web_tests/port/base.py](../../third_party/blink/tools/blinkpy/web_tests/port/base.py)
for the meta keywords and which modifiers they represent.
* Expectations can be one or more of `Crash`, `Failure`, `Pass`, `Rebaseline`,
- `Slow`, `Skip`, `Timeout`, `WontFix`, `Missing`, `NeedsManualRebaseline`.
+ `Slow`, `Skip`, `Timeout`, `WontFix`, `Missing`.
If multiple expectations are listed, the test is considered "flaky" and any
of those results will be considered as expected.
diff --git a/chromium/docs/testing/layout_tests_tips.md b/chromium/docs/testing/layout_tests_tips.md
index fcda5dd8e88..69efd16f9ef 100644
--- a/chromium/docs/testing/layout_tests_tips.md
+++ b/chromium/docs/testing/layout_tests_tips.md
@@ -20,7 +20,7 @@ capture the context that rests in the head of an experienced Blink engineer.
## General Principles
This section contains guidelines adopted from
-[web-platform-tests documentation](http://web-platform-tests.org/writing-tests/general-guidelines.html)
+[web-platform-tests documentation](https://web-platform-tests.org/writing-tests/general-guidelines.html)
and
[WebKit's Wiki page on Writing good test cases](https://trac.webkit.org/wiki/Writing%20Layout%20Tests%20for%20DumpRenderTree),
with Blink-specific flavoring.
@@ -93,7 +93,7 @@ feature being tested.
`testharness.js` makes a test self-describing when used correctly. Other types
of tests, such as reference tests and
[tests with manual fallback](./layout_tests_with_manual_fallback.md),
-[must be carefully designed](http://web-platform-tests.org/writing-tests/manual.html#requirements-for-a-manual-test)
+[must be carefully designed](https://web-platform-tests.org/writing-tests/manual.html#requirements-for-a-manual-test)
to be self-describing.
### Minimal
@@ -107,7 +107,7 @@ valid markup (no parsing errors).
Tests should provide as much relevant information as possible when failing.
`testharness.js` tests should prefer
-[rich assert_ functions](https://github.com/w3c/web-platform-tests/blob/master/docs/_writing-tests/testharness-api.md#list-of-assertions)
+[rich assert_ functions](https://web-platform-tests.org/writing-tests/testharness-api.html#list-of-assertions)
to combining `assert_true()` with a boolean operator. Using appropriate
`assert_` functions results in better diagnostic output when the assertion
fails.
diff --git a/chromium/docs/testing/layout_tests_with_manual_fallback.md b/chromium/docs/testing/layout_tests_with_manual_fallback.md
index 0d8e3a69b9e..74aa2f92e18 100644
--- a/chromium/docs/testing/layout_tests_with_manual_fallback.md
+++ b/chromium/docs/testing/layout_tests_with_manual_fallback.md
@@ -11,7 +11,7 @@ normal browser session.
A popular pattern used in these tests is to rely on the user to perform some
manual steps in order to run the test case in a normal browser session. These
tests are effectively
-[manual tests](http://web-platform-tests.org/writing-tests/manual.html), with
+[manual tests](https://web-platform-tests.org/writing-tests/manual.html), with
additional JavaScript code that automatically performs the desired manual steps,
when loaded in an environment that exposes the needed testing APIs.
diff --git a/chromium/docs/testing/web_platform_tests.md b/chromium/docs/testing/web_platform_tests.md
index d50041d7c00..910962b742f 100644
--- a/chromium/docs/testing/web_platform_tests.md
+++ b/chromium/docs/testing/web_platform_tests.md
@@ -5,7 +5,7 @@ Interoperability between browsers is
mission of improving the web. We believe that leveraging and contributing to a
shared test suite is one of the most important tools in achieving
interoperability between browsers. The [web-platform-tests
-repository](https://github.com/w3c/web-platform-tests) is the primary shared
+repository](https://github.com/web-platform-tests/wpt) is the primary shared
test suite where all browser engines are collaborating.
Chromium has a 2-way import/export process with the upstream web-platform-tests
@@ -13,7 +13,7 @@ repository, where tests are imported into
[LayoutTests/external/wpt](../../third_party/WebKit/LayoutTests/external/wpt)
and any changes to the imported tests are also exported to web-platform-tests.
-See http://web-platform-tests.org/ for general documentation on
+See https://web-platform-tests.org/ for general documentation on
web-platform-tests, including tips for writing and reviewing tests.
[TOC]
@@ -68,7 +68,7 @@ can fix it manually.
If you upload a CL with any changes in
[third_party/WebKit/LayoutTests/external/wpt](../../third_party/WebKit/LayoutTests/external/wpt),
once you add reviewers the exporter will create a provisional pull request with
-those changes in the [upstream WPT GitHub repository](https://github.com/w3c/web-platform-tests/).
+those changes in the [upstream WPT GitHub repository](https://github.com/web-platform-tests/wpt/).
Once you're ready to land your CL, please check the Travis CI status on the
upstream PR (link at the bottom of the page). If it's green, go ahead and land your CL
@@ -82,9 +82,9 @@ Additional things to note:
- CLs that change over 1000 files will not be exported.
- All PRs use the
- [`chromium-export`](https://github.com/w3c/web-platform-tests/pulls?utf8=%E2%9C%93&q=is%3Apr%20label%3Achromium-export) label.
+ [`chromium-export`](https://github.com/web-platform-tests/wpt/pulls?utf8=%E2%9C%93&q=is%3Apr%20label%3Achromium-export) label.
- All PRs for CLs that haven't yet been landed in Chromium also use the
- [`do not merge yet`](https://github.com/w3c/web-platform-tests/pulls?q=is%3Apr+is%3Aopen+label%3A%22do+not+merge+yet%22) label.
+ [`do not merge yet`](https://github.com/web-platform-tests/wpt/pulls?q=is%3Apr+is%3Aopen+label%3A%22do+not+merge+yet%22) label.
- The exporter cannot create upstream PRs for in-flight CLs with binary files (e.g. webm files).
An export PR will still be made after the CL lands.
@@ -184,19 +184,19 @@ beyond them. It is often necessary to change the specification to clarify what
is and isn't required.
When implementation experience is needed to inform the specification work,
-[tentative tests](http://web-platform-tests.org/writing-tests/file-names.html)
+[tentative tests](https://web-platform-tests.org/writing-tests/file-names.html)
can be appropriate. It should be apparent in context why the test is tentative
and what needs to be resolved to make it non-tentative.
### Tests that require testing APIs
-[testdriver.js](http://web-platform-tests.org/writing-tests/testdriver.html)
+[testdriver.js](https://web-platform-tests.org/writing-tests/testdriver.html)
provides a means to automate tests that cannot be written purely using web
platform APIs, similar to `internals.*` and `eventSender.*` in regular Blink
layout tests.
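For example, a trusted click could be injected like this (a hedged sketch; the
`#go` element is hypothetical, and the page must include the testdriver.js
scripts):

```js
promise_test(async () => {
  const button = document.querySelector('#go');  // hypothetical element
  const clicked = new Promise(resolve => button.onclick = resolve);
  // Resolves once the browser has dispatched a real click on the element.
  await test_driver.click(button);
  const event = await clicked;
  assert_true(event.isTrusted, 'the injected click should be trusted');
}, 'test_driver.click() produces a trusted event');
```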
If no testdriver.js API exists, check if it's a
-[known issue](https://github.com/w3c/web-platform-tests/labels/testdriver.js)
+[known issue](https://github.com/web-platform-tests/wpt/labels/testdriver.js)
and otherwise consider filing a new issue.
An alternative is to write manual tests that are automated with scripts from
@@ -240,7 +240,7 @@ resolve the conflict.
### Direct pull requests
It's still possible to make direct pull requests to web-platform-tests, see
-http://web-platform-tests.org/appendix/github-intro.html.
+https://web-platform-tests.org/appendix/github-intro.html.
## Running tests
diff --git a/chromium/docs/testing/writing_layout_tests.md b/chromium/docs/testing/writing_layout_tests.md
index 7d90642a9e5..f71eb3a08b5 100644
--- a/chromium/docs/testing/writing_layout_tests.md
+++ b/chromium/docs/testing/writing_layout_tests.md
@@ -73,7 +73,7 @@ There are four broad types of layout tests, listed in the order of preference.
Tests should be written under the assumption that they will be upstreamed
to the WPT project. To this end, tests should follow the
-[WPT guidelines](http://web-platform-tests.org/writing-tests/).
+[WPT guidelines](https://web-platform-tests.org/writing-tests/).
There is no style guide that applies to all layout tests. However, some projects
@@ -93,18 +93,18 @@ alternatives, which will be described in future sections, result in slower and
less reliable tests.
All new JavaScript tests should be written using the
-[testharness.js](https://github.com/w3c/web-platform-tests/tree/master/resources)
+[testharness.js](https://github.com/web-platform-tests/wpt/tree/master/resources)
testing framework. This framework is used by the tests in the
-[web-platform-tests](https://github.com/w3c/web-platform-tests) repository,
+[web-platform-tests](https://github.com/web-platform-tests/wpt) repository,
which is shared with all the other browser vendors, so `testharness.js` tests
are more accessible to browser developers.
-See the [API documentation](http://web-platform-tests.org/writing-tests/testharness-api.html)
+See the [API documentation](https://web-platform-tests.org/writing-tests/testharness-api.html)
for a thorough introduction to `testharness.js`.
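To give a flavor of the API (a minimal sketch, no substitute for the
documentation; it assumes testharness.js and testharnessreport.js are already
included via `<script>` tags, as in any testharness-based test):

```js
test(() => {
  assert_equals(1 + 1, 2, 'arithmetic still works');
}, 'synchronous checks go in test()');

promise_test(async () => {
  await new Promise(resolve => step_timeout(resolve, 0));
  assert_true(true, 'reached after the timeout');
}, 'asynchronous checks go in promise_test()');
```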
Layout tests should follow the recommendations of the above documentation.
Furthermore, layout tests should include relevant
-[metadata](http://web-platform-tests.org/writing-tests/css-metadata.html). The
+[metadata](https://web-platform-tests.org/writing-tests/css-metadata.html). The
specification URL (in `<link rel="help">`) is almost always relevant, and is
incredibly helpful to a developer who needs to understand the test quickly.
@@ -190,7 +190,7 @@ and
This is contrary to the WPT guidelines, which call for absolute paths.
This limitation does not apply to the tests in `LayoutTests/http`, which rely on
an HTTP server, or to the tests in `LayoutTests/external/wpt`, which are
-imported from the [WPT repository](https://github.com/w3c/web-platform-tests).
+imported from the [WPT repository](https://github.com/web-platform-tests/wpt).
***
### WPT Supplemental Testing APIs
@@ -362,7 +362,7 @@ be slower as well. Therefore, they should only be used for functionality that
cannot be covered by JavaScript tests.
New reference tests should follow the
-[WPT reftests guidelines](http://web-platform-tests.org/writing-tests/reftests.html).
+[WPT reftests guidelines](https://web-platform-tests.org/writing-tests/reftests.html).
The most important points are summarized below.
* &#x1F6A7; The test page declares the reference page using a
@@ -435,7 +435,7 @@ tests**.
Pixel tests should still follow the principles laid out above. Pixel tests pose
unique challenges to the desire to have *self-describing* and *cross-platform*
tests. The
-[WPT rendering test guidelines](http://web-platform-tests.org/writing-tests/rendering.html)
+[WPT rendering test guidelines](https://web-platform-tests.org/writing-tests/rendering.html)
contain useful guidance. The most relevant pieces of advice are below.
* Whenever possible, use a green paragraph / page / square to indicate success.
diff --git a/chromium/docs/updating_clang.md b/chromium/docs/updating_clang.md
index b994f1b201b..966955c937f 100644
--- a/chromium/docs/updating_clang.md
+++ b/chromium/docs/updating_clang.md
@@ -35,14 +35,42 @@
```shell
git cl try &&
git cl try -m tryserver.chromium.mac -b mac_chromium_asan_rel_ng &&
- git cl try -B luci.chromium.try -b ios-device &&
- git cl try -m tryserver.chromium.linux \
- -b linux_chromium_chromeos_asan_rel_ng -b linux_chromium_msan_rel_ng \
- -b linux_chromium_cfi_rel_ng &&
+ git cl try -m tryserver.chromium.linux -b linux_chromium_cfi_rel_ng &&
git cl try -m tryserver.blink -b linux_trusty_blink_rel &&
- git cl try -m tryserver.chromium.chromiumos -b linux-chromeos-dbg
+ git cl try -B luci.chromium.try -b ios-device \
+ -b linux_chromium_chromeos_asan_rel_ng -b linux_chromium_msan_rel_ng \
+ -b linux_chromium_chromeos_msan_rel_ng -b linux-chromeos-dbg
```
+1. Start Pinpoint perf tryjobs. These are generally too noisy to catch minor
+   regressions pre-commit, but use them to make sure there are no large
+   regressions.
+
+   a. Log in to store an OAuth2 token in the depot_tools cache (this only
+      needs to be run once):
+
+ ```shell
+ $ PYTHONPATH=$(dirname $(which git-cl)) python -c"import auth;auth.OAUTH_CLIENT_ID='62121018386-h08uiaftreu4dr3c4alh3l7mogskvb7i.apps.googleusercontent.com';auth.OAUTH_CLIENT_SECRET='vc1fZfV1cZC6mgDSHV-KSPOz';print auth.get_authenticator_for_host('pinpoint',auth.make_auth_config()).login()"
+ ```
+
+   a. Generate a fresh OAuth2 token:
+
+ ```shell
+ $ TOKEN=$(PYTHONPATH=$(dirname $(which git-cl)) python -c"import auth;print auth.get_authenticator_for_host('pinpoint',auth.make_auth_config()).get_access_token().token")
+ ```
+
+ a. Launch Pinpoint job:
+
+ ```shell
+ $ curl -H"Authorization: Bearer $TOKEN" -F configuration=chromium-rel-win7-gpu-nvidia \
+ -F target=telemetry_perf_tests -F benchmark=speedometer2 \
+ -F patch=https://chromium-review.googlesource.com/c/chromium/src/+/$(git cl issue | cut -d' ' -f3) \
+ -F start_git_hash=HEAD -F end_git_hash=HEAD https://pinpoint-dot-chromeperf.appspot.com/api/new
+ ```
+
+ a. Use the URL returned by the command above to see the progress and result
+ of the tryjob, checking that it doesn't regress significantly (> 10%).
+ Post the URL to the codereview.
+
+1. Commit the roll CL from the first step
1. The bots will now pull the prebuilt binary, and goma will have a matching
binary, too.
diff --git a/chromium/docs/useful_urls.md b/chromium/docs/useful_urls.md
index f8b9ca4cf2a..4d826296457 100644
--- a/chromium/docs/useful_urls.md
+++ b/chromium/docs/useful_urls.md
@@ -5,38 +5,24 @@ This page aims to be a repository of links that people may find useful.
## Build Status
-* [Main buildbot waterfall](https://build.chromium.org/p/chromium/console)
-* [Last Known Good Revision](http://chromium-status.appspot.com/lkgr) : Try bots pull this revision from trunk
-* [List of the last 100 potential LKGRs](http://chromium-status.appspot.com/revisions)
-* [Status dashboard for LKGR](https://build.chromium.org/p/chromium/lkgr-status/)
-* https://build.chromium.org/p/tryserver.chromium/waterfall?committer=developer@chromium.org : Try bot runs, by developer
-* [Tree uptime stats](https://chromium-status.appspot.com/status_viewer)
-* [Commit queue status](https://chromium-cq-status.appspot.com)
-* [Pending commit queue jobs](https://codereview.chromium.org/search?closed=3&commit=2&limit=50)
+* https://ci.chromium.org/p/chromium/g/main/console: Main build console
+* https://chromium-status.appspot.com/status_viewer: Tree uptime stats
## For Sheriffs
-* https://sheriff-o-matic.appspot.com/ : Sheriff-o-Matic
-* https://build.chromium.org/p/chromium.chromiumos/waterfall?show_events=true&reload=120&failures_only=true : List of failing bots for a waterfall(chromium.chromiumos as an example)
-* https://build.chromium.org/p/chromium.linux/waterfall?show_events=true&reload=120&builder=Linux%20Builder%20x64&builder=Linux%20Builder%20(dbg) : Monitor one or multiple bots(Linux Builder x64 and Linux Builder (dbg) on chromium.linux as an example)
-* https://build.chromium.org/p/chromium.win/waterfall/help : Customize the waterfall view for a waterfall(using chromium.win as an example)
-* [Lists historical test results for the bots](https://test-results.appspot.com/dashboards/flakiness_dashboard.html)
+* https://sheriff-o-matic.appspot.com/
## Release Information
-* [Current release versions of Chrome on all channels](https://omahaproxy.appspot.com/viewer)
-* [Looks up the revision of a build/release version](https://omahaproxy.appspot.com/)
+* https://omahaproxy.appspot.com/: Current release versions
## Source Information
-* [Code Search](https://cs.chromium.org/)
-* https://cs.chromium.org/SEARCH_TERM : Code Search for a specific SEARCH\_TERM
-* [Gitiles Source Code Browser](https://chromium.googlesource.com/chromium/src/)
-* https://chromium.googlesource.com/chromium/src/+log/b6cfa6a..9a2e0a8?pretty=fuller : Git changes in revision range(also works for build numbers)
-* https://build.chromium.org/f/chromium/perf/dashboard/ui/changelog.html?url=/trunk/src&mode=html&range=SUCCESS_REV:FAILURE_REV : SVN changes in revision range
-* https://build.chromium.org/f/chromium/perf/dashboard/ui/changelog_blink.html?url=/trunk&mode=html&range=SUCCESS_REV:FAILURE_REV : Blink changes in revision range
+* https://cs.chromium.org/: Code search
+* https://chromium.googlesource.com/chromium/src/: Gitiles source code browser
+* https://chromium.googlesource.com/chromium/src/+log/b6cfa6a..9a2e0a8?pretty=fuller: Git changelog
## Communication
-* [Chromium Developers List](https://groups.google.com/a/chromium.org/group/chromium-dev/topics)
-* [Chromium Users List](https://groups.google.com/a/chromium.org/group/chromium-discuss/topics)
+* [Chromium Developers List chromium-dev@](https://groups.google.com/a/chromium.org/group/chromium-dev/topics)
+* [Chromium Users List chromium-discuss@](https://groups.google.com/a/chromium.org/group/chromium-discuss/topics)
diff --git a/chromium/docs/vscode.md b/chromium/docs/vscode.md
index bc3dc0a24f8..f17afefe53c 100644
--- a/chromium/docs/vscode.md
+++ b/chromium/docs/vscode.md
@@ -237,7 +237,7 @@ You might have to adjust the commands to your situation and needs.
```
{
"version": "0.1.0",
- "_runner": "terminal",
+ "runner": "terminal",
"showOutput": "always",
"echoCommand": true,
"tasks": [
@@ -527,7 +527,7 @@ these files are ignored by VS Code (see files.exclude above) and cannot be
opened e.g. from quick-open (`Ctrl+P`).
As of version 1.21, VS Code does not support negated glob patterns, but you can
define a set of exclude patterns to include only out/Debug/gen:
-
+```
"files.exclude": {
// Ignore build output folders. Except out/Debug/gen/
"out/[^D]*/": true,
@@ -535,6 +535,7 @@ define a set of exclude pattern to include only out/Debug/gen:
"out/Debug/g[^e]*": true,
"out_*/**": true,
},
+```
Once it does, you can use
```
diff --git a/chromium/docs/win_cross.md b/chromium/docs/win_cross.md
index c2639740675..f5479c394ba 100644
--- a/chromium/docs/win_cross.md
+++ b/chromium/docs/win_cross.md
@@ -35,16 +35,41 @@ cross builds ([.asm bug](https://crbug.com/762167)).
1. `gclient sync`, follow instructions on screen.
If you're at Google, this will automatically download the Windows SDK for you.
-If this fails with an error: Please follow the instructions at
-https://www.chromium.org/developers/how-tos/build-instructions-windows
+If this fails with an error:
+
+ Please follow the instructions at
+ https://chromium.googlesource.com/chromium/src/+/master/docs/windows_build_instructions.md
+
then you may need to re-authenticate via:
cd path/to/chrome/src
# Follow instructions, enter 0 as project id.
download_from_google_storage --config
-If you are not at Google, you'll have to figure out how to get the SDK, and
-you'll need to put a JSON file describing the SDK layout in a certain location.
+If you are not at Google, you can package your Windows SDK installation
+into a zip file by running the following on a Windows machine:
+
+ cd path/to/depot_tools/win_toolchain
+ # customize the Windows SDK version numbers
+ python package_from_installed.py 2017 -w 10.0.17134.0
+
+These commands create a zip file named `<hash value>.zip`. To use the
+generated file on a Linux or Mac host, the following environment variables
+need to be set:
+
+ export DEPOT_TOOLS_WIN_TOOLCHAIN_BASE_URL=<path/to/sdk/zip/file>
+ export GYP_MSVS_<toolchain hash>=<hash value>
+
+`<toolchain hash>` is hardcoded in `src/build/vs_toolchain.py` and can be found by
+setting `DEPOT_TOOLS_WIN_TOOLCHAIN_BASE_URL` and running `gclient sync`:
+
+ gclient sync
+ ...
+ Running hooks: 17% (11/64) win_toolchain
+ ________ running '/usr/bin/python src/build/vs_toolchain.py update --force' in <chromium dir>
+ Windows toolchain out of date or doesn't exist, updating (Pro)...
+ current_hashes:
+ desired_hash: <toolchain hash>
## GN setup
@@ -66,7 +91,7 @@ You can run the Windows binaries you built on swarming, like so:
See the contents of run-swarmed.py for how to do this manually.
-There's a bot doing 64-bit release cross builds at
-https://ci.chromium.org/buildbot/chromium.clang/linux-win_cross-rel/
-which also runs tests. You can look at it to get an idea of which tests pass in
-the cross build.
+The
+[linux-win_cross-rel](https://ci.chromium.org/buildbot/chromium.clang/linux-win_cross-rel/)
+buildbot does 64-bit release cross builds, and also runs tests. You can look at
+it to get an idea of which tests pass in the cross build.
diff --git a/chromium/docs/windows_build_instructions.md b/chromium/docs/windows_build_instructions.md
index 38ba8a5da07..b4d1c4e029f 100644
--- a/chromium/docs/windows_build_instructions.md
+++ b/chromium/docs/windows_build_instructions.md
@@ -233,15 +233,10 @@ don't set enable_nacl = false then build times may get worse.
* `remove_webcore_debug_symbols = true` - turn off source-level debugging for
blink to reduce build times, appropriate if you don't plan to debug blink.
-In order to ensure that linking is fast enough we recommend that you use one of
-these settings - they all have tradeoffs:
-* `use_lld = true` - this linker is very fast on full links but does not support
-incremental linking.
-* `is_win_fastlink = true` - this option makes the Visual Studio linker run much
-faster, and incremental linking is supported, but it can lead to debugger
-slowdowns or out-of-memory crashes.
-* `symbol_level = 1` - this option reduces the work the linker has to do but
-when this option is set you cannot do source-level debugging.
+In order to speed up linking, you can set `symbol_level = 1`. This option
+reduces the work the linker has to do, but while it is set you cannot do
+source-level debugging. Switching from `symbol_level = 2` (the default) to
+`symbol_level = 1` requires recompiling everything.
In addition, Google employees should use goma, a distributed compilation system.
Detailed information is available internally but the relevant gn arg is: