author      Allan Sandfeld Jensen <allan.jensen@qt.io>  2020-03-11 11:32:04 +0100
committer   Allan Sandfeld Jensen <allan.jensen@qt.io>  2020-03-18 13:40:17 +0000
commit      31ccca0778db85c159634478b4ec7997f6704860 (patch)
tree        3d33fc3afd9d5ec95541e1bbe074a9cf8da12a0e /chromium/docs
parent      248b70b82a40964d5594eb04feca0fa36716185d (diff)
download    qtwebengine-chromium-31ccca0778db85c159634478b4ec7997f6704860.tar.gz
BASELINE: Update Chromium to 80.0.3987.136
Change-Id: I98e1649aafae85ba3a83e67af00bb27ef301db7b
Reviewed-by: Jüri Valdmann <juri.valdmann@qt.io>
Diffstat (limited to 'chromium/docs')
-rw-r--r--  chromium/docs/README.md  8
-rw-r--r--  chromium/docs/accessibility/android.md  72
-rw-r--r--  chromium/docs/accessibility/autoclick.md  2
-rw-r--r--  chromium/docs/accessibility/chromevox.md  8
-rw-r--r--  chromium/docs/accessibility/overview.md  2
-rw-r--r--  chromium/docs/accessibility/select_to_speak.md  6
-rw-r--r--  chromium/docs/adding_to_third_party.md  23
-rw-r--r--  chromium/docs/android_build_instructions.md  57
-rw-r--r--  chromium/docs/android_dynamic_feature_modules.md  39
-rw-r--r--  chromium/docs/android_native_libraries.md  67
-rw-r--r--  chromium/docs/chromoting_android_hacking.md  2
-rw-r--r--  chromium/docs/cipd.md  12
-rw-r--r--  chromium/docs/clang_tidy.md  9
-rw-r--r--  chromium/docs/clang_tool_refactoring.md  2
-rw-r--r--  chromium/docs/code_reviews.md  13
-rw-r--r--  chromium/docs/commit_checklist.md  8
-rw-r--r--  chromium/docs/contributing.md  12
-rw-r--r--  chromium/docs/enterprise/active_directory_native_integration.md  6
-rw-r--r--  chromium/docs/enterprise/add_new_policy.md  268
-rw-r--r--  chromium/docs/gpu/gpu_testing.md  20
-rw-r--r--  chromium/docs/gpu/gpu_testing_bot_details.md  26
-rw-r--r--  chromium/docs/gpu/pixel_wrangling.md  11
-rw-r--r--  chromium/docs/gwp_asan.md  20
-rw-r--r--  chromium/docs/how_to_add_your_feature_flag.md  8
-rw-r--r--  chromium/docs/infra/new_builder.md  4
-rw-r--r--  chromium/docs/infra/using_led.md  77
-rw-r--r--  chromium/docs/initialize_blink_features.md  80
-rw-r--r--  chromium/docs/jumbo.md  118
-rw-r--r--  chromium/docs/linux_build_instructions.md  9
-rw-r--r--  chromium/docs/login/user_types.md  6
-rw-r--r--  chromium/docs/mac/triage.md  160
-rw-r--r--  chromium/docs/mac_build_instructions.md  9
-rw-r--r--  chromium/docs/media/gpu/vdatest_usage.md  165
-rw-r--r--  chromium/docs/media/gpu/veatest_usage.md  77
-rw-r--r--  chromium/docs/media/gpu/video_decoder_test_usage.md  5
-rw-r--r--  chromium/docs/memory-infra/README.md  4
-rw-r--r--  chromium/docs/memory-infra/memory_benchmarks.md  70
-rw-r--r--  chromium/docs/mojo_and_services.md  2
-rw-r--r--  chromium/docs/native_relocations.md  130
-rw-r--r--  chromium/docs/no_sources_assignment_filter.md  88
-rw-r--r--  chromium/docs/ozone_overview.md  41
-rw-r--r--  chromium/docs/process/merge_request.md  4
-rw-r--r--  chromium/docs/security/autoupgrade-mixed.md  6
-rw-r--r--  chromium/docs/security/faq.md  2
-rw-r--r--  chromium/docs/security/security-labels.md  4
-rw-r--r--  chromium/docs/servicification.md  6
-rw-r--r--  chromium/docs/speed/addressing_performance_regressions.md  16
-rw-r--r--  chromium/docs/speed/apk_size_regressions.md  10
-rw-r--r--  chromium/docs/speed/benchmark/benchmark_ownership.md  2
-rw-r--r--  chromium/docs/speed/benchmark/harnesses/blink_perf.md  8
-rw-r--r--  chromium/docs/speed/binary_size/android_binary_size_trybot.md  192
-rw-r--r--  chromium/docs/speed/binary_size/optimization_advice.md  198
-rw-r--r--  chromium/docs/speed/perf_lab_platforms.md  29
-rw-r--r--  chromium/docs/speed/perf_regression_sheriffing.md  6
-rw-r--r--  chromium/docs/speed/perf_waterfall.md  5
-rw-r--r--  chromium/docs/sync/model_api.md  1
-rw-r--r--  chromium/docs/testing/android_test_instructions.md  2
-rw-r--r--  chromium/docs/testing/code_coverage.md  17
-rw-r--r--  chromium/docs/testing/code_coverage_in_gerrit.md  21
-rw-r--r--  chromium/docs/testing/images/code_coverage_percentages.png  bin 0 -> 44748 bytes
-rw-r--r--  chromium/docs/testing/rendering_representative_perf_tests.md  44
-rw-r--r--  chromium/docs/testing/web_tests.md  79
-rw-r--r--  chromium/docs/testing/web_tests_in_content_shell.md  2
-rw-r--r--  chromium/docs/threading_and_tasks.md  57
-rw-r--r--  chromium/docs/ui/ui_devtools/index.md  2
-rw-r--r--  chromium/docs/updating_clang.md  15
-rw-r--r--  chromium/docs/webui_explainer.md  6
-rw-r--r--  chromium/docs/win_cross.md  14
-rw-r--r--  chromium/docs/win_order_files.md  36
-rw-r--r--  chromium/docs/windows_build_instructions.md  32
-rw-r--r--  chromium/docs/workflow/debugging-with-swarming.md  125
71 files changed, 1938 insertions, 749 deletions
diff --git a/chromium/docs/README.md b/chromium/docs/README.md
index 2072fbe576c..bc8ebd03fdd 100644
--- a/chromium/docs/README.md
+++ b/chromium/docs/README.md
@@ -303,12 +303,12 @@ used when committed.
* [Autoplay of HTMLMediaElements](media/autoplay.md) - How HTMLMediaElements
are autoplayed.
* [Piranha Plant](piranha_plant.md) - Future architecture of MediaStreams
-* [Video Decode/Encode Accelerator Tests](media/gpu/vdatest_usage.md) - How to
- use the accelerated video decoder/encoder test programs.
+* [Video Encode Accelerator Tests](media/gpu/veatest_usage.md) - How to
+ use the accelerated video encoder test program.
* [Video Decoder Tests](media/gpu/video_decoder_test_usage.md) - Running the
- new video decoder tests.
+ video decoder tests.
* [Video Decoder Performance Tests](media/gpu/video_decoder_perf_test_usage.md) -
- Running the new video decoder performance tests.
+ Running the video decoder performance tests.
### Accessibility
* [Accessibility Overview](accessibility/overview.md) - Overview of
diff --git a/chromium/docs/accessibility/android.md b/chromium/docs/accessibility/android.md
new file mode 100644
index 00000000000..45c60aeb3f0
--- /dev/null
+++ b/chromium/docs/accessibility/android.md
@@ -0,0 +1,72 @@
+# Chrome Accessibility on Android
+
+Chrome plays an important role on Android - not only is it the default
+browser, but Chrome powers WebView, which is used by many built-in and
+third-party apps to display all sorts of content.
+
+This document covers some of the technical details of how Chrome
+implements its accessibility support on Android.
+
+As background reading, you should be familiar with
+[Android Accessibility](https://developer.android.com/guide/topics/ui/accessibility)
+and in particular
+[AccessibilityNodeInfo](https://developer.android.com/reference/android/view/accessibility/AccessibilityNodeInfo)
+and
+[AccessibilityNodeProvider](https://developer.android.com/reference/android/view/accessibility/AccessibilityNodeProvider).
+
+## WebContentsAccessibility
+
+The main Java class that implements the accessibility protocol in Chrome is
+[WebContentsAccessibilityImpl.java](https://cs.chromium.org/chromium/src/content/public/android/java/src/org/chromium/content/browser/accessibility/WebContentsAccessibilityImpl.java). It implements the AccessibilityNodeProvider
+interface, so a single Android View can be represented by an entire tree
+of virtual views. Note that WebContentsAccessibilityImpl represents an
+entire web page, including all frames. The ids in the java code are unique IDs,
+not frame-local IDs.
+
+On most platforms, we create a native object for every AXNode in a web page,
+and we implement a bunch of methods on that object that assistive technology
+can query.
+
+Android is different - it's more lightweight in one way, in that we only
+create a native AccessibilityNodeInfo when specifically requested, when
+an Android accessibility service is exploring the virtual tree. In another
+sense it's more heavyweight, though, because every time a virtual view is
+requested we have to populate it with every possible accessibility attribute,
+and there are quite a few.
+
+## Populating AccessibilityNodeInfo
+
+Populating AccessibilityNodeInfo is a bit complicated for reasons of
+Android version compatibility and also code efficiency.
+
+WebContentsAccessibilityImpl.createAccessibilityNodeInfo is the starting
+point. That's called by the Android framework when we need to provide the
+info about one virtual view (a web node).
+
+We call into C++ code -
+[web_contents_accessibility_android.cc](https://cs.chromium.org/chromium/src/content/browser/accessibility/web_contents_accessibility_android.cc) -
+from there, because all of the information about the accessibility tree is
+kept in the shared C++ BrowserAccessibilityManager code.
+
+However, the C++ code then calls back into Java in order to set all of the
+properties of AccessibilityNodeInfo, because those have to be set in Java.
+Each of those methods, like setAccessibilityNodeInfoBooleanAttributes, is
+often overridden by an Android-version-specific subclass to take advantage
+of newer APIs where available.
+
+Having the Java code query C++ for every individual attribute one at a time
+would be too expensive; we'd be going across the JNI boundary too many times.
+That's why it's structured the way it is now.
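+
+A minimal sketch of that batching pattern, for illustration only (the real
+signatures live in web_contents_accessibility_android.cc and the generated JNI
+headers, and carry many more attributes per call):
+
+```cpp
+#include "base/android/scoped_java_ref.h"
+#include "content/browser/accessibility/browser_accessibility_android.h"
+
+// Sketch: push a whole group of attributes across the JNI boundary in a
+// single call, rather than one JNI round trip per attribute.
+void PopulateAccessibilityNodeInfo(
+    JNIEnv* env,
+    const base::android::JavaRef<jobject>& obj,   // WebContentsAccessibilityImpl
+    const base::android::JavaRef<jobject>& info,  // AccessibilityNodeInfo
+    const content::BrowserAccessibilityAndroid& node) {
+  // Hypothetical call into the generated JNI stub for the Java method named
+  // in the text above; the real argument list is much longer.
+  Java_WebContentsAccessibilityImpl_setAccessibilityNodeInfoBooleanAttributes(
+      env, obj, info, node.IsCheckable(), node.IsChecked(), node.IsClickable(),
+      node.IsEnabled(), node.IsFocusable(), node.IsFocused());
+  // ...similar batched calls for string attributes, ranges, collections, etc.
+}
+```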
+
+## Touch Exploration
+
+The way touch exploration works on Android is complicated:
+
+* When the user taps or drags their finger, our View gets a hover event.
+* Accessibility code sends a hit test action to the renderer process.
+* The renderer process fires a HOVER accessibility event on the accessibility
+ node at that coordinate.
+* WebContentsAccessibilityImpl.handleHover is called with that node. We fire
+ an Android TYPE_VIEW_HOVER_ENTER event on that node and a
+ TYPE_VIEW_HOVER_EXIT event on the previous node.
+* Finally, TalkBack sets accessibility focus to that node.
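+
+As a rough sketch of how the browser side expresses that hit test (the real
+plumbing is in web_contents_accessibility_android.cc and
+BrowserAccessibilityManager; the helper name below is made up for
+illustration):
+
+```cpp
+#include "ui/accessibility/ax_action_data.h"
+#include "ui/gfx/geometry/point.h"
+
+// Hypothetical helper: build the hit-test request sent to the renderer when a
+// hover event arrives at |point| in the web contents.
+ui::AXActionData BuildHitTestRequest(const gfx::Point& point) {
+  ui::AXActionData action;
+  action.action = ax::mojom::Action::kHitTest;
+  action.target_point = point;
+  // Ask the renderer to fire a HOVER event on whatever node it finds there;
+  // WebContentsAccessibilityImpl.handleHover is eventually called with that
+  // node on the Java side.
+  action.hit_test_event_to_fire = ax::mojom::Event::kHover;
+  return action;
+}
+```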
diff --git a/chromium/docs/accessibility/autoclick.md b/chromium/docs/accessibility/autoclick.md
index 28de750e347..298d5b3ba19 100644
--- a/chromium/docs/accessibility/autoclick.md
+++ b/chromium/docs/accessibility/autoclick.md
@@ -46,7 +46,7 @@ ash/autoclick/
ash/system/accessibility/autoclick*
- A component extension to provide Accessibility tree information, in
-chrome/browser/resources/chromeos/autoclick/
+chrome/browser/resources/chromeos/accessibility/autoclick/
In addition, there are settings for automatic clicks in
chrome/browser/resources/settings/a11y_page/manage_a11y_page.*
diff --git a/chromium/docs/accessibility/chromevox.md b/chromium/docs/accessibility/chromevox.md
index c45edd559fb..05f69285a08 100644
--- a/chromium/docs/accessibility/chromevox.md
+++ b/chromium/docs/accessibility/chromevox.md
@@ -13,7 +13,7 @@ To start or stop ChromeVox, press Ctrl+Alt+Z at any time.
## Developer Info
-Code location: ```chrome/browser/resources/chromeos/chromevox```
+Code location: ```chrome/browser/resources/chromeos/accessibility/chromevox```
Ninja target: it's built as part of "chrome", but you can build and run
browser_tests to test it (Chrome OS target only - you must have target_os =
@@ -36,10 +36,12 @@ few use cases.
When developing a new feature, it may be helpful to save time by not having to
go through a compile cycle. This can be achieved by setting
```chromevox_compress_js``` to 0 in
-chrome/browser/resources/chromeos/chromevox/BUILD.gn, or by using a debug build.
+chrome/browser/resources/chromeos/accessibility/chromevox/BUILD.gn, or by using
+a debug build.
In a debug build or with chromevox_compress_js off, the unflattened files in the
-Chrome out directory (e.g. out/Release/resources/chromeos/chromevox/). Now you
+Chrome out directory are used
+(e.g. out/Release/resources/chromeos/accessibility/chromevox/). Now you
can hack directly on the copy of ChromeVox in out/ and toggle ChromeVox to pick
up your changes (via Ctrl+Alt+Z).
diff --git a/chromium/docs/accessibility/overview.md b/chromium/docs/accessibility/overview.md
index d11a2f0807f..7ab969bbb98 100644
--- a/chromium/docs/accessibility/overview.md
+++ b/chromium/docs/accessibility/overview.md
@@ -18,7 +18,7 @@ Assistive technology includes:
* Screen readers for blind users that describe the screen using
synthesized speech or braille
* Voice control applications that let you speak to the computer,
-* Switch access that lets you control the computer with a small number
+* Switch Access that lets you control the computer with a small number
of physical switches,
* Magnifiers that magnify a portion of the screen, and often highlight the
cursor and caret for easier viewing, and
diff --git a/chromium/docs/accessibility/select_to_speak.md b/chromium/docs/accessibility/select_to_speak.md
index da1b1b62f67..ac1f3a5f165 100644
--- a/chromium/docs/accessibility/select_to_speak.md
+++ b/chromium/docs/accessibility/select_to_speak.md
@@ -43,7 +43,7 @@ Use bugs.chromium.org, filing bugs under the component
STS code lives mainly in three places:
- A component extension to do the bulk of the logic and processing,
-chrome/browser/resources/chromeos/select_to_speak/
+chrome/browser/resources/chromeos/accessibility/select_to_speak/
- An event handler, ash/events/select_to_speak_event_handler.h
@@ -102,7 +102,7 @@ The STS extension does the following, at a high level:
### Select to Speak extension structure
Most STS logic takes place in
-[select_to_speak.js](https://cs.chromium.org/chromium/src/chrome/browser/resources/chromeos/select_to_speak/select_to_speak.js).
+[select_to_speak.js](https://cs.chromium.org/chromium/src/chrome/browser/resources/chromeos/accessibility/select_to_speak/select_to_speak.js).
#### User input
@@ -193,7 +193,7 @@ Google Drive apps require a few work-arounds to work correctly with STS.
- Any time a Google Drive document is loaded (such as a Doc, Sheet or Slides
document), the script
-[select_to_speak_gdocs_script](https://cs.chromium.org/chromium/src/chrome/browser/resources/chromeos/select_to_speak/select_to_speak_gdocs_script.js?q=select_to_speak_gdocs_script.js+file:%5Esrc/chrome/browser/resources/chromeos/select_to_speak/+package:%5Echromium$&dr)
+[select_to_speak_gdocs_script](https://cs.chromium.org/chromium/src/chrome/browser/resources/chromeos/accessibility/select_to_speak/select_to_speak_gdocs_script.js?q=select_to_speak_gdocs_script.js+file:%5Esrc/chrome/browser/resources/chromeos/accessibility/select_to_speak/+package:%5Echromium$&dr)
must be executed to remove aria-hidden from the content container.
- Using search+s to read highlighted text uses the clipboard to get text data
diff --git a/chromium/docs/adding_to_third_party.md b/chromium/docs/adding_to_third_party.md
index 26b1c9feb23..7c03c8cb8fa 100644
--- a/chromium/docs/adding_to_third_party.md
+++ b/chromium/docs/adding_to_third_party.md
@@ -18,6 +18,15 @@ situations and need explicit approval; don't assume that because there's some
other directory with third_party in the name it's okay to put new things
there.
+## Before you start
+
+To make sure the inclusion of a new third_party project makes sense for the
+Chromium project, you should first obtain Chrome Eng Review approval.
+Googlers should see go/chrome-eng-review and review existing topics in
+g/chrome-eng-review. Please include information about the additional checkout
+size, build times, and binary sizes. Please also make sure that the motivation
+for your project is clear, e.g., a design doc has been circulated.
+
## Get the code
There are two common ways to depend on third-party code: you can reference a
@@ -27,6 +36,14 @@ access to the history; the latter is better if you don't need the full history
of the repo or don't need to pick up every single change. And, of course, if
the code you need isn't in a Git repo, you have to do the latter.
+### Node packages
+
+To include a Node package, add the dependency to the
+[Node package.json](../third_party/node/package.json). Make sure to update
+the corresponding [`npm_exclude.txt`](../third_party/node/npm_exclude.txt)
+and [`npm_include.txt`](../third_party/node/npm_include.txt) to make the code
+available during checkout.
+
### Pulling the code via DEPS
If the code is in a Git repo that you want to mirror, please file an [infra git
@@ -126,10 +143,8 @@ following sign-offs. Some of these are accessible to Googlers only.
Non-Googlers can email one of the people in
[//third_party/OWNERS](../third_party/OWNERS) for help.
-* Get Chrome Eng Review approval. Googlers should see
- go/chrome-eng-review. Please include information about the additional
- checkout size, build times, and binary sizes. Please also make sure that the
- motivation for your project is clear, e.g., a design doc has been circulated.
+* Make sure you have the approval from Chrome Eng Review as mentioned
+ [above](#before-you-start).
* Get security@chromium.org approval. Email the list with relevant details and
a link to the CL. Third party code is a hot spot for security vulnerabilities.
When adding a new package that could potentially carry security risk, make
diff --git a/chromium/docs/android_build_instructions.md b/chromium/docs/android_build_instructions.md
index bd226689799..8830485dd37 100644
--- a/chromium/docs/android_build_instructions.md
+++ b/chromium/docs/android_build_instructions.md
@@ -177,11 +177,11 @@ out/Default` from the command line. To compile one, pass the GN label to Ninja
with no preceding "//" (so, for `//chrome/test:unit_tests` use `autoninja -C
out/Default chrome/test:unit_tests`).
-### Multiple Chrome APK Targets
+### Multiple Chrome Targets
-The Google Play Store allows apps to send customized `.apk` files depending on
-the version of Android running on a device. Chrome uses this feature to target
-4 different versions using 4 different ninja targets:
+The Google Play Store allows apps to send customized `.apk` or `.aab` files
+depending on the version of Android running on a device. Chrome uses this
+feature to target 4 different versions using 4 different ninja targets:
1. `chrome_public_apk` (ChromePublic.apk)
* `minSdkVersion=19` (KitKat).
@@ -201,7 +201,7 @@ the version of Android running on a device. Chrome uses this feature to target
* Stores libmonochrome.so uncompressed within the APK.
* Does not use Crazy Linker (WebView requires system linker).
* But system linker supports crazy linker features now anyways.
-4. `trichrome_chrome_apk` and `trichrome_library_apk` (TrichromeChrome.apk and TrichromeLibrary.apk)
+4. `trichrome_chrome_bundle` and `trichrome_library_apk` (TrichromeChrome.aab and TrichromeLibrary.apk)
* `minSdkVersion=Q` (Q).
* TrichromeChrome contains only the Chrome code that is not shared with WebView.
* TrichromeLibrary contains the shared code and is a "static shared library APK", which must be installed prior to TrichromeChrome.
@@ -363,48 +363,21 @@ Args that affect build speed:
* What it does: Disables ProGuard (slow build step)
#### Incremental Install
-"Incremental install" uses reflection and side-loading to speed up the edit
-& deploy cycle (normally < 10 seconds). The initial launch of the apk will be
-a little slower since updated dex files are installed manually.
+[Incremental Install](/build/android/incremental_install/README.md) uses
+reflection and sideloading to speed up the edit & deploy cycle (normally < 10
+seconds). The initial launch of the apk will be a lot slower on older Android
+versions (pre-N), where the OS needs to pre-optimize the sideloaded files, but
+subsequent launches will be only marginally slower.
-* All apk targets have \*`_incremental` targets defined (e.g.
- `chrome_public_apk_incremental`) except for Webview and Monochrome
-
-Here's an example:
-
-```shell
-autoninja -C out/Default chrome_public_apk_incremental
-out/Default/bin/chrome_public_apk install --incremental --verbose
-```
-
-For gunit tests (note that run_*_incremental automatically add
-`--fast-local-dev` when calling `test_runner.py`):
-
-```shell
-autoninja -C out/Default base_unittests_incremental
-out/Default/bin/run_base_unittests_incremental
-```
-
-For instrumentation tests:
-
-```shell
-autoninja -C out/Default chrome_public_test_apk_incremental
-out/Default/bin/run_chrome_public_test_apk_incremental
-```
-
-To uninstall:
-
-```shell
-out/Default/bin/chrome_public_apk uninstall
-```
-
-To avoid typing `_incremental` when building targets, you can use the GN arg:
+To enable Incremental Install, add the gn args:
```gn
-incremental_apk_by_default = true
+incremental_install = true
```
-This will make `chrome_public_apk` build in incremental mode.
+Some APKs (e.g. WebView) do not work with incremental install, and are
+blacklisted from being built as such (via `never_incremental = true`), so are
+built as normal APKs even when `incremental_install = true`.
## Installing and Running Chromium on an Emulator
diff --git a/chromium/docs/android_dynamic_feature_modules.md b/chromium/docs/android_dynamic_feature_modules.md
index 19a0633a06a..7f0d56b5a63 100644
--- a/chromium/docs/android_dynamic_feature_modules.md
+++ b/chromium/docs/android_dynamic_feature_modules.md
@@ -94,6 +94,9 @@ For this, add `foo` to the `AndroidFeatureModuleName` in
</histogram_suffixes>
```
+See [below](#metrics) for what metrics will be automatically collected after
+this step.
+
<!--- TODO(tiborg): Add info about install UI. -->
Lastly, give your module a title that Chrome and Play can use for the install
UI. To do this, add a string to
@@ -135,16 +138,13 @@ $ $OUTDIR/bin/monochrome_public_bundle install -m base -m foo
```
This will install Foo alongside the rest of Chrome. The rest of Chrome is called
-_base_ module in the bundle world. The Base module will always be put on the
+_base_ module in the bundle world. The base module will always be put on the
device when initially installing Chrome.
*** note
-**Note:** You have to specify `-m base` here to make it explicit which modules
-will be installed. If you only specify `-m foo` the command will fail. It is
-also possible to specify no modules. In that case, the script will install the
-set of modules that the Play Store would install when first installing Chrome.
-That may be different than just specifying `-m base` if we have non-on-demand
-modules.
+**Note:** The install script may install more modules than you specify, e.g.
+when there are default or conditionally installed modules (see
+[below](#conditional-install) for details).
***
You can then check that the install worked with:
@@ -637,15 +637,6 @@ public class FooImpl implements Foo {
}
```
-*** note
-**Warning:** While your module is emulated (see [below](#on-demand-install))
-your resources are only available through
-`ContextUtils.getApplicationContext()`. Not through activities, etc. We
-therefore recommend that you only access DFM resources this way. See
-[crbug/949729](https://bugs.chromium.org/p/chromium/issues/detail?id=949729)
-for progress on making this more robust.
-***
-
### Module install
@@ -782,6 +773,22 @@ like this:
</manifest>
```
+### Metrics
+
+After adding your module to `AndroidFeatureModuleName` (see
+[above](#create-dfm-target)) we will collect, among others, the following
+metrics:
+
+* `Android.FeatureModules.AvailabilityStatus.Foo`: Measures your module's
+ install penetration. That is, the share of users who eventually installed
+ the module after requesting it (once or multiple times).
+
+* `Android.FeatureModules.InstallStatus.Foo`: The result of an on-demand
+ install request. Can be success or one of several error conditions.
+
+* `Android.FeatureModules.UncachedAwakeInstallDuration.Foo`: The duration to
+ install your module successfully after on-demand requesting it.
+
### Integration test APK and Android K support
diff --git a/chromium/docs/android_native_libraries.md b/chromium/docs/android_native_libraries.md
index eb920204780..0d3ade6f26a 100644
--- a/chromium/docs/android_native_libraries.md
+++ b/chromium/docs/android_native_libraries.md
@@ -1,5 +1,6 @@
# Shared Libraries on Android
-This doc outlines some tricks / gotchas / features of how we ship native code in Chrome on Android.
+This doc outlines some tricks / gotchas / features of how we ship native code in
+Chrome on Android.
[TOC]
@@ -11,9 +12,69 @@ This doc outlines some tricks / gotchas / features of how we ship native code in
* It is loaded directly from the apk (without extracting) by `mmap()`'ing it.
* Android N, O & P (MonochromePublic.apk):
* `libmonochrome.so` is stored uncompressed (AndroidManifest.xml attribute disables extraction) and loaded directly from the apk (functionality now supported by the system linker).
- * Android Q (TrichromeChrome.apk+TrichromeLibrary.apk):
+ * Android Q (TrichromeChrome.aab+TrichromeLibrary.apk):
* `libmonochrome.so` is stored in the shared library apk (TrichromeLibrary.apk) instead of in the Chrome apk, so that it can be shared with TrichromeWebView. It's stored uncompressed and loaded directly from the apk the same way as on N-P. Trichrome uses the same native library as Monochrome, so it's still called `libmonochrome.so`.
+## Build Variants (e.g. monochrome_64_32_apk)
+The packaging above extends to cover both 32-bit and 64-bit device
+configurations.
+
+Chrome and ChromeModern support 64-bit builds, but these do not ship to Stable.
+The system Webview APK that ships to those devices contains a 32-bit library,
+and for 64-bit devices, a 64-bit library as well (32-bit Webview client apps
+will use the 32-bit library, and vice-versa).
+
+### Monochrome
+Monochrome's intent was to eliminate the duplication between the 32-bit Chrome
+and Webview libraries (most of the library is identical). In 32-bit Monochrome,
+a single combined library serves both Chrome and Webview needs. The 64-bit
+version adds an extra Webview-only library.
+
+More recently, additional Monochrome permutations have arrived. First, Google
+Play will eventually require that apps offer a 64-bit version to compatible
+devices. In Monochrome, this implies swapping the architecture of the Chrome and
+Webview libraries (64-bit combined lib, and extra 32-bit Webview lib). Further
+down the road, silicon vendors may drop 32-bit support from their chips, after
+which a pure 64-bit version of Monochrome will apply. In each of these cases,
+the library name of the combined and Webview-only libraries must match (an
+Android platform requirement), so both libs are named libmonochrome.so (or
+libmonochrome_64.so in the 64-bit browser case).
+
+Since 3 of these variations require a 64-bit build config, it makes sense to
+also support the 4th variant on 64-bit, thus allowing a single builder to build
+all variants (if desired). Further, a naming scheme must exist to disambiguate
+the various targets:
+
+**monochrome_(browser ABI)_(extra_webview ABI)**
+
+For example, the 64-bit browser version with extra 32-bit Webview is
+**monochrome_64_32_apk**. The combinations are as follows:
+
+Builds on | Variant | Description
+--- | --- | ---
+32-bit | monochrome | The original 32-bit-only version
+64-bit | monochrome | The original 64-bit version, with 32-bit combined lib and 64-bit Webview. This would be named monochrome_32_64_apk if not for legacy naming.
+64-bit | monochrome_64_32 | 64-bit combined lib with 32-bit Webview library.
+64-bit | monochrome_64 | 64-bit combined lib only, for eventual pure 64-bit hardware.
+64-bit | monochrome_32 | A mirror of the original 32-bit-only version on 64-bit, to allow building all products on one builder. The result won't be bit-identical to the original, since there are subtle compilation differences.
+
+### Trichrome
+Trichrome has the same 4 permutations as Monochrome, but adds another dimension.
+Trichrome returns to separate apps for Chrome and Webview, but places shared
+resources in a third shared-library APK. The table below shows which native
+libraries are packaged where. Note that **dummy** placeholder libraries are
+inserted where needed, since Android determines supported ABIs from the presence
+of native libraries, and the ABIs of a shared library APK must match its client
+app.
+
+Builds on | Variant | Chrome | Library | Webview
+--- | --- | --- | --- | ---
+32-bit | trichrome | `32/dummy` | `32/combined` | `32/dummy`
+64-bit | trichrome | `32/dummy`, `64/dummy` | `32/combined`, `64/dummy` | `32/dummy`, `64/webview`
+64-bit | trichrome_64_32 | `32/dummy`, `64/dummy` | `32/dummy`, `64/combined` | `32/webview`, `64/dummy`
+64-bit | trichrome_64 | `64/dummy` | `64/combined` | `64/dummy`
+64-bit | trichrome_32 | `32/dummy` | `32/combined` | `32/dummy`
+
## Crashpad Packaging
* Crashpad is a native library providing out-of-process crash dumping. When a
dump is requested (e.g. after a crash), a Crashpad handler process is started
@@ -74,7 +135,7 @@ This doc outlines some tricks / gotchas / features of how we ship native code in
* `JNI_OnLoad()` is the only exported symbol (enforced by a linker script).
* Native methods registered explicitly during start-up by generated code.
* Explicit generation is required because the Android runtime uses the system's `dlsym()`, which doesn't know about Crazy-Linker-opened libraries.
- * For MonochromePublic.apk and TrichromeChrome.apk:
+ * For MonochromePublic.apk and TrichromeChrome.aab:
* `JNI_OnLoad()` and `Java_*` symbols are exported by linker script.
* No manual JNI registration is done. Symbols are resolved lazily by the runtime.
diff --git a/chromium/docs/chromoting_android_hacking.md b/chromium/docs/chromoting_android_hacking.md
index 4ff112f55df..0bae0f39ee8 100644
--- a/chromium/docs/chromoting_android_hacking.md
+++ b/chromium/docs/chromoting_android_hacking.md
@@ -80,8 +80,6 @@ display log messages to the `LogCat` pane.
<classpathentry kind="src" path="remoting/android/java/src"/>
<classpathentry kind="src" path="remoting/android/apk/src"/>
<classpathentry kind="src" path="remoting/android/javatests/src"/>
-<classpathentry kind="src" path="third_party/blink/renderer/devtools/scripts/jsdoc-validator/src"/>
-<classpathentry kind="src" path="third_party/blink/renderer/devtools/scripts/compiler-runner/src"/>
<classpathentry kind="src" path="third_party/webrtc/voice_engine/test/android/android_test/src"/>
<classpathentry kind="src" path="third_party/webrtc/modules/video_capture/android/java/src"/>
<classpathentry kind="src" path="third_party/webrtc/modules/video_render/android/java/src"/>
diff --git a/chromium/docs/cipd.md b/chromium/docs/cipd.md
index 4185a9f85a1..ed6f65a78af 100644
--- a/chromium/docs/cipd.md
+++ b/chromium/docs/cipd.md
@@ -26,7 +26,7 @@ create the following:
README.chromium
```
-For more on third-party dependencies, see [here][2].
+For more on third-party dependencies, see [adding_to_third_party.md][2].
### 2. Acquire whatever you want to package
@@ -110,6 +110,12 @@ data:
- file: foo.jar
```
+To create a private (Googler-only) package:
+```
+# Map this to //clank/third_party/sample_cipd_dep.
+package: chrome_internal/third_party/sample_cipd_dep
+```
+
For more information about the package definition spec, see [the code][3].
> **Note:** Committing the .yaml file to the repository isn't required,
@@ -120,7 +126,7 @@ For more information about the package definition spec, see [the code][3].
To actually create your package, you'll need:
- - the cipd.yaml file (described above)
+ - the `cipd.yaml` file (described above)
- [permission](#permissions-in-cipd).
Once you have those, you can create your package like so:
@@ -142,7 +148,7 @@ You'll be adding it to DEPS momentarily.
### 5. Add your CIPD package to DEPS
-You can add your package to DEPS by adding an entry of the following form to
+You can add your package to `DEPS` by adding an entry of the following form to
the `deps` dict:
```
diff --git a/chromium/docs/clang_tidy.md b/chromium/docs/clang_tidy.md
index 6a52fade04f..72e3d5c329f 100644
--- a/chromium/docs/clang_tidy.md
+++ b/chromium/docs/clang_tidy.md
@@ -58,8 +58,7 @@ ninja clang-apply-replacements
## Running clang-tidy
Running clang-tidy is (hopefully) simple.
-1. Build chrome normally.\* Note that [Jumbo builds](jumbo.md) are not
- supported.
+1. Build chrome normally.
```
ninja -C out/Release chrome
```
@@ -99,9 +98,9 @@ Copy-Paste Friendly (though you'll still need to stub in the variables):
chrome/browser
```
-\*It's not clear which, if any, `gn` flags outside of `use_jumbo_build` may
-cause issues for `clang-tidy`. I've had no problems building a component release
-build, both with and without goma. if you run into issues, let us know!
+\*It's not clear which, if any, `gn` flags may cause issues for
+`clang-tidy`. I've had no problems building a component release build,
+both with and without goma. If you run into issues, let us know!
## Questions
diff --git a/chromium/docs/clang_tool_refactoring.md b/chromium/docs/clang_tool_refactoring.md
index b8d31a64921..d4e9f94748c 100644
--- a/chromium/docs/clang_tool_refactoring.md
+++ b/chromium/docs/clang_tool_refactoring.md
@@ -15,8 +15,6 @@ with a traditional find-and-replace regexp:
## Caveats
-* Clang tools do not work with jumbo builds.
-
* Invocations of a clang tool run on only one build config at a time. For
example, running the tool across a `target_os="win"` build won't update code
that is guarded by `OS_POSIX`. Performing a global refactoring will often
diff --git a/chromium/docs/code_reviews.md b/chromium/docs/code_reviews.md
index 4ebdead6d95..380eca99929 100644
--- a/chromium/docs/code_reviews.md
+++ b/chromium/docs/code_reviews.md
@@ -125,11 +125,18 @@ The text `set noparent` will stop owner propagation from parent directories.
This should be rarely used. If you want to use `set noparent` except for IPC
related files, please first reach out to chrome-eng-review@google.com.
-In this example, only the two listed people are owners:
+You have to use `set noparent` together with a reference to a file that lists
+the owners for the given use case. Approved use cases are listed in
+`//build/OWNERS.setnoparent`. Owners listed in those files are expected to
+execute special governance functions such as eng review or ipc security review.
+Every set of owners should implement their own means of auditing membership. The
+minimum expectation is that membership in those files is reevaluated on
+project or affiliation changes.
+
+In this example, only the eng reviewers are owners:
```
set noparent
-a@chromium.org
-b@chromium.org
+file://ENG_REVIEW_OWNERS
```
The `per-file` directive allows owners to be added that apply only to files
diff --git a/chromium/docs/commit_checklist.md b/chromium/docs/commit_checklist.md
index 7e9fa367ae6..992817fc213 100644
--- a/chromium/docs/commit_checklist.md
+++ b/chromium/docs/commit_checklist.md
@@ -20,6 +20,9 @@ which is equivalent to
git checkout -b <branch_name> --track origin/master
+Mark the associated crbug as "started" so that other people know that you have
+started work on the bug. Doing this can avoid duplicated work.
+
## 2. Make your changes
Do your thing. There's no further advice here about how to write or fix code.
@@ -140,8 +143,11 @@ of your reviewers to approve your changes as well, even if they're not owners.
Click `Submit to CQ` to try your change in the commit queue (CQ), which will
land it if successful.
+## 18. Cleanup
+
After your CL is landed, you can use `git rebase-update` or `git cl archive` to
-clean up your local branches.
+clean up your local branches. These commands will automatically delete merged
+branches. Mark the associated crbug as "fixed".
[//]: # (the reference link section should be alphabetically sorted)
[contributing]: contributing.md
diff --git a/chromium/docs/contributing.md b/chromium/docs/contributing.md
index 6bb13a6fc03..66f650086c8 100644
--- a/chromium/docs/contributing.md
+++ b/chromium/docs/contributing.md
@@ -287,11 +287,13 @@ be used in emergencies because it will bypass all the safety nets.
In addition to the adhering to the [styleguide][cr-styleguide], the following
general rules of thumb can be helpful in navigating how to structure changes:
-- **Code in the Chromium project should be in service of code in the Chromium
- project.** This is important so developers can understand the constraints
- informing a design decision. Those constraints should be apparent from the
- scope of code within the boundary of the project and its various
- repositories.
+- **Code in the Chromium project should be in service of other code in the
+ Chromium project.** This is important so developers can understand the
+ constraints informing a design decision. Those constraints should be apparent
+ from the scope of code within the boundary of the project and its various
+ repositories. In other words, for each line of code, you should be able to
+ find a product in the Chromium repositories that depends on that line of code
+ or else the line of code should be removed.
- **Code should only be moved to a central location (e.g., //base) when
multiple consumers would benefit.** We should resist the temptation to
diff --git a/chromium/docs/enterprise/active_directory_native_integration.md b/chromium/docs/enterprise/active_directory_native_integration.md
index e433f3acb60..fa4f3d5c248 100644
--- a/chromium/docs/enterprise/active_directory_native_integration.md
+++ b/chromium/docs/enterprise/active_directory_native_integration.md
@@ -52,14 +52,14 @@ is necessary to get the latest policies.
## Chrome Architecture
The following Chrome classes are most relevant for the AD integration:
-[AuthPolicyClient](https://cs.chromium.org/chromium/src/chromeos/dbus/auth_policy/auth_policy_client.h)
+[AuthPolicyClient](https://cs.chromium.org/chromium/src/chromeos/dbus/authpolicy/authpolicy_client.h)
is the D-Bus client for the authpolicy daemon. All authpolicy D-Bus calls are
routed through it. The
[AuthPolicyHelper](https://cs.chromium.org/chromium/src/chrome/browser/chromeos/authpolicy/authpolicy_helper.h)
is a thin abstraction layer on top of the
-[AuthPolicyClient](https://cs.chromium.org/chromium/src/chromeos/dbus/auth_policy/auth_policy_client.h)
+[AuthPolicyClient](https://cs.chromium.org/chromium/src/chromeos/dbus/authpolicy/authpolicy_client.h)
to handle cancellation and other stuff. The
-[AuthPolicyCredentialsManager](https://cs.chromium.org/chromium/src/chrome/browser/chromeos/authpolicy/auth_policy_credentials_manager.h)
+[AuthPolicyCredentialsManager](https://cs.chromium.org/chromium/src/chrome/browser/chromeos/authpolicy/authpolicy_credentials_manager.h)
keeps track of user credential status, shows notifications if the Kerberos
ticket expires and handles network connection changes. The
[ActiveDirectoryPolicyManager](https://cs.chromium.org/chromium/src/chrome/browser/chromeos/policy/active_directory_policy_manager.h)
diff --git a/chromium/docs/enterprise/add_new_policy.md b/chromium/docs/enterprise/add_new_policy.md
new file mode 100644
index 00000000000..9608222127f
--- /dev/null
+++ b/chromium/docs/enterprise/add_new_policy.md
@@ -0,0 +1,268 @@
+# Policy Settings in Chrome
+
+## Terms
+
+- User Policy: The most common kind. Associated with a user login.
+- Device Policy: (a.k.a. cloud policy) ChromeOS only. Configures device-wide
+ settings and affects unmanaged (i.e. some random gmail) users. Short list
+ compared to user policy. The most important device policy controls which
+ users can log into the device.
+
+## Adding new policy settings
+
+This section describes the steps to add a new policy setting to Chromium, which
+administrators can then configure via Windows Group Policy, the G Suite Admin
+Console, etc. Administrator documentation about setting up Chrome management is
+[here](https://www.chromium.org/administrators) if you're looking for
+information on how to deploy policy settings to Chrome.
+
+1. Think carefully about the name and the desired semantics of the new policy:
+ - Choose a name that is consistent with the existing naming scheme. Prefer
+ "XXXEnabled" over "EnableXXX" because the former is more glanceable and
+ sorts better.
+ - Consider the foreseeable future and try to avoid conflicts with possible
+ future extensions or use cases.
+ - Negative policies (*Disable*, *Disallow*) are verboten because setting
+ something to "true" to disable it confuses people.
+2. Wire the feature you want to be controlled by policy to PrefService, so a
+ pref can be used to control your feature's behavior in the desired way.
+ - For existing command line switches that are being turned into policy,
+ you will want to modify the `ChromeCommandLinePrefStore` in
+ [chrome/browser/prefs/chrome_command_line_pref_store.cc](https://cs.chromium.org/chromium/src/chrome/browser/prefs/chrome_command_line_pref_store.cc?sq=package:chromium&dr=CSs&g=0)
+ to set the property appropriately from the command line switch (the
+ managed policy will override this value from the command line
+ automagically when policy is set if you do it this way).
+3. Add a policy to control the pref:
+ - [components/policy/resources/policy_templates.json](https://cs.chromium.org/chromium/src/components/policy/resources/policy_templates.json) -
+ This file contains meta-level descriptions of all policies and is used
+ to generate code, policy templates (ADM/ADMX for Windows and the
+ application manifest for Mac), as well as documentation. When adding
+ your policy, please make sure you get the version and features flags
+ (such as dynamic_refresh and supported_on) right, since this is what
+ will later appear on
+ [http://dev.chromium.org/administrators/policy-list-3](http://dev.chromium.org/administrators/policy-list-3).
+ The textual policy description should include the following:
+ - What features of Chrome are affected.
+ - Which behavior and/or UI/UX changes the policy triggers.
+ - How the policy behaves if it's left unset or set to invalid/default
+ values. This may seem obvious to you, and it probably is. However,
+ this information seems to be provided for Windows Group Policy
+ traditionally, and we've seen requests from organizations to
+ explicitly spell out the behavior for all possible values and for
+ when the policy is unset.
+ - [chrome/browser/policy/configuration_policy_handler_list_factory.cc](https://cs.chromium.org/chromium/src/chrome/browser/policy/configuration_policy_handler_list_factory.cc) -
+ for mapping the policy to the right pref (see the minimal sketch after
+ this list).
+4. If your feature can be controlled by GUI in `chrome://settings`, then you
+ will want `chrome://settings` to disable the GUI for the feature when the
+ policy controlling it is managed.
+ - There is a method on PrefService::Preference to ask if it's managed.
+ - You will also want `chrome://settings` to display the "some settings on
+ this page have been overridden by an administrator" banner. If you use
+ the pref attribute to connect your pref to the UI, this should happen
+ automagically. NB: There is work underway to replace the banner with
+ setting-level indicators. Once that's done, we'll update instructions
+ here.
+5. Wherever possible, we would like to support dynamic policy refresh, that is,
+ the ability for an admin to change policy and Chrome to honor the change at
+ run-time without requiring a restart of the process.
+ - This means that you should listen for preference change notifications
+ for your preference.
+ - Don't forget to update `chrome://settings` when the preference changes.
+ Note that for standard elements like checkboxes, this works out of the
+ box when you use the `pref` attribute.
+6. If you’re adding a device policy for Chrome OS:
+ - Add a message for your policy in
+ components/policy/proto/chrome_device_policy.proto.
+ - At the end of the file, add an optional field to the message
+ ChromeDeviceSettingsProto.
+ - Make sure you’ve updated
+ chrome/browser/chromeos/policy/device_policy_decoder_chromeos.{h,cc} so
+ the policy shows up on the chrome://policy page.
+7. Build the `policy_templates` target to check that the ADM/ADMX, Mac app
+ manifests, and documentation are generated correctly.
+ - The generated files are placed in `out/Debug/gen/chrome/app/policy/` (on
+ Linux, adjust for other build types/platforms).
+8. Add an entry for the new policy in
+ `chrome/test/data/policy/policy_test_cases.json`.
+9. By running `python tools/metrics/histograms/update_policies.py`, add an
+ entry for the new policy in `tools/metrics/histograms/enums.xml` in the
+ EnterprisePolicies enum. You need to check the result manually.
+10. Add a test that verifies that the policy is being enforced in
+ `chrome/browser/policy/<area>_policy_browsertest.cc` (see
+ https://crbug.com/1002483 about per-area test files for policy browser
+ tests). Ideally, your test would set the policy, fire up the browser, and
+ interact with the browser just as a user would do to check whether
+ the policy takes effect. This significantly helps Chrome QA which otherwise
+ has to test your new policy for each Chrome release.
+11. Manually test your policy:
+ - Windows: The simplest way to test is to write the registry keys manually
+ to `Software\Policies\Chromium` (for Chromium builds) or
+ `Software\Policies\Google\Chrome` (for Google Chrome branded builds). If
+ you want to test policy refresh, you need to use group policy tools and
+ gpupdate; see
+ [Windows Quick Start](https://www.chromium.org/administrators/windows-quick-start).
+ - Mac: See
+ [Mac Quick Start](https://www.chromium.org/administrators/mac-quick-start)
+ (section "Debugging")
+ - Linux: See
+ [Linux Quick Start](https://www.chromium.org/administrators/linux-quick-start)
+ (section "Set Up Policies")
+ - Chrome OS and Android are more complex to test, as a full end-to-end
+ test requires network transactions to the policy test server.
+ Instructions for how to set up the policy test server and have the
+ browser talk to it are here:
+ [Running the cloud policy test server](https://www.chromium.org/developers/how-tos/enterprise/running-the-cloud-policy-test-server).
+ If you'd just like to do a quick test for Chrome OS, the Linux code is
+ also functional on CrOS, see
+ [Linux Quick Start](https://www.chromium.org/administrators/linux-quick-start).
+12. If you are adding a new policy that supersedes an older one, verify that the
+ new policy works as expected even if the old policy is set (allowing us to
+ set both during the transition time when Chrome versions honoring the old
+ and the new policies coexist).
+13. If your policy has interactions with other policies, make sure to document,
+ test and cover these by automated tests.
+
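+Tying together steps 2, 3 and 5 above, here is a minimal sketch using a
+hypothetical boolean policy `FooEnabled` backed by a hypothetical pref
+`prefs::kFooEnabled`. The exact entries differ per policy, but the shape is the
+same:
+
+```cpp
+// (Step 3) In configuration_policy_handler_list_factory.cc, a simple
+// policy-to-pref mapping is one more row in the kSimplePolicyMap table
+// (policy name, pref name, expected value type), e.g.:
+//
+//   { key::kFooEnabled, prefs::kFooEnabled, base::Value::Type::BOOLEAN },
+//
+// (Step 5) To honor dynamic policy refresh, observe the pref rather than
+// reading it once. Sketch, assuming a PrefService* for the relevant profile:
+#include "base/bind.h"
+#include "components/prefs/pref_change_registrar.h"
+#include "components/prefs/pref_service.h"
+
+class FooFeatureController {
+ public:
+  explicit FooFeatureController(PrefService* prefs) {
+    registrar_.Init(prefs);
+    registrar_.Add("foo.enabled",  // i.e. prefs::kFooEnabled (hypothetical)
+                   base::BindRepeating(&FooFeatureController::OnFooChanged,
+                                       base::Unretained(this)));
+  }
+
+ private:
+  void OnFooChanged() {
+    // Re-read the pref and apply the new value without restarting Chrome,
+    // updating chrome://settings if the feature has UI there.
+  }
+
+  PrefChangeRegistrar registrar_;
+};
+```
+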
+## Examples
+
+Here's a CL that has the basic infrastructure work required to add a policy for
+an already existing preference. It's a good, simple place to get started:
+[http://codereview.chromium.org/8395007](http://codereview.chromium.org/8395007).
+
+## Modifying existing policies
+
+If you are planning to modify an existing policy, please send out a one-pager to
+client- and server-side stakeholders explaining the planned change.
+
+There are a few noteworthy pitfalls that you should be aware of when updating
+code that handles existing policy settings, in particular:
+
+- Make sure the policy metadata is up-to-date, in particular supported_on and
+the feature flags.
+- In general, don’t change policy semantics in a way that is incompatible
+(as determined by user/admin-visible behavior) with previous semantics. **In
+particular, consider that existing policy deployments may affect both old and
+new browser versions, and both should behave according to the admin's
+intentions**.
+- **An important pitfall is that adding an additional allowed
+value to an enum policy may cause compatibility issues.** Specifically, an
+administrator may use the new policy value, which makes older Chrome versions
+that may still be deployed (which don't understand the new value) fall back to
+the default behavior. Carefully consider if this is OK in your case. Usually,
+it is preferred to create a new policy with the additional value and deprecate
+the old one.
+- Don't rely on the cloud policy server for policy migrations because
+this has been proven to be error prone. To the extent possible, all
+compatibility and migration code should be contained in the client.
+- It is OK to expand semantics of policy values as long as the previous policy
+description is compatible with the new behavior (see the "extending enum"
+pitfall above however).
+- It is OK to update feature implementations and the policy
+description when Chrome changes as long as the intended effect of the policy
+remains intact.
+- The process for removing policies is to deprecate them first,
+wait a few releases (if possible) and then drop support for them. Make sure you
+set the deprecated flag if you deprecate a policy.
+
+### Presubmit Checks when Modifying Existing Policies
+
+To enforce the above rules concerning policy modification and ensure no
+backwards incompatible changes are introduced, there will be presubmit checks
+performed on every change to policy_templates.json.
+
+The presubmit checks perform the following verifications:
+
+1. It verifies if a policy is considered **un-released** before allowing a
+ change. A policy is considered un-released if **any** of the following
+ conditions are true:
+
+ 1. The unchanged policy is marked as “future: true”.
+ 2. All the supported_versions of the policy satisfy **any** of the
+ following conditions
+ 1. The unchanged supported major version is >= the current major
+ version stored in the VERSION file at tip of tree. This covers the
+ case of a policy that has just recently been added but has not yet
+ been released to a stable branch.
+ 2. The changed supported version == unchanged supported version + 1 and
+ the changed supported version is equal to the version stored in the
+ VERSION file at tip of tree. This check covers the case of
+ “un-releasing” a policy after a new stable branch has been cut but
+ before a new stable release has rolled out. Normally such a change
+ should eventually be merged into the stable branch before the
+ release.
+
+2. If the policy is considered **un-released**, all changes to it are allowed.
+
+3. However if the policy is not un-released then the following verifications
+ are performed on the delta between the original policy and the changed
+ policy.
+
+ 1. Released policies cannot be removed.
+ 2. Released policies cannot have their type changed (e.g. from bool ->
+ Enum).
+ 3. Released policies cannot have the “future: true” flag added to them. This
+ flag can only be set on a new policy.
+ 4. Released policies can only add additional supported_on versions. They
+ cannot remove or modify existing values for this field except for the
+ special case above for determining if a policy is released. Policy
+ support end version (adding “-xx”) can however be added to the
+ supported_on version to specify that a policy will no longer be
+ supported going forward (as long as the initial supported_on version is
+ not changed).
+ 5. Released policies cannot be renamed (this is the equivalent of a
+ delete + add).
+ 6. Released policies cannot change their device_only flag. This flag can
+ only be set on a new policy.
+ 7. Released policies with non dict types cannot have their schema changed.
+ 1. For enum types this means values cannot be renamed or removed (these
+ should be marked as deprecated instead).
+ 2. For int types, we will allow making the minimum and maximum values
+ less restrictive than the existing values.
+ 3. For string types, we will allow the removal of the ‘pattern’
+ property to allow the validation to be less restrictive.
+ 4. We will allow addition to any list type values only at the end of
+ the list of values and not in the middle or at the beginning (this
+ restriction will cover the list of valid enum values as well).
+ 5. These same restrictions will apply recursively to all property
+ schema definitions listed in a dictionary type policy.
+ 8. Released dict policies cannot remove or modify any existing key in
+ their schema. They can only add new keys to the schema.
+ 1. Dictionary policies can have some of their ‘required’ fields removed
+ in order to be less restrictive.
+
+## Updating Policy List in this Wiki
+
+Steps for updating the policy list on
+[http://dev.chromium.org/administrators/policy-list-3](http://dev.chromium.org/administrators/policy-list-3):
+
+1. Use a recent checkout to build the GN target `policy_templates` with
+ `is_official_build=true` and `is_chrome_branded=true`.
+2. Edit page
+ [http://dev.chromium.org/administrators/policy-list-3](http://dev.chromium.org/administrators/policy-list-3)
+ and select "Edit HTML", therein delete everything except "Last updated for
+ Chrome XX." and set XX to the latest version that has been officially
+ released.
+3. Open
+ `<outdir>/gen/chrome/app/policy/common/html/en-US/chrome_policy_list.html`
+ in a text editor.
+4. Cut&paste everything from the text editor into the wiki.
+5. Add some <p>...</p> to format the paragraphs at the head of the page.
+
+## Updating ADM/ADMX/JSON templates
+
+The
+[ZIP file of ADM/ADMX/JSON templates and documentation](https://dl.google.com/dl/edgedl/chrome/policy/policy_templates.zip)
+is updated upon every push of a new Chrome stable version as part of the release
+process.
+
+## Updating YAPS
+
+Once your CL with your new policy lands, the next proto sync (currently done
+every Tuesday by hendrich@) will pick up the new policy and add it to YAPS. If
+you want to use your unpublished policies with YAPS during development, please
+refer to the "Custom update to the Policy Definitions" in
+(https://sites.google.com/a/google.com/chrome-enterprise-new/faq/using-yaps).
+
+## Updating Admin Console
+
+[See here for instructions](https://docs.google.com/document/d/1QgDTWISgOE8DVwQSSz8x5oKrI3O_qAvOmPryE5DQPcw/edit)
+on adding the policy to Admin Console (Google internal only).
diff --git a/chromium/docs/gpu/gpu_testing.md b/chromium/docs/gpu/gpu_testing.md
index 354b2f556ee..f662463474a 100644
--- a/chromium/docs/gpu/gpu_testing.md
+++ b/chromium/docs/gpu/gpu_testing.md
@@ -112,11 +112,11 @@ of the following tryservers' jobs:
* [linux-rel], formerly on the `tryserver.chromium.linux` waterfall
* [mac-rel], formerly on the `tryserver.chromium.mac` waterfall
-* [win7-rel], formerly on the `tryserver.chromium.win` waterfall
+* [win10_chromium_x64_rel_ng], formerly on the `tryserver.chromium.win` waterfall
-[linux-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux-rel?limit=100
-[mac-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac-rel?limit=100
-[win7-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win7-rel?limit=100
+[linux-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/linux-rel?limit=100
+[mac-rel]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/mac-rel?limit=100
+[win10_chromium_x64_rel_ng]: https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win10_chromium_x64_rel_ng?limit=100
Scan down through the steps looking for the text "GPU"; that identifies those
tests run on the GPU bots. For each test the "trigger" step can be ignored; the
@@ -280,9 +280,9 @@ reason, you can manually pass some flags to force the same behavior:
In order to get around the local run issues, simply pass the `--local-run=1`
flag to the tests. This will disable uploading, but otherwise go through the
-same steps as a test normally would. Each test will also print out a `file://`
-URL to the image it produces and a link to all approved images for that test in
-Gold.
+same steps as a test normally would. Each test will also print out `file://`
+URLs to the produced image, the closest image for the test known to Gold, and
+the diff between the two.
Because the image produced by the test locally is likely slightly different from
any of the approved images in Gold, local test runs are likely to fail during
@@ -377,9 +377,9 @@ Email kbr@ if you try this and find it doesn't work.
See the [Swarming documentation] for instructions on how to upload your binaries to the isolate server and trigger execution on Swarming.
-Be sure to use the correct swarming dimensions for your desired GPU e.g. "1002:6613" instead of "AMD Radeon R7 240 (1002:6613)" which is how it appears on swarming task page. You can query bots in the Chrome-GPU pool to find the correct dimensions:
+Be sure to use the correct swarming dimensions for your desired GPU e.g. "1002:6613" instead of "AMD Radeon R7 240 (1002:6613)" which is how it appears on swarming task page. You can query bots in the chromium.tests.gpu pool to find the correct dimensions:
-* `python tools\swarming_client\swarming.py bots -S chromium-swarm.appspot.com -d pool Chrome-GPU`
+* `python tools\swarming_client\swarming.py bots -S chromium-swarm.appspot.com -d pool chromium.tests.gpu`
[Swarming documentation]: https://www.chromium.org/developers/testing/isolated-testing/for-swes#TOC-Run-a-test-built-locally-on-Swarming
@@ -459,7 +459,7 @@ invoke it via:
[new-isolates]: gpu_testing_bot_details.md#Adding-a-new-isolated-test-to-the-bots
-o## Adding new steps to the GPU Bots
+### Adding new steps to the GPU Bots
The tests that are run by the GPU bots are described by a couple of JSON files
in the Chromium workspace:
diff --git a/chromium/docs/gpu/gpu_testing_bot_details.md b/chromium/docs/gpu/gpu_testing_bot_details.md
index 4d9af081848..4fd9af1dcba 100644
--- a/chromium/docs/gpu/gpu_testing_bot_details.md
+++ b/chromium/docs/gpu/gpu_testing_bot_details.md
@@ -25,11 +25,11 @@ waterfalls, and various tryservers, as described in [Using the GPU Bots].
[Using the GPU Bots]: gpu_testing.md#Using-the-GPU-Bots
All of the physical hardware for the bots lives in the Swarming pool, and most
-of it in the Chrome-GPU Swarming pool. The waterfall bots are simply virtual
-machines which spawn Swarming tasks with the appropriate tags to get them to run
-on the desired GPU and operating system type. So, for example, the [Win10 x64
-Release (NVIDIA)] bot is actually a virtual machine which spawns all of its jobs
-with the Swarming parameters:
+of it in the chromium.tests.gpu Swarming pool. The waterfall bots are simply
+virtual machines which spawn Swarming tasks with the appropriate tags to get
+them to run on the desired GPU and operating system type. So, for example, the
+[Win10 x64 Release (NVIDIA)] bot is actually a virtual machine which spawns all
+of its jobs with the Swarming parameters:
[Win10 x64 Release (NVIDIA)]: https://ci.chromium.org/p/chromium/builders/ci/Win10%20x64%20Release%20%28NVIDIA%29
@@ -37,7 +37,7 @@ with the Swarming parameters:
{
"gpu": "10de:1cb3-23.21.13.8816",
"os": "Windows-10",
- "pool": "Chrome-GPU"
+ "pool": "chromium.tests.gpu"
}
```
@@ -220,7 +220,7 @@ In the [chromium/src] workspace:
In the [infradata/config] workspace (Google internal only, sorry):
* [gpu.star]
- * Defines a `Chrome-GPU` Swarming pool which contains most of the
+ * Defines a `chromium.tests.gpu` Swarming pool which contains most of the
specialized hardware: as of this writing, the Windows and Linux NVIDIA
bots, the Windows AMD bots, and the MacBook Pros with NVIDIA and AMD
GPUs. New GPU hardware should be added to this pool.
@@ -325,10 +325,10 @@ Builder].
to determine the PCI IDs of the GPUs in the bots. (These instructions will
need to be updated for Android bots which don't have PCI buses.)
- 1. Make sure to add these new machines to the Chrome-GPU Swarming pool by
- creating a CL against [gpu.star] in the [infradata/config] (Google
- internal) workspace. Git configure your user.email to @google.com if
- necessary. Here is one [example
+ 1. Make sure to add these new machines to the chromium.tests.gpu Swarming
+ pool by creating a CL against [gpu.star] in the [infradata/config]
+ (Google internal) workspace. Git configure your user.email to
+ @google.com if necessary. Here is one [example
CL](https://chrome-internal-review.googlesource.com/913528) and a
[second
example](https://chrome-internal-review.googlesource.com/1111456).
@@ -346,8 +346,8 @@ Builder].
1. The swarming dimensions are crucial. These must match the GPU and
OS type of the physical hardware in the Swarming pool. This is what
causes the VMs to spawn their tests on the correct hardware. Make
- sure to use the Chrome-GPU pool, and that the new machines were
- specifically added to that pool.
+ sure to use the chromium.tests.gpu pool, and that the new machines
+ were specifically added to that pool.
1. Make triply sure that there are no collisions between the new
hardware you're adding and hardware already in the Swarming pool.
For example, it used to be the case that all of the Windows NVIDIA
diff --git a/chromium/docs/gpu/pixel_wrangling.md b/chromium/docs/gpu/pixel_wrangling.md
index e54c98ff5b7..3c4f4583526 100644
--- a/chromium/docs/gpu/pixel_wrangling.md
+++ b/chromium/docs/gpu/pixel_wrangling.md
@@ -52,8 +52,7 @@ so on. The waterfalls we’re interested in are:
[Chromium GPU]: https://ci.chromium.org/p/chromium/g/chromium.gpu/console?reload=120
[Chromium GPU FYI]: https://ci.chromium.org/p/chromium/g/chromium.gpu.fyi/console?reload=120
[ANGLE tryservers]: https://build.chromium.org/p/tryserver.chromium.angle/waterfall
-<!-- TODO(kainino): update link when the page is migrated -->
-[ANGLE Wrangler]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/angle-wrangling
+[ANGLE Wrangler]: https://chromium.googlesource.com/angle/angle/+/master/infra/ANGLEWrangling.md
## Test Suites
@@ -80,6 +79,8 @@ test the code that is actually shipped. As of this writing, the tests included:
`src/gpu/gles2_conform_support/BUILD.gn`
* `gl_tests`: see `src/gpu/BUILD.gn`
* `gl_unittests`: see `src/ui/gl/BUILD.gn`
+* `rendering_representative_perf_tests` (on the chromium.gpu.fyi waterfall):
+ see `src/chrome/test/BUILD.gn`
And more. See
[`src/testing/buildbot/README.md`](../../testing/buildbot/README.md)
@@ -237,8 +238,9 @@ shift, and a calendar appointment.
by Telemetry, rather than a Gtest harness. The tests and their
expectations are contained in [src/content/test/gpu/gpu_tests/test_expectations] . See
for example <code>[webgl_conformance_expectations.txt]</code>,
- <code>[gpu_process_expectations.txt]</code> and
- <code>[pixel_expectations.txt]</code>.
+ <code>[gpu_process_expectations.txt]</code>,
+ <code>[pixel_expectations.txt]</code> and
+ [rendering_representative_perf_tests].
   1. See the header of the file for a list of modifiers to specify a bot
configuration. It is possible to specify OS (down to a specific
version, say, Windows 7 or Mountain Lion), GPU vendor
@@ -277,6 +279,7 @@ https://ci.chromium.org/p/chromium/builders/luci.chromium.try/win7-rel
[pixel_expectations.txt]: https://chromium.googlesource.com/chromium/src/+/master/content/test/gpu/gpu_tests/test_expectations/pixel_expectations.txt
[stamping out flakiness]: gpu_testing.md#Stamping-out-Flakiness
[gtest-DISABLED]: https://github.com/google/googletest/blob/master/googletest/docs/AdvancedGuide.md#temporarily-disabling-tests
+[rendering_representative_perf_tests]: ../testing/rendering_representative_perf_tests.md#Updating-Expectations
### When Bots Misbehave (SSHing into a bot)
diff --git a/chromium/docs/gwp_asan.md b/chromium/docs/gwp_asan.md
index 90cdc7a5ea8..0751e2e3d97 100644
--- a/chromium/docs/gwp_asan.md
+++ b/chromium/docs/gwp_asan.md
@@ -5,6 +5,11 @@ samples allocations to a debug allocator, similar to ElectricFence or Page Heap,
causing memory errors to crash and report additional debugging context about
the error.
+It is also known by its recursive backronym, GWP-ASan Will Provide Allocation
+Sanity.
+
+To read a more in-depth explanation of GWP-ASan see [this post](https://sites.google.com/a/chromium.org/dev/Home/chromium-security/articles/gwp-asan).
+
## Allocator
The GuardedPageAllocator returns allocations on pages buffered on both sides by
@@ -47,10 +52,9 @@ validate the allocator internals before reasoning about them.
## Status
-GWP-ASan is implemented for malloc and PartitionAlloc, but not for Oilpan or v8,
-on Windows and macOS. It is currently enabled by default for malloc. The
-allocator parameters can be manually modified by using an invocation like the
-following:
+GWP-ASan is implemented for malloc and PartitionAlloc. It is enabled by default
+on Windows and macOS. The allocator parameters can be manually modified by using
+an invocation like the following:
```shell
chrome --enable-features="GwpAsanMalloc<Study" \
@@ -63,8 +67,8 @@ catch newly introduced bugs, and for specific processes depending on the
particular allocator.
A [hotlist of bugs discovered by by GWP-ASan](https://bugs.chromium.org/p/chromium/issues/list?can=1&q=Hotlist%3DGWP-ASan)
-exists, though GWP-ASan crashes are filed without external visibility by
-default.
+exists, though GWP-ASan crashes are filed as Bug-Security (i.e. without
+external visibility) by default.
## Limitations
@@ -81,6 +85,10 @@ default.
- GWP-ASan does not hook PDFium's fork of PartitionAlloc.
- Right-aligned allocations to catch overflows are not perfectly right-aligned,
so small out-of-bounds accesses may be missed.
+- GWP-ASan does not sample some early allocations that occur before field trial
+ initialization.
+- Depending on the platform, GWP-ASan may or may not hook malloc allocations
+ that occur in code not linked directly against Chrome.
## Testing
diff --git a/chromium/docs/how_to_add_your_feature_flag.md b/chromium/docs/how_to_add_your_feature_flag.md
index e59fa5a0c18..9888ced5561 100644
--- a/chromium/docs/how_to_add_your_feature_flag.md
+++ b/chromium/docs/how_to_add_your_feature_flag.md
@@ -14,7 +14,7 @@ For example, if you want to use the flag in src/content, you should add a base::
If you want to use the flag in blink, you should also read
[Runtime Enable Features](https://www.chromium.org/blink/runtime-enabled-features).
-You can refer to [this CL](https://chromium-review.googlesource.com/c/554510/)
+You can refer to [this CL](https://chromium-review.googlesource.com/c/554510/) and [this document](https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/InitializeBlinkFeatures.md)
to see
1. where to add the base::Feature
@@ -22,11 +22,7 @@ to see
[[2](https://chromium-review.googlesource.com/c/554510/8/content/public/common/content_features.h)]
2. how to use it
[[1](https://chromium-review.googlesource.com/c/554510/8/content/common/service_worker/service_worker_utils.cc#153)]
-3. how to wire the base::Feature to WebRuntimeFeatures
-[[1](https://chromium-review.googlesource.com/c/554510/8/content/child/runtime_features.cc)]
-[[2](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/public/platform/web_runtime_features.h)]
-[[3](https://chromium-review.googlesource.com/c/554510/third_party/blink/Source/platform/exported/web_runtime_features.cc)]
-[[4](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/renderer/platform/runtime_enabled_features.json5)]
+3. how to wire your new base::Feature to a blink runtime feature[[1](https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/InitializeBlinkFeatures.md)]
4. how to use it in blink
[[1](https://chromium-review.googlesource.com/c/554510/8/third_party/blink/renderer/core/workers/worker_thread.cc)]
diff --git a/chromium/docs/infra/new_builder.md b/chromium/docs/infra/new_builder.md
index f30c423c949..d486e5317d2 100644
--- a/chromium/docs/infra/new_builder.md
+++ b/chromium/docs/infra/new_builder.md
@@ -322,9 +322,9 @@ reach out to infra-dev@chromium.org or [file a bug][19]!
[5]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests
[6]: /infra/config
[7]: https://luci-config.appspot.com/schemas/projects:cr-buildbucket.cfg
-[8]: /infra/config/cr-buildbucket.cfg
+[8]: /infra/config/generated/cr-buildbucket.cfg
[9]: http://luci-config.appspot.com/schemas/projects:luci-milo.cfg
-[10]: /infra/config/luci-milo.cfg
+[10]: /infra/config/generated/luci-milo.cfg
[11]: https://chromium.googlesource.com/infra/luci/luci-go/+/master/scheduler/appengine/messages/config.proto
[12]: /infra/config/luci-scheduler.cfg
[13]: /tools/mb/README.md
diff --git a/chromium/docs/infra/using_led.md b/chromium/docs/infra/using_led.md
new file mode 100644
index 00000000000..a5617c182f2
--- /dev/null
+++ b/chromium/docs/infra/using_led.md
@@ -0,0 +1,77 @@
+# Using LED
+
+LED is an infrastructure tool used to manually trigger builds on any builder
+running on LUCI. It's designed to help debug build failures or experiment with
+new builder changes. This doc describes how to use it with Chromium's builders.
+
+[TOC]
+
+## When to use it
+
+Use cases include, but are not limited to, the following:
+* **Testing a recipe change**: Much of the code in the following repos
+defines what a builder runs and how it runs it (also known as the builder's
+"recipe"): [recipes-py][1], [depot_tools][2], and [tools/build][3]. Changes to
+these repos can be tested on any builder via LED before being submitted.
+* **Debugging a waterfall failure**: If a waterfall builder (that is, *not* a
+trybot) is exhibiting frequent or strange failures that can't be reproduced
+locally, LED can be used to retrigger any given build for debugging.
+
+## When *not* to use it
+
+Certain types of changes to a trybot (this includes all builders on the CQ)
+can be sufficiently tested without the use of LED. This includes changes to a
+trybot's:
+* **GN args**: A trybot's build args are configured via
+[mb_config.pyl][4].
+* **tests**: The list of tests a trybot runs is set via the \*.pyl files in
+[//testing/buildbot/][5]. (Some trybots may not be present in
+those files. Instead, change the waterfall builders they mirror. This mapping is
+configured in tools/build's [trybots.py][6].)
+
+Simply edit the needed files in a local chromium/src checkout, upload the change
+to Gerrit, then select the affected trybot(s) via the "select tryjobs" menu.
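+
+For example, assuming the edits are already committed locally, the equivalent
+command-line flow with depot_tools looks roughly like this (the builder name
+is illustrative):
+
+```shell
+# Upload the change to Gerrit, then request a tryjob on the affected trybot.
+git cl upload
+git cl try -b linux-rel
+```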
+
+## How to use it
+
+Provided that a local depot_tools checkout is present on $PATH, LED can be
+used by simply invoking `led` on the command line. A common use-case for LED is
+to modify the build steps of any given builder. The process for doing this is
+outlined below. (However, LED can be used for many other purposes. See the full
+list of features via `led help`.)
+
+1. Select a builder whose builds you'd like to reproduce. (Example:
+[linux-rel][7])
+2. Record its full builder name, along with its bucket. (The bucket name is
+present in the URL of the builder page, and is very likely "chromium/ci".)
+3. Check out the [tools/build][3] repo (if not already present) and navigate to
+the [chromium][8] and/or [chromium_tests][9] recipe modules. These, along with
+the other recipe_modules located in tools/build, are how the majority of a
+Chromium builder's recipe is defined.
+4. Make the desired recipe change. (Consider running local recipe unittests
+before proceeding by running `recipes.py test train` via the [recipes.py][10]
+script.)
+5. Launch a build with the given recipe change. This can be done with a single
+chained LED invocation, eg:
+`led get-builder chromium/ci:linux-rel | led edit-recipe-bundle | led launch`
+6. The LED invocation above will print out a link to the build that was
+launched. Repeat steps 4 & 5 until the triggered builds behave as expected
+with the new recipe change.
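+
+LED can also be used to test an uncommitted chromium/src CL on a trybot rather
+than a recipe change. A sketch, assuming the `edit-cr-cl` subcommand and using
+an illustrative bucket, builder, and CL URL:
+
+```shell
+led get-builder chromium/try:linux-rel |
+    led edit-cr-cl 'https://chromium-review.googlesource.com/c/chromium/src/+/<CL number>' |
+    led launch
+```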
+
+## Questions? Feedback?
+
+If you're in need of further assistance, if you're not sure about
+one or more steps, or if you found this documentation lacking, please
+reach out to infra-dev@chromium.org or [file a bug][11]!
+
+[1]: https://chromium.googlesource.com/infra/luci/recipes-py/
+[2]: https://chromium.googlesource.com/chromium/tools/depot_tools/
+[3]: https://chromium.googlesource.com/chromium/tools/build/
+[4]: /tools/mb/mb_config.pyl
+[5]: /testing/buildbot/
+[6]: https://chromium.googlesource.com/chromium/tools/build/+/HEAD/scripts/slave/recipe_modules/chromium_tests/trybots.py
+[7]: https://ci.chromium.org/p/chromium/builders/ci/linux-rel
+[8]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium/api.py
+[9]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipe_modules/chromium_tests/api.py
+[10]: https://chromium.googlesource.com/chromium/tools/build/+/master/scripts/slave/recipes.py
+[11]: https://g.co/bugatrooper
diff --git a/chromium/docs/initialize_blink_features.md b/chromium/docs/initialize_blink_features.md
new file mode 100644
index 00000000000..16140d2eb67
--- /dev/null
+++ b/chromium/docs/initialize_blink_features.md
@@ -0,0 +1,80 @@
+# Initialization of Blink runtime features in content layer
+This document outlines how to initialize your Blink runtime features in the
+Chromium content layer, more specifically in
+[content/child/runtime_features.cc][runtime_features]. To learn more on how to
+set up features in blink, see
+[Runtime Enabled Features][RuntimeEnabledFeatures].
+
+## Step 1: Do you need a custom Blink feature enabler function?
+If you simply need to enable or disable the Blink feature, you can use
+[WebRuntimeFeatures::EnableFeatureFromString()][EnableFeatureFromString].
+
+However, if there are side effects (e.g. you need to disable other features if
+this feature is also disabled), you should declare a custom enabler function in
+- [third_party/blink/public/platform/web_runtime_features.h][WebRuntimeFeatures.h]
+- [third_party/blink/public/platform/web_runtime_features.cc][WebRuntimeFeatures.cc]
+
+## Step 2: Determine how your feature is initialized.
+### 1) Depends on OS-specific Macros:
+Add your code for controlling the Blink feature in
+[SetRuntimeFeatureDefaultsForPlatform()][SetRuntimeFeatureDefaultsForPlatform]
+using the appropriate OS macros.
+### 2) Depends on the status of a base::Feature:
+Add your code to the function
+[SetRuntimeFeaturesFromChromiumFeatures()][SetRuntimeFeaturesFromChromiumFeatures].
+
+If your Blink feature has a custom enabler function, add a new entry to
+`blinkFeatureToBaseFeatureMapping`. For example, a new entry like this:
+```
+{wf::EnableNewFeatureX, features::kNewFeatureX, kEnableOnly},
+```
+will call `wf::EnableNewFeatureX` to enable it only if `features::kNewFeatureX`
+is enabled.
+
+If your Blink feature does not have a custom enabler function, you need to add
+the entry to `runtimeFeatureNameToChromiumFeatureMapping`. For example, a new
+entry like this:
+```
+{"NewFeatureY", features::kNewFeatureY, kUseFeatureState},
+```
+will call `wf::EnableFeatureFromString` with your feature name to set it to
+whichever state your `features::kNewFeatureY` is in.
+
+For a more detailed explanation of the available options, read the comment on the enum
+[RuntimeFeatureEnableOptions][EnableOptions].
+### 3) Set by a command line switch to enable or disable:
+Add your code to the function
+[SetRuntimeFeaturesFromCommandLine()][SetRuntimeFeaturesFromCommandLine].
+
+If your Blink feature has a custom enabler function, add a new entry to
+`switchToFeatureMapping`. For example, a new entry like this:
+```
+{wf::EnableNewFeatureX, switches::kNewFeatureX, false},
+```
+will call `wf::EnableNewFeatureX` to disable the feature only if
+`switches::kNewFeatureX` is present on the command line.
+
+### 4) Controlled by parameters from a field trial:
+Add your code to the function
+[SetRuntimeFeaturesFromFieldTrialParams()][SetRuntimeFeaturesFromFieldTrialParams].
+
+### 5) Combination of the previous options or not covered:
+For example, your Blink feature could be controlled by both a base::Feature and
+a command line switch. In this case, your custom logic should live in
+[`SetCustomizedRuntimeFeaturesFromCombinedArgs()`][SetCustomizedRuntimeFeaturesFromCombinedArgs].
+
+
+[EnableOptions]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#135>
+[runtime_features]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc>
+[RuntimeEnabledFeatures]:
+<https://chromium.googlesource.com/chromium/src/+/HEAD/third_party/blink/renderer/platform/RuntimeEnabledFeatures.md>
+[WebRuntimeFeatures.h]:
+<https://chromium.googlesource.com/chromium/src/+/HEAD/third_party/blink/renderer/platform/exported/web_runtime_features.h>
+[WebRuntimeFeatures.cc]:
+<https://chromium.googlesource.com/chromium/src/+/HEAD/third_party/blink/renderer/platform/exported/web_runtime_features.cc>
+[EnableFeatureFromString]:<https://chromium.googlesource.com/chromium/src/+/HEAD/third_party/blink/public/platform/web_runtime_features.h#56>
+[SetRuntimeFeatureDefaultsForPlatform]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#46>
+[SetCustomizedRuntimeFeaturesFromCombinedArgs]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#487>
+[SetRuntimeFeaturesFromChromiumFeatures]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#160>
+[SetRuntimeFeaturesFromCommandLine]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#390>
+[SetRuntimeFeaturesFromFieldTrialParams]:<https://chromium.googlesource.com/chromium/src/+/HEAD/content/child/runtime_features.cc#448>
\ No newline at end of file
diff --git a/chromium/docs/jumbo.md b/chromium/docs/jumbo.md
index c2426e3d398..347f1dbc854 100644
--- a/chromium/docs/jumbo.md
+++ b/chromium/docs/jumbo.md
@@ -1,116 +1,12 @@
# Jumbo / Unity builds
-To improve compilation times it is possible to use "unity builds",
-called Jumbo builds, in Chromium. The idea is to merge many
-translation units ("source files") and compile them together. Since a
-large portion of Chromium's code is in shared header files that
-dramatically reduces the total amount of work needed.
+[Jumbo / Unity builds are no longer supported in Chromium](https://crbug.com/994387).
-## Build instructions
+They were a mechanism for speeding up local builds by combining multiple
+files into a single compilation unit.
-If jumbo isn't already enabled, you enable it in `gn` by setting
-`use_jumbo_build = true` then compile as normal.
+We are still in the process of cleaning up the build files to remove
+references to the jumbo templates.
-## Implementation
-
-Jumbo is currently implemented as a combined `gn` template and a
-python script. Eventually it may become a native `gn` feature. By
-(indirectly) using the template `internal_jumbo_target`, each target
-will split into one action to "merge" the files and one action to
-compile the merged files and any files left outside the merge.
-
-Template file: `//build/config/jumbo.gni`
-Merge script: `//build/config/merge_for_jumbo.py`
-
-### Merge
-
-The "merge" is currently done by creating wrapper files that `#include` the
-source files.
-
-## Jumbo Pros and Cons
-
-### Pros
-
-* Everything compiles significantly faster. When fully enabled
- everywhere this can save hours for a full build (binaries and tests)
- on a moderate computer. Linking is faster because there is less
- redundant data (debug information, inline functions) to merge.
-* Certain code bugs can be statically detected by the compiler when it
- sees more/all the relevant source code.
-
-### Cons
-
-* By merging many files, symbols that have internal linkage in
- different `cc` files can collide and cause compilation errors.
-* The smallest possible compilation unit grows which can add
- 10-20 seconds to some single file recompilations (though link
- times often shrink).
-
-### Mixed blessing
-* Slightly different compiler warnings will be active.
-
-## Tuning
-
-By default on average `50`, or `8` when using goma, files are merged at a
-time. The more files that are are merged, the less total CPU time is
-needed, but parallelism is reduced. This number can be changed by
-setting `jumbo_file_merge_limit`.
-
-## Naming
-
-The term jumbo is used to avoid the confusion resulting from talking
-about unity builds since unity is also the name of a graphical
-environment, a 3D engine, a webaudio filter and part of the QUIC
-congestion control code. Jumbo has been used as name for a unity build
-system in another browser engine.
-
-## Want to make your favourite piece of code jumbo?
-
-1. Add `import("//build/config/jumbo.gni")` to `BUILD.gn`.
-2. Change your target, for instance `static_library`, to
- `jumbo_static_library`. So far `source_set`, `component`,
- `static_library` are supported.
-3. Recompile and test.
-
-### Example
-Change from:
-
- source_set("foothing") {
- sources = [
- "foothing.cc"
- "fooutil.cc"
- "fooutil.h"
- ]
- }
-to:
-
- import("//build/config/jumbo.gni") # ADDED LINE
- jumbo_source_set("foothing") { # CHANGED LINE
- sources = [
- "foothing.cc"
- "fooutil.cc"
- "fooutil.h"
- ]
- }
-
-
-If you see some compilation errors about colliding symbols, resolve
-those by renaming symbols or removing duplicate code. If it's
-impractical to change the code, add a `jumbo_excluded_sources`
-variable to your target in `BUILD.gn`:
-
-`jumbo_excluded_sources = [ "problematic_file.cc" ]`
-
-## More information and pictures
-There are more information and pictures in a
-[Google Document](https://docs.google.com/document/d/19jGsZxh7DX8jkAKbL1nYBa5rcByUL2EeidnYsoXfsYQ)
-
-## Mailing List
-Public discussions happen on the generic blink-dev and chromium-dev
-mailing lists.
-
-https://groups.google.com/a/chromium.org/group/chromium-dev/topics
-
-## Bugs / feature requests
-Related bugs use the label `jumbo` in the bug database.
-See [the open bugs](http://code.google.com/p/chromium/issues/list?q=label:jumbo).
+**[TODO(crbug.com/994387)](https://crbug.com/994387)**: Remove this page itself
+when all references to jumbo have been removed from the build files.
diff --git a/chromium/docs/linux_build_instructions.md b/chromium/docs/linux_build_instructions.md
index d4324866883..c70a0c34548 100644
--- a/chromium/docs/linux_build_instructions.md
+++ b/chromium/docs/linux_build_instructions.md
@@ -153,15 +153,6 @@ use_goma=true
goma_dir=/path/to/goma-client
```
-#### Jumbo/Unity builds
-
-Jumbo builds merge many translation units ("source files") and compile them
-together. Since a large portion of Chromium's code is in shared header files,
-this dramatically reduces the total amount of work needed. Check out the
-[Jumbo / Unity builds](jumbo.md) for more information.
-
-Enable jumbo builds by setting the GN arg `use_jumbo_build=true`.
-
#### Disable NaCl
By default, the build includes support for
diff --git a/chromium/docs/login/user_types.md b/chromium/docs/login/user_types.md
index 230b414c4f5..3e5f8c999b6 100644
--- a/chromium/docs/login/user_types.md
+++ b/chromium/docs/login/user_types.md
@@ -13,7 +13,6 @@ enterprise user types, see
Regular users that were registered using their GAIA account.
-
## Child users
Users that logged in using
@@ -52,6 +51,8 @@ unnecessary time to the test runtime. To avoid this, tests should:
* If the test user is logged in using `LoginManagerMixin`, the injected
`UserContext` has to have the refresh token matching the token passed to
`FakeGaiaMixin`.
+* Note that `LoggedInUserMixin` is a compound helper mixin that conveniently
+ packages the mixins mentioned above into an easy-to-use interface.
## Guest
@@ -62,7 +63,7 @@ persisted after the guest session ends.
To test guest session state, use `GuestSessionMixin` - this will set up
appropriate guest session flags.
-Testing guest user login is more complicated, as guest login required Chrome
+Testing guest user login is more complicated, as guest login requires Chrome
restart. The test will require two parts:
* `PRE_BrowserTest` test that requests login
* `BrowserTest` that can test guest session state
@@ -70,4 +71,3 @@ restart. The test will require two parts:
To properly set up and preserve Chrome flags between sessions runs, use
`LoginManagerMixin`, and set it up using
`LoginManagerMixin::set_session_restore_enabled()`
-
diff --git a/chromium/docs/mac/triage.md b/chromium/docs/mac/triage.md
new file mode 100644
index 00000000000..4302aeca0c1
--- /dev/null
+++ b/chromium/docs/mac/triage.md
@@ -0,0 +1,160 @@
+# Mac Team Triage Process
+
+This document outlines how the Mac team triages bugs. Triage is the process of
+converting raw bug reports filed by developers or users into actionable,
+prioritized tasks assigned to engineers.
+
+The Mac bug triage process is split into two phases. First-phase triage is done
+daily in week-long shifts by a single person. Second-phase triage is done in a
+standing meeting at the end of the work week by three people. Each week, these
+three people are:
+
+* The primary oncall, who does both phases
+* The secondary oncall, who will become primary in the following week, and who
+ is in the triage meeting so they'll be aware of ongoing themes
+* The TL, who is currently ellyjones@
+
+A key tool of the triage process is the "Mac" label (*not* the same as the Mac
+OS tag), which makes bugs visible to the triaging step of the process. This
+process deliberately doesn't look at bugs with OS=Mac status:Untriaged, because
+maintaining the list of components that can be ignored during that triage step
+is untenable.
+
+## Quick Reference
+
+1. During the week, turn [OS=Mac status:Unconfirmed][unconfirmed] bugs into
+ [label:Mac status:Untriaged][untriaged-m] bugs.
+2. During the triage meeting, turn [label:Mac status:Untriaged][untriaged-m]
+ and [unlabelled Mac bugs][untriaged-c] into any of:
+ * [label:Mac status:Available][available]
+ * [label:Mac status:Assigned][assigned]
+ * Untriaged in any component that does triage, without the Mac label
+ * Assigned
+
+## First-phase triage
+
+First-phase triage is the step which ensures the symptoms and reproduction steps
+of a bug are well-understood. This phase operates on [OS=Mac
+status:Unconfirmed][unconfirmed] bugs, and moves these bugs to:
+
+* Needs-Feedback, if awaiting a response from the user
+* Untriaged bugs with the Mac label, if they are valid bug reports with working
+ repro steps or a crash stack
+* WontFix, if they are invalid bug reports or working as intended
+* Duplicate, if they are identical to an existing bug
+
+The main work of this phase is iterating with the bug reporter to get crash IDs,
+repro steps, traces, and other data we might need to nail down the bug. If the
+bug is obviously very domain-specific (e.g. "this advanced CSS feature is
+behaving strangely", or "my printer is printing everything upside down"), feel
+free to skip this iteration step and send the bug straight to the involved team
+or people. Useful tags at this step are:
+
+* Needs-Feedback, which marks the bug as waiting for a response from the
+ reporter
+* Needs-TestConfirmation, which requests that Test Engineering attempt the bug's
+ repro steps
+* Needs-Bisect, which requests that Test Engineering bisect the bug down to a
+ first bad release
+
+The latter two tags work much better when there are reliable repro steps for a
+bug, so endeavour to get those first - TE time is precious and we should make
+good use of it.
+
+We wait **30 days** for user feedback on Needs-Feedback bugs; after 30 days
+without a response to a question we move bugs to WontFix.
+
+Some useful debugging questions here:
+
+* What are your exact OS version and Chrome version?
+* Does it happen all the time?
+* Does it happen in Incognito? (this checks for bad cached data, cookies, etc)
+* Does it happen with extensions disabled?
+* Does it happen in a new profile?
+* Does it happen in a new user-data-dir?
+* If it's a web bug, is there a reduced test case? We generally can't act on "my
+ website is broken" type issues
+* Can you attach a screenshot/screen recording of what you mean?
+* Can you paste the crash IDs from chrome://crashes?
+* Can you get a sample of the misbehaving process with Activity Monitor?
+* Can you upload a trace from chrome://tracing?
+* Can you paste the contents of chrome://gpu?
+* Can you paste the contents of chrome://version?
+
+## Second-phase triage
+
+Second-phase triage is the step which either moves a bug to another team's
+triage queue, or assigns a priority, component, and (possibly) owner to a bug.
+This phase operates on [label:Mac status:Untriaged][untriaged-m] and [untagged
+status:Untriaged][untriaged-c] bugs. The first part of this phase is deciding
+whether a bug should be worked on by the Mac team. If so, the bug moves to one
+of:
+
+* Pri=2,3 in label:Mac, Assigned with an owner if one is obvious, Available
+ otherwise
+* Pri=0,1 in label:Mac, Assigned with an owner
+
+Otherwise, the bug loses label:Mac and moves to one of:
+
+* Untriaged in a different component
+* Assigned with an owner
+* WontFix
+* Duplicate
+
+Here are some rules of thumb for how to move bugs from label:Mac
+status:Untriaged to another component:
+
+* Is the bug Mac-only, or does it affect other platforms? If it affects other
+ platforms as well, it's probably out of scope for us and should go into
+ another component.
+* Is the bug probably in Blink? If so, it should be handled by the Blink
+ team's Mac folks; move to component `Blink`.
+* Is the bug localized to a specific feature, like the omnibox or the autofill
+ system? If so, it should be handled by that team; tag it with their component
+ for triage.
+* Is the bug a Views bug, even if it's Mac-specific? If so, it should be handled
+ by the Views team; mark it as `Internals>Views`.
+
+If the bug is Mac-specific and in scope for the Mac team, try to:
+
+* Assign it to a sublabel of `Mac`
+* Assign it a priority:
+ * Pri=0 means "this is an emergency, work on it immediately"
+ * Pri=1 means "we should not ship a stable release with this bug if we can
+ help it"
+ * Pri=2 means "we should probably fix this" - this is the default bug
+ priority
+ * Pri=3 means "it would be nice if we fixed this some day"
+* Maybe assign it an owner if needed - Pri=0 or 1 need one, Pri=2 or 3 can have
+ one if the owner is obvious but don't need one:
+ * `Mac-Accessibility`: ellyjones@ or lgrey@
+ * `Mac-Enterprise`: avi@
+ * `Mac-Graphics`: ccameron@
+ * `Mac-Infra`: ellyjones@
+ * `Mac-Performance`: lgrey@ or sdy@
+ * `Mac-PlatformIntegration`: sdy@
+ * `Mac-Polish`: sdy@
+ * `Mac-TechDebt`: ellyjones@
+ * `Mac-UI`: anyone
+
+**Caveat lector**: If you are outside the Mac team please do not use this
+assignment map - just mark bugs as Untriaged with label `Mac` and allow the Mac
+triage rotation to assign them. People go on vacation and such :)
+
+These are the other components we put bugs into that we assume have their own
+triage processes:
+* Admin
+* Blink
+* Infra
+* Internals>Headless, Network, Plugins, Printing, Skia, Views
+* IO>Bluetooth, USB
+* Platform
+* Services>Chromoting
+* Test>Telemetry
+* UI>Browser>WebUI
+
+[unconfirmed]: https://bugs.chromium.org/p/chromium/issues/list?q=OS%3DMac%20status%3AUnconfirmed%20-component%3ABlink%2CEnterprise%2CInternals%3ENetwork%2CPlatform%3EDevtools%2CServices%3ESync&can=2
+[untriaged-m]: https://bugs.chromium.org/p/chromium/issues/list?q=has%3AMac%20status%3AUntriaged&can=2
+[untriaged-c]: https://bugs.chromium.org/p/chromium/issues/list?q=OS%3DMac%20-OS%3DWindows%2CLinux%2CChrome%2CAndroid%2CiOS%20status%3AUntriaged%20-component%3AAdmin%2CBlink%2CInfra%2CInternals%3EHeadless%2CInternals%3ENetwork%2CInternals%3EPlugins%3EPDF%2CInternals%3EPrinting%2CInternals%3ESkia%2CInternals%3EViews%2CIO%3EBluetooth%2CIO%3EUSB%2CPlatform%2CServices%3EChromoting%2CTest%3ETelemetry%2CUI%3EBrowser%3EWebUI&can=2
+[available]: https://bugs.chromium.org/p/chromium/issues/list?q=has%3AMac%20status%3AAvailable&can=2
+[assigned]: https://bugs.chromium.org/p/chromium/issues/list?q=has%3AMac%20status%3AAssigned&can=2
diff --git a/chromium/docs/mac_build_instructions.md b/chromium/docs/mac_build_instructions.md
index 49c09f79a87..389a1fe8a5d 100644
--- a/chromium/docs/mac_build_instructions.md
+++ b/chromium/docs/mac_build_instructions.md
@@ -140,15 +140,6 @@ in your args.gn to disable debug symbols altogether. This makes both full
rebuilds and linking faster (at the cost of not getting symbolized backtraces
in gdb).
-#### Jumbo/Unity builds
-
-Jumbo builds merge many translation units ("source files") and compile them
-together. Since a large portion of Chromium's code is in shared header files,
-this dramatically reduces the total amount of work needed. Check out the
-[Jumbo / Unity builds](jumbo.md) for more information.
-
-Enable jumbo builds by setting the GN arg `use_jumbo_build=true`.
-
#### CCache
You might also want to [install ccache](ccache_mac.md) to speed up the build.
diff --git a/chromium/docs/media/gpu/vdatest_usage.md b/chromium/docs/media/gpu/vdatest_usage.md
deleted file mode 100644
index ce5cb196e35..00000000000
--- a/chromium/docs/media/gpu/vdatest_usage.md
+++ /dev/null
@@ -1,165 +0,0 @@
-# Using the Video Decode/Encode Accelerator Unittests Manually
-
-VDAtest (or `video_decode_accelerator_unittest`) and VEAtest (or
-`video_encode_accelerator_unittest`) are unit tests that embeds the Chrome video
-decoding/encoding stack without requiring the whole browser, meaning they can
-work in a headless environment. They includes a variety of tests to validate the
-decoding and encoding stacks with h264, vp8 and vp9.
-
-Running these tests manually can be very useful when bringing up a new codec, or
-in order to make sure that new code does not break hardware decoding and/or
-encoding. This document is a walk though the prerequisites for running these
-programs, as well as their most common options.
-
-## Prerequisites
-
-The required kernel drivers should be loaded, and there should exist a
-`/dev/video-dec0` symbolic link pointing to the decoder device node (e.g.
-`/dev/video-dec0` → `/dev/video0`). Similarly, a `/dev/video-enc0` symbolic
-link should point to the encoder device node.
-
-The unittests can be built by specifying the `video_decode_accelerator_unittest`
-and `video_encode_accelerator_unittest` targets to `ninja`. If you are building
-for an ARM board that is not yet supported by the
-[simplechrome](https://chromium.googlesource.com/chromiumos/docs/+/master/simple_chrome_workflow.md)
-workflow, use `arm-generic` as the board. It should work across all ARM targets.
-
-For unlisted Intel boards, any other Intel target (preferably with the same
-chipset) should be usable with libva. AMD targets can use `amd64-generic`.
-
-## Basic VDA usage
-
-The `media/test/data` folder in Chromium's source tree contains files with
-encoded video data (`test-25fps.h264`, `test-25fps.vp8` and `test-25fps.vp9`).
-Each of these files also has a `.md5` counterpart, which contains the md5
-checksums of valid thumbnails.
-
-Running the VDAtest can be done as follows:
-
- ./video_decode_accelerator_unittest --disable_rendering --single-process-tests --test_video_data=test_video
-
-Where test_video is of the form
-
- filename:width:height:numframes:numfragments:minFPSwithRender:minFPSnoRender:profile
-
-The correct value of test_video for each test file follows:
-
-* __H264__: `test-25fps.h264:320:240:250:258:35:150:1`
-* __VP8__: `test-25fps.vp8:320:240:250:250:35:150:11`
-* __VP9__: `test-25fps.vp9:320:240:250:250:35:150:12`
-
-So in order to run all h264 tests, one would invoke
-
- ./video_decode_accelerator_unittest --disable_rendering --single-process-tests --test_video_data=test-25fps.h264:320:240:250:258:35:150:1
-
-## Test filtering options
-
-`./video_decode_accelerator_unittest --help` will list all valid options.
-
-The list of available tests can be retrieved using the `--gtest_list_tests`
-option.
-
-By default, all tests are run, which can be a bit too much, especially when
-bringing up a new codec. The `--gtest_filter` option can be used to specify a
-pattern of test names to run. For instance, to only run the
-`TestDecodeTimeMedian` test, one can specify
-`--gtest_filter="*TestDecodeTimeMedian*"`.
-
-So the complete command line to test vp9 decoding with the
-`TestDecodeTimeMedian` test only (a good starting point for bringup) would be
-
- ./video_decode_accelerator_unittest --disable_rendering --single-process-tests --test_video_data=test-25fps.vp9:320:240:250:250:35:150:12 --gtest_filter="*TestDecodeTimeMedian*"
-
-## Verbosity options
-
-The `--vmodule` options allows to specify a set of source files that should be
-more verbose about what they are doing. For basic usage, a useful set of vmodule
-options could be:
-
- --vmodule=*/media/gpu/*=4
-
-## Testing performance
-
-Use the `--disable_rendering --rendering_fps=0 --gtest_filter="DecodeVariations/*/0"`
-options to max the decoder output and measure its performance.
-
-## Testing parallel decoding
-
-Use `--gtest_filter="ResourceExhaustion*/0"` to run 3 decoders in parallel, and
-`--gtest_filter="ResourceExhaustion*/1"` to run 4 decoders in parallel.
-
-## Wrap-up
-
-Using all these options together, we can invoke VDAtest in the following way for
-a verbose H264 decoding test:
-
- ./video_decode_accelerator_unittest --single-process-tests --disable_rendering --gtest_filter="*TestDecodeTimeMedian*" --vmodule=*/media/gpu/*=4 --test_video_data=test-25fps.h264:320:240:250:258:35:150:1
-
-## Import mode
-
-There are two modes in which VDA runs, ALLOCATE and IMPORT. In ALLOCATE mode,
-the video decoder is responsible for allocating the buffers containing the
-decoded frames itself. In IMPORT mode, the buffers are allocated by the client
-and provided to the decoder during decoding. ALLOCATE mode is used during
-playback within Chrome (e.g. HTML5 videos), while IMPORT mode is used by ARC++
-when Android applications require accelerated decoding.\\
-VDAtest runs VDA in ALLOCATE mode by default. Use `--test_import` to run VDA in
-IMPORT mode. VDA cannot run in IMPORT mode on platforms too old for ARC++ to be
-enabled.
-
-## (Recommended) Frame validator
-
-Use `--frame_validator=check` to verify the correctness of frames decoded by
-VideoDecodeAccelerator in all test cases. This validator is based on the fact
-that a decoded content is deterministic in H.264, VP8 and VP9. It reads the
-expected md5 value of each frame from `*.frames.md5`, for example, `test-25fps.h264.frames.md5`
-for `test-25fps.h264`.\\
-VDATest is able to read the memory of a decoded frame only if VDA runs in IMPORT
-mode. Therefore, if `--frame_validator=check` is specified, VDATest runs as if
-`--test_import` is specified. See [Import mode](#import-mode) about IMPORT mode.
-
-### Dump mode
-
-Use `--frame_validator=dump` to write down all the decoded frames. The output
-format will be I420 and the saved file name will be `frame_%{frame-num}_%{width}x%{height}_I420.yuv`
-in the specified directory or a directory whose name is the test file + `.frames`
-if unspecified. Here, width and height are visible width and height. For
-instance, they will be `test-25fps.h264.frames/frame_%{frame-num}_320x180_I420.yuv.`
-
-### How to generate md5 values of decoded frames for a new video stream
-
-It is necessary to generate md5 values of decoded frames for new test streams.
-ffmpeg with `-f framemd5` can be used for this purpose. For instance,
-`ffmpeg -i test-25fps.h264 -f framemd5 test-25fps.frames.md5`
-
-## Basic VEA usage
-
-The VEA works in a similar fashion to the VDA, taking raw YUV files in I420
-format as input and producing e.g. a H.264 Annex-B byte stream. Sample raw YUV
-files can be found at the following locations:
-
-* [1080 Crowd YUV](http://commondatastorage.googleapis.com/chromiumos-test-assets-public/crowd/crowd1080-96f60dd6ff87ba8b129301a0f36efc58.yuv)
-* [320x180 Bear YUV](http://commondatastorage.googleapis.com/chromiumos-test-assets-public/bear/bear-320x180-c60a86c52ba93fa7c5ae4bb3156dfc2a.yuv)
-
-It is recommended to rename these files after downloading them to e.g.
-`crowd1080.yuv` and `bear-320x180.yuv`.
-
-The VEA can then be tested as follows:
-
- ./video_encode_accelerator_unittest --single-process-tests --disable_flush --gtest_filter=SimpleEncode/VideoEncodeAcceleratorTest.TestSimpleEncode/0 --test_stream_data=bear-320x180.yuv:320:180:1:bear.mp4:100000:30
-
-for the `bear` file, and
-
- ./video_encode_accelerator_unittest --single-process-tests --disable_flush --gtest_filter=SimpleEncode/VideoEncodeAcceleratorTest.TestSimpleEncode/0 --test_stream_data=crowd1080.yuv:1920:1080:1:crowd.mp4:4000000:30
-
-for the larger `crowd` file. These commands will put the encoded output into
-`bear.mp4` and `crowd.mp4` respectively. They can then be copied on the host and
-played with `mplayer -fps 25`.
-
-## Source code
-
-The VDAtest's source code can be consulted here: [https://cs.chromium.org/chromium/src/media/gpu/video_decode_accelerator_unittest.cc](https://cs.chromium.org/chromium/src/media/gpu/video_decode_accelerator_unittest.cc).
-
-V4L2 support: [https://cs.chromium.org/chromium/src/media/gpu/v4l2/](https://cs.chromium.org/chromium/src/media/gpu/v4l2/).
-
-VAAPI support: [https://cs.chromium.org/chromium/src/media/gpu/vaapi/](https://cs.chromium.org/chromium/src/media/gpu/vaapi/).
diff --git a/chromium/docs/media/gpu/veatest_usage.md b/chromium/docs/media/gpu/veatest_usage.md
new file mode 100644
index 00000000000..464db23f412
--- /dev/null
+++ b/chromium/docs/media/gpu/veatest_usage.md
@@ -0,0 +1,77 @@
+# Using the Video Encode Accelerator Unittests Manually
+
+The VEAtest (or `video_encode_accelerator_unittest`) is a set of unit tests that
+embeds the Chrome video encoding stack without requiring the whole browser,
+meaning they can work in a headless environment. It includes a variety of tests
+to validate the encoding stack with h264, vp8 and vp9.
+
+Running this test manually can be very useful when bringing up a new codec, or
+in order to make sure that new code does not break hardware encoding. This
+document is a walk though the prerequisites for running this program, as well
+as the most common options.
+
+## Prerequisites
+
+The required kernel drivers should be loaded, and there should exist a
+`/dev/video-enc0` symbolic link pointing to the encoder device node (e.g.
+`/dev/video-enc0` → `/dev/video0`).
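+
+A quick way to confirm the symlink is present on the device (plain shell, not
+part of the test itself):
+
+    ls -l /dev/video-enc0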
+
+The unittests can be built by specifying the `video_encode_accelerator_unittest`
+target to `ninja`. If you are building for an ARM board that is not yet
+supported by the
+[simplechrome](https://chromium.googlesource.com/chromiumos/docs/+/master/simple_chrome_workflow.md)
+workflow, use `arm-generic` as the board. It should work across all ARM targets.
+
+For unlisted Intel boards, any other Intel target (preferably with the same
+chipset) should be usable with libva. AMD targets can use `amd64-generic`.
+
+## Basic VEA usage
+
+The VEA test takes raw YUV files in I420 format as input and produces e.g. an
+H.264 Annex-B byte stream. Sample raw YUV files can be found at the following
+locations:
+
+* [1080 Crowd YUV](http://commondatastorage.googleapis.com/chromiumos-test-assets-public/crowd/crowd1080-96f60dd6ff87ba8b129301a0f36efc58.yuv)
+* [320x180 Bear YUV](http://commondatastorage.googleapis.com/chromiumos-test-assets-public/bear/bear-320x180-c60a86c52ba93fa7c5ae4bb3156dfc2a.yuv)
+
+It is recommended to rename these files after downloading them to e.g.
+`crowd1080.yuv` and `bear-320x180.yuv`.
+
+The VEA can then be tested as follows:
+
+ ./video_encode_accelerator_unittest --single-process-tests --disable_flush --gtest_filter=SimpleEncode/VideoEncodeAcceleratorTest.TestSimpleEncode/0 --test_stream_data=bear-320x180.yuv:320:180:1:bear.mp4:100000:30
+
+for the `bear` file, and
+
+ ./video_encode_accelerator_unittest --single-process-tests --disable_flush --gtest_filter=SimpleEncode/VideoEncodeAcceleratorTest.TestSimpleEncode/0 --test_stream_data=crowd1080.yuv:1920:1080:1:crowd.mp4:4000000:30
+
+for the larger `crowd` file. These commands will put the encoded output into
+`bear.mp4` and `crowd.mp4` respectively. They can then be copied to the host and
+played with `mplayer -fps 25`.
+
+## Test filtering options
+
+`./video_encode_accelerator_unittest --help` will list all valid options.
+
+The list of available tests can be retrieved using the `--gtest_list_tests`
+option.
+
+By default, all tests are run, which can be a bit too much, especially when
+bringing up a new codec. The `--gtest_filter` option can be used to specify a
+pattern of test names to run.
+
+## Verbosity options
+
+The `--vmodule` option allows you to specify a set of source files that should be
+more verbose about what they are doing. For basic usage, a useful set of vmodule
+options could be:
+
+ --vmodule=*/media/gpu/*=4
+
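+Putting the options above together, a more verbose encode run for the `bear`
+stream could be invoked as follows (same arguments as in the earlier examples,
+just combined):
+
+    ./video_encode_accelerator_unittest --single-process-tests --disable_flush --vmodule=*/media/gpu/*=4 --gtest_filter=SimpleEncode/VideoEncodeAcceleratorTest.TestSimpleEncode/0 --test_stream_data=bear-320x180.yuv:320:180:1:bear.mp4:100000:30
+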
+## Source code
+
+The VEAtest's source code can be consulted here: [https://cs.chromium.org/chromium/src/media/gpu/video_encode_accelerator_unittest.cc](https://cs.chromium.org/chromium/src/media/gpu/video_encode_accelerator_unittest.cc).
+
+V4L2 support: [https://cs.chromium.org/chromium/src/media/gpu/v4l2/](https://cs.chromium.org/chromium/src/media/gpu/v4l2/).
+
+VAAPI support: [https://cs.chromium.org/chromium/src/media/gpu/vaapi/](https://cs.chromium.org/chromium/src/media/gpu/vaapi/).
diff --git a/chromium/docs/media/gpu/video_decoder_test_usage.md b/chromium/docs/media/gpu/video_decoder_test_usage.md
index cb641b6d932..321e48f236e 100644
--- a/chromium/docs/media/gpu/video_decoder_test_usage.md
+++ b/chromium/docs/media/gpu/video_decoder_test_usage.md
@@ -30,11 +30,6 @@ See the
[Tast quickstart guide](https://chromium.googlesource.com/chromiumos/platform/tast/+/HEAD/docs/quickstart.md)
for more information about the Tast framework.
-__Note:__ Tast tests are currently being migrated from the
-_video_decode_accelerator_unittest_ to the new _video_decode_accelerator_tests_
-binary. Check the [documentation](vdatest_usage.md) for more info about the old
-video decode accelerator tests.
-
## Running manually
To run the video decoder tests manually the _video_decode_accelerator_tests_
target needs to be built and deployed to the device being tested. Running
diff --git a/chromium/docs/memory-infra/README.md b/chromium/docs/memory-infra/README.md
index f73d843e72a..acf73010d6d 100644
--- a/chromium/docs/memory-infra/README.md
+++ b/chromium/docs/memory-infra/README.md
@@ -10,7 +10,7 @@ click of a button you can understand where memory is being used in your system.
## Taking a memory-infra trace
1. [Record a trace as usual][record-trace]: open [chrome://tracing][tracing]
- on Desktop Chrome or [chrome://inspect?tracing][inspect-tracing] to trace
+ on Desktop Chrome or [chrome://inspect][inspect-tracing] to trace
Chrome for Android.
2. Make sure to enable the **memory-infra** category on the right.
@@ -20,7 +20,7 @@ click of a button you can understand where memory is being used in your system.
[record-trace]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs
[tracing]: chrome://tracing
-[inspect-tracing]: chrome://inspect?tracing
+[inspect-tracing]: chrome://inspect
[memory-infra-box]: https://storage.googleapis.com/chromium-docs.appspot.com/1c6d1886584e7cc6ffed0d377f32023f8da53e02
## Navigating a memory-infra trace
diff --git a/chromium/docs/memory-infra/memory_benchmarks.md b/chromium/docs/memory-infra/memory_benchmarks.md
index 068297ef60a..35599e131ac 100644
--- a/chromium/docs/memory-infra/memory_benchmarks.md
+++ b/chromium/docs/memory-infra/memory_benchmarks.md
@@ -56,20 +56,26 @@ perform with the browser:
browser to the background).
* `long_running` stories interact with a page for a longer period
of time (~5 mins).
-* `blank` has a single story that just navigates to **about:blank**.
+* `multitab` loads different web sites in several tabs, then cycles through
+ them.
+* `play` loads a web site and plays some media (e.g. a song).
-The full name of a story has the form `{interaction}:{category}:{site}` where:
+The full name of a story has the form `{interaction}:{category}:{site}[:{year}]`
+where:
* `interaction` is one the labels given above;
* `category` is used to group together sites with a similar purpose,
e.g. `news`, `social`, `tools`;
* `site` is a short name identifying the website in which the story mostly
takes place, e.g. `cnn`, `facebook`, `gmail`.
+* `year` indicates the year in which the web page recording for the story
+ was most recently updated.
-For example `browse:news:cnn` and `background:social:facebook` are two system
-health user stories.
+For example `browse:news:cnn:2018` and `background:social:facebook` are two
+system health user stories. The list of all current stories can be found at
+[bit.ly/csh-stories](http://bit.ly/csh-stories).
-Today, for most stories a garbage collection is forced at the end of the
+Today, for most stories, a garbage collection is forced at the end of the
story and a memory dump is then triggered. Metrics report the values
obtained from this single measurement.
@@ -89,15 +95,57 @@ To view data from one of the benchmarks on the
* **Subtest (3):** The name of a *[user story](#User-stories)*
(with `:` replaced by `_`).
-If you are investigating a Perf dashboard alert and would like to see the
-details, you can click on any point of the graph. It gives you the commit range,
-buildbot output and a link to the trace file taken during the buildbot run.
-(More information about reading trace files [here][memory-infra])
+Clicking on any point of the graph will give you the commit range, links to the
+builder that ran the benchmark, and a trace file collected during the story
+run. See below for details on how to interpret these traces when
+[debugging memory related issues](#debugging-memory-regressions).
-[memory-infra]: /docs/memory-infra/README.md
+Many of the high level memory measurements are automatically tracked and the
+Performance Dashboard will generate alerts when a memory regression is detected.
+These are triaged by [perf sheriffs][] who create bugs and start bisect jobs
+to find the root cause of regressions.
+
+[perf sheriffs]: /docs/speed/perf_regression_sheriffing.md
![Chrome Performance Dashboard Alert](https://storage.googleapis.com/chromium-docs.appspot.com/perfdashboard_alert.png)
+## Debugging memory regressions
+
+If you are investigating a memory regression, chances are that a [pinpoint][]
+job identified one of your CLs as a possible culprit.
+
+![Pinpoint Regression](https://storage.googleapis.com/chromium-docs.appspot.com/pinpoint_regression.png)
+
+Note the "chart" argument identifies the memory metric that regressed. The
+pinpoint results page also gives you easy access to traces before and after
+your commit landed. It's useful to look at both and compare them to identify what
+changed. The documentation on [memory-infra][memory-infra] explains how to dig
+down into details and interpret memory measurements. Also note that pinpoint
+runs each commit multiple times, so you can access more traces by clicking on
+a different "repeat" of either commit.
+
+Sometimes it's also useful to follow the link to "Analyze benchmark results"
+which will bring up the [Metrics Results UI][results-ui] to compare all
+measurements (not just the one caught by the alert) before and after your
+CL landed. Make sure to select the "before" commit as reference column, show
+absolute changes (i.e. "Δavg") instead of relative, and sort by the column
+with changes on the "after" commit to visualize them more easily. This can be
+useful to find a more specific source of the regression, e.g.
+`renderer_processes:reported_by_chrome:v8:heap:code_space:effective_size`
+rather than just `all_processes:reported_by_chrome:effective_size`, and help
+you pin down its cause.
+
+To confirm whether a revert of your CL would fix the regression you can run
+a [pinpoint try job](#How-to-run-a-pinpoint-try-job) with a patch containing
+the revert. Finally, **do not close the bug** even if you suspect that your CL
+may not be the cause of the regression; instead follow the more general
+guidance on how to [address performance regressions][addressing-regressions].
+Bugs should only be closed if the regression has been fixed or justified.
+
+[results-ui]: https://chromium.googlesource.com/catapult.git/+/HEAD/docs/metrics-results-ui.md
+[memory-infra]: /docs/memory-infra/README.md
+[addressing-regressions]: /docs/speed/addressing_performance_regressions.md
+
## How to run the benchmarks
Benchmarks may be run on a local platform/device or remotely on a pinpoint
@@ -200,4 +248,6 @@ where:
`proportional_resident_size` (others are `peak_resident_size` and
`private_dirty_size`).
+Read the [memory-infra documentation][memory-infra] for more details on them.
+
[memory-infra]: /docs/memory-infra/README.md
diff --git a/chromium/docs/mojo_and_services.md b/chromium/docs/mojo_and_services.md
index c1827b01e21..2950004145c 100644
--- a/chromium/docs/mojo_and_services.md
+++ b/chromium/docs/mojo_and_services.md
@@ -162,7 +162,7 @@ earlier via `BindNewPipeAndPassReceiver`:
``` cpp
RenderFrame* my_frame = GetMyFrame();
-my_frame->GetBrowserInterfaceBrokerProxy()->GetInterface(std::move(receiver));
+my_frame->GetBrowserInterfaceBroker().GetInterface(std::move(receiver));
```
This will transfer the `PendingReceiver` endpoint to the browser process
diff --git a/chromium/docs/native_relocations.md b/chromium/docs/native_relocations.md
index 0910e6f91f9..aeb85f90716 100644
--- a/chromium/docs/native_relocations.md
+++ b/chromium/docs/native_relocations.md
@@ -1,12 +1,8 @@
# Native Relocations
-*** note
-Information here is mostly Android & Linux-specific and may not be 100% accurate.
-***
+[TOC]
## What are they?
- * For ELF files, they are sections of type REL, RELA, or RELR. They generally
- have the name ".rel.dyn" and ".rel.plt".
* They tell the runtime linker a list of addresses to post-process after
loading the executable into memory.
* There are several types of relocations, but >99% of them are "relative"
@@ -14,45 +10,81 @@ Information here is mostly Android & Linux-specific and may not be 100% accurate
initialized with the address of something.
* This includes vtables, function pointers, and string literals, but not
`char[]`.
- * Each relocation is stored as either 2 or 3 words, based on the architecture.
- * On Android, they are compressed, which trades off runtime performance for
- smaller file size.
- * As of Oct 2019, Chrome on Android has about 390000 of them.
- * Windows and Mac have them as well, but I don't know how they differ.
+
+### Linux & Android Relocations (ELF Format)
+ * Relocations are stored in sections of type: `REL`, `RELA`, [`APS2`][APS2], or
+ [`RELR`][RELR].
+ * Relocations are stored in sections named: `.rel.dyn`, `.rel.plt`,
+ `.rela.dyn`, or `.rela.plt`.
+ * For `REL` and `RELA`, each relocation is stored using either 2 or 3 words,
+ based on the architecture.
+ * For `RELR` and `APS2`, relative relocations are compressed.
+ * [`APS2`][APS2]: Somewhat involved compression which trades off runtime
+ performance for smaller file size.
+ * [`RELR`][RELR]: Supported in Android P+. Smaller and simpler than `APS2`.
+ * `RELR` is [used by default][cros] on Chrome OS.
+ * As of Oct 2019, Chrome on Android (arm32) has about 390,000 of them.
+
+[APS2]: android_native_libraries.md#Packed-Relocations
+[RELR]: https://reviews.llvm.org/D48247
+[cros]: https://chromium-review.googlesource.com/c/chromiumos/overlays/chromiumos-overlay/+/1210982
+
+### Windows Relocations (PE Format)
+ * For PE files, relocations are stored in per-code-page
+ [`.reloc` sections][win_relocs].
+ * Each relocation is stored using 2 bytes. Each `.reloc` section has a small
+ overhead as well.
+ * 64-bit executables have fewer relocations thanks to the ability to use
+ RIP-relative (instruction-relative) addressing.
+
+[win_relocs]: https://docs.microsoft.com/en-us/windows/win32/debug/pe-format#the-reloc-section-image-only
## Why do they matter?
- * **Binary Size:** Except on Android, relocations are stored very
- inefficiently.
- * Chrome on Linux has a `.rela.dyn` section of more than 14MiB!
- * Android uses a [custom compression scheme][android_relro1] to shrink them
- down to ~300kb.
- * There is an even better [RELR][RELR] encoding available on Android P+, but
- not widely available on Linux yet. It makes relocations ~60kb.
- * **Memory Overhead:** Symbols with relocations cannot be loaded read-only
- and result in "dirty" memory. 99% of these symbols live in `.data.rel.ro`,
- which as of Oct 2019 is ~6.5MiB on Linux and ~2MiB on Android.
- `.data.rel.ro` is data that *would* have been put into `.rodata` and mapped
- read-only if not for the required relocations. It does not get written to
- after it's relocated, so the linker makes it read-only once relocations are
- applied (but by that point the damage is done and we have the dirty pages).
+### Binary Size
+ * On Linux, relocations are stored very inefficiently.
+ * As of Oct 2019:
+ * Chrome on Linux has a `.rela.dyn` section of more than 14MiB!
+ * Chrome on Android uses [`APS2`] to compress these down to ~300kb.
+ * Chrome on Android with [`RELR`] would require only 60kb, but is
+ [not yet enabled][relr_bug].
+ * Chrome on Windows (x64) has `.reloc` sections that sum to 620KiB.
+
+[relr_bug]: https://bugs.chromium.org/p/chromium/issues/detail?id=895194
+
+### Memory Overhead
+ * On Windows, there is [almost no memory overhead] from relocations.
+ * On Linux and Android, memory with relocations cannot be loaded read-only
+   and results in dirty memory (see the sketch below). 99% of these symbols
+   live in `.data.rel.ro`, which as of Oct 2019 is ~6.5MiB on Linux and ~2MiB
+   on Android. `.data.rel.ro` is data that *would* have been put into
+   `.rodata` and mapped read-only if not for the required relocations. The
+   memory does not get written to after it's relocated, so the linker makes it
+   read-only once relocations are applied (but by that point the damage is
+   done and we have the dirty pages).
* On Linux, we share this overhead between processes via the [zygote].
- * [On Android][android_relro2], we share this overhead between processes by
+ * [On Android][relro_sharing], we share this overhead between processes by
loading the shared library at the same address in all processes, and then
`mremap` onto shared memory to dedupe after-the-fact.
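+
+As a sketch of the difference (illustrative names), two superficially similar
+tables end up in different sections:
+
+```c++
+// Holds addresses: placed in .data.rel.ro, and its pages become dirty when
+// the dynamic linker writes the final addresses at load time.
+const char* const kNames[] = {"alpha", "beta", "gamma"};
+
+// Holds only plain values: placed in .rodata, stays clean and read-only,
+// and is shared between processes for free.
+const int kValues[] = {1, 2, 3};
+```
+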
- * **Start-up Time** The runtime linker applies relocations when loading the
- executable. On low-end Android, it can take ~100ms (measured on a first-gen
- Android Go devices with APS2 relocations). On Linux, it's
- [closer to 20ms][zygote].
+[almost no memory overhead]: https://devblogs.microsoft.com/oldnewthing/20160413-00/?p=93301
[zygote]: linux_zygote.md
-[RELR]: https://reviews.llvm.org/D48247
-[android_relro1]: android_native_libraries.md#Packed-Relocations
-[android_relro2]: android_native_libraries.md#relro-sharing
+[relro_sharing]: android_native_libraries.md#relro-sharing
+
+### Start-up Time
+ * On Windows, relocations are applied just-in-time on page faults, and are
+ backed by the PE file (not the pagefile).
+ * On other platforms, the runtime linker applies all relocations upfront.
+ * On low-end Android, it can take ~100ms (measured on a first-gen Android Go
+   device with APS2 relocations).
+ * On Linux, it's [closer to 20ms][zygote].
## How do I see them?
```sh
+# For ELF files:
third_party/llvm-build/Release+Asserts/bin/llvm-readelf --relocs out/Release/libmonochrome.so
+
+# For PE files:
+python tools\win\pe_summarize.py out\Release\chrome.dll
```
## Can I avoid them?
@@ -61,22 +93,36 @@ smart about them.
For Example:
```c++
-// Wastes 2 bytes for each smaller string but creates no relocations.
+// The following uses 2 bytes of padding for each smaller string but creates no relocations.
// Total size overhead: 4 * 5 = 20 bytes.
const char kArr[][5] = {"as", "ab", "asdf", "fi"};
-// String data stored optimally, but uses 4 relocatable pointers.
+// The following requires no string padding, but uses 4 relocatable pointers.
// Total size overhead:
-// 64-bit: 8 bytes per pointer + 24 bytes per relocation + 14 bytes of char = 142 bytes
-// 32-bit: 4 bytes per pointer + 8 bytes per relocation + 14 bytes of char = 62 bytes
-const char *kArr2[] = {"as", "ab", "asdf", "fi"};
+// Linux 64-bit: (8 bytes per pointer + 24 bytes per relocation) * 4 entries + 14 bytes of char = 142 bytes
+// Windows 64-bit: (8 bytes per pointer + 2 bytes per relocation) * 4 entries + 14 bytes of char = 54 bytes
+// CrOS 64-bit: (8 bytes per pointer + ~0 bytes per relocation) * 4 entries + 14 bytes of char = ~46 bytes
+// Android 32-bit: (4 bytes per pointer + ~0 bytes per relocation) * 4 entries + 14 bytes of char = ~30 bytes
+const char * const kArr2[] = {"as", "ab", "asdf", "fi"};
```
-Note:
-* String literals are de-duped with others in the binary, so it's possible that
- the second example above might use 14 fewer bytes.
-* Not all string literals require relocations. Only those that are stored into
- global variables require them.
+Notes:
+* String literals (but not char arrays) are de-duped with others in the binary,
+ so it is possible that the second example above might use 14 fewer bytes.
+* Not all string literals require relocations. Which ones require them depends
+  on the ABI. Generally, all global variables that are initialized to the
+ address of something require them.
+
+Here's a simpler example:
+
+```c++
+// No pointer, no relocation. Just 5 bytes of character data.
+const char kText[] = "asdf";
+
+// Requires pointer, relocation, and character data.
+// In most cases there is no advantage to pointers for strings.
+const char* const kTextPtr = "asdf";
+```
Another thing to look out for:
* Large data structures with relocations that you don't need random access to,
diff --git a/chromium/docs/no_sources_assignment_filter.md b/chromium/docs/no_sources_assignment_filter.md
new file mode 100644
index 00000000000..8f67c6833ac
--- /dev/null
+++ b/chromium/docs/no_sources_assignment_filter.md
@@ -0,0 +1,88 @@
+# No sources_assignment_filter
+
+There is a [strong][0] [consensus][1] that the set_sources_assignment_filter
+feature from GN is a mis-feature and should be removed. This requires that
+Chromium's BUILD.gn files stop using the feature.
+
+## Why convert
+
+When set_sources_assignment_filter is called, it configures a list of patterns
+that will be used to filter names every time a variable named "sources" is
+assigned a value.
+
+As Chromium calls this function in build/BUILDCONFIG.gn, the patterns are
+applied to every BUILD.gn file in the project. This has multiple drawbacks:
+
+1. the configuration of the list of patterns is located far from the point
+   where they are applied, and developers are usually confused when a file
+   they add to a rule is not built due to those patterns
+
+2. the filtering is applied to every assignment to a variable named "sources"
+   after interpreting the string as a relative filename, thus the build
+   breaks if one of the forbidden patterns is used in an unexpected location
+   (like naming the build directory out/linux, or having mac/ in the path to
+   the SDK, ...)
+
+3. the filtering is applied to every assignment to a variable named "sources"
+   in the whole project, thus it has a significant negative impact on the
+   performance of gn
+
+## Conversion pattern
+
+To convert a BUILD.gn file, it is necessary to change the following:
+
+```
+ source_set("foo") {
+ sources = [
+ "foo.h",
+ "foo_mac.mm",
+ "foo_win.cc",
+ "foo_linux.cc",
+ ]
+ }
+```
+
+to
+
+```
+ source_set("foo") {
+ sources = [
+ "foo.h",
+ ]
+ if (is_mac) {
+ sources += [
+ "foo_mac.mm",
+ ]
+ }
+ if (is_win) {
+ sources += [
+ "foo_win.cc",
+ ]
+ }
+ if (is_linux) {
+ sources += [
+ "foo_linux.cc",
+ ]
+ }
+ }
+```
+
+Since the second pattern never assigns a name that will be filtered out, it
+is compatible whether or not the set_sources_assignment_filter feature is
+used.
+
+## Preventing regression
+
+As said above, while the converted files are compatible with the feature,
+there is a risk of regression. To prevent such regressions, the following is
+added at the top of every BUILD.gn file after it has been converted:
+
+```
+ # Reset sources_assignment_filter for the BUILD.gn file to prevent
+ # regression during the migration of Chromium away from the feature.
+ # See build/no_sources_assignment_filter.md for more information.
+ # TODO(crbug.com/1018739): remove this when migration is done.
+ set_sources_assignment_filter([])
+```
+
+[0]: https://groups.google.com/a/chromium.org/d/topic/chromium-dev/hyLuCU6g2V4/discussion
+[1]: https://groups.google.com/a/chromium.org/d/topic/gn-dev/oQcYStl_WkI/discussion
diff --git a/chromium/docs/ozone_overview.md b/chromium/docs/ozone_overview.md
index 0ba44b4e10a..cb77cef88e6 100644
--- a/chromium/docs/ozone_overview.md
+++ b/chromium/docs/ozone_overview.md
@@ -247,6 +247,17 @@ This platform is used for
This platform provides support for the [X window system](https://www.x.org/).
+Support for X11 is being actively developed by Igalia and the Chromium
+community and is intended to replace the current legacy X11 path.
+
+You can try to compile and run it with the following configuration:
+
+``` shell
+gn args out/OzoneX11 --args="use_ozone=true"
+ninja -C out/OzoneX11 chrome
+./out/OzoneX11/chrome --ozone-platform=x11
+```
+
### Wayland
This platform provides support for the
@@ -254,22 +265,38 @@ This platform provides support for the
initially developed by Intel as
[a fork of chromium](https://github.com/01org/ozone-wayland)
and then partially upstreamed.
-It is still actively being developed by Igalia both on the
-[ozone-wayland-dev](https://github.com/Igalia/chromium/tree/ozone-wayland-dev)
-branch and the Chromium mainline repository, feel free to discuss
-with us on freenode.net, `#ozone-wayland` channel or on `ozone-dev`.
+
+Currently, Ozone/Wayland is actively developed by Igalia in the Chromium
+mainline repository, with some features still missing. Progress can be
+tracked in [issue #578890](https://crbug.com/578890).
Below are some quick build & run instructions. It is assumed that you are
launching `chrome` from a Wayland environment such as `weston`. Execute the
-following commands (make sure a system version of gbm and drm is used, which are
-required by Ozone/Wayland by design, when running on Linux platforms.):
+following commands (when running on Linux platforms, make sure system
+versions of gbm and drm are used, as Ozone/Wayland requires them by design):
``` shell
-gn args out/OzoneWayland --args="use_ozone=true use_system_minigbm=true use_system_libdrm=true"
+gn args out/OzoneWayland --args="use_ozone=true use_system_minigbm=true use_system_libdrm=true use_xkbcommon=true"
ninja -C out/OzoneWayland chrome
./out/OzoneWayland/chrome --ozone-platform=wayland
```
+Native file dialogs are currently supported through the GTK toolkit, which
+requires the browser to be compiled with glib and gtk enabled. Please
+append the following gn args to your configuration:
+
+``` shell
+use_ozone=true
+use_system_minigbm=true
+use_system_libdrm=true
+use_xkbcommon=true
+use_glib=true
+use_gtk=true
+```
+
+Feel free to discuss with us in the `#ozone-wayland` channel on freenode.net,
+on `ozone-dev`, or in the `#ozone-wayland-x11` channel in [Chromium Slack](https://www.chromium.org/developers/slack).
+
### Caca
This platform
diff --git a/chromium/docs/process/merge_request.md b/chromium/docs/process/merge_request.md
index 9ee67df5c15..0905103699a 100644
--- a/chromium/docs/process/merge_request.md
+++ b/chromium/docs/process/merge_request.md
@@ -86,6 +86,10 @@ fix is low complexity.
Security bugs should be consulted with [chrome-security@](chrome-security@google.com)
to determine criticality.
+If it is unclear whether the severity of the issue meets the bar for merging,
+consult with the [TPM](https://chromiumdash.appspot.com/schedule) and your
+manager.
+
This table below provides key dates and phases as an example, for M61 release.
Key Event | Date
diff --git a/chromium/docs/security/autoupgrade-mixed.md b/chromium/docs/security/autoupgrade-mixed.md
index a55a18bf1e4..9e03ea33c5a 100644
--- a/chromium/docs/security/autoupgrade-mixed.md
+++ b/chromium/docs/security/autoupgrade-mixed.md
@@ -1,10 +1,10 @@
# Mixed content Autoupgrade
## Description
-We are currently running an experiment upgrading mixed content (insecure content on secure sites) to HTTPS, as part of this, some users will see HTTP subresource URLs rewritten as HTTPS when browsing a site served over HTTPS. This is similar behavior to that if the site included the Upgrade-Insecure-Requests CSP directive.
+Starting with M80, Chrome attempts to upgrade some types of mixed content (HTTP subresources on an HTTPS site) to HTTPS. Subresources that fail to load over HTTPS will not be loaded. For more information see [the official announcement](https://blog.chromium.org/2019/10/no-more-mixed-messages-about-https.html).
## Scope
-Currently subresources loaded over HTTP and Websocket URLs are autoupgraded for users who are part of the experiment. Form submissions are not currently part of the experiment.
+Currently only audio and video subresources are autoupgraded. In a future version, images will be included as well.
## Opt-out
-You can opt out of having mixed content autoupgraded in your site by including an HTTP header with type 'mixed-content' and value 'noupgrade', this will disable autoupgrades for subresources. Since mixed content websockets are automatically blocked, autoupgrades cannot be disabled for those.
+Users can disable autoupgrades on a per-site basis through content settings (chrome://settings/content/insecureContent).
diff --git a/chromium/docs/security/faq.md b/chromium/docs/security/faq.md
index d57a52e2636..5689f153bad 100644
--- a/chromium/docs/security/faq.md
+++ b/chromium/docs/security/faq.md
@@ -417,7 +417,7 @@ clock is within ten weeks of the embedded build timestamp. Key pinning is a
useful security measure but it tightly couples client and server configurations
and completely breaks when those configurations are out of sync. In order to
manage that risk we need to ensure that we can promptly update pinning clients
-an in emergency and ensure that non-emergency changes can be deployed in a
+in an emergency and ensure that non-emergency changes can be deployed in a
reasonable timeframe.
Each of the conditions listed above helps ensure those properties:
diff --git a/chromium/docs/security/security-labels.md b/chromium/docs/security/security-labels.md
index 22f9d960dc1..5459b8e7170 100644
--- a/chromium/docs/security/security-labels.md
+++ b/chromium/docs/security/security-labels.md
@@ -88,6 +88,10 @@ guidelines are as follows:
* **reward-**{**topanel**, **unpaid**, **na**, **inprocess**, _#_}: Labels used
in tracking bugs nominated for our [Vulnerability Reward
Program](https://www.chromium.org/Home/chromium-security/vulnerability-rewards-program).
+If a bug is filed by a Google or Chromium user on behalf of an external party,
+but is not within scope for a vulnerability reward, nevertheless use **reward-na**
+to ensure that the report is still properly credited to the external reporter
+in the release notes.
* **M-#**: Target milestone for the fix.
* Component: For bugs filed as **Type-Bug-Security**, we also want to track
which component(s) the bug is in.
diff --git a/chromium/docs/servicification.md b/chromium/docs/servicification.md
index d24dcd36862..8e9735e1079 100644
--- a/chromium/docs/servicification.md
+++ b/chromium/docs/servicification.md
@@ -300,12 +300,6 @@ than whatever would normally service them in the browser process.
The current way to set up that sort of thing looks like
[this](https://cs.chromium.org/chromium/src/third_party/blink/web_tests/battery-status/resources/mock-battery-monitor.js?rcl=be6e0001855f7f1cfc26205d0ff5a2b5b324fcbd&l=19).
-*** aside
-**NOTE:** The above approach to mocking in JS no longer applies when using
-the new recommended `DocumentInterfaceBroker` approach to exposing interfaces
-to documents. New JS mocking support is in development for this.
-***
-
#### Feature Impls That Depend on Blink Headers
In the course of servicifying a feature that has Blink as a client, you might
encounter cases where the feature implementation has dependencies on Blink
diff --git a/chromium/docs/speed/addressing_performance_regressions.md b/chromium/docs/speed/addressing_performance_regressions.md
index e65f05a5e90..c4b8fcc4c7a 100644
--- a/chromium/docs/speed/addressing_performance_regressions.md
+++ b/chromium/docs/speed/addressing_performance_regressions.md
@@ -7,11 +7,12 @@ and assigned a bug to you! What should you do? Read on...
## About our performance tests
-The [chromium.perf waterfall](perf_waterfall.md) is a continuous build which
-runs performance tests on dozens of devices across Windows, Mac, Linux, and
-Android Chrome and WebView. Often, a performance regression only affects a
-certain type of hardware or a certain operating system, which may be different
-than what you tested locally before landing your CL.
+The [chrome.perf waterfall](perf_waterfall.md) is a continuous build which
+runs performance tests on dozens of devices across Android, Windows,
+Mac, and Linux hardware; see [list of platforms](perf_lab_platforms.md).
+Often, a performance regression only affects a certain type of hardware or a
+certain operating system, which may be different than what you tested locally
+before landing your CL.
Each test has an owner, named in
[this spreadsheet](https://docs.google.com/spreadsheets/d/1xaAo0_SU3iDfGdqDJZX_jRV0QtkufwHUKH3kQKF3YQs/edit#gid=0),
@@ -32,7 +33,7 @@ The bisect service spits out a comment on the bug that looks like this:
> **Roll src/third_party/depot_tools/ 0f7b2007a..fd4ad2416 (1 commit)**
> by depot-tools-roller@chromium.org<br>
> https://chromium.googlesource.com/chromium/src/+/14fc99e3fd3614096caab7c7a8362edde8327a5d
->
+>
> Understanding performance regressions:<br>
> &nbsp;&nbsp;http://g.co/ChromePerformanceRegressions
@@ -89,7 +90,8 @@ making the regression more likely to reproduce. From the Pinpoint Job page,
clicking the `+` button in the bottom-right corner to test a patch with the
current configuration.
-You can also run locally:
+You can also run locally (note that running this successfully on Android
+requires the device to be rooted):
```
src$ tools/perf/run_benchmark benchmark_name --story-filter story_name
```
diff --git a/chromium/docs/speed/apk_size_regressions.md b/chromium/docs/speed/apk_size_regressions.md
index 8b7a7faf105..a9f2110c4f0 100644
--- a/chromium/docs/speed/apk_size_regressions.md
+++ b/chromium/docs/speed/apk_size_regressions.md
@@ -23,8 +23,8 @@
by looking at the `android-binary-size` trybot result for the roll commit.
* For V8 rolls, try checking the [V8 size graph](https://chromeperf.appspot.com/report?sid=59435a74c93b42599af4b02e2b3df765faef4685eb015f8aaaf2ecf7f4afb29c)
to see if any jumps correspond with a CL in the roll.
- * Otherwise, use [diagnose_bloat.py](https://chromium.googlesource.com/chromium/src/+/master/tools/binary_size/README.md#diagnose_bloat_py)
- in a [local Android checkout](https://chromium.googlesource.com/chromium/src/+/master/docs/android_build_instructions.md)
+ * Otherwise, use [diagnose_bloat.py](/tools/binary_size/README.md#diagnose_bloat_py)
+ in a [local Android checkout](/docs/android_build_instructions.md)
to build all commits locally and find the culprit.
* If there were multiple commits due to a build breakage, use `--apply-patch`
with the fixing commit (last one in the range).
@@ -98,15 +98,15 @@ Figure out which file within the `.apk` increased (native library, dex, pak
resources, etc.) by looking at the trybot results or size graphs that were
linked from the bug (if it was not linked in the bug, see above).
-**See [//docs/speed/binary_size/metrics.md](https://chromium.googlesource.com/chromium/src/+/master/docs/speed/binary_size/metrics.md)
+**See [//docs/speed/binary_size/metrics.md](/docs/speed/binary_size/metrics.md)
for a description of high-level binary size metrics.**
-**See [//tools/binary_size/README.md](https://chromium.googlesource.com/chromium/src/+/master/tools/binary_size/README.md)
+**See [//tools/binary_size/README.md](/tools/binary_size/README.md)
for a description of binary size tools.**
## Step 2: Analyze
-See [optimization advice](//docs/speed/binary_size/optimization_advice.md).
+See [optimization advice](/docs/speed/binary_size/optimization_advice.md).
## Step 3: Give Up :/
diff --git a/chromium/docs/speed/benchmark/benchmark_ownership.md b/chromium/docs/speed/benchmark/benchmark_ownership.md
index 5297041eb4e..bb9343d4dd8 100644
--- a/chromium/docs/speed/benchmark/benchmark_ownership.md
+++ b/chromium/docs/speed/benchmark/benchmark_ownership.md
@@ -34,7 +34,7 @@ There can be multiple owners of a benchmark, for example if there are multiple t
### C++ Perf Benchmarks
1. Open [`src/tools/perf/core/perf_data_generator.py`](https://cs.chromium.org/chromium/src/tools/perf/core/perf_data_generator.py).
-1. Find the BenchmarkMetadata for the benchmark. It will be in a dictionary named `NON_TELEMETRY_BENCHMARKS` or `NON_WATERFALL_BENCHMARKS`.
+1. Find the BenchmarkMetadata for the benchmark. It will be in a dictionary named `GTEST_BENCHMARKS`.
1. Update the email (first field of `BenchmarkMetadata`).
1. Run `tools/perf/generate_perf_data` to update `tools/perf/benchmark.csv`.
1. Upload `perf_data_generator.py` and `benchmark.csv` to a CL for review. Please add any previous owners to the review.
diff --git a/chromium/docs/speed/benchmark/harnesses/blink_perf.md b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
index 299cd5c5a30..6aca9ff153a 100644
--- a/chromium/docs/speed/benchmark/harnesses/blink_perf.md
+++ b/chromium/docs/speed/benchmark/harnesses/blink_perf.md
@@ -180,9 +180,13 @@ viewer won't be supported.
**Running tests with Telemetry**
-Assuming your current directory is `chromium/src/`, you can run tests with:
+There are several `blink_perf` benchmarks. You can see the full list in
+`third_party/blink/perf_tests` or by running
+`tools/perf/run_benchmark list | grep blink_perf`. If you want to run the
+`blink_perf.paint` benchmark and your current directory is `chromium/src/`, you
+can run tests with:
-`./tools/perf/run_benchmark run blink_perf [--test-path=<path to your tests>]`
+`./tools/perf/run_benchmark run blink_perf.paint [--story-filter=<test_file_name>]`
For information about all supported options, run:
diff --git a/chromium/docs/speed/binary_size/android_binary_size_trybot.md b/chromium/docs/speed/binary_size/android_binary_size_trybot.md
new file mode 100644
index 00000000000..e77e15124da
--- /dev/null
+++ b/chromium/docs/speed/binary_size/android_binary_size_trybot.md
@@ -0,0 +1,192 @@
+# Trybot: android-binary-size
+
+[TOC]
+
+## About
+
+The android-binary-size trybot exists for three reasons:
+1. To measure and make developers aware of the binary size impact of commits.
+2. To perform checks that require comparing builds with & without patch.
+3. To provide bot coverage for building with `is_official_build=true`.
+
+## Measurements and Analysis
+
+The bot provides analysis using:
+* [resource_sizes.py]: The delta in each metric is reported. Most of these are
+ described in [//docs/speed/binary_size/metrics.md][metrics].
+* [SuperSize]: Provides visual and textual binary size breakdowns.
+
+[resource_sizes.py]: /build/android/resource_sizes.py
+[metrics]: /docs/speed/binary_size/metrics.md
+[SuperSize]: /tools/binary_size/README.md
+
+## Checks:
+
+### Binary Size Increase
+
+- **What:** Checks that [normalized apk size] increases by no more than 16kb.
+- **Why:** While we hope that the binary size impact of every commit is looked
+  at to ensure it makes sense, this check ensures that it is looked at for
+  larger than average commits.
+
+[normalized apk size]: /docs/speed/binary_size/metrics.md#normalized-apk-size
+
+#### What to do if the Check Fails?
+
+- Look at the provided symbol diffs to understand where the size is coming from.
+- See if any of the generic [optimization advice] is applicable.
+- If you are writing a new feature or including a new library, you might want
+  to consider skipping the Android platform and restricting the new
+  feature/library to desktop platforms, which care less about binary size.
+- If reduction is not practical, add a rationale for the increase to the commit
+ description. It should include:
+ - A list of any optimizations that you attempted (if applicable)
+  - If you think that there might not be a consensus that the code you're
+    adding is worth the added file size, then add why you think it is.
+ - To get a feeling for how large existing features are, refer to
+ [milestone size breakdowns].
+
+- Add a footer to the commit description along the lines of:
+ - `Binary-Size: Size increase is unavoidable (see above).`
+ - `Binary-Size: Increase is temporary.`
+
+[optimization advice]: /docs/speed/binary_size/optimization_advice.md
+[milestone size breakdowns]: https://storage.googleapis.com/chrome-supersize/index.html
+
+
+
+### Dex Method Count
+
+- **What:** Checks that the number of Java methods after optimization does not
+ increase by more than 50.
+- **Why:** Ensures that large changes to this metric are scrutinized.
+
+#### What to do if the Check Fails?
+
+- Look at the bot's "Dex Class and Method Diff" output to see which classes and
+ methods survived optimization.
+- See if any of the [Java Optimization] tips are applicable.
+- If the increase is from a new dependency, ensure that there is no existing
+ library that provides similar functionality.
+- If reduction is not practical, add a rationale for the increase to the commit
+ description. It should include:
+ - A list of any optimizations that you attempted (if applicable)
+  - If you think that there might not be a consensus that the code you're
+    adding is worth the added file size, then add why you think it is.
+ - To get a feeling for how large existing features are, open the latest
+    [milestone size breakdowns] and select "Method Count Mode".
+- Add a footer to the commit description along the lines of:
+ - `Binary-Size: Added a new library.`
+ - `Binary-Size: Enables a large feature that was previously flagged.`
+
+[Java Optimization]: /docs/speed/binary_size/optimization_advice.md#Optimizing-Java-Code
+
+### Mutable Constants
+
+- **What**: Checks that all variables named `kVariableName` are in read-only
+  sections of the binary (either `.rodata` or `.data.rel.ro`).
+- **Why**: Guards against accidentally missing a `const` keyword. Non-const
+ variables have a larger memory footprint than const ones.
+- For more context see [https://crbug.com/747064](https://crbug.com/747064).
+
+#### What to do if the Check Fails?
+
+- Make the symbol read-only (usually by adding "const").
+- If you can't make it const, then rename it.
+- To check what section a symbol is in for a local build:
+ ```sh
+ ninja -C out/Release obj/.../your_file.o
+ third_party/llvm-build/Release+Asserts/bin/llvm-nm out/Release/.../your_file.o --format=darwin
+ ```
+ - Only `format=darwin` shows the difference between `.data` and `.data.rel.ro`.
+ - You need to use llvm's `nm` only when thin-lto is enabled
+ (when `is_official_build=true`).
+
+Here's the most common example:
+```c++
+const char * kMyVar = "..."; // A *mutable* pointer to a const char (bad).
+const char * const kMyVar = "..."; // A const pointer to a const char (good).
+constexpr const char * kMyVar = "...";  // A const pointer to a const char (good).
+const char kMyVar[] = "..."; // A const char array (good).
+```
+
+For more information on when to use `const char *` vs `const char[]`, see
+[//docs/native_relocations.md](/docs/native_relocations.md).
+
+### Added Symbols named “ForTest”
+
+- **What:** This checks that we don't have symbols with “ForTest” in their name
+ in an optimized release binary.
+- **Why:** To prevent shipping unused test-only code to end-users.
+
+#### What to do if the Check Fails?
+
+- Make sure your ForTest methods are not called in non-test code.
+- Unfortunately, clang is unable to remove unused virtual methods, so try to
+  make sure your ForTest methods are not virtual (see the sketch below).
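+
+A minimal sketch of the difference, using a hypothetical `Widget` class with
+made-up method names:
+
+```c++
+class Widget {
+ public:
+  void Paint();
+
+  // Non-virtual test-only hook: if nothing outside tests calls it, the
+  // optimizer can strip it from release builds.
+  void SetScaleForTesting(double scale) { scale_for_testing_ = scale; }
+
+  // Virtual test-only hook: referenced from the vtable, so it survives
+  // dead-code elimination and ships to end users.
+  virtual void SetOffsetForTesting(int offset);
+
+ private:
+  double scale_for_testing_ = 1.0;
+};
+```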
+
+### Uncompressed Pak Entry
+
+- **What:** Checks that `.pak` file entries that are not translatable strings
+  are stored compressed. The limit is currently set to 1KB.
+- **Why:** Compression makes things smaller and there is normally no reason to
+  leave resources uncompressed.
+
+#### What to do if the Check Fails?
+
+- Add `compress="gzip"` to the `.grd` entry for the resource.
+
+### Expectation Failures
+
+- **What & Why:** Learn about these expectation files [here][expectation files].
+
+[expectation files]: /chrome/android/java/README.md
+
+#### What to do if the Check Fails?
+
+- The output of the failing step contains the command to run to update the
+ relevant expectation file. Run this command to update the expectation files.
+
+### If All Else Fails
+
+- For help, email [binary-size@chromium.org]. Hearing about your issues helps us
+ to improve the tools!
+- Not all checks are perfect and sometimes you want to overrule the trybot (for
+ example if you did your best and are unable to reduce binary size any
+ further).
+- Adding a “Binary-Size: $ANY\_TEXT\_HERE” footer to your cl (next to “Bug:”)
+ will bypass the bot assertions.
+ - Most commits that trigger the warnings will also result in Telemetry
+ alerts and be reviewed by a binary size sheriff. Failing to write an
+ adequate justification may lead to the binary size sheriff filing a bug
+ against you to improve your cl.
+
+[binary-size@chromium.org]: https://groups.google.com/a/chromium.org/forum/#!forum/binary-size
+
+## Bot Links Provided by the Last Step
+
+### Size Assertion Results
+
+- Shows the list of checks that ran grouped by passing and failing checks.
+- Read this to know which checks failed the tryjob.
+
+### Supersize text diff
+
+- This is the text diff produced by the supersize tool.
+- It lists all changed symbols and, for each one, the section it lives in,
+  the source file it came from, and its size before and after, along with the
+  delta for your cl.
+- It also contains a histogram of symbol size deltas.
+- You can use this to find which symbols grew and where the binary size impact
+ of your cl comes from.
+
+### Supersize html diff
+
+- Visual representation of the text diff above.
+- It shows size deltas per file and directory.
+- It allows you to filter symbols by type/section/size/etc.
+
+## Code Locations
+
+- [Link to recipe](https://cs.chromium.org/chromium/build/scripts/slave/recipes/binary_size_trybot.py)
+- [Link to src-side checks](/tools/binary_size/trybot_commit_size_checker.py)
diff --git a/chromium/docs/speed/binary_size/optimization_advice.md b/chromium/docs/speed/binary_size/optimization_advice.md
index 4170e86e75b..44ee0011ea6 100644
--- a/chromium/docs/speed/binary_size/optimization_advice.md
+++ b/chromium/docs/speed/binary_size/optimization_advice.md
@@ -1,52 +1,159 @@
-# Optimizing Chrome's Binary Size
+# Optimizing Chrome's Image Size
+
+The Chrome image size is important on all platforms as it affects download
+and update times.
>
- > This advice focuses on Android.
+ > This document primarily focuses on Android and Chrome OS, where image size
+ > is especially important.
>
[TOC]
-## How To Tell if It's Worth Spending Time on Binary Size?
+## General Advice
+
+* Chrome image size on Android and Chrome OS is tightly limited.
+* Non-trivial increases to the Chrome image size need to be included in the
+  [Feature proposal process].
+* Use [Compressed resources] wherever possible. This is particularly important
+ for images and WebUI resources, which can be substantial.
+* Recently, a [CrOS Image Size Code Mauve] (Googlers only) was called due to
+  growth concerns.
+
+[CrOS Image Size Code Mauve]: http://go/cros-image-size-code-mauve
+[Compressed resources]: #Compressed-resources
+
+### Size Optimization Help
+Feel free to email [binary-size@chromium.org](https://groups.google.com/a/chromium.org/forum/#!forum/binary-size).
+
+### Compressed resources
+
+[Grit] supports gzip and brotli compression for resources in the .grd files
+used to build the `resources.pak` file.
+
+* Ensure `compress="gzip"` or `compress="brotli"` is used for all
+ highly-compressible (e.g. text, WebUI) resources.
+ * gzip compression for highly-compressible data typically has minimal
+ impact on load times (but it is worth measuring this, see
+ [webui_load_timer.cc] for an example of measuring load times).
+ * Brotli compresses more but is much slower to decompress. Use brotli only
+ when performance doesn't matter (e.g. internals pages).
+* **Android**: Check the SuperSize reports from the android-binary-size
+  trybot for unexpected resources or unreasonably large symbols.
+
+[Grit]: https://www.chromium.org/developers/tools-we-use-in-chromium/grit
+[webui_load_timer.cc]: https://cs.corp.google.com/eureka_internal/chromium/src/chrome/browser/ui/webui/webui_load_timer.cc
+
+### Chrome binary size
+
+Changes that will significantly increase the [Chrome binary size] should be
+made with care and consideration:
+
+* Changes that introduce new libraries can have significant impact and should
+ go through the [Feature proposal process].
+* Changes intended to replace existing functionality with significant new code
+ should include a deprecation plan for the code being replaced.
+
+[Chrome binary size]: https://drive.google.com/a/google.com/open?id=1aeIxj8jPOimmlnqD7PvS6np51DZa96dcF72N6CtO6N8
+
+
+## Chrome OS Focused Advice
+
+### Compressed l10n Strings
+
+Strings do not compress well individually, but an entire l10n file compresses
+very well.
+
+There are two mechanisms for compressing Chrome l10n files.
+
+1. Compressed .pak files
+ * For desktop Chrome, string resource files generate individual .pak
+ files, e.g. `generated_resources_en.pak`.<br/>
+ These get combined into locale specific .pak files, e.g.
+ `locales/en-US.pak`
+   * On Chrome OS, we set `compress = true` in [chrome_repack_locales.gni],
+     which causes these .pak files to be gzip compressed.<br/>
+     (Chrome identifies them as compressed by parsing the file header).
+   * So, *Chrome strings on Chrome OS will be compressed by default*; nothing
+     else needs to be done!
+1. Compressing .json l10n files
+ * Extensions and apps store l10n strings as `messages.json` files in
+ `{extension dir}/_locales/{locale}`.
+ * For Chrome OS component extensions (e.g. ChromeVox), we include
+ these extensions as part of the Chrome image.
+ * These strings get localized across 50+ languages, so it is
+ important to compress them.
+ * For *component extensions only*, these files can be gzip compressed
+ (and named `messages.json.gz`) as part of their build step.
+ * For extensions using GN:
+ 1. Specify `type="chrome_messages_json_gzip"` for each `<output>`
+ entry in the .grd file.
+ 1. Name the outputs `messages.json.gz` in the .grd and strings.gni
+ files.
+ * See https://crbug.com/1023568 for details and an example CL.
+
+[chrome_repack_locales.gni]: https://cs.chromium.org/chromium/src/chrome/chrome_repack_locales.gni
+
+### chromeos-assets
+
+* Input methods, speech synthesis, and apps consume a great deal of disk space
+ on the Chrome OS rootfs partition.
+* These assets are not part of the chromium repository; however, they do
+  affect [rootfs size] on devices.
+* Proposed additions or increases to chromeos-assets should go through the
+ [Feature proposal process] and should consider using some form of
+ [Downloadable Content] if possible.
+
+[rootfs size]: https://docs.google.com/document/d/1d3Y2ngMGEP_yfxBFrgOE-dinDILfyAUY9LT0JLlq6zg/edit?usp=sharing
+[Downloadable Content]: https://docs.google.com/presentation/d/1wM-eDX-BQavecQz20gPxRF6CxWp1k1sh7zSKMxz4BLI/edit?usp=sharing
+
+
+## Android Focused Advice
+
+### How To Tell if It's Worth Spending Time on Binary Size?
* Binary size is a shared resource, and thus its growth is largely due to the
tragedy of the commons.
* It typically takes about a week of engineering time to reduce Android's
binary size by 50kb.
* As of 2019, Chrome for Android (arm32) grows by about 100kb per week.
+ * To get a feeling for how large existing features are, refer to the
+ [milestone size breakdowns] and group by "Component".
+
+[milestone size breakdowns]: https://storage.googleapis.com/chrome-supersize/index.html
-## Optimizing Translations (Strings)
+### Optimizing Translations (Strings)
* Use [Android System strings](https://developer.android.com/reference/android/R.string.html) where appropriate
* Ensure that strings in .grd files need to be there. For strings that do
not need to be translated, put them directly in source code.
-## Optimizing Non-Image Native Resources in .pak Files
-
- * Ensure `compress="gzip"` or `compress="brotli"` is used for all
- highly-compressible (e.g. text) resources.
- * Brotli compresses more but is much slower to decompress. Use brotli only
- when performance doesn't matter (e.g. internals pages).
- * Look at the SuperSize reports from the android-binary-size trybot to look for
- unexpected resources, or unreasonably large symbols.
-
-## Optimizing Images
+### Optimizing Images
* Would a vector image work?
+ * Images that can be described by a series of paths should generally be
+ stored as vectors.
+ * The one exception is if the image will be used pre-Lollipop in a
+ notification or application icon.
* For images used in native code: [VectorIcon](https://chromium.googlesource.com/chromium/src/+/HEAD/components/vector_icons/README.md).
- * For Android drawables: [VectorDrawable](https://codereview.chromium.org/2857893003/).
+ * For Android drawables: [VectorDrawable](https://developer.android.com/guide/topics/graphics/vector-drawable-resources).
+ * Convert from `.svg` online using https://inloop.github.io/svg2android/.
* Optimize vector drawables with [avocado](https://bugs.chromium.org/p/chromium/issues/detail?id=982302).
+ * (Googlers): Find most icons as .svg at [go/icons](https://goto.google.com/icons).
* Would **lossy** compression make sense (often true for large images)?
* If so, [use lossy webp](https://codereview.chromium.org/2615243002/).
* And omit some densities (e.g. add only an xxhdpi version).
- * Would **near-lossless** compression make sense?
- * This can often reduce size by >50% without a perceptible difference.
- * [Use pngquant](https://pngquant.org) to try this out (use one of the GUI
- tools to compare before/after).
- * Are the **lossless** images fully optimized?
+ * For lossless `.png` images, see how few unique colors you can use without a
+ noticeable difference.
+ * This can often reduce an already optimized .png by 33%-50%.
+ * [Use pngquant](https://pngquant.org) to try this out.
+ * Requires trial and error for each number of unique colors.
+ * Use one of the GUI tools linked from the website to do this easily.
+ * Finally - Ensure .png files are fully optimized.
* Use [tools/resources/optimize-png-files.sh](https://cs.chromium.org/chromium/src/tools/resources/optimize-png-files.sh).
* There is some [Googler-specific guidance](https://goto.google.com/clank/engineering/best-practices/adding-image-assets) as well.
-### What Build-Time Image Optimizations are There?
+#### What Build-Time Image Optimizations are There?
* For non-ninepatch images, `drawable-xxxhdpi` are omitted (they are not
perceptibly different from xxhdpi in most cases).
* For non-ninepatch images within res/ directories (not for .pak file images),
@@ -55,7 +162,13 @@
or just build `ChromePublic.apk` and use `unzip -l` to see the size of the
images within the built apk.
-## Optimizing Code
+### Optimizing Android Resources
+ * Use config-specific resource directories sparingly.
+ * Introducing a new config has [a large cost][arsc-bloat].
+
+[arsc-bloat]: https://medium.com/androiddevelopers/smallerapk-part-3-removing-unused-resources-1511f9e3f761#0b72
+
+### Optimizing Code
In most parts of the codebase, you should try to optimize your code for binary
size rather than performance. Most code runs "fast enough" and only needs to be
@@ -72,10 +185,12 @@ Practical advice:
the [android-binary-size trybot][size-trybot].
* Or use [//tools/binary_size/diagnose_bloat.py][diagnose_bloat] to create
diffs locally.
- * Ensure no symbols exists that are used only by tests.
+ * Ensure no symbols exist that are used only by tests.
* Be concise with strings used for error handling.
* Identical strings throughout the codebase are de-duped. Take advantage of
this for error-related strings.
+
+#### Optimizing Native Code
* If there's a notable increase in `.data.rel.ro`:
* Ensure there are not [excessive relocations][relocations].
* If there's a notable increase in `.rodata`:
@@ -92,7 +207,7 @@ Practical advice:
* E.g. Use PODs wherever possible, and especially in containers. They will
likely compile down to the same code as other pre-existing PODs.
* Try also to use consistent field ordering within PODs.
- * E.g. a `std::vector` of bare pointers will very likely by ICF'ed, but one
+ * E.g. a `std::vector` of bare pointers will very likely be ICF'ed, but one
that uses smart pointers gets type-specific destructor logic inlined into
it.
* This advice is especially applicable to generated code.
@@ -105,7 +220,7 @@ Practical advice:
separate `const char *` and `const std::string&` overloads rather than
a single `base::StringPiece`.
-Android-specific advice:
+#### Optimizing Java Code
* Prefer fewer large JNI calls over many small JNI calls.
* Minimize the use of class initializers (`<clinit>()`).
* If R8 cannot determine that they are "trivial", they will prevent
@@ -114,10 +229,8 @@ Android-specific advice:
are created by executing code within `<clinit>()`. There is often little
advantage to initializing class fields statically vs. upon first use.
* Don't use default interface methods on interfaces with multiple implementers.
- * Desugaring causes the methods to be added to every implementor separately.
+ * Desugaring causes the methods to be added to every implementer separately.
* It's more efficient to use a base class to add default methods.
- * Use config-specific resource directories sparingly.
- * Introducing a new config has [a large cost][arsc-bloat].
* Use `String.format()` instead of concatenation.
* Concatenation causes a lot of StringBuilder code to be generated.
* Try to use default values for fields rather than explicit initialization.
@@ -130,23 +243,32 @@ Android-specific advice:
`onFinished(bool)`.
* E.g. rather than have `onTextChanged()`, `onDateChanged()`, ..., have a
single `onChanged()` that assumes everything changed.
+ * Ensure unused code is optimized away by ProGuard / R8.
+ * Add `@CheckDiscard` to methods or classes that you expect R8 to inline.
+ * Add `@RemovableInRelease` to force a method to be a no-op when DCHECKs
+ are disabled.
+ * See [here][proguard-build-doc] for more info on how Chrome uses ProGuard.
-[size-trybot]: //tools/binary_size/README.md#Binary-Size-Trybot-android_binary_size
-[diagnose_bloat]: //tools/binary_size/README.md#diagnose_bloat_py
-[relocations]: //docs/native_relocations.md
+[proguard-build-doc]: /build/android/docs/java_optimization.md
+[size-trybot]: /tools/binary_size/README.md#Binary-Size-Trybot-android_binary_size
+[diagnose_bloat]: /tools/binary_size/README.md#diagnose_bloat_py
+[relocations]: /docs/native_relocations.md
[template_bloat]: https://bugs.chromium.org/p/chromium/issues/detail?id=716393
-[supersize-console]: //tools/binary_size/README.md#Usage_console
-[arsc-bloat]: https://medium.com/androiddevelopers/smallerapk-part-3-removing-unused-resources-1511f9e3f761#0b72
+[supersize-console]: /tools/binary_size/README.md#Usage_console
-## Optimizing Third-Party Android Dependencies
+### Optimizing Third-Party Android Dependencies
* Look through SuperSize symbols to see whether unwanted functionality
is being pulled in.
- * Use ProGuard's `-whyareyoukeeping` to see why unwanted symbols are kept.
- * Try adding `-assumenosideeffects` rules to strip out unwanted calls.
+ * Use ProGuard's [-whyareyoukeeping] to see why unwanted symbols are kept
+   (e.g. add it to [//base/android/proguard/chromium_apk.flags](/base/android/proguard/chromium_apk.flags)).
+ * Try adding [-assumenosideeffects] rules to strip out unwanted calls
+ (equivalent to adding @RemovableInRelease annotations).
* Consider removing all resources via `strip_resources = true`.
* Remove specific drawables via `resource_blacklist_regex`.
-## Size Optimization Help
+[-whyareyoukeeping]: https://r8-docs.preemptive.com/#keep-rules
+[-assumenosideeffects]: https://r8-docs.preemptive.com/#general-rules
+
- * Feel free to email [binary-size@chromium.org](https://groups.google.com/a/chromium.org/forum/#!forum/binary-size).
+[Feature proposal process]: http://www.chromium.org/developers/new-features
diff --git a/chromium/docs/speed/perf_lab_platforms.md b/chromium/docs/speed/perf_lab_platforms.md
index 57276957f88..33cc8b322db 100644
--- a/chromium/docs/speed/perf_lab_platforms.md
+++ b/chromium/docs/speed/perf_lab_platforms.md
@@ -6,27 +6,28 @@
## Android
- * [android-go-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/android-go-perf): Android O (gobo).
- * [android-go_webview-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/android-go_webview-perf): Android OPM1.171019.021 (gobo).
- * [Android Nexus5 Perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/Android%20Nexus5%20Perf): Android KOT49H.
- * [android-nexus5x-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/android-nexus5x-perf): Android MMB29Q.
- * [Android Nexus5X WebView Perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/Android%20Nexus5X%20WebView%20Perf): Android AOSP MOB30K.
- * [Android Nexus6 WebView Perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/Android%20Nexus6%20WebView%20Perf): Android AOSP MOB30K.
- * [android-pixel2-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/android-pixel2-perf): Android OPM1.171019.021.
- * [android-pixel2_webview-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/android-pixel2_webview-perf): Android OPM1.171019.021.
+ * [android-go-perf](https://ci.chromium.org/p/chrome/builders/ci/android-go-perf): Android O (gobo).
+ * [android-go_webview-perf](https://ci.chromium.org/p/chrome/builders/ci/android-go_webview-perf): Android OPM1.171019.021 (gobo).
+ * [Android Nexus5 Perf](https://ci.chromium.org/p/chrome/builders/ci/Android%20Nexus5%20Perf): Android KOT49H.
+ * [android-nexus5x-perf](https://ci.chromium.org/p/chrome/builders/ci/android-nexus5x-perf): Android MMB29Q.
+ * [Android Nexus5X WebView Perf](https://ci.chromium.org/p/chrome/builders/ci/Android%20Nexus5X%20WebView%20Perf): Android AOSP MOB30K.
+ * [Android Nexus6 WebView Perf](https://ci.chromium.org/p/chrome/builders/ci/Android%20Nexus6%20WebView%20Perf): Android AOSP MOB30K.
+ * [android-pixel2-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2-perf): Android OPM1.171019.021.
+ * [android-pixel2_weblayer-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2_weblayer-perf): Android OPM1.171019.021.
+ * [android-pixel2_webview-perf](https://ci.chromium.org/p/chrome/builders/ci/android-pixel2_webview-perf): Android OPM1.171019.021.
## Linux
- * [linux-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/linux-perf): Ubuntu-14.04, 8 core, NVIDIA Quadro P400.
+ * [linux-perf](https://ci.chromium.org/p/chrome/builders/ci/linux-perf): Ubuntu-14.04, 8 core, NVIDIA Quadro P400.
## Mac
- * [mac-10_12_laptop_low_end-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/mac-10_12_laptop_low_end-perf): MacBook Air, Core i5 1.8 GHz, 8GB RAM, 128GB SSD, HD Graphics.
- * [mac-10_13_laptop_high_end-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/mac-10_13_laptop_high_end-perf): MacBook Pro, Core i7 2.8 GHz, 16GB RAM, 256GB SSD, Radeon 55.
+ * [mac-10_12_laptop_low_end-perf](https://ci.chromium.org/p/chrome/builders/ci/mac-10_12_laptop_low_end-perf): MacBook Air, Core i5 1.8 GHz, 8GB RAM, 128GB SSD, HD Graphics.
+ * [mac-10_13_laptop_high_end-perf](https://ci.chromium.org/p/chrome/builders/ci/mac-10_13_laptop_high_end-perf): MacBook Pro, Core i7 2.8 GHz, 16GB RAM, 256GB SSD, Radeon 55.
## Win
- * [win-10-perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/win-10-perf): Windows Intel HD 630 towers, Core i7-7700 3.6 GHz, 16GB RAM, Intel Kaby Lake HD Graphics 630.
- * [Win 7 Nvidia GPU Perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/Win%207%20Nvidia%20GPU%20Perf): N/A.
- * [Win 7 Perf](https://ci.chromium.org/p/chrome/builders/luci.chrome.ci/Win%207%20Perf): N/A.
+ * [win-10-perf](https://ci.chromium.org/p/chrome/builders/ci/win-10-perf): Windows Intel HD 630 towers, Core i7-7700 3.6 GHz, 16GB RAM, Intel Kaby Lake HD Graphics 630.
+ * [Win 7 Nvidia GPU Perf](https://ci.chromium.org/p/chrome/builders/ci/Win%207%20Nvidia%20GPU%20Perf): N/A.
+ * [Win 7 Perf](https://ci.chromium.org/p/chrome/builders/ci/Win%207%20Perf): N/A.
diff --git a/chromium/docs/speed/perf_regression_sheriffing.md b/chromium/docs/speed/perf_regression_sheriffing.md
index 7cd90c4d923..ef058ffa050 100644
--- a/chromium/docs/speed/perf_regression_sheriffing.md
+++ b/chromium/docs/speed/perf_regression_sheriffing.md
@@ -76,6 +76,12 @@ below it the dashboard shows graphs of all the alerts checked in that table.
bisects as you feel are necessary to investigate; [give feedback](#feedback)
below if you feel that is not the case.
+### Dashboard UI Tips
+
+* Grouping is done client-side today. If you click "Show more" at the bottom
+  until you can see all the alerts, the alerts will be grouped together more.
+* You can shift-click on the checkboxes to select multiple alerts quickly.
+
## Follow up on Performance Regressions
During your shift, you should try to follow up on each of the bugs you filed.
diff --git a/chromium/docs/speed/perf_waterfall.md b/chromium/docs/speed/perf_waterfall.md
index 21c6e90a9ec..6d1de5e8443 100644
--- a/chromium/docs/speed/perf_waterfall.md
+++ b/chromium/docs/speed/perf_waterfall.md
@@ -2,9 +2,10 @@
## Overview
-The [chromium.perf waterfall](https://ci.chromium.org/p/chrome/g/chrome.perf/console)
+The [chrome.perf waterfall](https://ci.chromium.org/p/chrome/g/chrome.perf/console)
continuously builds and runs our performance tests on real Android, Windows,
-Mac, and Linux hardware. Results are reported to the
+Mac, and Linux hardware; see [list of platforms](perf_lab_platforms.md).
+Results are reported to the
[Performance Dashboard](https://chromeperf.appspot.com/) for analysis. The
[Perfbot Health Sheriffing Rotation](bot_health_sheriffing/main.md) ensures that the benchmarks stay green. The [Perf Sheriff Rotation](perf_regression_sheriffing.md) ensures that any regressions detected by those benchmarks are addressed quickly. Together, these rotations maintain
[Chrome's Core Principles](https://www.chromium.org/developers/core-principles)
diff --git a/chromium/docs/sync/model_api.md b/chromium/docs/sync/model_api.md
index 68a53ccaf1c..49ac84d9a90 100644
--- a/chromium/docs/sync/model_api.md
+++ b/chromium/docs/sync/model_api.md
@@ -278,7 +278,6 @@ the next client restart.
[GetUserSelectableTypeInfo].
* Add to the `SyncModelTypes` enum in [`enums.xml`][enums] and to the
`SyncModelType` suffix in [`histograms.xml`][histograms].
-* Add to the [`SYNC_DATA_TYPE_HISTOGRAM`][DataTypeHistogram] macro.
[protocol]: https://cs.chromium.org/chromium/src/components/sync/protocol/
[ModelType]: https://cs.chromium.org/chromium/src/components/sync/base/model_type.h
diff --git a/chromium/docs/testing/android_test_instructions.md b/chromium/docs/testing/android_test_instructions.md
index 2748f04bbed..b7280759691 100644
--- a/chromium/docs/testing/android_test_instructions.md
+++ b/chromium/docs/testing/android_test_instructions.md
@@ -61,7 +61,7 @@ adb shell settings put global package_verifier_enable 0
### Using Emulators
-Running tests on emulators is the same as on device. Refer to
+Running tests on emulators is the same as [on device](#Running-Tests). Refer to
[android_emulator.md](../android_emulator.md) for setting up emulators.
## Building Tests
diff --git a/chromium/docs/testing/code_coverage.md b/chromium/docs/testing/code_coverage.md
index f872e65facb..b437d132770 100644
--- a/chromium/docs/testing/code_coverage.md
+++ b/chromium/docs/testing/code_coverage.md
@@ -164,14 +164,14 @@ download the tools manually ([tools link]).
### Step 1 Build
In Chromium, to compile code with coverage enabled, one needs to add
-`use_clang_coverage=true` and `is_component_build=false` GN flags to the args.gn
-file in the build output directory. Under the hood, they ensure
-`-fprofile-instr-generate` and `-fcoverage-mapping` flags are passed to the
-compiler.
+`use_clang_coverage=true`, `is_component_build=false` and `is_debug=false` GN
+flags to the args.gn file in the build output directory. Under the hood, they
+ensure `-fprofile-instr-generate` and `-fcoverage-mapping` flags are passed to
+the compiler.
```
$ gn gen out/coverage \
- --args='use_clang_coverage=true is_component_build=false'
+ --args='use_clang_coverage=true is_component_build=false is_debug=false'
$ gclient runhooks
$ autoninja -C out/coverage crypto_unittests url_unittests
```
@@ -322,10 +322,7 @@ only reports generated on Linux and CrOS are available on the
### Is coverage reported for the code executed inside the sandbox?
-Not at the moment until [crbug.com/842424] is resolved. We do not disable the
-sandbox when running the tests. However, if there are any other non-sandbox'ed
-tests for the same code, the coverage should be reported from those. For more
-information, see [crbug.com/842424].
+Yes!
[assert]: http://man7.org/linux/man-pages/man3/assert.3.html
@@ -342,7 +339,6 @@ information, see [crbug.com/842424].
[crbug.com/821617]: https://crbug.com/821617
[crbug.com/831939]: https://crbug.com/831939
[crbug.com/834781]: https://crbug.com/834781
-[crbug.com/842424]: https://crbug.com/842424
[crrev.com/c/1172932]: https://crrev.com/c/1172932
[clang roll]: https://crbug.com/841908
[dead code example]: https://chromium.googlesource.com/chromium/src/+/ac6e09311fcc7e734be2ef21a9ccbbe04c4c4706
@@ -353,4 +349,3 @@ information, see [crbug.com/842424].
[How do crashes affect code coverage?]: #how-do-crashes-affect-code-coverage
[known issues]: https://bugs.chromium.org/p/chromium/issues/list?q=component:Infra%3ETest%3ECodeCoverage
[tools link]: https://storage.googleapis.com/chromium-browser-clang-staging/
-[test suite]: https://cs.chromium.org/chromium/src/tools/code_coverage/test_suite.txt
diff --git a/chromium/docs/testing/code_coverage_in_gerrit.md b/chromium/docs/testing/code_coverage_in_gerrit.md
index 03a92b25e4a..b0f102518bc 100644
--- a/chromium/docs/testing/code_coverage_in_gerrit.md
+++ b/chromium/docs/testing/code_coverage_in_gerrit.md
@@ -8,8 +8,17 @@ Chromium CLs can show a line-by-line breakdown of test coverage. **You can use
it to ensure you only submit well-tested code**.
To see code coverage for a Chromium CL, **trigger a CQ dry run**, and once the
-builds finish and code coverage data is processed successfully, **look
-at the right column of the side by side diff view to see coverage information**:
+builds finish and code coverage data is processed successfully, **look at the
+change view to see absolute and incremental code coverage percentages**:
+
+![code_coverage_percentages]
+
+Absolute coverage percentage is the percentage of lines covered by tests
+out of **all the lines** in the file, while incremental coverage percentage only
+accounts for **newly added or modified lines**.
+
+To further dig into specific lines that are not covered by tests, **look at the
+right column of the side by side diff view**:
![code_coverage_annotations]
@@ -18,7 +27,10 @@ trivial-rebase away**, however, if a newly uploaded patchset has
non-trivial code change, a new CQ dry run must be triggered before coverage data
shows up again.
-The code coverage tool currently **supports C/C++ code for Chrome on Linux**;
+The code coverage tool currently supports:
+* C/C++ code for [Chromium on Linux].
+* C/C++ code for [Chromium on Chromium OS].
+
-support for more platforms and more languages is in progress.
+Support for more platforms and more languages is in progress.
## Contacts
@@ -51,8 +63,11 @@ in Gerrit.
[choose_tryjobs]: images/code_coverage_choose_tryjobs.png
[linux_coverage_rel]: images/code_coverage_linux_coverage_rel.png
[code_coverage_annotations]: images/code_coverage_annotations.png
+[code_coverage_percentages]: images/code_coverage_percentages.png
[file a bug]: https://bugs.chromium.org/p/chromium/issues/entry?components=Infra%3ETest%3ECodeCoverage
[code-coverage group]: https://groups.google.com/a/chromium.org/forum/#!forum/code-coverage
[code_coverage.md]: code_coverage.md
[clang_code_coverage_wrapper]: clang_code_coverage_wrapper.md
[chromium-coverage Gerrit plugin]: https://chromium.googlesource.com/infra/gerrit-plugins/code-coverage/
+[Chromium on Chromium OS]: https://chromium.googlesource.com/chromium/src/+/master/docs/chromeos_build_instructions.md
+[Chromium on Linux]: https://chromium.googlesource.com/chromium/src/+/master/docs/linux_build_instructions.md
diff --git a/chromium/docs/testing/images/code_coverage_percentages.png b/chromium/docs/testing/images/code_coverage_percentages.png
new file mode 100644
index 00000000000..e5c3f88d8cb
--- /dev/null
+++ b/chromium/docs/testing/images/code_coverage_percentages.png
Binary files differ
diff --git a/chromium/docs/testing/rendering_representative_perf_tests.md b/chromium/docs/testing/rendering_representative_perf_tests.md
new file mode 100644
index 00000000000..da58248b552
--- /dev/null
+++ b/chromium/docs/testing/rendering_representative_perf_tests.md
@@ -0,0 +1,44 @@
+# Representative Performance Tests for Rendering Benchmark
+
+`rendering_representative_perf_tests` runs a subset of stories from the
+rendering benchmark on the CQ, to prevent performance regressions. For each
+platform there is a `story_tag` that describes the representative stories used
+in this test. These stories are run using the [`run_benchmark`](../../tools/perf/run_benchmark) script, and the recorded values for `frame_times` are then
+compared with the historical upper limits described in [`src/testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json`](../../testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json).
+
+[TOC]
+
+## Clustering the Benchmark and Choosing Representatives
+
+The clustering of the benchmark is based on the historical values recorded for
+`frame_times`. For the steps used to cluster the benchmark, see [Clustering benchmark stories](../../tools/perf/experimental/story_clustering/README.md).
+
+Currently there are three sets of representatives, described by the story tags below:
+* `representative_mac_desktop`
+* `representative_mobile`
+* `representative_win_desktop`
+
+Stories are added to or removed from a representative set by adding or removing
+the story tags above on stories in the [rendering benchmark](../../tools/perf/page_sets/rendering).
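+
+One way to run these representatives locally might look like this (the exact
+benchmark name and flags are illustrative; check `run_benchmark --help` and the
+test configuration for your platform):
+
+```shell
+# Run only the Windows desktop representative stories of the rendering benchmark.
+tools/perf/run_benchmark rendering.desktop \
+    --story-tag-filter=representative_win_desktop \
+    --browser=release
+```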
+
+## Updating the Upper Limits
+
+The upper limits for averages and confidence interval (CI) ranges of
+`frame_times` described in [`src/testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json`](../../testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json)
+are used to decide whether a test passes or fails. These values are the 95th
+percentile of the past 30 runs of the test on each platform (for both the
+average and the CI).
+
+This helps catch sudden regressions, which result in values higher than the
+upper limits. Gradual regressions, however, may go unnoticed if the upper
+limits are not updated frequently. Updating the upper limits regularly also
+lets the limits adapt to improvements.
+
+To update these values, run [`src/tools/perf/experimental/representative_perf_test_limit_adjuster/adjust_upper_limits.py`](../../tools/perf/experimental/representative_perf_test_limit_adjuster/adjust_upper_limits.py) and commit the changes.
+The script creates a new JSON file from the values of recent runs and replaces
+[`src/testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json`](../../testing/scripts/representative_perf_test_data/representatives_frame_times_upper_limit.json) with it.
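+
+A minimal sketch of that workflow (invocation details are illustrative; see the
+script for its exact options):
+
+```shell
+# Regenerate the upper-limit JSON from recent runs, then commit the result.
+python tools/perf/experimental/representative_perf_test_limit_adjuster/adjust_upper_limits.py
+git commit -a -m "Update representative perf test upper limits"
+```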
+
+## Updating Expectations
+
+To skip any of the tests, update
+[`src/tools/perf/expectations.config`](../../tools/perf/expectations.config) and
+add the story under the rendering benchmark. \ No newline at end of file
diff --git a/chromium/docs/testing/web_tests.md b/chromium/docs/testing/web_tests.md
index f38d1356523..3ab8de8c3ba 100644
--- a/chromium/docs/testing/web_tests.md
+++ b/chromium/docs/testing/web_tests.md
@@ -68,11 +68,12 @@ python third_party/blink/tools/run_web_tests.py -t android --android
Tests marked as `[ Skip ]` in
[TestExpectations](../../third_party/blink/web_tests/TestExpectations)
-won't be run at all, generally because they cause some intractable tool error.
+won't be run by default, generally because they cause some intractable tool error.
To force one of them to be run, either rename that file or specify the skipped
-test as the only one on the command line (see below). Read the
-[Web Test Expectations documentation](./web_test_expectations.md) to learn
-more about TestExpectations and related files.
+test on the command line (see below) or in a file specified with --test-list
+(note, however, that --skip=always forces tests marked as `[ Skip ]` to be
+skipped regardless).
+Read the [Web Test Expectations documentation](./web_test_expectations.md) to
+learn more about TestExpectations and related files.
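+
+For example, to run a single skipped test directly (the build directory and
+test path are illustrative):
+
+```shell
+python third_party/blink/tools/run_web_tests.py -t Default path/to/skipped-test.html
+```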
*** promo
Currently only the tests listed in
@@ -220,37 +221,70 @@ There are two ways to run web tests with additional command-line arguments:
`web_tests/FlagExpectations/blocking-repaint`, if this file exists. The
suppressions in this file override the main TestExpectations file.
+ It will also look for baselines in `web_tests/flag-specific/blocking-repaint`.
+ The baselines in this directory override the fallback baselines.
+
+ By default, the name of the expectation file under
+ `web_tests/FlagExpectations` and the name of the baseline directory under
+ `web_tests/flag-specific` are derived from the first --additional-driver-flag,
+ with leading '-'s stripped.
+
+ You can also customize the name in `web_tests/FlagSpecificConfig` when
+ the name is too long or when you need to match multiple additional args:
+
+ ```json
+ {
+ "name": "short-name",
+ "args": ["--blocking-repaint", "--another-flag"]
+ }
+ ```
+
+ When at least `--additional-driver-flag=--blocking-repaint` and
+ `--additional-driver-flag=--another-flag` are specified, `short-name` will
+ be used as the name of the flag-specific expectation file and the baseline
+ directory.
+
+ With the config, you can also use `--flag-specific=short-name` as a shortcut
+ for `--additional-driver-flag=--blocking-repaint --additional-driver-flag=--another-flag`.
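+
+ A minimal sketch of using the shortcut (build directory name is illustrative):
+
+ ```shell
+ python third_party/blink/tools/run_web_tests.py -t Default --flag-specific=short-name
+ ```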
+
* Using a *virtual test suite* defined in
[web_tests/VirtualTestSuites](../../third_party/blink/web_tests/VirtualTestSuites).
- A virtual test suite runs a subset of web tests under a specific path with
- additional flags. For example, you could test a (hypothetical) new mode for
+ A virtual test suite runs a subset of web tests with additional flags, with
+ `virtual/<prefix>/...` in their paths. The tests can be virtual tests that
+ map to real base tests (directories or files) whose paths match any of the
+ specified bases, or any real tests under the `web_tests/virtual/<prefix>/`
+ directory. For example, you could test a (hypothetical) new mode for
repainting using the following virtual test suite:
```json
{
"prefix": "blocking_repaint",
- "base": "fast/repaint",
- "args": ["--blocking-repaint"],
+ "bases": ["compositing", "fast/repaint"],
+ "args": ["--blocking-repaint"]
}
```
This will create new "virtual" tests of the form
+ `virtual/blocking_repaint/compositing/...` and
`virtual/blocking_repaint/fast/repaint/...` which correspond to the files
- under `web_tests/fast/repaint` and pass `--blocking-repaint` to
- content_shell when they are run.
-
- These virtual tests exist in addition to the original `fast/repaint/...`
- tests. They can have their own expectations in TestExpectations, and their own
- baselines. The test harness will use the non-virtual baselines as a fallback.
- However, the non-virtual expectations are not inherited: if
- `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects
+ under `web_tests/compositing` and `web_tests/fast/repaint`, respectively,
+ and pass `--blocking-repaint` to `content_shell` when they are run.
+
+ These virtual tests exist in addition to the original `compositing/...` and
+ `fast/repaint/...` tests. They can have their own expectations in
+ `web_tests/TestExpectations`, and their own baselines. The test harness will
+ use the non-virtual baselines as a fallback. However, the non-virtual
+ expectations are not inherited: if `fast/repaint/foo.html` is marked
+ `[ Fail ]`, the test harness still expects
`virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the
virtual test to also fail, it needs its own suppression.
- The "prefix" value does not have to be unique. This is useful if you want to
- run multiple directories with the same flags (but see the notes below about
- performance). Using the same prefix for different sets of flags is not
- recommended.
+ This will also let any real tests under the
+ `web_tests/virtual/blocking_repaint` directory run with the
+ `--blocking-repaint` flag.
+
+ The "prefix" value should be unique. Multiple directories with the same flags
+ should be listed in the same "bases" list. The "bases" list can be empty,
+ if you only want to run the real tests under `virtual/<prefix>` with the
+ flags, without creating any virtual tests.
For flags whose implementation is still in progress, virtual test suites and
flag-specific expectations represent two alternative strategies for testing.
@@ -273,7 +307,10 @@ Consider the following when choosing between them:
architectural changes that potentially impact all of the tests.
* Note that using wildcards in virtual test path names (e.g.
- `virtual/blocking_repaint/fast/repaint/*`) is not supported.
+ `virtual/blocking_repaint/fast/repaint/*`) is not supported, but you can
+ still use `virtual/blocking_repaint` to run all real and virtual tests
+ in the suite or `virtual/blocking_repaint/fast/repaint/dir` to run real
+ or virtual tests in the suite under a specific directory.
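+
+ For example, with the hypothetical `blocking_repaint` suite above (build
+ directory name is illustrative):
+
+ ```shell
+ # Run every real and virtual test in the suite.
+ python third_party/blink/tools/run_web_tests.py -t Default virtual/blocking_repaint
+ # Run only the suite's tests under one base directory.
+ python third_party/blink/tools/run_web_tests.py -t Default virtual/blocking_repaint/fast/repaint
+ ```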
## Tracking Test Failures
diff --git a/chromium/docs/testing/web_tests_in_content_shell.md b/chromium/docs/testing/web_tests_in_content_shell.md
index 8fcde45de46..609bb5d7677 100644
--- a/chromium/docs/testing/web_tests_in_content_shell.md
+++ b/chromium/docs/testing/web_tests_in_content_shell.md
@@ -158,7 +158,7 @@ See [Run Web Tests Directly with Content Shell](#Run-Web-Tests-Directly-with-Con
In most cases you don't need `--single-process` because `content_shell` is
in single process mode when running most web tests.
-See [DevTools frontend](../../third_party/blink/renderer/devtools/readme.md#basics)
+See [DevTools frontend](../../third_party/devtools-frontend/src/README.md#basics)
for the commands that are useful for debugging devtools web tests.
### In The Default Multiple Process Mode
diff --git a/chromium/docs/threading_and_tasks.md b/chromium/docs/threading_and_tasks.md
index a6e15ddb04f..bb1a7257271 100644
--- a/chromium/docs/threading_and_tasks.md
+++ b/chromium/docs/threading_and_tasks.md
@@ -241,28 +241,24 @@ sequenced_task_runner->PostTask(FROM_HERE, base::BindOnce(&TaskA));
sequenced_task_runner->PostTask(FROM_HERE, base::BindOnce(&TaskB));
```
-### Posting to the Current Sequence
+### Posting to the Current (Virtual) Thread
-The `base::SequencedTaskRunner` to which the current task was posted can be
-obtained via
-[`base::SequencedTaskRunnerHandle::Get()`](https://cs.chromium.org/chromium/src/base/threading/sequenced_task_runner_handle.h).
-
-*** note
-**NOTE:** it is invalid to call `base::SequencedTaskRunnerHandle::Get()` from a
-parallel task, but it is valid from a single-threaded task (a
-`base::SingleThreadTaskRunner` is a `base::SequencedTaskRunner`).
-***
+The preferred way of posting to the current (virtual) thread is via
+`base::SequencedTaskRunnerHandle::Get()`.
```cpp
-// The task will run after any task that has already been posted
-// to the SequencedTaskRunner to which the current task was posted
-// (in particular, it will run after the current task completes).
-// It is also guaranteed that it won’t run concurrently with any
-// task posted to that SequencedTaskRunner.
-base::SequencedTaskRunnerHandle::Get()->
- PostTask(FROM_HERE, base::BindOnce(&Task));
+// The task will run on the current (virtual) thread's default task queue.
+base::SequencedTaskRunnerHandle::Get()->PostTask(
+    FROM_HERE, base::BindOnce(&Task));
```
+Note that `base::SequencedTaskRunnerHandle::Get()` returns the default queue for
+the current virtual thread. On threads with multiple task queues (e.g.
+`BrowserThread::UI`) this can be a different queue than the one the current task
+belongs to. The "current" task runner is intentionally not exposed via a static
+getter: either you know it already and can post to it directly, or you don't,
+and the only sensible destination is the default queue.
+
## Using Sequences Instead of Locks
Usage of locks is discouraged in Chrome. Sequences inherently provide
@@ -380,33 +376,6 @@ Remember that we [prefer sequences to physical
threads](#prefer-sequences-to-physical-threads) and that this thus should rarely
be necessary.
-### Posting to the Current Thread
-
-*** note
-**IMPORTANT:** To post a task that needs mutual exclusion with the current
-sequence of tasks but doesn’t absolutely need to run on the current thread, use
-`base::SequencedTaskRunnerHandle::Get()` instead of
-`base::ThreadTaskRunnerHandle::Get()` (ref. [Posting to the Current
-Sequence](#Posting-to-the-Current-Sequence)). That will better document the
-requirements of the posted task and will avoid unnecessarily making your API
-thread-affine. In a single-thread task, `base::SequencedTaskRunnerHandle::Get()`
-is equivalent to `base::ThreadTaskRunnerHandle::Get()`.
-***
-
-To post a task to the current thread, use
-[`base::ThreadTaskRunnerHandle`](https://cs.chromium.org/chromium/src/base/threading/thread_task_runner_handle.h).
-
-```cpp
-// The task will run on the current thread in the future.
-base::ThreadTaskRunnerHandle::Get()->PostTask(
- FROM_HERE, base::BindOnce(&Task));
-```
-
-*** note
-**NOTE:** It is invalid to call `base::ThreadTaskRunnerHandle::Get()` from a parallel
-or a sequenced task.
-***
-
## Posting Tasks to a COM Single-Thread Apartment (STA) Thread (Windows)
Tasks that need to run on a COM Single-Thread Apartment (STA) thread must be
diff --git a/chromium/docs/ui/ui_devtools/index.md b/chromium/docs/ui/ui_devtools/index.md
index 29dc0a47b9f..79bf4d54617 100644
--- a/chromium/docs/ui/ui_devtools/index.md
+++ b/chromium/docs/ui/ui_devtools/index.md
@@ -5,7 +5,7 @@ currently supported on Linux, Windows, Mac, and ChromeOS.
* [Old Ash Doc](https://www.chromium.org/developers/how-tos/inspecting-ash)
* [Backend Source Code](https://cs.chromium.org/chromium/src/components/ui_devtools/)
-* [Inspector Frontend Source Code](https://cs.chromium.org/chromium/src/third_party/blink/renderer/devtools/front_end/)
+* [Inspector Frontend Source Code](https://chromium.googlesource.com/devtools/devtools-frontend)
## How to run
diff --git a/chromium/docs/updating_clang.md b/chromium/docs/updating_clang.md
index 7006c63736b..00474feec6b 100644
--- a/chromium/docs/updating_clang.md
+++ b/chromium/docs/updating_clang.md
@@ -30,6 +30,8 @@ An archive of all packages built so far is at https://is.gd/chromeclang
gs://chromium-browser-clang/$x/translation_unit-$rev.tgz ; \
gsutil.py cp -n -a public-read gs://chromium-browser-clang-staging/$x/llvm-code-coverage-$rev.tgz \
gs://chromium-browser-clang/$x/llvm-code-coverage-$rev.tgz ; \
+ gsutil.py cp -n -a public-read gs://chromium-browser-clang-staging/$x/libclang-$rev.tgz \
+ gs://chromium-browser-clang/$x/libclang-$rev.tgz ; \
done && gsutil.py cp -n -a public-read gs://chromium-browser-clang-staging/Mac/lld-$rev.tgz \
gs://chromium-browser-clang/Mac/lld-$rev.tgz
```
@@ -41,14 +43,14 @@ An archive of all packages built so far is at https://is.gd/chromeclang
```shell
git cl try &&
- git cl try -B luci.chromium.try -b mac_chromium_asan_rel_ng \
+ git cl try -B chromium/try -b mac_chromium_asan_rel_ng \
-b linux_chromium_cfi_rel_ng \
-b linux_chromium_chromeos_asan_rel_ng -b linux_chromium_msan_rel_ng \
-b linux_chromium_chromeos_msan_rel_ng -b linux-chromeos-dbg \
-b win-asan -b chromeos-amd64-generic-cfi-thin-lto-rel \
-b linux_chromium_compile_dbg_32_ng -b win7-rel \
-b win-angle-deqp-rel-64 &&
- git cl try -B luci.chrome.try -b iphone-device -b ipad-device
+ git cl try -B chrome/try -b iphone-device -b ipad-device
```
1. Optional: Start Pinpoint perf tryjobs. These are generally too noisy to
@@ -94,14 +96,7 @@ criteria:
If you want to add something to the clang package that doesn't (yet?) meet
these criteria, you can make package.py upload it to a separate zip file
-and then download it on an opt-in basis by requiring users to run a script
-to download the additional zip file. You can structure your script in a way that
-it downloads your additional zip automatically if the script detects an
-old version on disk, that way users have to run the download script just
-once. `tools/clang/scripts/download_lld_mac.py` is an example for this
-(It doesn't do the "only download if old version is on disk or if requested"
-bit, and hence doesn't run as a default DEPS hook. TODO(thakis): Make
-coverage stuff a better example and link to that.)
+and then download it on an opt-in basis by using update.py's --package option.
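+
+Users can then fetch the extra zip with something like the following (the
+package name is illustrative and must match what package.py uploaded):
+
+```shell
+tools/clang/scripts/update.py --package=my-extra-package
+```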
If you're adding a new feature that you expect will meet the inclusion criteria
eventually but doesn't yet, start by having your things in a separate zip
diff --git a/chromium/docs/webui_explainer.md b/chromium/docs/webui_explainer.md
index 80b4f693327..75d46bf45a5 100644
--- a/chromium/docs/webui_explainer.md
+++ b/chromium/docs/webui_explainer.md
@@ -157,6 +157,12 @@ Visiting `chrome://donuts` should show in something like:
Delicious success.
+By default, `$i18n{}` escapes strings for HTML. `$i18nRaw{}` can be used for
+translations that embed HTML, and `$i18nPolymer{}` can be used for Polymer
+bindings. See
+[this comment](https://bugs.chromium.org/p/chromium/issues/detail?id=1010815#c1)
+for more information.
+
## C++ classes
### WebUI
diff --git a/chromium/docs/win_cross.md b/chromium/docs/win_cross.md
index c3ede6f440a..7b3d3af60b2 100644
--- a/chromium/docs/win_cross.md
+++ b/chromium/docs/win_cross.md
@@ -82,10 +82,20 @@ Add `target_os = "win"` to your args.gn. Then just build, e.g.
## Goma
-For now, one needs to use the rbe backend, not the (default) borg backend:
+For now, one needs to use the rbe (cloud) backend, not the borg backend, which
+is the default for Googlers:
+```shell
goma_auth.py login
- GOMA_SERVER_HOST=rbe-staging1.endpoints.cxx-compiler-service.cloud.goog goma_ctl.py ensure_start
+
+ # GOMA_* are needed for Googlers only
+ export GOMA_SERVER_HOST=goma.chromium.org
+ export GOMA_RPC_EXTRA_PARAMS=?rbe
+
+ goma_ctl.py ensure_start
+```
+
## Copying and running chrome
diff --git a/chromium/docs/win_order_files.md b/chromium/docs/win_order_files.md
index 49bda3d6218..2366f78a4dd 100644
--- a/chromium/docs/win_order_files.md
+++ b/chromium/docs/win_order_files.md
@@ -35,36 +35,30 @@ To update the order files:
[Process Explorer](https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer)
to be able to see the Process IDs of running programs.
- Run Chrome with the sandbox disabled (otherwise the render process
- instrumentation doesn't get written to disk) and with a startup dialog
- for each renderer:
+ Run Chrome:
```shell
- out\instrument\chrome --no-sandbox --renderer-startup-dialog
+ out\instrument\chrome
```
- Note the Process IDs of the browser and render process (there is sometimes
- more than one; you want the one that loads the New Tab Page).
+ Note the Process ID of the browser process.
- Check in `\src\tmp\` for instrumentation output from those processes, for
- example `cygprofile_14652.txt` and `cygprofile_23592.txt`. The files are
- only written once a certain number of function calls have been made, so
- sometimes the renderer needs to be reloaded in order for the file to be
- produced.
+ Check in `\src\tmp\` for instrumentation output from the process, for
+ example `cygprofile_14652.txt`. The files are only written once a certain
+ number of function calls have been made, so sometimes you need to browse a
+ bit for the file to be produced.
-
-1. If the files appear to have sensible contents (a long list of function names
- that eventually seem related to what the browser and render process should
- do), copy them into `chrome\build\`:
+1. If the file appears to have sensible contents (a long list of function names
+   that seem related to what the browser should be doing), copy it into
+   `chrome\build\`:
```shell
copy \src\tmp\cygprofile_25392.txt chrome\build\chrome.x64.orderfile
- copy \src\tmp\cygprofile_14652.txt chrome\build\chrome_child.x64.orderfile
```
-1. Re-build the `chrome` target. This will re-link `chrome.dll` and
- `chrome_child.dll` using the new order files and surface any link errors if
- the files are broken.
+1. Re-build the `chrome` target. This will re-link `chrome.dll`
+ using the new order file and surface any link errors if
+ the order file is broken.
```shell
ninja -C out\instrument chrome
@@ -72,7 +66,7 @@ To update the order files:
1. Repeat the previous steps with a 32-bit build, i.e. passing
- `target_cpu="x86"` to gn and storing the files as `.x86.orderfile`.
+ `target_cpu="x86"` to gn and storing the file as `.x86.orderfile`.
1. Upload the order files to Google Cloud Storage. They will get downloaded
@@ -83,7 +77,7 @@ To update the order files:
```shell
cd chrome\build\
- upload_to_google_storage.py -b chromium-browser-clang/orderfiles -z orderfile chrome.x64.orderfile chrome.x86.orderfile chrome_child.x64.orderfile chrome_child.x86.orderfile
+ upload_to_google_storage.py -b chromium-browser-clang/orderfiles -z orderfile chrome.x64.orderfile chrome.x86.orderfile
gsutil.py setacl public-read gs://chromium-browser-clang/orderfiles/*
```
diff --git a/chromium/docs/windows_build_instructions.md b/chromium/docs/windows_build_instructions.md
index c24f222d922..dda0accd2a5 100644
--- a/chromium/docs/windows_build_instructions.md
+++ b/chromium/docs/windows_build_instructions.md
@@ -239,7 +239,6 @@ in the editor that appears when you create your output directory
(`gn args out/Default`) or on the gn gen command line
(`gn gen out/Default --args="is_component_build = true is_debug = true"`).
Some helpful settings to consider using include:
-* `use_jumbo_build = true` - [Jumbo/unity](jumbo.md) builds.
* `is_component_build = true` - this uses more, smaller DLLs, and incremental
linking.
* `enable_nacl = false` - this disables Native Client which is usually not
@@ -307,18 +306,29 @@ steps and slowest build-step types, as shown here:
```shell
$ set NINJA_SUMMARIZE_BUILD=1
$ autoninja -C out\Default base
- Longest build steps:
-...
- 1.2 weighted s to build base.dll, base.dll.lib, base.dll.pdb (1.2 s CPU time)
- 8.5 weighted s to build obj/base/base/base_jumbo_38.obj (30.1 s CPU time)
- Time by build-step type:
-...
- 1.2 s weighted time to generate 1 PEFile (linking) files (1.2 s CPU time)
- 30.3 s weighted time to generate 45 .obj files (688.8 s CPU time)
- 31.8 s weighted time (693.8 s CPU time, 21.8x parallelism)
- 86 build steps completed, average of 2.71/s
+Longest build steps:
+ 0.1 weighted s to build obj/base/base/trace_log.obj (6.7 s elapsed time)
+ 0.2 weighted s to build nasm.exe, nasm.exe.pdb (0.2 s elapsed time)
+ 0.3 weighted s to build obj/base/base/win_util.obj (12.4 s elapsed time)
+ 1.2 weighted s to build base.dll, base.dll.lib (1.2 s elapsed time)
+Time by build-step type:
+ 0.0 s weighted time to generate 6 .lib files (0.3 s elapsed time sum)
+ 0.1 s weighted time to generate 25 .stamp files (1.2 s elapsed time sum)
+ 0.2 s weighted time to generate 20 .o files (2.8 s elapsed time sum)
+ 1.7 s weighted time to generate 4 PEFile (linking) files (2.0 s elapsed
+time sum)
+ 23.9 s weighted time to generate 770 .obj files (974.8 s elapsed time sum)
+26.1 s weighted time (982.9 s elapsed time sum, 37.7x parallelism)
+839 build steps completed, average of 32.17/s
```
+The "weighted" time is the elapsed time of each build step divided by the number
+of tasks that were running in parallel. This makes it an excellent approximation
+of how "important" a slow step was. A link that is entirely or mostly serialized
+will have a weighted time that is the same or similar to its elapsed time. A
+compile that runs in parallel with 999 other compiles will have a weighted time
+that is tiny.
+
You can also generate these reports by manually running the script after a build:
```shell
diff --git a/chromium/docs/workflow/debugging-with-swarming.md b/chromium/docs/workflow/debugging-with-swarming.md
index 2de606e2162..a5c4a4119e2 100644
--- a/chromium/docs/workflow/debugging-with-swarming.md
+++ b/chromium/docs/workflow/debugging-with-swarming.md
@@ -54,6 +54,80 @@ or perhaps:
use_swarming_to_run(type, isolate)
```
+## The easy way
+
+A lot of the steps described in this doc have been bundled up into two tools.
+Before using either of them, you will need to [authenticate](#authenticating).
+
+### run-swarmed.py
+
+A lot of the logic below is wrapped up in `tools/run-swarmed.py`, which you can run
+like this:
+
+```
+$ tools/run-swarmed.py $outdir $target
+```
+
+See the `--help` option of `run-swarmed.py` for more details about that script.
+
+### mb.py run
+
+Similar to `tools/run-swarmed.py`, `mb.py run` bundles much of the logic into a
+single command line. Unlike `tools/run-swarmed.py`, `mb.py run` allows the user
+to specify extra arguments to pass to the test, but has a messier command line.
+
+To use it, run:
+```
+$ tools/mb/mb.py run \
+ -s --no-default-dimensions \
+ -d pool $pool \
+ $criteria \
+ $outdir $target \
+ -- $extra_args
+```
+
+## A concrete example
+
+Here's how to run `chrome_public_test_apk` on a bot with a Nexus 5 running KitKat.
+
+```sh
+$ tools/mb/mb.py run \
+ -s --no-default-dimensions \
+ -d pool chromium.tests \
+ -d device_os_type userdebug -d device_os KTU84P -d device_type hammerhead \
+ out/Android-arm-dbg chrome_public_test_apk
+```
+
+This assumes you have an `out/Android-arm-dbg/args.gn` like
+
+```
+ffmpeg_branding = "Chrome"
+is_component_build = false
+is_debug = true
+proprietary_codecs = true
+strip_absolute_paths_from_debug_symbols = true
+symbol_level = 1
+system_webview_package_name = "com.google.android.webview"
+target_os = "android"
+use_goma = true
+```
+
+## Bot selection criteria
+
+The examples in this doc use `$criteria`. To figure out what values to use, you
+can go to an existing swarming run
+([recent tasks page](https://chromium-swarm.appspot.com/tasklist)) and
+look at the `Dimensions` section. Each of these becomes a `-d dimension_name
+dimension_value` in your `$criteria`. Click on `bots` (or go
+[here](https://chromium-swarm.appspot.com/botlist)) to be taken to a UI that
+allows you to try out the criteria interactively, so that you can be sure that
+there are bots matching your criteria. Sometimes the web page shows a
+human-friendly name rather than the name required on the command line. [This
+file](https://cs.chromium.org/chromium/infra/luci/appengine/swarming/ui2/modules/alias.js)
+contains the mapping to human-friendly names. You can test your command line by
+entering `dimension_name:dimension_value` in the interactive UI.
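+
+For example, criteria for a generic Linux test bot might look like this (values
+are illustrative; confirm them against the bot list):
+
+```shell
+criteria="-d os Ubuntu-16.04 -d cpu x86-64 -d pool chromium.tests"
+```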
+
## Building an isolate
At the moment, you can only build an isolate locally, like so (commands you type
@@ -74,6 +148,17 @@ $ tools/mb/mb.py isolate --no-build //$outdir $target
Support for building an isolate using swarming, which would allow you to build
for a platform you can't build for locally, does not yet exist.
+## Authenticating
+
+Before running the tools below, you may need to log in to
+`https://isolateserver.appspot.com`:
+
+```
+$ python tools/swarming_client/auth.py login \
+ --service=https://isolateserver.appspot.com
+```
+
+Use your google.com account for this.
+
## Uploading an isolate
You can then upload the resulting isolate to the isolate server:
@@ -85,13 +170,6 @@ $ tools/swarming_client/isolate.py archive \
-s $outdir/$target.isolated
```
-You may need to log in to `https://isolateserver.appspot.com` to do this:
-
-```
-$ python tools/swarming_client/auth.py login \
- --service=https://isolateserver.appspot.com
-```
-
The `isolate.py` tool will emit something like this:
```
@@ -116,8 +194,8 @@ $ tools/swarming_client/swarming.py trigger \
```
There are two more things you need to fill in here. The first is the pool name;
-you should pick "Chrome" unless you know otherwise. The pool is the collection
-of hosts from which swarming will try to pick bots to run your tasks.
+you should pick "chromium.tests" unless you know otherwise. The pool is the
+collection of hosts from which swarming will try to pick bots to run your tasks.
The second is the criteria, which is how you specify which bot(s) you want your
task scheduled on. These are specified via "dimensions", which are specified
@@ -142,7 +220,7 @@ URL for the task it created, and a command you can run to collect the results of
that task. For example:
```
-Triggered task: ellyjones@chromium.org/os=Linux_pool=Chrome/e625130b712096e3908266252c8cd779d7f442f1
+Triggered task: ellyjones@chromium.org/os=Linux_pool=chromium.tests/e625130b712096e3908266252c8cd779d7f442f1
To collect results, use:
tools/swarming_client/swarming.py collect -S https://chromium-swarm.appspot.com 46fc393777163310
Or visit:
@@ -153,33 +231,6 @@ The 'collect' command given there will block until the task is complete, then
produce the task's results, or you can load that URL and watch the task's
progress.
-## run-swarmed.py
-
-A lot of this logic is wrapped up in `tools/run-swarmed.py`, which you can run
-like this:
-
-```
-$ tools/run-swarmed.py $outdir $target
-```
-
-See the `--help` option of `run-swarmed.py` for more details about that script.
-
-## mb.py run
-
-Similar to `tools/run_swarmed.py`, `mb.py run` bundles much of the logic into a
-single command line. Unlike `tools/run_swarmed.py`, `mb.py run` allows the user
-to specify extra arguments to pass to the test, but has a messier command line.
-
-To use it, run:
-```
-$ tools/mb/mb.py run \
- -s --no-default-dimensions \
- -d pool $pool \
- $criteria \
- $outdir $target \
- -- $extra_args
-```
-
## Other notes
If you are looking at a Swarming task page, be sure to check the bottom of the