author | Allan Sandfeld Jensen <allan.jensen@qt.io> | 2017-03-08 10:28:10 +0100 |
committer | Allan Sandfeld Jensen <allan.jensen@qt.io> | 2017-03-20 13:40:30 +0000 |
commit | e733310db58160074f574c429d48f8308c0afe17 (patch) | |
tree | f8aef4b7e62a69928dbcf880620eece20f98c6df /chromium/docs | |
parent | 2f583e4aec1ae3a86fa047829c96b310dc12ecdf (diff) | |
download | qtwebengine-chromium-e733310db58160074f574c429d48f8308c0afe17.tar.gz |
BASELINE: Update Chromium to 56.0.2924.122
Change-Id: I4e04de8f47e47e501c46ed934c76a431c6337ced
Reviewed-by: Michael Brüning <michael.bruning@qt.io>
Diffstat (limited to 'chromium/docs')
28 files changed, 2995 insertions, 57 deletions
diff --git a/chromium/docs/README.md b/chromium/docs/README.md index 7526e5e04af..c203f4ed8ec 100644 --- a/chromium/docs/README.md +++ b/chromium/docs/README.md @@ -1,6 +1,7 @@ # Chromium docs -This directory contains chromium project documentation in [Markdown]. +This directory contains chromium project documentation in +[Gitiles-flavored Markdown]. It is automatically [rendered by Gitiles](https://chromium.googlesource.com/chromium/src/+/master/docs/). @@ -25,6 +26,6 @@ git cl patch <CL number or URL> ./tools/md_browser/md_browser.py ``` -[Markdown]: https://gerrit.googlesource.com/gitiles/+/master/Documentation/markdown.md +[Gitiles-flavored Markdown]: https://gerrit.googlesource.com/gitiles/+/master/Documentation/markdown.md [style guide]: https://github.com/google/styleguide/tree/gh-pages/docguide [md_browser]: ../tools/md_browser/ diff --git a/chromium/docs/accessibility.md b/chromium/docs/accessibility.md index 5029490c48d..c0c57797e44 100644 --- a/chromium/docs/accessibility.md +++ b/chromium/docs/accessibility.md @@ -1,55 +1,412 @@ # Accessibility Overview -This document describes how accessibility is implemented throughout Chromium at -a high level. +Accessibility means ensuring that all users, including users with disabilities, +have equal access to software. One piece of this involves basic design +principles such as using appropriate font sizes and color contrast, +avoiding using color to convey important information, and providing keyboard +alternatives for anything that is normally accomplished with a pointing device. +However, when you see the word "accessibility" in a directory name in Chromium, +that code's purpose is to provide full access to Chromium's UI via external +accessibility APIs that are utilized by assistive technology. + +**Assistive technology** here refers to software or hardware which +makes use of these APIs to create an alternative interface for the user to +accommodate some specific needs, for example: + +Assistive technology includes: + +* Screen readers for blind users that describe the screen using + synthesized speech or braille +* Voice control applications that let you speak to the computer, +* Switch access that lets you control the computer with a small number + of physical switches, +* Magnifiers that magnify a portion of the screen, and often highlight the + cursor and caret for easier viewing, and +* Assistive learning and literacy software that helps users who have a hard + time reading print, by highlighting and/or speaking selected text + +In addition, because accessibility APIs provide a convenient and universal +way to explore and control applications, they're often used for automated +testing scripts, and UI automation software like password managers. + +Web browsers play an important role in this ecosystem because they need +to not only provide access to their own UI, but also provide access to +all of the content of the web. + +Each operating system has its own native accessibility API. While the +core APIs tend to be well-documented, it's unfortunately common for +screen readers in particular to depend on additional undocumented or +vendor-specific APIs in order to fully function, especially with web +browsers, because the standard APIs are insufficient to handle the +complexity of the web. + +Chromium needs to support all of these operating system and +vendor-specific accessibility APIs in order to be usable with the full +ecosystem of assistive technology on all platforms. 
Just like Chromium +sometimes mimics the quirks and bugs of older browsers, Chromium often +needs to mimic the quirks and bugs of other browsers' implementation +of accessibility APIs, too. ## Concepts -The three central concepts of accessibility are: +While each operating system and vendor accessibility API is different, +there are some concepts all of them share. 1. The *tree*, which models the entire interface as a tree of objects, exposed - to screenreaders or other accessibility software; -2. *Events*, which let accessibility software know that a part of the tree has + to assistive technology via accessibility APIs; +2. *Events*, which let assistive technology know that a part of the tree has changed somehow; -3. *Actions*, which come from accessibility software and ask the interface to +3. *Actions*, which come from assistive technology and ask the interface to change. -Here's an example of an accessibility tree looks like. The following HTML: +Consider the following small HTML file: ``` -<select title="Select A"> - <option value="1">Option 1</option> - <option value="2" selected>Option 2</option> - <option value="3">Option 3</option> -</select> +<html> +<head> + <title>How old are you?</title> +</head> +<body> + <label for="age">Age</label> + <input id="age" type="number" name="age" value="42"> + <div> + <button>Back</button> + <button>Next</button> + </div> +</body> +</html> ``` -has a generated accessibility tree like this: +### The Accessibility Tree and Accessibility Attributes + +Internally, Chromium represents the accessibility tree for that web page +using a data structure something like this: ``` -0: AXMenuList title="Select A" -1: AXMenuListOption title="Option 1" -2: AXMenuListOption title="Option 2" selected -3: AXMenuListOption title="Option 3" +id=1 role=WebArea name="How old are you?" + id=2 role=Label name="Age" + id=3 role=TextField labelledByIds=[2] value="42" + id=4 role=Group + id=5 role=Button name="Back" + id=6 role=Button name="Next" ``` -Given that accessibility tree, an example of the events generated when selecting -"Option 1" might be: +Note that the tree structure closely resembles the structure of the +HTML elements, but slightly simplified. Each node in the accessibility +tree has an ID and a role. Many have a name. The text field has a value, +and instead of a name it has labelledByIds, which indicates that its +accessible name comes from another node in the tree, the label node +with id=2. + +On a particular platform, each node in the accessibility tree is implemented +by an object that conforms to a particular protocol. + +On Windows, the root node implements the IAccessible protocol and +if you call IAccessible::get_accRole, it returns ROLE_SYSTEM_DOCUMENT, +and if you call IAccessible::get_accName, it returns "How old are you?". +Other methods let you walk the tree. + +On macOS, the root node implements the NSAccessibility protocol and +if you call [NSAccessibility accessibilityRole], it returns @"AXWebArea", +and if you call [NSAccessibility accessibilityLabel], it returns +"How old are you?". + +The Linux accessibility API, ATK, is more similar to the Windows APIs; +they were developed together. (Chrome's support for desktop Linux +accessibility is unfinished.) + +The Android accessibility API is of course based on Java. The main +data structure is AccessibilityNodeInfo. 
It doesn't have a role, but +if you call AccessibilityNodeInfo.getClassName() on the root node +it returns "android.webkit.WebView", and if you call +AccessibilityNodeInfo.getContentDescription() it returns "How old are you?". + +On Chrome OS, we use our own accessibility API that closely maps to +Chrome's internal accessibility API. + +So while the details of the interface vary, the underlying concepts are +similar. Both IAccessible and NSAccessibility have a concept of a role, +but IAccessible uses a role of "document" for a web page, while NSAccessibility +uses a role of "web area". Both IAccessible and NSAccessibility have a +concept of the primary accessible text for a node, but IAccessible calls +it the "name" while NSAccessibility calls it the "label", and Android +calls it a "content description". + +**Historical note:** The internal names of roles and attributes in +Chrome often tend to most closely match the macOS accessibility API +because Chromium was originally based on WebKit, where most of the +accessibility code was written by Apple. Over time we're slowly +migrating internal names to match what those roles and attributes are +called in web accessibility standards, like ARIA. + +### Accessibility Events + +In Chromium's internal terminology, an Accessibility Event always represents +communication from the app to the assistive technology, indicating that the +accessibility tree changed in some way. + +As an example, if the user were to press the Tab key and the text +field from the example above became focused, Chromium would fire a +"focus" accessibility event that assistive technology could listen +to. A screen reader might then announce the name and current value of +the text field. A magnifier might zoom the screen to its bounding +box. If the user types some text into the text field, Chromium would +fire a "value changed" accessibility event. + +As with nodes in the accessibility tree, each platform has a slightly different +API for accessibility events. On Windows we'd fire EVENT_OBJECT_FOCUS for +a focus change, and on Mac we'd fire @"AXFocusedUIElementChanged". +Those are pretty similar. Sometimes they're quite different - to support +live regions (notifications that certain key parts of a web page have changed), +on Mac we simply fire @"AXLiveRegionChanged", but on Windows we need to +fire IA2_EVENT_TEXT_INSERTED and IA2_EVENT_TEXT_REMOVED events individually +on each affected node within the changed region, with additional attributes +like "container-live:polite" to indicate that the affected node was part of +a live region. This discussion is not meant to explain all of the technical +details but just to illustrate that the concepts are similar, +but the details of notifying software on each platform about changes can +vary quite a bit. + +### Accessibility Actions + +Each native object that implements a platform's native accessibility API +supports a number of actions, which are requests from the assistive +technology to control or change the UI. This is the opposite of events, +which are messages from Chromium to the assistive technology. + +For example, if the user had a voice control application running, such as +Voice Access on Android, the user could just speak the name of one of the +buttons on the page, like "Next". Upon recognizing that text and finding +that it matches one of the UI elements on the page, the voice control +app executes the action to click the button id=6 in Chromium's accessibility +tree. 
Internally we call that action "do default" rather than click, since +it represents the default action for any type of control. + +Other examples of actions include setting focus, changing the value of +a control, and scrolling the page. + +### Parameterized attributes + +In addition to accessibility attributes, events, and actions, native +accessibility APIs often have so-called "parameterized attributes". +The most common example of this is for text - for example there may be +a function to retrieve the bounding box for a range of text, or a +function to retrieve the text properties (font family, font size, +weight, etc.) at a specific character position. + +Parameterized attributes are particularly tricky to implement because +of Chromium's multi-process architecture. More on this in the next section. + +## Chromium's multi-process architecture + +Native accessibility APIs tend to have a *functional* interface, where +Chromium implements an interface for a canonical accessible object that +includes methods to return various attributes, walk the tree, or perform +an action like click(), focus(), or setValue(...). + +In contrast, the web has a largely *declarative* interface. The shape +of the accessibility tree is determined by the DOM tree (occasionally +influenced by CSS), and the accessible semantics of a DOM element can +be modified by adding ARIA attributes. + +One important complication is that all of these native accessibility APIs +are *synchronous*, while Chromium is multi-process, with the contents of +each web page living in a different process than the process that +implements Chromium's UI and the native accessibility APIs. Furthermore, +the renderer processes are *sandboxed*, so they can't implement +operating system APIs directly. + +If you're unfamiliar with Chrome's multi-process architecture, see +[this blog post introducing the concept]( +https://blog.chromium.org/2008/09/multi-process-architecture.html) or +[the design doc on chromium.org]( +https://www.chromium.org/developers/design-documents/multi-process-architecture) +for an intro. + +Chromium's multi-process architecture means that we can't implement +accessibility APIs the same way that a single-process browser can - +namely, by calling directly into the DOM to compute the result of each +API call. For example, on some operating systems there might be an API +to get the bounding box for a particular range of characters on the +page. In other browsers, this might be implemented by creating a DOM +selection object and asking for its bounding box. + +That implementation would be impossible in Chromium because it'd require +blocking the main thread while waiting for a response from the renderer +process that implements that web page's DOM. (Not only is blocking the +main thread strictly disallowed, but the latency of doing this for every +API call makes it prohibitively slow anyway.) Instead, Chromium takes an +approach where a representation of the entire accessibility tree is +cached in the main process. Great care needs to be taken to ensure that +this representation is as concise as possible. + +In Chromium, we build a data structure representing all of the +information for a web page's accessibility tree, send the data +structure from the renderer process to the main browser process, cache +it in the main browser process, and implement native accessibility +APIs using solely the information in that cache. 
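To make that flow concrete, here is a minimal, purely illustrative C++ sketch of the idea (the class and member names are hypothetical and deliberately simplified, not Chromium's real interfaces): the browser process keeps a map of node data keyed by node ID, applies updates arriving from the renderer, and answers platform accessibility queries synchronously from that map alone.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical, stripped-down record for one cached accessibility node.
struct CachedNodeData {
  int32_t id = 0;
  std::string role;
  std::string name;
};

// Hypothetical browser-side cache. Platform accessibility API calls are
// answered synchronously from this map; nothing here ever calls back into
// the sandboxed renderer process.
class BrowserSideTreeCache {
 public:
  // Invoked when a serialized node update arrives from the renderer.
  void ApplyUpdate(const CachedNodeData& data) { nodes_[data.id] = data; }

  // A platform query such as "what is this node's name?" is a local lookup.
  std::string GetName(int32_t node_id) const {
    auto it = nodes_.find(node_id);
    return it == nodes_.end() ? std::string() : it->second.name;
  }

 private:
  std::unordered_map<int32_t, CachedNodeData> nodes_;
};
```

The real cache stores far richer node data (see the discussion of `ui::AXNodeData` below) and applies whole tree updates atomically, but the key property is the same: every platform API call is served from local state in the browser process.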
+ +As the accessibility tree changes, tree updates and accessibility events +get sent from the renderer process to the browser process. The browser +cache is updated atomically in the main thread, so whenever an external +client (like assistive technology) calls an accessibility API function, +we're always returning something from a complete and consistent snapshot +of the accessibility tree. From time to time, the cache may lag what's +in the renderer process by a fraction of a second. + +Here are some of the specific challenges faced by this approach and +how we've addressed them. + +### Sparse data + +There are a *lot* of possible accessibility attributes for any given +node in an accessibility tree. For example, there are more than 150 +unique accessibility API methods that Chrome implements on the Windows +platform alone. We need to implement all of those APIs, many of which +request rather rare or obscure attributes, but storing all possible +attribute values in a single struct would be quite wasteful. + +To avoid each accessible node object containing hundreds of fields the +data for each accessibility node is stored in a relatively compact +data structure, ui::AXNodeData. Every AXNodeData has an integer ID, a +role enum, and a couple of other mandatory fields, but everything else +is stored in attribute arrays, one for each major data type. ``` -AXMenuListItemUnselected 2 -AXMenuListItemSelected 1 -AXMenuListValueChanged 0 +struct AXNodeData { + int32_t id; + AXRole role; + ... + std::vector<std::pair<AXStringAttribute, std::string>> string_attributes; + std::vector<std::pair<AXIntAttribute, int32_t>> int_attributes; + ... +} ``` -An example of a command used to change the selection from "Option 1" to "Option -3" might be: +So if a text field has a placeholder attribute, we can store +that by adding an entry to `string_attributes` with an attribute +of ui::AX_ATTR_PLACEHOLDER and the placeholder string as the value. + +### Incremental tree updates + +Web pages change frequently. It'd be terribly inefficient to send a +new copy of the accessibility tree every time any part of it changes. +However, the accessibility tree can change shape in complicated ways - +for example, whole subtrees can be reparented dynamically. + +Rather than writing code to deal with every possible way the +accessibility tree could be modified, Chromium has a general-purpose +tree serializer class that's designed to send small incremental +updates of a tree from one process to another. The tree serializer has +just a few requirements: + +* Every node in the tree must have a unique integer ID. +* The tree must be acyclic. +* The tree serializer must be notified when a node's data changes. +* The tree serializer must be notified when the list of child IDs of a + node changes. + +The tree serializer doesn't know anything about accessibility attributes. +It keeps track of the previous state of the tree, and every time the tree +structure changes (based on notifications of a node changing or a node's +children changing), it walks the tree and builds up an incremental tree +update that serializes as few nodes as possible. + +In the other process, the Unserialization code applies the incremental +tree update atomically. + +### Text bounding boxes + +One challenge faced by Chromium is that accessibility clients want to be +able to query the bounding box of an arbitrary range of text - not necessarily +just the current cursor position or selection. 
As discussed above, it's +not possible to block Chromium's main browser process while waiting for this +information from Blink, so instead we cache enough information to satisfy these +queries in the accessibility tree. + +To compactly store the bounding box of every character on the page, we +split the text into *inline text boxes*, sometimes called *text runs*. +For example, in a typical paragraph, each line of text would be its own +inline text box. In general, an inline text box or text run contains a +sequence of text characters that are all oriented in the same direction, +in a line, with the same font, size, and style. + +Each inline text box stores its own bounding box, and then the relative +x-coordinate of each character in its text (assuming left-to-right). +From that it's possible to compute the bounding box +of any individual character. + +The inline text boxes are part of Chromium's internal accessibility tree. +They're used purely internally and aren't ever exposed directly via any +native accessibility APIs. + +For example, suppose that a document contains a text field with the text +"Hello world", but the field is narrow, so "Hello" is on the first line and +"World" is on the second line. Internally Chromium's accessibility tree +might look like this: ``` -AccessibilityMsg_DoDefaultAction 3 +staticText location=(8, 8) size=(38, 36) name='Hello world' + inlineTextBox location=(0, 0) size=(36, 18) name='Hello ' characterOffsets=12,19,23,28,36 + inlineTextBox location=(0, 18) size=(38, 18) name='world' characterOffsets=12,20,25,29,37 ``` -All three concepts are handled at several layers in Chromium. +### Scrolling, transformations, and animation + +Native accessibility APIs typically want the bounding box of every element in the +tree, either in window coordinates or global screen coordinates. If we +stored the global screen coordinates for every node, we'd be constantly +re-serializing the whole tree every time the user scrolls or drags the +window. + +Instead, we store the bounding box of each node in the accessibility tree +relative to its *offset container*, which can be any ancestor. If no offset +container is specified, it's assumed to be the root of the tree. + +In addition, any offset container can contain scroll offsets, which can be +used to scroll the bounding boxes of anything in that subtree. + +Finally, any offset container can also include an arbitrary 4x4 transformation +matrix, which can be used to represent arbitrary 3-D rotations, translations, and +scaling, and more. The transformation matrix applies to the whole subtree. + +Storing coordinates this way means that any time an object scrolls, moves, or +animates its position and scale, only the root of the scrolling or animation +needs to post updates to the accessibility tree. Everything in the subtree +remains valid relative to that offset container. + +Computing the global screen coordinates for an object in the accessibility +tree just means walking up its ancestor chain and applying offsets and +occasionally multiplying by a 4x4 matrix. + +### Site isolation / out-of-process iframes + +At one point in time, all of the content of a single Tab or other web view +was contained in the same Blink process, and it was possible to serialize +the accessibility tree for a whole frame tree in a single pass. + +Today the situation is a bit more complicated, as Chromium supports +out-of-process iframes.
(It also supports "browser plugins" such as +the `<webview>` tag in Chrome packaged apps, which embeds a whole +browser inside a browser, but for the purposes of accessibility this +is handled the same as frames.) + +Rather than a mix of in-process and out-of-process frames that are handled +differently, Chromium builds a separate independent accessibility tree +for each frame. Each frame gets its own tree ID, and it keeps track of +the tree ID of its parent frame (if any) and any child frames. + +In Chrome's main browser process, the accessibility trees for each frame +are cached separately, and when an accessibility client (assistive +technology) walks the accessibility tree, Chromium dynamically composes +all of the frames into a single virtual accessibility tree on the fly, +using those aforementioned tree IDs. + +The node IDs for accessibility trees only need to be unique within a +single frame. Where necessary, separate unique IDs are used within +Chrome's main browser process. In Chromium accessibility, a "node ID" +always means that ID that's only unique within a frame, and a "unique ID" +means an ID that's globally unique. ## Blink @@ -106,7 +463,7 @@ usually forwarded to [BrowserAccessibilityManager] which is responsible for: On Chrome OS, RenderFrameHostImpl does not route events to BrowserAccessibilityManager at all, since there is no platform screenreader -outside Chrome to integrate with. +outside Chromium to integrate with. ## Views diff --git a/chromium/docs/android_accessing_cpp_enums_in_java.md b/chromium/docs/android_accessing_cpp_enums_in_java.md new file mode 100644 index 00000000000..0fb3729c492 --- /dev/null +++ b/chromium/docs/android_accessing_cpp_enums_in_java.md @@ -0,0 +1,150 @@ +# Accessing C++ Enums In Java + +[TOC] + +## Introduction + +Accessing C++ enums in Java is implemented via a Python script which analyzes +the C++ enum and spits out the corresponding Java class. The enum needs to be +annotated in a particular way. By default, the generated class name will be the +same as the name of the enum. If all the names of the enum values are prefixed +with the MACRO\_CASED\_ name of the enum those prefixes will be stripped from +the Java version. + +## Features +* Customize the package name of the generated class using the +`GENERATED_JAVA_ENUM_PACKAGE` directive (required) +* Customize the class name using the `GENERATED_JAVA_CLASS_NAME_OVERRIDE` +directive (optional) +* Strip enum entry prefixes to make the generated classes less verbose using +the `GENERATED_JAVA_PREFIX_TO_STRIP` directive (optional) +* Supports +[`@IntDef`](https://developer.android.com/reference/android/support/annotation/IntDef.html) +* Copies comments that directly preceed enum entries into the generated Java +class + +## Usage + +1. Add directives to your C++ enum + + ```cpp + // GENERATED_JAVA_ENUM_PACKAGE: org.chromium.chrome + // GENERATED_JAVA_CLASS_NAME_OVERRIDE: FooBar + // GENERATED_JAVA_PREFIX_TO_STRIP: BAR_ + enum SomeEnum { + BAR_A, + BAR_B, + BAR_C = BAR_B, + }; + ``` + +2. Add a new build target + + ``` + import("//build/config/android/rules.gni") + + java_cpp_enum("foo_generated_enum") { + sources = [ + "base/android/native_foo_header.h", + ] + } + ``` + +3. Add the new target to the desired android_library targets srcjar_deps: + + ``` + android_library("base_java") { + srcjar_deps = [ + ":foo_generated_enum", + ] + } + ``` + +4. 
The generated file `org/chromium/chrome/FooBar.java` would contain: + + ```java + package org.chromium.chrome; + + import android.support.annotation.IntDef; + + import java.lang.annotation.Retention; + import java.lang.annotation.RetentionPolicy; + + public class FooBar { + @IntDef({ + A, B, C + }) + @Retention(RetentionPolicy.SOURCE) + public @interface FooBarEnum {} + public static final int A = 0; + public static final int B = 1; + public static final int C = 1; + } + ``` + +## Formatting Notes + +* Handling long package names: + + ``` + // GENERATED_JAVA_ENUM_PACKAGE: ( + // org.chromium.chrome.this.package.is.too.long.to.fit.on.a.single.line) + ``` + +* Enum entries + * Single line enums should look like this: + + // GENERATED_JAVA_ENUM_PACKAGE: org.foo + enum NotificationActionType { BUTTON, TEXT }; + + * Multi-line enums should have one enum entry per line, like this: + + // GENERATED_JAVA_ENUM_PACKAGE: org.foo + enum NotificationActionType { + BUTTON, + TEXT + }; + + * Multi-line enum entries are allowed but should be formatted like this: + + // GENERATED_JAVA_ENUM_PACKAGE: org.foo + enum NotificationActionType { + LongKeyNumberOne, + LongKeyNumberTwo, + ... + LongKeyNumberThree = + LongKeyNumberOne | LongKeyNumberTwo | ... + }; + +* Preserving comments + + ```cpp + // GENERATED_JAVA_ENUM_PACKAGE: org.chromium + enum CommentEnum { + // This comment will be preserved. + ONE, + TWO, // This comment will NOT be preserved. + THREE + } + ``` + + ```java + ... + public class CommentEnum { + ... + /** + * This comment will be preserved. + */ + public static final int ONE = 0; + public static final int TWO = 1; + public static final int THREE = 2; + } + ``` + +## Code +* [Generator +code](https://cs.chromium.org/chromium/src/build/android/gyp/java_cpp_enum.py?dr=C&sq=package:chromium) +and +[Tests](https://cs.chromium.org/chromium/src/build/android/gyp/java_cpp_enum_tests.py?dr=C&q=java_cpp_enum_tests&sq=package:chromium&l=1) +* [GN +template](https://cs.chromium.org/chromium/src/build/config/android/rules.gni?q=java_cpp_enum.py&sq=package:chromium&dr=C&l=458) diff --git a/chromium/docs/android_studio.md b/chromium/docs/android_studio.md index 35690bc55c4..95258865a3f 100644 --- a/chromium/docs/android_studio.md +++ b/chromium/docs/android_studio.md @@ -90,12 +90,16 @@ includes `R.java`). Gradle builds can be done from the command-line after importing the project into Android Studio (importing into the IDE causes the Gradle wrapper to be added). +This wrapper can also be used to invoke gradle commands. cd $GRADLE_PROJECT_DIR && bash gradlew The resulting artifacts are not terribly useful. They are missing assets, resources, native libraries, etc. + * Use a [gradle daemon](https://docs.gradle.org/2.14.1/userguide/gradle_daemon.html) to speed up builds: + * Add the line `org.gradle.daemon=true` to `~/.gradle/gradle.properties`, creating it if necessary. + ## Status (as of Sept 21, 2016) ### What works @@ -105,8 +109,7 @@ resources, native libraries, etc. 
### What doesn't work (yet) ([crbug](https://bugs.chromium.org/p/chromium/issues/detail?id=620034)) - * JUnit Test targets - * Better support for instrumtation tests (they are treated as non-test .apks right now) + * Better support for instrumentation tests (they are treated as non-test .apks right now) * Make gradle aware of resources and assets * Make gradle aware of native code via pointing it at the location of our .so * Add a mode in which gradle is responsible for generating `R.java` diff --git a/chromium/docs/android_test_instructions.md b/chromium/docs/android_test_instructions.md index 6524177a06f..6d0ed7790d0 100644 --- a/chromium/docs/android_test_instructions.md +++ b/chromium/docs/android_test_instructions.md @@ -152,11 +152,11 @@ with the following commands: ```shell # Resize userdata partition to be 1G -resize2fs android_emulator_sdk/sdk/system-images/android-23/x86/userdata.img 1G +resize2fs android_emulator_sdk/sdk/system-images/android-24/x86/userdata.img 1G # Set filesystem parameter to continue on errors; Android doesn't like some # things e2fsprogs does. -tune2fs -e continue android_emulator_sdk/sdk/system-images/android-23/x86/userdata.img +tune2fs -e continue android_emulator_sdk/sdk/system-images/android-24/x86/userdata.img ``` ## Symbolizing Crashes @@ -298,8 +298,7 @@ You might want to add stars `*` to each as a regular expression, e.g. ## Running Blink Layout Tests -See -https://sites.google.com/a/chromium.org/dev/developers/testing/webkit-layout-tests +See [Layout Tests](testing/layout_tests.md). ## Running GPU tests diff --git a/chromium/docs/atom.md b/chromium/docs/atom.md index 999b3844200..c6d729030f5 100644 --- a/chromium/docs/atom.md +++ b/chromium/docs/atom.md @@ -8,7 +8,7 @@ A typical Atom workflow consists of the following. 1. Use `Ctrl-Shift-R` to find a symbol in the `.tags` file or `Ctrl-P` to find a file by name. -2. Switch between the header and the source using `Alt-O`. +2. Switch between the header and the source using `Alt-O`(`Ctrl-Opt-S` on OSX). 3. While editing, `you-complete-me` package helps with C++ auto-completion and shows compile errors through `lint` package. 4. Press `Ctrl-Shift-P` and type `format<Enter>` to format the code. @@ -22,7 +22,7 @@ To setup this workflow, install Atom packages for Chrome development. ``` $ apm install build-ninja clang-format \ - linter linter-eslint switch-header-source you-complete-me + linter linter-cpplint linter-eslint switch-header-source you-complete-me ``` ## Autocomplete diff --git a/chromium/docs/chromoting_android_hacking.md b/chromium/docs/chromoting_android_hacking.md index ba708b5387e..14f6bb222bb 100644 --- a/chromium/docs/chromoting_android_hacking.md +++ b/chromium/docs/chromoting_android_hacking.md @@ -72,8 +72,6 @@ display log messages to the `LogCat` pane. 
<classpathentry kind="src" path="mojo/public/java/src"/> <classpathentry kind="src" path="mojo/android/system/src"/> <classpathentry kind="src" path="mojo/android/javatests/src"/> -<classpathentry kind="src" path="services/shell/android/apk/src"/> -<classpathentry kind="src" path="mojo/services/native_viewport/android/src"/> <classpathentry kind="src" path="testing/android/java/src"/> <classpathentry kind="src" path="printing/android/java/src"/> <classpathentry kind="src" path="tools/binary_size/java/src"/> diff --git a/chromium/docs/how_to_extend_layout_test_framework.md b/chromium/docs/how_to_extend_layout_test_framework.md index 32d3b33f1aa..db3e495fabc 100644 --- a/chromium/docs/how_to_extend_layout_test_framework.md +++ b/chromium/docs/how_to_extend_layout_test_framework.md @@ -12,8 +12,8 @@ to help people who want to actually the framework to test whatever they want. ## Background Before you can start actually extending the framework, you should be familiar -with how to use it. This wiki is basically all you need to learn how to use it -http://www.chromium.org/developers/testing/webkit-layout-tests +with how to use it. See the +[layout tests documentation](testing/layout_tests.md). ## How to Extend the Framework @@ -129,7 +129,7 @@ Here are some of the functions that most likely need to be overridden. * `layout_tests_dir` * This tells the port where to look for all the and everything associated with them such as resources files. - * By default it returns absolute path to the webkit tests. + * By default it returns the absolute path to the layout tests directory. * If you are planning on running something in the chromium src/ directory, there are helper functions to allow you to return a path relative to the base of the chromium src directory. diff --git a/chromium/docs/images/ozone_caca.jpg b/chromium/docs/images/ozone_caca.jpg Binary files differnew file mode 100644 index 00000000000..28ccd37b628 --- /dev/null +++ b/chromium/docs/images/ozone_caca.jpg diff --git a/chromium/docs/layout_tests_linux.md b/chromium/docs/layout_tests_linux.md index d113043e5e0..b1c57e42976 100644 --- a/chromium/docs/layout_tests_linux.md +++ b/chromium/docs/layout_tests_linux.md @@ -10,8 +10,7 @@ `src/third_party/WebKit/LayoutTests/fast/`. 1. When the tests finish, any unexpected results should be displayed. -See -[Running WebKit Layout Tests](http://dev.chromium.org/developers/testing/webkit-layout-tests) +See [Layout Tests](testing/layout_tests.md) for full documentation about set up and available options. 
## Pixel Tests diff --git a/chromium/docs/linux_build_instructions.md b/chromium/docs/linux_build_instructions.md index 4f97bf2ea3e..6ab3841944e 100644 --- a/chromium/docs/linux_build_instructions.md +++ b/chromium/docs/linux_build_instructions.md @@ -60,21 +60,22 @@ For openSUSE 11.0 and later, see Recent systems: - su -c 'yum install subversion pkgconfig python perl gcc-c++ bison flex \ - gperf nss-devel nspr-devel gtk2-devel glib2-devel freetype-devel atk-devel \ - pango-devel cairo-devel fontconfig-devel GConf2-devel dbus-devel \ - alsa-lib-devel libX11-devel expat-devel bzip2-devel dbus-glib-devel \ - elfutils-libelf-devel libjpeg-devel mesa-libGLU-devel libXScrnSaver-devel \ - libgnome-keyring-devel cups-devel libXtst-devel libXt-devel pam-devel httpd \ - mod_ssl php php-cli wdiff' + su -c 'yum install git python bzip2 tar pkgconfig atk-devel alsa-lib-devel \ + bison binutils brlapi-devel bluez-libs-devel bzip2-devel cairo-devel \ + cups-devel dbus-devel dbus-glib-devel expat-devel fontconfig-devel \ + freetype-devel gcc-c++ GConf2-devel glib2-devel glibc.i686 gperf \ + glib2-devel gtk2-devel gtk3-devel java-1.*.0-openjdk-devel libatomic \ + libcap-devel libffi-devel libgcc.i686 libgnome-keyring-devel libjpeg-devel \ + libstdc++.i686 libX11-devel libXScrnSaver-devel libXtst-devel \ + libxkbcommon-x11-devel ncurses-compat-libs nspr-devel nss-devel pam-devel \ + pango-devel pciutils-devel pulseaudio-libs-devel zlib.i686 httpd mod_ssl \ + php php-cli python-psutil wdiff' The msttcorefonts packages can be obtained by following the instructions present [here](http://www.fedorafaq.org/#installfonts). For the optional packages: * php-cgi is provided by the php-cli package -* wdiff doesn't exist in Fedora repositories, a possible alternative would be - dwdiff * sun-java6-fonts doesn't exist in Fedora repositories, needs investigating ### Arch Linux diff --git a/chromium/docs/linux_faster_builds.md b/chromium/docs/linux_faster_builds.md index 9582b389c18..06c571b56da 100644 --- a/chromium/docs/linux_faster_builds.md +++ b/chromium/docs/linux_faster_builds.md @@ -41,7 +41,7 @@ When you use Icecc, you need to [set some GN variables](https://www.chromium.org The `-B` option is not supported. [relevant commit](https://github.com/icecc/icecream/commit/b2ce5b9cc4bd1900f55c3684214e409fa81e7a92) - linux_use_debug_fission = false + use_debug_fission = false [debug fission](http://gcc.gnu.org/wiki/DebugFission) is not supported. [bug](https://github.com/icecc/icecream/issues/86) diff --git a/chromium/docs/memory-infra/README.md b/chromium/docs/memory-infra/README.md new file mode 100644 index 00000000000..edb573200f0 --- /dev/null +++ b/chromium/docs/memory-infra/README.md @@ -0,0 +1,171 @@ +# MemoryInfra + +MemoryInfra is a timeline-based profiling system integrated in chrome://tracing. +It aims at creating Chrome-scale memory measurement tooling so that on any +Chrome in the world --- desktop, mobile, Chrome OS or any other --- with the +click of a button you can understand where memory is being used in your system. + +[TOC] + +## Getting Started + + 1. Get a bleeding-edge or tip-of-tree build of Chrome. + + 2. [Record a trace as usual][record-trace]: open [chrome://tracing][tracing] + on Desktop Chrome or [chrome://inspect?tracing][inspect-tracing] to trace + Chrome for Android. + + 3. Make sure to enable the **memory-infra** category on the right. + + ![Tick the memory-infra checkbox when recording a trace.][memory-infra-box] + + 4. 
For now, some subsystems only work if Chrome is started with the + `--no-sandbox` flag. + <!-- TODO(primiano) TODO(ssid): https://crbug.com/461788 --> + +[record-trace]: https://sites.google.com/a/chromium.org/dev/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs +[tracing]: chrome://tracing +[inspect-tracing]: chrome://inspect?tracing +[memory-infra-box]: https://storage.googleapis.com/chromium-docs.appspot.com/1c6d1886584e7cc6ffed0d377f32023f8da53e02 + +![Timeline View and Analysis View][tracing-views] + +After recording a trace, you will see the **timeline view**. Timeline view +shows: + + * Total resident memory grouped by process (at the top). + * Total resident memory grouped by subsystem (at the top). + * Allocated memory per subsystem for every process. + +Click one of the ![M][m-blue] dots to bring up the **analysis view**. Click +on a cell in analysis view to reveal more information about its subsystem. +PartitionAlloc for instance, has more details about its partitions. + +![Component details for PartitionAlloc][partalloc-details] + +The purple ![M][m-purple] dots represent heavy dumps. In these dumps, components +can provide more details than in the regular dumps. The full details of the +MemoryInfra UI are explained in its [design doc][mi-ui-doc]. + +[tracing-views]: https://storage.googleapis.com/chromium-docs.appspot.com/db12015bd262385f0f8bd69133330978a99da1ca +[m-blue]: https://storage.googleapis.com/chromium-docs.appspot.com/b60f342e38ff3a3767bbe4c8640d96a2d8bc864b +[partalloc-details]: https://storage.googleapis.com/chromium-docs.appspot.com/02eade61d57c83f8ef8227965513456555fc3324 +[m-purple]: https://storage.googleapis.com/chromium-docs.appspot.com/d7bdf4d16204c293688be2e5a0bcb2bf463dbbc3 +[mi-ui-doc]: https://docs.google.com/document/d/1b5BSBEd1oB-3zj_CBAQWiQZ0cmI0HmjmXG-5iNveLqw/edit + +## Columns + +**Columns in blue** reflect the amount of actual physical memory used by the +process. This is what exerts memory pressure on the system. + + * **Total Resident**: (TODO: document this). + * **Peak Total Resident**: (TODO: document this). + * **PSS**: (TODO: document this). + * **Private Dirty**: (TODO: document this). + * **Swapped**: (TODO: document this). + +**Columns in black** reflect a best estimation of the the amount of physical +memory used by various subsystems of Chrome. + + * **Blink GC**: Memory used by [Oilpan][oilpan]. + * **CC**: Memory used by the compositor. + See [cc/memory][cc-memory] for the full details. + * **Discardable**: (TODO: document this). + * **Font Caches**: (TODO: document this). + * **GPU** and **GPU Memory Buffer**: GPU memory and RAM used for GPU purposes. + See [GPU Memory Tracing][gpu-memory]. + * **LevelDB**: (TODO: document this). + * **Malloc**: Memory allocated by calls to `malloc`, or `new` for most + non-Blink objects. + * **PartitionAlloc**: Memory allocated via [PartitionAlloc][partalloc]. + Blink objects that are not managed by Oilpan are allocated with + PartitionAlloc. + * **Skia**: (TODO: document this). + * **SQLite**: (TODO: document this). + * **V8**: (TODO: document this). + * **Web Cache**: (TODO: document this). + +The **tracing column in gray** reports memory that is used to collect all of the +above information. This memory would not be used if tracing were not enabled, +and it is discounted from malloc and the blue columns. + +<!-- TODO(primiano): Improve this. https://crbug.com/??? 
--> + +[oilpan]: /third_party/WebKit/Source/platform/heap/BlinkGCDesign.md +[cc-memory]: probe-cc.md +[gpu-memory]: probe-gpu.md +[partalloc]: /third_party/WebKit/Source/wtf/allocator/PartitionAlloc.md + +## Related Pages + + * [Adding MemoryInfra Tracing to a Component](adding_memory_infra_tracing.md) + * [GPU Memory Tracing](probe-gpu.md) + * [Heap Profiler Internals](heap_profiler_internals.md) + * [Heap Profiling with MemoryInfra](heap_profiler.md) + * [Startup Tracing with MemoryInfra](memory_infra_startup_tracing.md) + +## Rationale + +Another memory profiler? What is wrong with tool X? +Most of the existing tools: + + * Are hard to get working with Chrome. (Massive symbols, require OS-specific + tricks.) + * Lack Chrome-related context. + * Don't deal with multi-process scenarios. + +MemoryInfra leverages the existing tracing infrastructure in Chrome and provides +contextual data: + + * **It speaks Chrome slang.** + The Chromium codebase is instrumented. Its memory subsystems (allocators, + caches, etc.) uniformly report their stats into the trace in a way that can + be understood by Chrome developers. No more + `__gnu_cxx::new_allocator< std::_Rb_tree_node< std::pair< std::string const, base::Value*>>> ::allocate`. + * **Timeline data that can be correlated with other events.** + Did memory suddenly increase during a specific Blink / V8 / HTML parsing + event? Which subsystem increased? Did memory not go down as expected after + closing a tab? Which other threads were active during a bloat? + * **Works out of the box on desktop and mobile.** + No recompilations with unmaintained `GYP_DEFINES`, no time-consuming + symbolizations stages. All the logic is already into Chrome, ready to dump at + any time. + * **The same technology is used for telemetry and the ChromePerf dashboard.** + See [the slides][chromeperf-slides] and take a look at + [some ChromePerf dashboards][chromeperf] and + [telemetry documentation][telemetry]. + +[chromeperf-slides]: https://docs.google.com/presentation/d/1OyxyT1sfg50lA36A7ibZ7-bBRXI1kVlvCW0W9qAmM_0/present?slide=id.gde150139b_0_137 +[chromeperf]: https://chromeperf.appspot.com/report?sid=3b54e60c9951656574e19252fadeca846813afe04453c98a49136af4c8820b8d +[telemetry]: https://catapult.gsrc.io/telemetry + +## Development + +MemoryInfra is based on a simple and extensible architecture. See +[the slides][dp-slides] on how to get your subsystem reported in MemoryInfra, +or take a look at one of the existing examples such as +[malloc_dump_provider.cc][malloc-dp]. The crbug label is +[Hotlist-MemoryInfra][hotlist]. Don't hesitate to contact +[tracing@chromium.org][mailtracing] for questions and support. 
+ +[dp-slides]: https://docs.google.com/presentation/d/1GI3HY3Mm5-Mvp6eZyVB0JiaJ-u3L1MMJeKHJg4lxjEI/present?slide=id.g995514d5c_1_45 +[malloc-dp]: https://chromium.googlesource.com/chromium/src.git/+/master/base/trace_event/malloc_dump_provider.cc +[hotlist]: https://code.google.com/p/chromium/issues/list?q=label:Hotlist-MemoryInfra +[mailtracing]: mailto:tracing@chromium.org + +## Design documents + +Architectural: + +<iframe width="100%" height="300px" src="https://docs.google.com/a/google.com/embeddedfolderview?id=0B3KuDeqD-lVJfmp0cW1VcE5XVWNxZndxelV5T19kT2NFSndYZlNFbkFpc3pSa2VDN0hlMm8"> +</iframe> + +Chrome-side design docs: + +<iframe width="100%" height="300px" src="https://docs.google.com/a/google.com/embeddedfolderview?id=0B3KuDeqD-lVJfndSa2dleUQtMnZDeWpPZk1JV0QtbVM5STkwWms4YThzQ0pGTmU1QU9kNVk"> +</iframe> + +Catapult-side design docs: + +<iframe width="100%" height="300px" src="https://docs.google.com/a/google.com/embeddedfolderview?id=0B3KuDeqD-lVJfm10bXd5YmRNWUpKOElOWS0xdU1tMmV1S3F4aHo0ZDJLTmtGRy1qVnQtVWM"> +</iframe> diff --git a/chromium/docs/memory-infra/adding_memory_infra_tracing.md b/chromium/docs/memory-infra/adding_memory_infra_tracing.md new file mode 100644 index 00000000000..7960e153dd1 --- /dev/null +++ b/chromium/docs/memory-infra/adding_memory_infra_tracing.md @@ -0,0 +1,178 @@ +# Adding MemoryInfra Tracing to a Component + +If you have a component that manages memory allocations, you should be +registering and tracking those allocations with Chrome's MemoryInfra system. +This lets you: + + * See an overview of your allocations, giving insight into total size and + breakdown. + * Understand how your allocations change over time and how they are impacted by + other parts of Chrome. + * Catch regressions in your component's allocations size by setting up + telemetry tests which monitor your allocation sizes under certain + circumstances. + +Some existing components that use MemoryInfra: + + * **Discardable Memory**: Tracks usage of discardable memory throughout Chrome. + * **GPU**: Tracks OpenGL and other GPU object allocations. + * **V8**: Tracks the heap size for JS. + +[TOC] + +## Overview + +In order to hook into Chrome's MemoryInfra system, your component needs to do +two things: + + 1. Create a [`MemoryDumpProvider`][mdp] for your component. + 2. Register and unregister you dump provider with the + [`MemoryDumpManager`][mdm]. + +[mdp]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/memory_dump_provider.h +[mdm]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/memory_dump_manager.h + +## Creating a Memory Dump Provider + +You can implement a [`MemoryDumpProvider`][mdp] as a stand-alone class, or as an +additional interface on an existing class. For example, this interface is +frequently implemented on classes which manage a pool of allocations (see +[`cc::ResourcePool`][resource-pool] for an example). + +A `MemoryDumpProvider` has one basic job, to implement `OnMemoryDump`. This +function is responsible for iterating over the resources allocated or tracked by +your component, and creating a [`MemoryAllocatorDump`][mem-alloc-dump] for each +using [`ProcessMemoryDump::CreateAllocatorDump`][pmd]. 
A simple example: + +```cpp +bool MyComponent::OnMemoryDump(const MemoryDumpArgs& args, + ProcessMemoryDump* process_memory_dump) { + for (const auto& allocation : my_allocations_) { + auto* dump = process_memory_dump->CreateAllocatorDump( + "path/to/my/component/allocation_" + allocation.id().ToString()); + dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize, + base::trace_event::MemoryAllocatorDump::kUnitsBytes, + allocation.size_bytes()); + + // While you will typically have a kNameSize entry, you can add additional + // entries to your dump with free-form names. In this example we also dump + // an object's "free_size", assuming the object may not be entirely in use. + dump->AddScalar("free_size", + base::trace_event::MemoryAllocatorDump::kUnitsBytes, + allocation.free_size_bytes()); + } +} +``` + +For many components, this may be all that is needed. See +[Handling Shared Memory Allocations](#Handling-Shared-Memory-Allocations) and +[Suballocations](#Suballocations) for information on more complex use cases. + +[resource-pool]: https://chromium.googlesource.com/chromium/src/+/master/cc/resources/resource_pool.h +[mem-alloc-dump]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/memory_allocator_dump.h +[pmd]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/process_memory_dump.h + +## Registering a Memory Dump Provider + +Once you have created a [`MemoryDumpProvider`][mdp], you need to register it +with the [`MemoryDumpManager`][mdm] before the system can start polling it for +memory information. Registration is generally straightforward, and involves +calling `MemoryDumpManager::RegisterDumpProvider`: + +```cpp +// Each process uses a singleton |MemoryDumpManager|. +base::trace_event::MemoryDumpManager::GetInstance()->RegisterDumpProvider( + my_memory_dump_provider_, my_single_thread_task_runner_); +``` + +In the above code, `my_memory_dump_provider_` is the `MemoryDumpProvider` +outlined in the previous section. `my_single_thread_task_runner_` is more +complex and may be a number of things: + + * Most commonly, if your component is always used from the main message loop, + `my_single_thread_task_runner_` may just be + [`base::ThreadTaskRunnerHandle::Get()`][task-runner-handle]. + * If your component already uses a custom `base::SingleThreadTaskRunner` for + executing tasks on a specific thread, you should likely use this runner. + +[task-runner-handle]: https://chromium.googlesource.com/chromium/src/+/master/base/thread_task_runner_handle.h + +## Unregistration + +Unregistration must happen on the thread belonging to the +`SingleThreadTaskRunner` provided at registration time. Unregistering on another +thread can lead to race conditions if tracing is active when the provider is +unregistered. + +```cpp +base::trace_event::MemoryDumpManager::GetInstance()->UnregisterDumpProvider( + my_memory_dump_provider_); +``` + +## Handling Shared Memory Allocations + +When an allocation is shared between two components, it may be useful to dump +the allocation in both components, but you also want to avoid double-counting +the allocation. This can be achieved using the concept of _ownership edges_. +An ownership edge represents that the _source_ memory allocator dump owns a +_target_ memory allocator dump. If multiple source dumps own a single target, +then the cost of that target allocation will be split between the sources. 
+Additionally, importance can be added to a specific ownership edge, allowing +the highest importance source of that edge to claim the entire cost of the +target. + +In the typical case, you will use [`ProcessMemoryDump`][pmd] to create a shared +global allocator dump. This dump will act as the target of all +component-specific dumps of a specific resource: + +```cpp +// Component 1 is going to create a dump, source_mad, for an allocation, +// alloc_, which may be shared with other components / processes. +MyAllocationType* alloc_; +base::trace_event::MemoryAllocatorDump* source_mad; + +// Component 1 creates and populates source_mad; +... + +// In addition to creating a source dump, we must create a global shared +// target dump. This dump should be created with a unique global ID which can be +// generated any place the allocation is used. I recommend adding a global ID +// generation function to the allocation type. +base::trace_event::MemoryAllocatorDumpGUID guid(alloc_->GetGUIDString()); + +// From this global ID we can generate the parent allocator dump. +base::trace_event::MemoryAllocatorDump* target_mad = + process_memory_dump->CreateSharedGlobalAllocatorDump(guid); + +// We now create an ownership edge from the source dump to the target dump. +// When creating an edge, you can assign an importance to this edge. If all +// edges have the same importance, the size of the allocation will be split +// between all sources which create a dump for the allocation. If one +// edge has higher importance than the others, its source will be assigned the +// full size of the allocation. +const int kImportance = 1; +process_memory_dump->AddOwnershipEdge( + source_mad->guid(), target_mad->guid(), kImportance); +``` + +If an allocation is being shared across process boundaries, it may be useful to +generate a global ID which incorporates the ID of the local process, preventing +two processes from generating colliding IDs. As it is not recommended to pass a +process ID between processes for security reasons, a function +`MemoryDumpManager::GetTracingProcessId` is provided which generates a unique ID +per process that can be passed with the resource without security concerns. +Frequently this ID is used to generate a global ID that is based on the +allocated resource's ID combined with the allocating process' tracing ID. + +## Suballocations + +Another advanced use case involves tracking sub-allocations of a larger +allocation. For instance, this is used in +[`gpu::gles2::TextureManager`][texture-manager] to dump both the suballocations +which make up a texture. To create a suballocation, instead of calling +[`ProcessMemoryDump::CreateAllocatorDump`][pmd] to create a +[`MemoryAllocatorDump`][mem-alloc-dump], you call +[`ProcessMemoryDump::AddSubAllocation`][pmd], providing the ID of the parent +allocation as the first parameter. + +[texture-manager]: https://chromium.googlesource.com/chromium/src/+/master/gpu/command_buffer/service/texture_manager.cc diff --git a/chromium/docs/memory-infra/heap_profiler.md b/chromium/docs/memory-infra/heap_profiler.md new file mode 100644 index 00000000000..57961d35094 --- /dev/null +++ b/chromium/docs/memory-infra/heap_profiler.md @@ -0,0 +1,168 @@ +# Heap Profiling with MemoryInfra + +As of Chrome 48, MemoryInfra supports heap profiling. The core principle is +a solution that JustWorks™ on all platforms without patching or rebuilding, +intergrated with the chrome://tracing ecosystem. + +[TOC] + +## How to Use + + 1. 
Start Chrome with the `--enable-heap-profiling` switch. This will make + Chrome keep track of all allocations. + + 2. Grab a [MemoryInfra][memory-infra] trace. For best results, start tracing + first, and _then_ open a new tab that you want to trace. Furthermore, + enabling more categories (besides memory-infra) will yield more detailed + information in the heap profiler backtraces. + + 3. When the trace has been collected, select a heavy memory dump indicated by + a purple ![M][m-purple] dot. Heap dumps are only included in heavy memory + dumps. + + 4. In the analysis view, cells marked with a triple bar icon (☰) contain heap + dumps. Select such a cell. + + ![Cells containing a heap dump][cells-heap-dump] + + 5. Scroll down all the way to _Heap Details_. + + 6. Pinpoint the memory bug and live happily ever after. + +[memory-infra]: README.md +[m-purple]: https://storage.googleapis.com/chromium-docs.appspot.com/d7bdf4d16204c293688be2e5a0bcb2bf463dbbc3 +[cells-heap-dump]: https://storage.googleapis.com/chromium-docs.appspot.com/a24d80d6a08da088e2e9c8b2b64daa215be4dacb + +### Native stack traces + +By default heap profiling collects pseudo allocation traces, which are based +on trace events. I.e. frames in allocation traces correspond to trace events +that were active at the time of allocations, and are not real function names. +However, you can build a special Linux / Android build that will collect +real C/C++ stack traces. + + 1. Build with the following GN flags: + + Linux + + enable_profiling = true + + + Android + + arm_use_thumb = false + enable_profiling = true + + 2. Start Chrome with `--enable-heap-profiling=native` switch (notice + `=native` part). + + On Android use the command line tool before starting the app: + + build/android/adb_chrome_public_command_line --enable-heap-profiling=native + + (run the tool with an empty argument `''` to clear the command line) + + 3. Grab a [MemoryInfra][memory-infra] trace. You don't need any other + categories besides `memory-infra`. + + 4. Save the grabbed trace file. This step is needed because freshly + taken trace file contains raw addresses (which look like `pc:dcf5dbf8`) + instead of function names, and needs to be symbolized. + + 4. Symbolize the trace file. During symbolization addresses are resolved to + the corresponding function names and trace file is rewritten (but a backup + is saved with `.BACKUP` extension). + + Linux + + third_party/catapult/tracing/bin/symbolize_trace <trace file> + + Android + + third_party/catapult/tracing/bin/symbolize_trace --output-directory out/Release <trace file> + + (note `--output-directory` and make sure it's right for your setup) + + 5. Load the trace file in `chrome://tracing`. Locate a purple ![M][m-purple] + dot, and continue from step *3* from the instructions above. Native stack + traces will be shown in the _Heap Details_ pane. + +## Heap Details + +The heap details view contains a tree that represents the heap. The size of the +root node corresponds to the selected allocator cell. + +*** aside +The size value in the heap details view will not match the value in the selected +analysis view cell exactly. There are three reasons for this. First, the heap +profiler reports the memory that _the program requested_, whereas the allocator +reports the memory that it _actually allocated_ plus its own bookkeeping +overhead. Second, allocations that happen early --- before Chrome knows that +heap profiling is enabled --- are not captured by the heap profiler, but they +are reported by the allocator. 
Third, tracing overhead is not discounted by the +heap profiler. +*** + +The heap can be broken down in two ways: by _backtrace_ (marked with an ƒ), and +by _type_ (marked with a Ⓣ). When tracing is enabled, Chrome records trace +events, most of which appear in the flame chart in timeline view. At every +point in time these trace events form a pseudo stack, and a vertical slice +through the flame chart is like a backtrace. This corresponds to the ƒ nodes in +the heap details view. Hence enabling more tracing categories will give a more +detailed breakdown of the heap. + +The other way to break down the heap is by object type. At the moment this is +only supported for PartitionAlloc. + +*** aside +In official builds, only the most common type names are included due to binary +size concerns. Development builds have full type information. +*** + +To keep the trace log small, uninteresting information is omitted from heap +dumps. The long tail of small nodes is not dumped, but grouped in an `<other>` +node instead. Note that altough these small nodes are insignificant on their +own, together they can be responsible for a significant portion of the heap. The +`<other>` node is large in that case. + +## Example + +In the trace below, `ParseAuthorStyleSheet` is called at some point. + +![ParseAuthorStyleSheet pseudo stack][pseudo-stack] + +The pseudo stack of trace events corresponds to the tree of ƒ nodes below. Of +the 23.5 MiB of memory allocated with PartitionAlloc, 1.9 MiB was allocated +inside `ParseAuthorStyleSheet`, either directly, or at a deeper level (like +`CSSParserImpl::parseStyleSheet`). + +![Memory Allocated in ParseAuthorStyleSheet][break-down-by-backtrace] + +By expanding `ParseAuthorStyleSheet`, we can see which types were allocated +there. Of the 1.9 MiB, 371 KiB was spent on `ImmutableStylePropertySet`s, and +238 KiB was spent on `StringImpl`s. + +![ParseAuthorStyleSheet broken down by type][break-down-by-type] + +It is also possible to break down by type first, and then by backtrace. Below +we see that of the 23.5 MiB allocated with PartitionAlloc, 1 MiB is spent on +`Node`s, and about half of the memory spent on nodes was allocated in +`HTMLDocumentParser`. + +![The PartitionAlloc heap broken down by type first and then by backtrace][type-then-backtrace] + +Heap dump diffs are fully supported by trace viewer. Select a heavy memory dump +(a purple dot), then with the control key select a heavy memory dump earlier in +time. Below is a diff of theverge.com before and in the middle of loading ads. +We can see that 4 MiB were allocated when parsing the documents in all those +iframes, almost a megabyte of which was due to JavaScript. (Note that this is +memory allocated by PartitionAlloc alone, the total renderer memory increase was +around 72 MiB.) 
+
+![Diff of The Verge before and after loading ads][diff]
+
+[pseudo-stack]: https://storage.googleapis.com/chromium-docs.appspot.com/058e50350836f55724e100d4dbbddf4b9803f550
+[break-down-by-backtrace]: https://storage.googleapis.com/chromium-docs.appspot.com/ec61c5f15705f5bcf3ca83a155ed647a0538bbe1
+[break-down-by-type]: https://storage.googleapis.com/chromium-docs.appspot.com/2236e61021922c0813908c6745136953fa20a37b
+[type-then-backtrace]: https://storage.googleapis.com/chromium-docs.appspot.com/c5367dde11476bdbf2d5a1c51674148915573d11
+[diff]: https://storage.googleapis.com/chromium-docs.appspot.com/802141906869cd533bb613da5f91bd0b071ceb24
diff --git a/chromium/docs/memory-infra/heap_profiler_internals.md b/chromium/docs/memory-infra/heap_profiler_internals.md
new file mode 100644
index 00000000000..d1019c8d212
--- /dev/null
+++ b/chromium/docs/memory-infra/heap_profiler_internals.md
@@ -0,0 +1,184 @@
+# Heap Profiler Internals
+
+This document describes how the heap profiler works and how to add heap
+profiling support to your allocator. If you just want to know how to use it,
+see [Heap Profiling with MemoryInfra](heap_profiler.md).
+
+[TOC]
+
+## Overview
+
+The heap profiler consists of three main components:
+
+ * **The Context Tracker**: Responsible for providing context (pseudo stack
+   backtrace) when an allocation occurs.
+ * **The Allocation Register**: A specialized hash table that stores allocation
+   details by address.
+ * **The Heap Dump Writer**: Extracts the most important information from a set
+   of recorded allocations and converts it into a format that can be dumped into
+   the trace log.
+
+These components are designed to work well together, but also to be usable
+independently.
+
+When there is a way to get notified of all allocations and frees, this is the
+normal flow:
+
+ 1. When an allocation occurs, call
+    [`AllocationContextTracker::GetInstanceForCurrentThread()->GetContextSnapshot()`][context-tracker]
+    to get an [`AllocationContext`][alloc-context].
+ 2. Insert that context together with the address and size into an
+    [`AllocationRegister`][alloc-register] by calling `Insert()`.
+ 3. When memory is freed, remove it from the register with `Remove()`.
+ 4. On memory dump, collect the allocations from the register, call
+    [`ExportHeapDump()`][export-heap-dump], and add the generated heap dump to
+    the memory dump.
+
+[context-tracker]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/heap_profiler_allocation_context_tracker.h
+[alloc-context]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/heap_profiler_allocation_context.h
+[alloc-register]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/heap_profiler_allocation_register.h
+[export-heap-dump]: https://chromium.googlesource.com/chromium/src/+/master/base/trace_event/heap_profiler_heap_dump_writer.h
+
+*** aside
+An allocator can skip steps 2 and 3 if it is able to store the context itself,
+and if it is able to enumerate all allocations for step 4.
+***
+
+When heap profiling is enabled (the `--enable-heap-profiling` flag is passed),
+the memory dump manager calls `OnHeapProfilingEnabled()` on every
+`MemoryDumpProvider` as early as possible, so allocators can start recording
+allocations. This should be done even when tracing has not been started,
+because these allocations might still be around when a heap dump happens during
+tracing.
+
+## Context Tracker
+
+The [`AllocationContextTracker`][context-tracker] is a thread-local object.
Its +main purpose is to keep track of a pseudo stack of trace events. Chrome has +been instrumented with lots of `TRACE_EVENT` macros. These trace events push +their name to a thread-local stack when they go into scope, and pop when they +go out of scope, if all of the following conditions have been met: + + * A trace is being recorded. + * The category of the event is enabled in the trace config. + * Heap profiling is enabled (with the `--enable-heap-profiling` flag). + +This means that allocations that occur before tracing is started will not have +backtrace information in their context. + +A thread-local instance of the context tracker is initialized lazily when it is +first accessed. This might be because a trace event pushed or popped, or because +`GetContextSnapshot()` was called when an allocation occurred. + +[`AllocationContext`][alloc-context] is what is used to group and break down +allocations. Currently `AllocationContext` has the following fields: + + * Backtrace: filled by the context tracker, obtained from the thread-local + pseudo stack. + * Type name: to be filled in at a point where the type of a pointer is known, + set to _[unknown]_ by default. + +It is possible to modify this context after insertion into the register, for +instance to set the type name if it was not known at the time of allocation. + +## Allocation Register + +The [`AllocationRegister`][alloc-register] is a hash table specialized for +storing `(size, AllocationContext)` pairs by address. It has been optimized for +Chrome's typical number of unfreed allocations, and it is backed by `mmap` +memory directly so there are no reentrancy issues when using it to record +`malloc` allocations. + +The allocation register is threading-agnostic. Access must be synchronised +properly. + +## Heap Dump Writer + +Dumping every single allocation in the allocation register straight into the +trace log is not an option due to the sheer volume (~300k unfreed allocations). +The role of the [`ExportHeapDump()`][export-heap-dump] function is to group +allocations, striking a balance between trace log size and detail. + +See the [Heap Dump Format][heap-dump-format] document for more details about the +structure of the heap dump in the trace log. + +[heap-dump-format]: https://docs.google.com/document/d/1NqBg1MzVnuMsnvV1AKLdKaPSPGpd81NaMPVk5stYanQ + +## Instrumenting an Allocator + +Below is an example of adding heap profiling support to an allocator that has +an existing memory dump provider. + +```cpp +class FooDumpProvider : public MemoryDumpProvider { + + // Kept as pointer because |AllocationRegister| allocates a lot of virtual + // address space when constructed, so only construct it when heap profiling is + // enabled. + scoped_ptr<AllocationRegister> allocation_register_; + Lock allocation_register_lock_; + + static FooDumpProvider* GetInstance(); + + void InsertAllocation(void* address, size_t size) { + AllocationContext context = AllocationContextTracker::GetInstanceForCurrentThread()->GetContextSnapshot(); + AutoLock lock(allocation_register_lock_); + allocation_register_->Insert(address, size, context); + } + + void RemoveAllocation(void* address) { + AutoLock lock(allocation_register_lock_); + allocation_register_->Remove(address); + } + + // Will be called as early as possible by the memory dump manager. 
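+  // As noted in the overview above, heap profiling can be enabled before
+  // tracing starts, so the allocator should begin recording allocations
+  // right away; otherwise allocations that are still live at dump time
+  // would be missing from the heap dump.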
+
+  void OnHeapProfilingEnabled(bool enabled) override {
+    AutoLock lock(allocation_register_lock_);
+    allocation_register_.reset(new AllocationRegister());
+
+    // At this point, make sure that from now on, for every allocation and
+    // free, |FooDumpProvider::GetInstance()->InsertAllocation()| and
+    // |RemoveAllocation| are called.
+  }
+
+  bool OnMemoryDump(const MemoryDumpArgs& args,
+                    ProcessMemoryDump* pmd) override {
+    // Do regular dumping here.
+
+    // Dump the heap only for detailed dumps.
+    if (args.level_of_detail == MemoryDumpLevelOfDetail::DETAILED) {
+      TraceEventMemoryOverhead overhead;
+      hash_map<AllocationContext, size_t> bytes_by_context;
+
+      {
+        AutoLock lock(allocation_register_lock_);
+        if (allocation_register_) {
+          // Group allocations in the register into |bytes_by_context|, but do
+          // no additional processing inside the lock.
+          for (const auto& alloc_size : *allocation_register_)
+            bytes_by_context[alloc_size.context] += alloc_size.size;
+
+          allocation_register_->EstimateTraceMemoryOverhead(&overhead);
+        }
+      }
+
+      if (!bytes_by_context.empty()) {
+        scoped_refptr<TracedValue> heap_dump = ExportHeapDump(
+            bytes_by_context,
+            pmd->session_state()->stack_frame_deduplicator(),
+            pmd->session_state()->type_name_deduplicator());
+        pmd->AddHeapDump("foo_allocator", heap_dump);
+        overhead.DumpInto("tracing/heap_profiler", pmd);
+      }
+    }
+
+    return true;
+  }
+};
+
+```
+
+*** aside
+The implementation for `malloc` is more complicated because it needs to deal
+with reentrancy.
+***
diff --git a/chromium/docs/memory-infra/memory_infra_startup_tracing.md b/chromium/docs/memory-infra/memory_infra_startup_tracing.md
new file mode 100644
index 00000000000..c036c05c386
--- /dev/null
+++ b/chromium/docs/memory-infra/memory_infra_startup_tracing.md
@@ -0,0 +1,74 @@
+# Startup Tracing with MemoryInfra
+
+[MemoryInfra](README.md) supports startup tracing.
+
+## The Simple Way
+
+Start Chrome as follows:
+
+    $ chrome --no-sandbox \
+        --trace-startup=-*,disabled-by-default-memory-infra \
+        --trace-startup-file=/tmp/trace.json \
+        --trace-startup-duration=7
+
+On Android, enable startup tracing and start Chrome as follows:
+
+    $ build/android/adb_chrome_public_command_line \
+        --trace-startup=-*,disabled-by-default-memory-infra \
+        --trace-startup-file=/sdcard/Download/trace.json \
+        --trace-startup-duration=7
+
+    $ build/android/adb_run_chrome_public
+
+    $ adb pull /sdcard/Download/trace.json  # After tracing.
+
+Note that startup tracing will be enabled upon every Chrome launch until you
+delete the command-line flags:
+
+    $ build/android/adb_chrome_public_command_line ""
+
+This will use the default configuration: one memory dump every 250 ms with a
+detailed dump every two seconds.
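+
+For reference, that default schedule corresponds roughly to the following
+`memory_dump_config` (a sketch using the trigger format from the config file
+shown in the next section; the authoritative defaults live in the tracing
+code):
+
+    "memory_dump_config": {
+      "triggers": [
+        { "mode": "light", "periodic_interval_ms": 250 },
+        { "mode": "detailed", "periodic_interval_ms": 2000 }
+      ]
+    }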
+ +## The Advanced Way + +If you need more control over the granularity of the memory dumps, you can +specify a custom trace config file as follows: + + $ cat > /tmp/trace.config + { + "startup_duration": 4, + "result_file": "/tmp/trace.json", + "trace_config": { + "included_categories": ["disabled-by-default-memory-infra"], + "excluded_categories": ["*"], + "memory_dump_config": { + "triggers": [ + { "mode": "light", "periodic_interval_ms": 50 }, + { "mode": "detailed", "periodic_interval_ms": 1000 } + ] + } + } + } + + $ chrome --no-sandbox --trace-config-file=/tmp/trace.config + +On Android, the config file has to be pushed to a fixed file location: + + $ adb root + $ adb push /tmp/trace.config /data/local/chrome-trace-config.json + + $ build/android/adb_run_chrome_public + + $ adb pull /sdcard/Download/trace.json # After tracing. + +Make sure that the "result_file" location is writable by the Chrome process on +Android (e.g. "/sdcard/Download/trace.json"). Note that startup tracing will be +enabled upon every Chrome launch until you delete the config file: + + $ adb shell rm /data/local/chrome-trace-config.json + +## Related Pages + + * [General information about startup tracing](https://sites.google.com/a/chromium.org/dev/developers/how-tos/trace-event-profiling-tool/recording-tracing-runs) + * [Memory tracing with MemoryInfra](README.md) diff --git a/chromium/docs/memory-infra/probe-cc.md b/chromium/docs/memory-infra/probe-cc.md new file mode 100644 index 00000000000..1fb9f013ef5 --- /dev/null +++ b/chromium/docs/memory-infra/probe-cc.md @@ -0,0 +1,125 @@ +# Memory Usage in CC + +This document gives an overview of memory usage in the CC component, as well as +information on how to analyze that memory. + +[TOC] + +## Types of Memory in Use + +CC uses a number of types of memory: + +1. Malloc Memory - Standard system memory used for all manner of objects in CC. +2. Discardable Memory - Memory allocated by the discardable memory system. + Designed to be freeable by the system at any time (under memory pressure). + In most cases, only pinned discardable memory should be considered to + have a cost; however, the implementation of discardable memory is platform + dependent, and on certain platforms unpinned memory can contribute to + memory pressure to some degree. +3. Shared Memory - Memory which is allocated by the Browser and can safely + be transferred between processes. This memory is allocated by the browser + but may count against a renderer process depending on who logically "owns" + the memory. +4. GPU Memory - Memory which is allocated on the GPU and typically does not + count against system memory. This mainly includes OpenGL objects. + +## Categories Of Memory + +Memory-infra tracing will grab dumps of CC memory in several categories. + +### CC Category + +The CC category contains resource allocations made by ResourceProvider. All +resource allocations are enumerated under cc/resource_memory. A subset of +resources are used as tile memory, and are also enumerated under cc/tile_memory. +For resources that appear in both cc/tile_memory and cc/resource_memory, the +size will be attributed to cc/tile_memory (effective_size of cc/resource_memory +will not include these resources). + +If the one-copy tile update path is in use, the cc category will also enumerate +staging resources used as intermediates when drawing tiles. These resources are +like tile_memory, in that they are shared with cc/resource_memory. 
+
+Note that depending on the path being used, CC memory may be either shared
+memory or GPU memory:
+
+Path      | Tile Memory Type     | Staging Memory Type
+----------|----------------------|--------------------
+Bitmap    | Shared Memory        | N/A
+One Copy  | GPU Memory           | Shared Memory
+Zero Copy | GPU or Shared Memory | N/A
+GPU       | GPU Memory           | N/A
+
+Note that these values can be determined from a memory-infra dump. For a given
+resource, hover over the small green arrow next to its "size". This will show
+the other allocations that this resource is aliased with. If you see an
+allocation in the GPU process, the memory is generally GPU memory. Otherwise,
+the resource is typically Shared Memory.
+
+Tile and Staging memory managers are set up to evict any resource not used
+within 1s.
+
+### GPU Category
+
+This category lists the memory allocations needed to support CC's GPU path.
+Despite the name, the data in this category (within a Renderer process) is not
+GPU memory but Shared Memory.
+
+Allocations tracked here include GL command buffer support allocations such as:
+
+1. Command Buffer Memory - memory used to send commands across the GL command
+   buffer. This is backed by Shared Memory.
+2. Mapped Memory - memory used in certain image upload paths to share data
+   with the GPU process. This is backed by Shared Memory.
+3. Transfer Buffer Memory - memory used to transfer data to the GPU - used in
+   different paths than mapped memory. Also backed by Shared Memory.
+
+### Discardable Category
+
+Cached images make use of Discardable memory. These allocations are managed by
+Skia and a better summary of these allocations can likely be found in the Skia
+category.
+
+### Malloc Category
+
+The malloc category shows a summary of all memory allocated through malloc.
+
+Currently the information here is not granular enough to be useful, and a
+good project would be to track down and instrument any large pools of memory
+using malloc.
+
+Some Skia caches also make use of malloc memory. For these allocations, a better
+summary can be seen in the Skia category.
+
+### Skia Category
+
+The Skia category shows all resources used by the Skia rendering system. These
+can be divided into a few subcategories. skia/gpu_resources/* includes all
+resources using GPU memory. All other categories draw from either Shared or
+malloc memory. To determine which type of memory a resource is using, hover
+over the green arrow next to its size. This will show the other allocations
+which the resource is aliased with.
+
+## Other Areas of Interest
+
+Many of the allocations under CC are aliased with memory in the Browser or GPU
+process. When investigating CC memory it may be worth looking at the following
+external categories:
+
+1. GPU Process / GPU Category - All GPU resources allocated by CC have a
+   counterpart in the GPU/GPU category. This includes GL Textures, buffers, and
+   other GPU backed objects such as Native GPU Memory Buffers.
+2. Browser Process / GpuMemoryBuffer Category - Resources backed by Shared
+   Memory GpuMemoryBuffers are allocated by the browser and will be tracked
+   in this category.
+3. Browser Process / SharedMemory Category - Resources backed by Bitmap and
+   Shared Memory GpuMemoryBuffer objects are allocated by the browser and will
+   also be tracked in this category.
+
+## Memory TODOs
+
+The following areas have insufficient memory instrumentation.
+
+1. DisplayLists - DisplayLists can be quite large and are currently
+   un-instrumented.
These use malloc memory and currently contribute to + malloc/allocated_objects/<unspecified>. [BUG](http://crbug.com/567465) diff --git a/chromium/docs/memory-infra/probe-gpu.md b/chromium/docs/memory-infra/probe-gpu.md new file mode 100644 index 00000000000..17fa5a8a3bc --- /dev/null +++ b/chromium/docs/memory-infra/probe-gpu.md @@ -0,0 +1,93 @@ +# GPU Memory Tracing + +This is an overview of the GPU column in [MemoryInfra][memory-infra]. + +[TOC] + +## Quick Start + +If you want an overview of total GPU memory usage, select the GPU process' GPU +category and look at the _size_ column. (Not _effective size_.) + +![Look at the size column for total GPU memory][gpu-size-column] + +[memory-infra]: README.md +[gpu-size-column]: https://storage.googleapis.com/chromium-docs.appspot.com/c7d632c18d90d99e393ad0ade929f96e7d8243fe + +## In Depth + +GPU Memory in Chrome involves several different types of allocations. These +include, but are not limited to: + + * **Raw OpenGL Objects**: These objects are allocated by Chrome using the + OpenGL API. Chrome itself has handles to these objects, but the actual + backing memory may live in a variety of places (CPU side in the GPU process, + CPU side in the kernel, GPU side). Because most OpenGL operations occur over + IPC, communicating with Chrome's GPU process, these allocations are almost + always shared between a renderer or browser process and the GPU process. + * **GPU Memory Buffers**: These objects provide a chunk of writable memory + which can be handed off cross-process. While GPUMemoryBuffers represent a + platform-independent way to access this memory, they have a number of + possible platform-specific implementations (EGL surfaces on Linux, + IOSurfaces on Mac, or CPU side shared memory). Because of their cross + process use case, these objects will almost always be shared between a + renderer or browser process and the GPU process. + * **GLImages**: GLImages are a platform-independent abstraction around GPU + memory, similar to GPU Memory Buffers. In many cases, GLImages are created + from GPUMemoryBuffers. The primary difference is that GLImages are designed + to be bound to an OpenGL texture using the image extension. + +GPU Memory can be found across a number of different processes, in a few +different categories. + +Renderer or browser process: + + * **CC Category**: The CC category contains all resource allocations used in + the Chrome Compositor. When GPU rasterization is enabled, these resource + allocations will be GPU allocations as well. See also + [docs/memory-infra/probe-cc.md][cc-memory]. + * **Skia/gpu_resources Category**: All GPU resources used by Skia. + * **GPUMemoryBuffer Category**: All GPUMemoryBuffers in use in the current + process. + +GPU process: + + * **GPU Category**: All GPU allocations, many shared with other processes. + * **GPUMemoryBuffer Category**: All GPUMemoryBuffers. + +## Example + +Many of the objects listed above are shared between multiple processes. +Consider a GL texture used by CC --- this texture is shared between a renderer +and the GPU process. Additionally, the texture may be backed by a GLImage which +was created from a GPUMemoryBuffer, which is also shared between the renderer +and GPU process. This means that the single texture may show up in the memory +logs of two different processes multiple times. + +To make things easier to understand, each GPU allocation is only ever "owned" +by a single process and category. 
For instance, in the above example, the +texture would be owned by the CC category of the renderer process. Each +allocation has (at least) two sizes recorded --- _size_ and _effective size_. +In the owning allocation, these two numbers will match: + +![Matching size and effective size][owner-size] + +Note that the allocation also gives information on what other processes it is +shared with (seen by hovering over the green arrow). If we navigate to the +other allocation (in this case, gpu/gl/textures/client_25/texture_216) we will +see a non-owning allocation. In this allocation the size is the same, but the +_effective size_ is 0: + +![Effective size of zero][non-owner-size] + +Other types, such as GPUMemoryBuffers and GLImages have similar sharing +patterns. + +When trying to get an overview of the absolute memory usage tied to the GPU, +you can look at the size column (not effective size) of just the GPU process' +GPU category. This will show all GPU allocations, whether or not they are owned +by another process. + +[cc-memory]: /docs/memory-infra/probe-cc.md +[owner-size]: https://storage.googleapis.com/chromium-docs.appspot.com/a325c4426422e53394a322d31b652cfa34231189 +[non-owner-size]: https://storage.googleapis.com/chromium-docs.appspot.com/b8cf464636940d0925f29a102e99aabb9af40b13 diff --git a/chromium/docs/ozone_overview.md b/chromium/docs/ozone_overview.md new file mode 100644 index 00000000000..b32fd7ba67e --- /dev/null +++ b/chromium/docs/ozone_overview.md @@ -0,0 +1,306 @@ +# Ozone Overview + +Ozone is a platform abstraction layer beneath the Aura window system that is +used for low level input and graphics. Once complete, the abstraction will +support underlying systems ranging from embedded SoC targets to new +X11-alternative window systems on Linux such as Wayland or Mir to bring up Aura +Chromium by providing an implementation of the platform interface. + +## Guiding Principles + +Our goal is to enable chromium to be used in a wide variety of projects by +making porting to new platforms easy. To support this goal, ozone follows the +following principles: + +1. **Interfaces, not ifdefs**. Differences between platforms are handled by + calling a platform-supplied object through an interface instead of using + conditional compilation. Platform internals remain encapsulated, and the + public interface acts as a firewall between the platform-neutral upper + layers (aura, blink, content, etc) and the platform-specific lower layers. + The platform layer is relatively centralized to minimize the number of + places ports need to add code. +2. **Flexible interfaces**. The platform interfaces should encapsulate just what + chrome needs from the platform, with minimal constraints on the platform's + implementation as well as minimal constraints on usage from upper layers. An + overly prescriptive interface is less useful for porting because fewer ports + will be able to use it unmodified. Another way of stating is that the + platform layer should provide mechanism, not policy. +3. **Runtime binding of platforms**. Avoiding conditional compilation in the + upper layers allows us to build multiple platforms into one binary and bind + them at runtime. We allow this and provide a command-line flag to select a + platform (`--ozone-platform`) if multiple are enabled. Each platform has a + unique build define (e.g. `ozone_platform_foo`) that can be turned on or off + independently. +4. **Easy out-of-tree platforms**. Most ports begin as forks. 
Some of them + later merge their code upstream, others will have an extended life out of + tree. This is OK, and we should make this process easy to encourage ports, + and to encourage frequent gardening of chromium changes into the downstream + project. If gardening an out-of-tree port is hard, then those projects will + simply ship outdated and potentially insecure chromium-derived code to users. + One way we support these projects is by providing a way to inject additional + platforms into the build by only patching one `ozone_extra.gni` file. + +## Ozone Platform Interface + +Ozone moves platform-specific code behind the following interfaces: + +* `PlatformWindow` represents a window in the windowing system underlying + chrome. Interaction with the windowing system (resize, maximize, close, etc) + as well as dispatch of input events happens via this interface. Under aura, a + `PlatformWindow` corresponds to a `WindowTreeHost`. Under mojo, it corresponds + to a `NativeViewport`. On bare hardware, the underlying windowing system is + very simple and a platform window corresponds to a physical display. +* `SurfaceFactoryOzone` is used to create surfaces for the Chrome compositor to + paint on using EGL/GLES2 or Skia. +* `GpuPlatformSupportHost` provides the platform code + access to IPC between the browser & GPU processes. Some platforms need this + to provide additional services in the GPU process such as display + configuration. +* `CursorFactoryOzone` is used to load & set platform cursors. +* `OverlayManagerOzone` is used to manage overlays. +* `InputController` allows to control input devices such as keyboard, mouse or + touchpad. +* `SystemInputInjector` converts input into events and injects them to the + Ozone platform. +* `NativeDisplayDelegate` is used to support display configuration & hotplug. + +## Ozone in Chromium + +Our implementation of Ozone required changes concentrated in these areas: + +* Cleaning up extensive assumptions about use of X11 throughout the tree, + protecting this code behind the `USE_X11` ifdef, and adding a new `USE_OZONE` + path that works in a relatively platform-neutral way by delegating to the + interfaces described above. +* a `WindowTreeHostOzone` to send events into Aura and participate in display + management on the host system, and +* an Ozone-specific flavor of `GLSurfaceEGL` which delegates allocation of + accelerated surfaces and refresh syncing to the provided implementation of + `SurfaceFactoryOzone`. + +## Porting with Ozone + +Users of the Ozone abstraction need to do the following, at minimum: + +* Write a subclass of `PlatformWindow`. This class (I'll call it + `PlatformWindowImpl`) is responsible for window system integration. It can + use `MessagePumpLibevent` to poll for events from file descriptors and then + invoke `PlatformWindowDelegate::DispatchEvent` to dispatch each event. +* Write a subclass of `SurfaceFactoryOzone` that handles allocating accelerated + surfaces. I'll call this `SurfaceFactoryOzoneImpl`. +* Write a subclass of `CursorFactoryOzone` to manage cursors, or use the + `BitmapCursorFactoryOzone` implementation if only bitmap cursors need to be + supported. +* Write a subclass of `OverlayManagerOzone` or just use `StubOverlayManager` if + your platform does not support overlays. +* Write a subclass of `NativeDisplayDelegate` if necessary or just use + `FakeDisplayDelegate`. +* Write a subclass of `GpuPlatformSupportHost` or just use + `StubGpuPlatformSupportHost`. 
+* Write a subclass of `InputController` or just use `StubInputController`. +* Write a subclass of `SystemInputInjector` if necessary. +* Write a subclass of `OzonePlatform` that owns instances of + the above subclasses and provide a static constructor function for these + objects. This constructor will be called when + your platform is selected and the returned objects will be used to provide + implementations of all the ozone platform interfaces. + If your platform does not need some of the interfaces then you can just + return a `Stub*` instance or a `nullptr`. + +## Adding an Ozone Platform to the build (instructions for out-of-tree ports) + +The recommended way to add your platform to the build is as follows. This walks +through creating a new ozone platform called `foo`. + +1. Fork `chromium/src.git`. +2. Add your implementation in `ui/ozone/platform/` alongside internal platforms. +3. Patch `ui/ozone/ozone_extra.gni` to add your `foo` platform. + +## Building with Ozone + +### ChromeOS - ([waterfall](http://build.chromium.org/p/chromium.chromiumos/waterfall?builder=Linux+ChromiumOS+Ozone+Builder&builder=Linux+ChromiumOS+Ozone+Tests+%281%29&builder=Linux+ChromiumOS+Ozone+Tests+%282%29&reload=none)) + +To build `chrome`, do this from the `src` directory: + +``` shell +gn args out/OzoneChromeOS --args="use_ozone=true target_os=\"chromeos\"" +ninja -C out/OzoneChromeOS chrome +``` + +Then to run for example the X11 platform: + +``` shell +./out/OzoneChromeOS/chrome --ozone-platform=x11 +``` + +### Embedded + +The following targets are currently working for embedded builds: + +* `content_shell` +* various unit tests + +The following targets are currently NOT supported: + +* `ash_shell_with_content` +* `chrome` + +To build `content_shell`, do this from the `src` directory: + +``` shell +gn args out/OzoneEmbedded --args="use_ozone=true toolkit_views=false" +ninja -C out/OzoneEmbedded content_shell +``` + +Then to run for example the headless platform: + +``` shell +./out/OzoneEmbedded/content_shell --ozone-platform=headless \ + --ozone-dump-file=/tmp/ +``` + +### Linux Desktop - ([waterfall](https://build.chromium.org/p/chromium.fyi/builders/Ozone%20Linux/)) +Support for Linux Desktop is currently [in-progress](http://crbug.com/295089). + +The following targets are currently working: + +* various unit tests +* `chrome` + +To build `chrome`, do this from the `src` directory: + +``` shell +gn args out/OzoneLinuxDesktop --args="use_ozone=true enable_package_mash_services=true" +ninja -C out/OzoneLinuxDesktop chrome +``` +Then to run for example the X11 platform: + +``` shell +./out/OzoneLinuxDesktop/chrome --ozone-platform=x11 \ + --mash +``` + +Note: You may need to apply [this patch](https://codereview.chromium.org/2485673002/) to avoid missing ash resources during chrome execution. + +### GN Configuration notes + +You can turn properly implemented ozone platforms on and off by setting the +corresponding flags in your GN configuration. For example +`ozone_platform_headless=false ozone_platform_gbm=false` will turn off the +headless and DRM/GBM platforms. +This will result in a smaller binary and faster builds. To turn ALL platforms +off by default, set `ozone_auto_platforms=false`. + +You can also specify a default platform to run by setting the `ozone_platform` +build parameter. For example `ozone_platform="x11"` will make X11 the +default platform when `--ozone-platform` is not passed to the program. 
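+
+For example, a single GN invocation combining the flags mentioned above might
+look like this (a sketch; `out/OzoneCustom` is just an illustrative output
+directory):
+
+``` shell
+gn args out/OzoneCustom --args="use_ozone=true ozone_platform_headless=false ozone_platform_gbm=false ozone_platform=\"x11\""
+```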
+If `ozone_auto_platforms` is true then `ozone_platform` is set to `headless` +by default. + +## Running with Ozone + +Specify the platform you want to use at runtime using the `--ozone-platform` +flag. For example, to run `content_shell` with the GBM platform: + +``` shell +content_shell --ozone-platform=gbm +``` + +Caveats: + +* `content_shell` always runs at 800x600 resolution. +* For the GBM platform, you may need to terminate your X server (or any other + display server) prior to testing. +* During development, you may need to configure + [sandboxing](linux_sandboxing.md) or to disable it. + +## Ozone Platforms + +### Headless + +This platform +draws graphical output to a PNG image (no GPU support; software rendering only) +and will not output to the screen. You can set +the path of the directory where to output the images +by specifying `--ozone-dump-file=/path/to/output-directory` on the +command line: + +``` shell +content_shell --ozone-platform=headless \ + --ozone-dump-file=/tmp/ +``` + +### DRM/GBM + +This is Linux direct rending with acceleration via mesa GBM & linux DRM/KMS +(EGL/GLES2 accelerated rendering & modesetting in GPU process) and is in +production use on [ChromeOS](http://www.chromium.org/chromium-os). + +Note that all ChromeOS builds of Chrome will compile and attempt to use this. +See [Building Chromium for Chromium OS](https://www.chromium.org/chromium-os/how-tos-and-troubleshooting/building-chromium-browser) for build instructions. + +### Cast + +This platform is used for +[Chromecast](https://www.google.com/intl/en_us/chromecast/). + +### X11 + +This platform provides support for the [X window system](https://www.x.org/). + +### Wayland + +This platform provides support for the +[Wayland](http://wayland.freedesktop.org/) display protocol. It was +initially developed by Intel as +[a fork of chromium](https://github.com/01org/ozone-wayland) +and then partially upstreamed. +It is still actively being developed in the chromium tree, feel free to discuss +with us on freenode.net, `#ozone-wayland` channel or on `ozone-dev`. + +Below are some quick build & run instructions. It is assumed that you are +launching `chrome` from a Wayland environment such as `weston`. Apply +[this patch](https://codereview.chromium.org/2485673002/) and execute the +following commands: + +``` shell +gn args out/OzoneWayland --args="use_ozone=true enable_package_mash_services=true" +ninja -C out/OzoneWayland chrome +./out/OzoneWayland/chrome --ozone-platform=wayland \ + --mash +``` + +### Caca + +This platform +draws graphical output to text using +[libcaca](http://caca.zoy.org/wiki/libcaca) +(no GPU support; software +rendering only). In case you ever wanted to test embedded content shell on +tty. +It has been +[removed from the tree](https://codereview.chromium.org/2445323002/) and is no +longer maintained but you can +[build it as an out-of-tree port](https://github.com/fred-wang/ozone-caca). + +Alternatively, you can try the latest revision known to work. First, install +libcaca shared library and development files. Next, move to the git revision +`0e64be9cf335ee3bea7c989702c5a9a0934af037` +(you will probably need to synchronize the build dependencies with +`gclient sync --with_branch_heads`). 
Finally, build and run the caca platform +with the following commands: + +``` shell +gn args out/OzoneCaca \ + --args="use_ozone=true ozone_platform_caca=true use_sysroot=false ozone_auto_platforms=false toolkit_views=false" +ninja -C out/OzoneCaca content_shell +./out/OzoneCaca/content_shell +``` + + Note: traditional TTYs are not the ideal browsing experience.<br/> +  + +## Communication + +There is a public mailing list: +[ozone-dev@chromium.org](https://groups.google.com/a/chromium.org/forum/#!forum/ozone-dev) diff --git a/chromium/docs/testing/identifying_tests_that_depend_on_order.md b/chromium/docs/testing/identifying_tests_that_depend_on_order.md new file mode 100644 index 00000000000..e62ef1fca22 --- /dev/null +++ b/chromium/docs/testing/identifying_tests_that_depend_on_order.md @@ -0,0 +1,80 @@ + +# Fixing layout test flakiness + +We'd like to stamp out all the tests that have ordering dependencies. This helps +make the tests more reliable and, eventually, will make it so we can run tests +in a random order and avoid new ordering dependencies being introduced. To get +there, we need to weed out and fix all the existing ordering dependencies. + +## Diagnosing test ordering flakiness + +These are steps for diagnosing ordering flakiness once you have a test that you +believe depends on an earlier test running. + +### Bisect test ordering + +1. Run the tests such that the test in question fails. +2. Run `./Tools/Scripts/print-test-ordering` and save the output to a file. This + outputs the tests run in the order they were run on each content_shell + instance. +3. Create a file that contains only the tests run on that worker in the same + order as in your saved output file. The last line in the file should be the + failing test. +4. Run + `./Tools/Scripts/bisect-test-ordering --test-list=path/to/file/from/step/3` + +The bisect-test-ordering script should spit out a list of tests at the end that +causes the test to fail. + +*** promo +At the moment bisect-test-ordering only allows you to find tests that fail due +to a previous test running. It's a small change to the script to make it work +for tests that pass due to a previous test running (i.e. to figure out which +test it depends on running before it). Contact ojan@chromium if you're +interested in adding that feature to the script. +*** + +### Manual bisect + +Instead of running `bisect-test-ordering`, you can manually do the work of step +4 above. + +1. `run-webkit-tests --child-processes=1 --order=none --test-list=path/to/file/from/step/3` +2. If the test doesn't fail here, then the test itself is probably just flaky. + If it does, remove some lines from the file and repeat step 1. Continue + repeating until you've found the dependency. If the test fails when run by + itself, but passes on the bots, that means that it depends on another test to + pass. In this case, you need to generate the list of tests run by + `run-webkit-tests --order=natural` and repeat this process to find which test + causes the test in question to *pass* (e.g. + [crbug.com/262793](https://crbug.com/262793)). +3. File a bug and give it the + [LayoutTestOrdering](https://crbug.com/?q=label:LayoutTestOrdering) label, + e.g. [crbug.com/262787](https://crbug.com/262787) or + [crbug.com/262791](https://crbug.com/262791). + +### Finding test ordering flakiness + +#### Run tests in a random order and diagnose failures + +1. Run `run-webkit-tests --order=random --no-retry` +2. Run `./Tools/Scripts/print-test-ordering` and save the output to a file. 
This + outputs the tests run in the order they were run on each content_shell + instance. +3. Run the diagnosing steps from above to figure out which tests + +Run `run-webkit-tests --run-singly --no-retry`. This starts up a new +content_shell instance for each test. Tests that fail when run in isolation but +pass when run as part of the full test suite represent some state that we're not +properly resetting between test runs or some state that we're not properly +setting when starting up content_shell. You might want to run with +`--time-out-ms=60000` to weed out tests that timeout due to waiting on +content_shell startup time. + +#### Diagnose especially flaky tests + +1. Load + https://test-results.appspot.com/dashboards/overview.html#group=%40ToT%20Blink&flipCount=12 +2. Tweak the flakiness threshold to the desired level of flakiness. +3. Click on *webkit_tests* to get that list of flaky tests. +4. Diagnose the source of flakiness for that test. diff --git a/chromium/docs/testing/layout_test_expectations.md b/chromium/docs/testing/layout_test_expectations.md new file mode 100644 index 00000000000..746cadbc784 --- /dev/null +++ b/chromium/docs/testing/layout_test_expectations.md @@ -0,0 +1,298 @@ +# Layout Test Expectations and Baselines + + +The primary function of the LayoutTests is as a regression test suite; this +means that, while we care about whether a page is being rendered correctly, we +care more about whether the page is being rendered the way we expect it to. In +other words, we look more for changes in behavior than we do for correctness. + +[TOC] + +All layout tests have "expected results", or "baselines", which may be one of +several forms. The test may produce one or more of: + +* A text file containing JavaScript log messages. +* A text rendering of the Render Tree. +* A screen capture of the rendered page as a PNG file. +* WAV files of the audio output, for WebAudio tests. + +For any of these types of tests, there are files checked into the LayoutTests +directory named `-expected.{txt,png,wav}`. Lastly, we also support the concept +of "reference tests", which check that two pages are rendered identically +(pixel-by-pixel). As long as the two tests' output match, the tests pass. For +more on reference tests, see +[Writing ref tests](https://trac.webkit.org/wiki/Writing%20Reftests). + +## Failing tests + +When the output doesn't match, there are two potential reasons for it: + +* The port is performing "correctly", but the output simply won't match the + generic version. The usual reason for this is for things like form controls, + which are rendered differently on each platform. +* The port is performing "incorrectly" (i.e., the test is failing). + +In both cases, the convention is to check in a new baseline (aka rebaseline), +even though that file may be codifying errors. This helps us maintain test +coverage for all the other things the test is testing while we resolve the bug. + +*** promo +If a test can be rebaselined, it should always be rebaselined instead of adding +lines to TestExpectations. +*** + +Bugs at [crbug.com](https://crbug.com) should track fixing incorrect behavior, +not lines in +[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations). If a +test is never supposed to pass (e.g. it's testing Windows-specific behavior, so +can't ever pass on Linux/Mac), move it to the +[NeverFixTests](../../third_party/WebKit/LayoutTests/NeverFixTests) file. That +gets it out of the way of the rest of the project. 
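+
+For instance, a hypothetical Windows-only test (the name here is made up)
+could be kept out of the way on the other platforms with a NeverFixTests
+entry like the following; the syntax is described under "Updating the
+expectations files" below:
+
+```
+Bug(foo) [ Linux Mac ] fast/forms/windows-only-behavior.html [ WontFix ]
+```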
+ +There are some cases where you can't rebaseline and, unfortunately, we don't +have a better solution than either: + +1. Reverting the patch that caused the failure, or +2. Adding a line to TestExpectations and fixing the bug later. + +In this case, **reverting the patch is strongly preferred**. + +These are the cases where you can't rebaseline: + +* The test is a reference test. +* The test gives different output in release and debug; in this case, generate a + baseline with the release build, and mark the debug build as expected to fail. +* The test is flaky, crashes or times out. +* The test is for a feature that hasn't yet shipped on some platforms yet, but + will shortly. + +## Handling flaky tests + +The +[flakiness dashboard](https://test-results.appspot.com/dashboards/flakiness_dashboard.html) +is a tool for understanding a test’s behavior over time. +Originally designed for managing flaky tests, the dashboard shows a timeline +view of the test’s behavior over time. The tool may be overwhelming at first, +but +[the documentation](https://dev.chromium.org/developers/testing/flakiness-dashboard) +should help. Once you decide that a test is truly flaky, you can suppress it +using the TestExpectations file, as described below. + +We do not generally expect Chromium sheriffs to spend time trying to address +flakiness, though. + +## How to rebaseline + +Since baselines themselves are often platform-specific, updating baselines in +general requires fetching new test results after running the test on multiple +platforms. + +### Rebaselining using try jobs + +The recommended way to rebaseline for a currently-in-progress CL is to use +results from try jobs. To do this: + +1. Upload a CL with changes in Blink source code or layout tests. +2. Trigger Blink try jobs. The bots to use are the release builders on + [tryserver.blink](https://build.chromium.org/p/tryserver.blink/builders). + This can be done via the code review Web UI or via `git cl try`. +3. Wait for all try jobs to finish. +4. Run `third_party/WebKit/Tools/Scripts/webkit-patch rebaseline-cl` to fetch + new baselines. +5. Commit the new baselines and upload a new patch. + +This way, the new baselines can be reviewed along with the changes, which helps +the reviewer verify that the new baselines are correct. It also means that there +is no period of time when the layout test results are ignored. + +The tests which `webkit-patch rebaseline-cl` tries to download new baselines for +depends on its arguments. + +* By default, it tries to download all baselines for tests that failed in the + try jobs. +* If you pass `--only-changed-tests`, then only tests modified in the CL will be + considered. +* You can also explicitly pass a list of test names, and then just those tests + will be rebaselined. + +### Rebaselining with rebaseline-o-matic + +If the test is not already listed in +[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations), you +can mark it as `[ NeedsRebaseline ]`. The +[rebaseline-o-matic bot](https://build.chromium.org/p/chromium.infra.cron/builders/rebaseline-o-matic) +will automatically detect when the bots have cycled (by looking at the blame on +the file) and do the rebaseline for you. As long as the test doesn't timeout or +crash, it won't turn the bots red if it has a `NeedsRebaseline` expectation. 
+When all of the continuous builders on the waterfall have cycled, the +rebaseline-o-matic bot will commit a patch which includes the new baselines and +removes the `[ NeedsRebaseline ]` entry from TestExpectations. + +### Rebaselining manually + +1. If the tests is already listed in TestExpectations as flaky, mark the test + `NeedsManualRebaseline` and comment out the flaky line so that your patch can + land without turning the tree red. If the test is not in TestExpectations, + you can add a `[ Rebaseline ]` line to TestExpectations. +2. Run `third_party/WebKit/Tools/Scripts/webkit-patch rebaseline-expectations` +3. Post the patch created in step 2 for review. + +## Kinds of expectations files + +* [TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations): The + main test failure suppression file. In theory, this should be used for flaky + lines and `NeedsRebaseline`/`NeedsManualRebaseline` lines. +* [ASANExpectations](../../third_party/WebKit/LayoutTests/ASANExpectations): + Tests that fail under ASAN. +* [LeakExpectations](../../third_party/WebKit/LayoutTests/LeakExpectations): + Tests that have memory leaks under the leak checker. +* [MSANExpectations](../../third_party/WebKit/LayoutTests/MSANExpectations): + Tests that fail under MSAN. +* [NeverFixTests](../../third_party/WebKit/LayoutTests/NeverFixTests): Tests + that we never intend to fix (e.g. a test for Windows-specific behavior will + never be fixed on Linux/Mac). Tests that will never pass on any platform + should just be deleted, though. +* [SlowTests](../../third_party/WebKit/LayoutTests/SlowTests): Tests that take + longer than the usual timeout to run. Slow tests are given 5x the usual + timeout. +* [SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests): A small subset + of tests that we run on the Android bot. +* [StaleTestExpectations](../../third_party/WebKit/LayoutTests/StaleTestExpectations): + Platform-specific lines that have been in TestExpectations for many months. + They're moved here to get them out of the way of people doing rebaselines + since they're clearly not getting fixed anytime soon. +* [W3CImportExpectations](../../third_party/WebKit/LayoutTests/W3CImportExpectations): + A record of which W3C tests should be imported or skipped. +* [WPTServeExpectations](../../third_party/WebKit/LayoutTests/WPTServeExpectations): + Expectations for tests that fail differently when run under the W3C's wptserve + HTTP server with the `--enable-wptserve flag`. This is an experimental feature + at this time. + + +### Flag-specific expectations files + +It is possible to handle tests that only fail when run with a particular flag +being passed to `content_shell`. See +[LayoutTests/FlagExpectations/README.txt](../../third_party/WebKit/LayoutTests/FlagExpectations/README.txt) +for more. + +## Updating the expectations files + +### Ordering + +The file is not ordered. If you put new changes somewhere in the middle of the +file, this will reduce the chance of merge conflicts when landing your patch. + +### Syntax + +The syntax of the file is roughly one expectation per line. An expectation can +apply to either a directory of tests, or a specific tests. Lines prefixed with +`# ` are treated as comments, and blank lines are allowed as well. + +The syntax of a line is roughly: + +``` +[ bugs ] [ "[" modifiers "]" ] test_name [ "[" expectations "]" ] +``` + +* Tokens are separated by whitespace. 
+* **The brackets delimiting the modifiers and expectations from the bugs and the + test_name are not optional**; however the modifiers component is optional. In + other words, if you want to specify modifiers or expectations, you must + enclose them in brackets. +* Lines are expected to have one or more bug identifiers, and the linter will + complain about lines missing them. Bug identifiers are of the form + `crbug.com/12345`, `code.google.com/p/v8/issues/detail?id=12345` or + `Bug(username)`. +* If no modifiers are specified, the test applies to all of the configurations + applicable to that file. +* Modifiers can be one or more of `Mac`, `Mac10.9`, `Mac10.10`, `Mac10.11`, + `Retina`, `Win`, `Win7`, `Win10`, `Linux`, `Linux32`, `Precise`, `Trusty`, + `Android`, `Release`, `Debug`. +* Some modifiers are meta keywords, e.g. `Win` represents both `Win7` and + `Win10`. See the `CONFIGURATION_SPECIFIER_MACROS` dictionary in + [third_party/WebKit/Tools/Scripts/webkitpy/layout_tests/port/base.py](../../third_party/WebKit/Tools/Scripts/webkitpy/layout_tests/port/base.py) + for the meta keywords and which modifiers they represent. +* Expectations can be one or more of `Crash`, `Failure`, `Pass`, `Rebaseline`, + `Slow`, `Skip`, `Timeout`, `WontFix`, `Missing`, `NeedsRebaseline`, + `NeedsManualRebaseline`. If multiple expectations are listed, the test is + considered "flaky" and any of those results will be considered as expected. + +For example: + +``` +crbug.com/12345 [ Win Debug ] fast/html/keygen.html [ Crash ] +``` + +which indicates that the "fast/html/keygen.html" test file is expected to crash +when run in the Debug configuration on Windows, and the tracking bug for this +crash is bug \#12345 in the [Chromium issue tracker](https://crbug.com). Note +that the test will still be run, so that we can notice if it doesn't actually +crash. + +Assuming you're running a debug build on Mac 10.9, the following lines are all +equivalent (in terms of whether the test is performed and its expected outcome): + +``` +fast/html/keygen.html [ Skip ] +fast/html/keygen.html [ WontFix ] +Bug(darin) [ Mac10.9 Debug ] fast/html/keygen.html [ Skip ] +``` + +### Semantics + +* `WontFix` implies `Skip` and also indicates that we don't have any plans to + make the test pass. +* `WontFix` lines always go in the + [NeverFixTests file]((../../third_party/WebKit/LayoutTests/NeverFixTests) as + we never intend to fix them. These are just for tests that only apply to some + subset of the platforms we support. +* `WontFix` and `Skip` must be used by themselves and cannot be specified + alongside `Crash` or another expectation keyword. +* `Slow` causes the test runner to give the test 5x the usual time limit to run. + `Slow` lines go in the + [SlowTests file ](../../third_party/WebKit/LayoutTests/SlowTests). A given + line cannot have both Slow and Timeout. + +Also, when parsing the file, we use two rules to figure out if an expectation +line applies to the current run: + +1. If the configuration parameters don't match the configuration of the current + run, the expectation is ignored. +2. Expectations that match more of a test name are used before expectations that + match less of a test name. 
+ +For example, if you had the following lines in your file, and you were running a +debug build on `Mac10.10`: + +``` +crbug.com/12345 [ Mac10.10 ] fast/html [ Failure ] +crbug.com/12345 [ Mac10.10 ] fast/html/keygen.html [ Pass ] +crbug.com/12345 [ Win7 ] fast/forms/submit.html [ Failure ] +crbug.com/12345 fast/html/section-element.html [ Failure Crash ] +``` + +You would expect: + +* `fast/html/article-element.html` to fail with a text diff (since it is in the + fast/html directory). +* `fast/html/keygen.html` to pass (since the exact match on the test name). +* `fast/html/submit.html` to pass (since the configuration parameters don't + match). +* `fast/html/section-element.html` to either crash or produce a text (or image + and text) failure, but not time out or pass. + +*** promo +Duplicate expectations are not allowed within the file and will generate +warnings. +*** + +You can verify that any changes you've made to an expectations file are correct +by running: + +```bash +third_party/WebKit/Tools/Scripts/lint-test-expectations +``` + +which will cycle through all of the possible combinations of configurations +looking for problems. diff --git a/chromium/docs/testing/layout_tests.md b/chromium/docs/testing/layout_tests.md new file mode 100644 index 00000000000..19319012438 --- /dev/null +++ b/chromium/docs/testing/layout_tests.md @@ -0,0 +1,565 @@ +# Layout Tests + +Layout tests are used by Blink to test many components, including but not +limited to layout and rendering. In general, layout tests involve loading pages +in a test renderer (`content_shell`) and comparing the rendered output or +JavaScript output against an expected output file. + +[TOC] + +## Running Layout Tests + +### Initial Setup + +Before you can run the layout tests, you need to build the `blink_tests` target +to get `content_shell` and all of the other needed binaries. + +```bash +ninja -C out/Release blink_tests +``` + +On **Android** (layout test support +[currently limited to KitKat and earlier](https://crbug.com/567947)) you need to +build and install `content_shell_apk` instead. See also: +[Android Build Instructions](../android_build_instructions.md). + +```bash +ninja -C out/Default content_shell_apk +adb install -r out/Default/apks/ContentShell.apk +``` + +On **Mac**, you probably want to strip the content_shell binary before starting +the tests. If you don't, you'll have 5-10 running concurrently, all stuck being +examined by the OS crash reporter. This may cause other failures like timeouts +where they normally don't occur. + +```bash +strip ./xcodebuild/{Debug,Release}/content_shell.app/Contents/MacOS/content_shell +``` + +### Running the Tests + +TODO: mention `testing/xvfb.py` + +The test runner script is in +`third_party/WebKit/Tools/Scripts/run-webkit-tests`. + +To specify which build directory to use (e.g. out/Default, out/Release, +out/Debug) you should pass the `-t` or `--target` parameter. For example, to +use the build in `out/Default`, use: + +```bash +python third_party/WebKit/Tools/Scripts/run-webkit-tests -t Default +``` + +For Android (if your build directory is `out/android`): + +```bash +python third_party/WebKit/Tools/Scripts/run-webkit-tests -t android --android +``` + +Tests marked as `[ Skip ]` in +[TestExpectations](../../third_party/WebKit/LayoutTests/TestExpectations) +won't be run at all, generally because they cause some intractable tool error. +To force one of them to be run, either rename that file or specify the skipped +test as the only one on the command line (see below). 
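+
+For example, assuming `fast/forms/001.html` were such a test, you could force
+it to run by naming it explicitly (see the following sections for general
+command-line usage):
+
+```bash
+python third_party/WebKit/Tools/Scripts/run-webkit-tests -t Default fast/forms/001.html
+```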
+ +Note that currently only the tests listed in +[SmokeTests](../../third_party/WebKit/LayoutTests/SmokeTests) +are run on the Android bots, since running all layout tests takes too long on +Android (and may still have some infrastructure issues). Most developers focus +their Blink testing on Linux. We rely on the fact that the Linux and Android +behavior is nearly identical for scenarios outside those covered by the smoke +tests. + +To run only some of the tests, specify their directories or filenames as +arguments to `run_webkit_tests.py` relative to the layout test directory +(`src/third_party/WebKit/LayoutTests`). For example, to run the fast form tests, +use: + +```bash +Tools/Scripts/run-webkit-tests fast/forms +``` + +Or you could use the following shorthand: + +```bash +Tools/Scripts/run-webkit-tests fast/fo\* +``` + +*** promo +Example: To run the layout tests with a debug build of `content_shell`, but only +test the SVG tests and run pixel tests, you would run: + +```bash +Tools/Scripts/run-webkit-tests -t Default svg +``` +*** + +As a final quick-but-less-robust alternative, you can also just use the +content_shell executable to run specific tests by using (for Windows): + +```bash +out/Default/content_shell.exe --run-layout-test --no-sandbox full_test_source_path +``` + +as in: + +```bash +out/Default/content_shell.exe --run-layout-test --no-sandbox \ + c:/chrome/src/third_party/WebKit/LayoutTests/fast/forms/001.html +``` + +but this requires a manual diff against expected results, because the shell +doesn't do it for you. + +To see a complete list of arguments supported, run: `run-webkit-tests --help` + +*** note +**Linux Note:** We try to match the Windows render tree output exactly by +matching font metrics and widget metrics. If there's a difference in the render +tree output, we should see if we can avoid rebaselining by improving our font +metrics. For additional information on Linux Layout Tests, please see +[docs/layout_tests_linux.md](../layout_tests_linux.md). +*** + +*** note +**Mac Note:** While the tests are running, a bunch of Appearance settings are +overridden for you so the right type of scroll bars, colors, etc. are used. +Your main display's "Color Profile" is also changed to make sure color +correction by ColorSync matches what is expected in the pixel tests. The change +is noticeable, how much depends on the normal level of correction for your +display. The tests do their best to restore your setting when done, but if +you're left in the wrong state, you can manually reset it by going to +System Preferences → Displays → Color and selecting the "right" value. +*** + +### Test Harness Options + +This script has a lot of command line flags. You can pass `--help` to the script +to see a full list of options. A few of the most useful options are below: + +| Option | Meaning | +|:----------------------------|:--------------------------------------------------| +| `--debug` | Run the debug build of the test shell (default is release). Equivalent to `-t Debug` | +| `--nocheck-sys-deps` | Don't check system dependencies; this allows faster iteration. | +| `--verbose` | Produce more verbose output, including a list of tests that pass. | +| `--no-pixel-tests` | Disable the pixel-to-pixel PNG comparisons and image checksums for tests that don't call `testRunner.dumpAsText()` | +| `--reset-results` | Write all generated results directly into the given directory, overwriting what's there. 
| +| `--new-baseline` | Write all generated results into the most specific platform directory, overwriting what's there. Equivalent to `--reset-results --add-platform-expectations` | +| `--renderer-startup-dialog` | Bring up a modal dialog before running the test, useful for attaching a debugger. | +| `--fully-parallel` | Run tests in parallel using as many child processes as the system has cores. | +| `--driver-logging` | Print C++ logs (LOG(WARNING), etc). | + +## Success and Failure + +A test succeeds when its output matches the pre-defined expected results. If any +tests fail, the test script will place the actual generated results, along with +a diff of the actual and expected results, into +`src/out/Default/layout_test_results/`, and by default launch a browser with a +summary and link to the results/diffs. + +The expected results for tests are in the +`src/third_party/WebKit/LayoutTests/platform` or alongside their respective +tests. + +*** note +Tests which use [testharness.js](https://github.com/w3c/testharness.js/) +do not have expected result files if all test cases pass. +*** + +A test that runs but produces the wrong output is marked as "failed", one that +causes the test shell to crash is marked as "crashed", and one that takes longer +than a certain amount of time to complete is aborted and marked as "timed out". +A row of dots in the script's output indicates one or more tests that passed. + +## Test expectations + +The +[TestExpectations](../../WebKit/LayoutTests/TestExpectations) file (and related +files, including +[skia_test_expectations.txt](../../skia/skia_test_expectations.txt)) +contains the list of all known layout test failures. See +[Test Expectations](./layout_test_expectations.md) +for more on this. + +## Testing Runtime Flags + +There are two ways to run layout tests with additional command-line arguments: + +* Using `--additional-driver-flag`: + + ```bash + run-webkit-tests --additional-driver-flag=--blocking-repaint + ``` + + This tells the test harness to pass `--blocking-repaint` to the + content_shell binary. + + It will also look for flag-specific expectations in + `LayoutTests/FlagExpectations/blocking-repaint`, if this file exists. The + suppressions in this file override the main TestExpectations file. + +* Using a *virtual test suite* defined in + [LayoutTests/VirtualTestSuites](https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/VirtualTestSuites). + A virtual test suite runs a subset of layout tests under a specific path with + additional flags. For example, you could test a (hypothetical) new mode for + repainting using the following virtual test suite: + + ```json + { + "prefix": "blocking_repaint", + "base": "fast/repaint", + "args": ["--blocking-repaint"], + } + ``` + + This will create new "virtual" tests of the form + `virtual/blocking_repaint/fast/repaint/...`` which correspond to the files + under `LayoutTests/fast/repaint` and pass `--blocking-repaint` to + content_shell when they are run. + + These virtual tests exist in addition to the original `fast/repaint/...` + tests. They can have their own expectations in TestExpectations, and their own + baselines. The test harness will use the non-virtual baselines as a fallback. + However, the non-virtual expectations are not inherited: if + `fast/repaint/foo.html` is marked `[ Fail ]`, the test harness still expects + `virtual/blocking_repaint/fast/repaint/foo.html` to pass. If you expect the + virtual test to also fail, it needs its own suppression. 
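
  For example (a sketch with the made-up paths and bug number used elsewhere in
  this document), the base test and its virtual counterpart would each need
  their own TestExpectations line:

  ```
  crbug.com/12345 fast/repaint/foo.html [ Fail ]
  crbug.com/12345 virtual/blocking_repaint/fast/repaint/foo.html [ Fail ]
  ```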
+ + The "prefix" value does not have to be unique. This is useful if you want to + run multiple directories with the same flags (but see the notes below about + performance). Using the same prefix for different sets of flags is not + recommended. + +For flags whose implementation is still in progress, virtual test suites and +flag-specific expectations represent two alternative strategies for testing. +Consider the following when choosing between them: + +* The + [waterfall builders](https://dev.chromium.org/developers/testing/chromium-build-infrastructure/tour-of-the-chromium-buildbot) + and [try bots](https://dev.chromium.org/developers/testing/try-server-usage) + will run all virtual test suites in addition to the non-virtual tests. + Conversely, a flag-specific expectations file won't automatically cause the + bots to test your flag - if you want bot coverage without virtual test suites, + you will need to set up a dedicated bot for your flag. + +* Due to the above, virtual test suites incur a performance penalty for the + commit queue and the continuous build infrastructure. This is exacerbated by + the need to restart `content_shell` whenever flags change, which limits + parallelism. Therefore, you should avoid adding large numbers of virtual test + suites. They are well suited to running a subset of tests that are directly + related to the feature, but they don't scale to flags that make deep + architectural changes that potentially impact all of the tests. + +## Tracking Test Failures + +All bugs, associated with layout test failures must have the +[Test-Layout](https://crbug.com/?q=label:Test-Layout) label. Depending on how +much you know about the bug, assign the status accordingly: + +* **Unconfirmed** -- You aren't sure if this is a simple rebaseline, possible + duplicate of an existing bug, or a real failure +* **Untriaged** -- Confirmed but unsure of priority or root cause. +* **Available** -- You know the root cause of the issue. +* **Assigned** or **Started** -- You will fix this issue. + +When creating a new layout test bug, please set the following properties: + +* Components: a sub-component of Blink +* OS: **All** (or whichever OS the failure is on) +* Priority: 2 (1 if it's a crash) +* Type: **Bug** +* Labels: **Test-Layout** + +You can also use the _Layout Test Failure_ template, which will pre-set these +labels for you. + +## Writing Layout Tests + +### Pixel Tests + +TODO: Write documentation here. + +### Reference Tests + +TODO: Write documentation here. + +### Script Tests + +These tests use a JavaScript test harness and test cases written in script to +exercise features and make assertions about the behavior. Generally, new tests +are written using the [testharness.js](https://github.com/w3c/testharness.js/) +test harness, which is also heavily used in the cross-vendor +[web-platform-tests](https://github.com/w3c/web-platform-tests) project. Tests +written with testharness.js generally look something like the following: + +```html +<!DOCTYPE html> +<script src="/resources/testharness.js"></script> +<script src="/resources/testharnessreport.js"></script> +<script> +test(t => { + var x = true; + assert_true(x); +}, "Truth is true."); +</script> +``` + +Many older tests are written using the **js-test** +(`LayoutTests/resources/js-test.js`) test harness. This harness is +**deprecated**, and should not be used for new tests. 
The tests call `testRunner.dumpAsText()` to signal that the page content should
be dumped and compared against an \*-expected.txt file, and optionally
`testRunner.waitUntilDone()` and `testRunner.notifyDone()` for asynchronous
tests.

### Tests that use an HTTP Server

By default, tests are loaded as if via `file:` URLs. Some web platform features
require tests served via HTTP or HTTPS, for example relative paths (`src=/foo`)
or features restricted to secure protocols.

HTTP tests are those tests that are under `LayoutTests/http/tests` (or virtual
variants). They require a locally running HTTP server (Apache) to run. Tests
are served off of ports 8000 and 8080 for HTTP and 8443 for HTTPS. If you run
the tests using `run-webkit-tests`, the server will be started automatically.
To run the server manually to reproduce or debug a failure:

```bash
cd src/third_party/WebKit/Tools/Scripts
run-blink-httpd start
```

The layout tests will be served from `http://127.0.0.1:8000`. For example, to
run the test `http/tests/serviceworker/chromium/service-worker-allowed.html`,
navigate to
`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some
tests will behave differently if you go to 127.0.0.1 instead of localhost, so
use 127.0.0.1.

To kill the server, run `run-blink-httpd --server stop`, or just use `taskkill`
or the Task Manager on Windows, and `killall` or Activity Monitor on MacOS.

The test server sets up an alias to the `LayoutTests/resources` directory. In
HTTP tests, you can access the testing framework at e.g.
`src="/js-test-resources/js-test.js"`.

### Writing tests that need to paint, raster, or draw a frame of intermediate output

A layout test does not actually draw frames of output until the test exits. If
a test needs to generate a painted frame, it should use
`window.testRunner.displayAsyncThen`, which will run the machinery to put up a
frame and then call the passed callback. There is also a library at
`fast/repaint/resources/text-based-repaint.js` to help with writing paint
invalidation and repaint tests.

#### Layout test support for `testRunner`

Some layout tests rely on the testRunner object to expose configuration for
mocking the platform. This is provided in content_shell; here is a UML diagram
of how the testRunner bindings configure the platform implementation:

[testRunner bindings UML](https://docs.google.com/drawings/d/1KNRNjlxK0Q3Tp8rKxuuM5mpWf4OJQZmvm9_kpwu_Wwg/edit)

See also:
[Writing reliable layout tests](https://docs.google.com/document/d/1Yl4SnTLBWmY1O99_BTtQvuoffP8YM9HZx2YPkEsaduQ/edit)

## Debugging Layout Tests

After the layout tests run, you should get a summary of tests that pass or
fail. If something fails unexpectedly (a new regression), you will get a
content_shell window with a summary of the unexpected failures. Or you might
have a failing test in mind to investigate. In any case, here are some steps
and tips for finding the problem.

* Take a look at the result. Sometimes tests just need to be rebaselined (see
  below) to account for changes introduced in your patch.
  * Load the test into a trunk Chrome or content_shell build and look at its
    result. (For tests in the http/ directory, start the http server first.
    See above. Navigate to `http://localhost:8000/` and proceed from there.)
    The best tests describe what they're looking for, but not all do, and
    sometimes things they're not explicitly testing are still broken. Compare
    it to Safari, Firefox, and IE if necessary to see if it's correct.
If + you're still not sure, find the person who knows the most about it and + ask. + * Some tests only work properly in content_shell, not Chrome, because they + rely on extra APIs exposed there. + * Some tests only work properly when they're run in the layout-test + framework, not when they're loaded into content_shell directly. The test + should mention that in its visible text, but not all do. So try that too. + See "Running the tests", above. +* If you think the test is correct, confirm your suspicion by looking at the + diffs between the expected result and the actual one. + * Make sure that the diffs reported aren't important. Small differences in + spacing or box sizes are often unimportant, especially around fonts and + form controls. Differences in wording of JS error messages are also + usually acceptable. + * `./run_webkit_tests.py path/to/your/test.html --full-results-html` will + produce a page including links to the expected result, actual result, and + diff. + * Add the `--sources` option to `run_webkit_tests.py` to see exactly which + expected result it's comparing to (a file next to the test, something in + platform/mac/, something in platform/chromium-win/, etc.) + * If you're still sure it's correct, rebaseline the test (see below). + Otherwise... +* If you're lucky, your test is one that runs properly when you navigate to it + in content_shell normally. In that case, build the Debug content_shell + project, fire it up in your favorite debugger, and load the test file either + from a file:// URL. + * You'll probably be starting and stopping the content_shell a lot. In VS, + to save navigating to the test every time, you can set the URL to your + test (file: or http:) as the command argument in the Debugging section of + the content_shell project Properties. + * If your test contains a JS call, DOM manipulation, or other distinctive + piece of code that you think is failing, search for that in the Chrome + solution. That's a good place to put a starting breakpoint to start + tracking down the issue. + * Otherwise, you're running in a standard message loop just like in Chrome. + If you have no other information, set a breakpoint on page load. +* If your test only works in full layout-test mode, or if you find it simpler to + debug without all the overhead of an interactive session, start the + content_shell with the command-line flag `--run-layout-test`, followed by the + URL (file: or http:) to your test. More information about running layout tests + in content_shell can be found [here](./layout_tests_in_content_shell.md). + * In VS, you can do this in the Debugging section of the content_shell + project Properties. + * Now you're running with exactly the same API, theme, and other setup that + the layout tests use. + * Again, if your test contains a JS call, DOM manipulation, or other + distinctive piece of code that you think is failing, search for that in + the Chrome solution. That's a good place to put a starting breakpoint to + start tracking down the issue. + * If you can't find any better place to set a breakpoint, start at the + `TestShell::RunFileTest()` call in `content_shell_main.cc`, or at + `shell->LoadURL() within RunFileTest()` in `content_shell_win.cc`. +* Debug as usual. Once you've gotten this far, the failing layout test is just a + (hopefully) reduced test case that exposes a problem. 
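
As a concrete sketch (assuming a Linux debug build in `out/Debug` and gdb; the
test path reuses the example file mentioned earlier and stands in for your
failing test), a debugging session might start like this:

```bash
# A sketch, not a prescribed workflow: run a single test in layout-test mode
# under gdb. The test path is a placeholder for the test you are debugging.
gdb --args out/Debug/content_shell --run-layout-test --no-sandbox \
    third_party/WebKit/LayoutTests/fast/forms/001.html
# Inside gdb: set any breakpoints you need, then type "run".
```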
+ +### Debugging HTTP Tests + +To run the server manually to reproduce/debug a failure: + +```bash +cd src/third_party/WebKit/Tools/Scripts +run-blink-httpd start +``` + +The layout tests will be served from `http://127.0.0.1:8000`. For example, to +run the test +`LayoutTest/http/tests/serviceworker/chromium/service-worker-allowed.html`, +navigate to +`http://127.0.0.1:8000/serviceworker/chromium/service-worker-allowed.html`. Some +tests will behave differently if you go to 127.0.0.1 vs localhost, so use +127.0.0.1. + +To kill the server, run `run-blink-httpd --server stop`, or just use `taskkill` +or the Task Manager on Windows, and `killall` or Activity Monitor on MacOS. + +The test server sets up an alias to `LayoutTests/resources` directory. In HTTP +tests, you can access testing framework at e.g. +`src="/js-test-resources/js-test.js"`. + +### Tips + +Check https://test-results.appspot.com/ to see how a test did in the most recent +~100 builds on each builder (as long as the page is being updated regularly). + +A timeout will often also be a text mismatch, since the wrapper script kills the +content_shell before it has a chance to finish. The exception is if the test +finishes loading properly, but somehow hangs before it outputs the bit of text +that tells the wrapper it's done. + +Why might a test fail (or crash, or timeout) on buildbot, but pass on your local +machine? +* If the test finishes locally but is slow, more than 10 seconds or so, that + would be why it's called a timeout on the bot. +* Otherwise, try running it as part of a set of tests; it's possible that a test + one or two (or ten) before this one is corrupting something that makes this + one fail. +* If it consistently works locally, make sure your environment looks like the + one on the bot (look at the top of the stdio for the webkit_tests step to see + all the environment variables and so on). +* If none of that helps, and you have access to the bot itself, you may have to + log in there and see if you can reproduce the problem manually. + +### Debugging Inspector Tests + +* Add `window.debugTest = true;` to your test code as follows: + + ```javascript + window.debugTest = true; + function test() { + /* TEST CODE */ + } + ``` + +* Do one of the following: + * Option A) Run from the chromium/src folder: + `blink/tools/run_layout_tests.sh + --additional_driver_flag='--remote-debugging-port=9222' + --time-out-ms=6000000` + * Option B) If you need to debug an http/tests/inspector test, start httpd + as described above. Then, run content_shell: + `out/Default/content_shell --remote-debugging-port=9222 --run-layout-test + http://127.0.0.1:8000/path/to/test.html` +* Open `http://localhost:9222` in a stable/beta/canary Chrome, click the single + link to open the devtools with the test loaded. +* You may need to replace devtools.html with inspector.html in your URL (or you + can use local chrome inspection of content_shell from chrome://inspect + instead) +* In the loaded devtools, set any required breakpoints and execute `test()` in + the console to actually start the test. + +## Rebaselining Layout Tests + +*** promo +To automatically re-baseline tests across all Chromium platforms, using the +buildbot results, see the +[Rebaselining keywords in TestExpectations](./layout_test_expectations.md) +and the +[Rebaselining Tool](https://trac.webkit.org/wiki/Rebaseline). +Alternatively, to manually run and test and rebaseline it on your workstation, +read on. 
+*** + +By default, text-only tests (ones that call `testRunner.dumpAsText()`) produce +only text results. Other tests produce both new text results and new image +results (the image baseline comprises two files, `-expected.png` and + `-expected.checksum`). So you'll need either one or three `-expected.\*` files +in your new baseline, depending on whether you have a text-only test or not. If +you enable `--no-pixel-tests`, only new text results will be produced, even for +tests that do image comparisons. + +```bash +cd src/third_party/WebKit +Tools/Scripts/run-webkit-tests --new-baseline foo/bar/test.html +``` + +The above command will generate a new baseline for +`LayoutTests/foo/bar/test.html` and put the output files in the right place, +e.g. +`LayoutTests/platform/chromium-win/LayoutTests/foo/bar/test-expected.{txt,png,checksum}`. + +When you rebaseline a test, make sure your commit description explains why the +test is being re-baselined. If this is a special case (i.e., something we've +decided to be different with upstream), please put a README file next to the new +expected output explaining the difference. + +## W3C Tests + +In addition to layout tests developed and run just by the Blink team, there are +also W3C conformance tests. For more info, see +[Importing the W3C Tests](https://www.chromium.org/blink/importing-the-w3c-tests). + +## Known Issues + +See +[bugs with the component Blink>Infra](https://bugs.chromium.org/p/chromium/issues/list?can=2&q=component%3ABlink%3EInfra) +for issues related to Blink tools, include the layout test runner. + +* Windows and Linux: Do not copy and paste while the layout tests are running, + as it may interfere with the editing/pasteboard and other clipboard-related + tests. (Mac tests swizzle NSClipboard to avoid any conflicts). +* If QuickTime is not installed, the plugin tests + `fast/dom/object-embed-plugin-scripting.html` and + `plugins/embed-attributes-setting.html` are expected to fail. diff --git a/chromium/docs/testing/layout_tests_in_content_shell.md b/chromium/docs/testing/layout_tests_in_content_shell.md new file mode 100644 index 00000000000..82cb51e465a --- /dev/null +++ b/chromium/docs/testing/layout_tests_in_content_shell.md @@ -0,0 +1,69 @@ +# Running layout tests using the content shell + +## Basic usage + +Layout tests can be run with `content_shell`. To just dump the render tree, use +the `--run-layout-test` flag: + +```bash +out/Default/content_shell --run-layout-test foo.html +``` + +### Compiling + +If you want to run layout tests, +[build the target `blink_tests`](layout_tests.md); this includes all the other +binaries required to run the tests. + +### Running + +You can run layout tests using `run-webkit-tests` (in +`src/third_party/WebKit/Tools/Scripts`). + +```bash +third_party/WebKit/Tools/Scripts/run-webkit-tests storage/indexeddb +``` + +or execute the shell directly: + +```bash +out/Default/content_shell --remote-debugging-port=9222 +``` + +This allows you see how your changes look in Chromium, and even connect with +devtools (by going to http://127.0.0.1:9222 from another window) to inspect your +freshly compiled Blink. + +*** note +On the Mac, use `Content Shell.app`, not `content_shell`. 
+ +```bash +out/Default/Content\ Shell.app/Contents/MacOS/Content\ Shell --remote-debugging-port=9222 +``` +*** + +### Debugging Renderer Crashes + +To debug a renderer crash, ask Content Shell to wait for you to attach a +debugger once it spawns a renderer process by adding the +`--renderer-startup-dialog` flag: + +```bash +out/Default/content_shell --renderer-startup-dialog +``` + +Debugging workers and other subprocesses is simpler with +`--wait-for-debugger-children`, which can have one of two values: `plugin` or +`renderer`. + +## Future Work + +### Reusing existing testing objects + +To avoid writing (and maintaining!) yet another test controller, it is desirable +to reuse an existing test controller. A possible solution would be to change +DRT's test controller to not depend on DRT's implementation of the Blink +objects, but rather on the Blink interfaces. In addition, we would need to +extract an interface from the test shell object that can be implemented by +content shell. This would allow for directly using DRT's test controller in +content shell. diff --git a/chromium/docs/testing/using_breakpad_with_content_shell.md b/chromium/docs/testing/using_breakpad_with_content_shell.md new file mode 100644 index 00000000000..0fcfadfbbfd --- /dev/null +++ b/chromium/docs/testing/using_breakpad_with_content_shell.md @@ -0,0 +1,118 @@ +# Using breakpad with content shell + +When running layout tests, it is possible to use +[breakpad](../../breakpad/breakpad/) to capture stack traces on crashes while +running without a debugger attached and with the sandbox enabled. + +## Setup + +On all platforms, build the target `blink_tests`. + +*** note +**Mac:** Add `enable_dsyms = 1` to your +[gn build arguments](../../tools/gn/docs/quick_start.md) before building. This +slows down linking several minutes, so don't just always set it by default. +*** + +*** note +**Linux:** Add `use_debug_fission = true` to your +[gn build arguments](../../tools/gn/docs/quick_start.md) before building. +*** + +Then, create a directory where the crash dumps will be stored: + +* Linux/Mac: + ```bash + mkdir /tmp/crashes + ``` +* Android: + ```bash + adb shell mkdir /data/local/tmp/crashes + ``` +* Windows: + ```bash + mkdir %TEMP%\crashes + out\Default\content_shell_crash_service.exe --dumps-dir=%TEMP%\crashes + ``` + +## Running content shell with breakpad + +Breakpad can be enabled by passing `--enable-crash-reporter` and +`--crash-dumps-dir` to content shell: + +* Linux: + ```bash + out/Debug/content_shell --enable-crash-reporter \ + --crash-dumps-dir=/tmp/crashes chrome://crash + ``` +* Mac: + ```bash + out/Debug/Content\ Shell.app/Contents/MacOS/Content\ Shell \ + --enable-crash-reporter --crash-dumps-dir=/tmp/crashes chrome://crash + ``` +* Windows: + ```bash + out\Default\content_shell.exe --enable-crash-reporter ^ + --crash-dumps-dir=%TEMP%\crashes chrome://crash + ``` +* Android: + ```bash + build/android/adb_install_apk.py out/Default/apks/ContentShell.apk + build/android/adb_content_shell_command_line --enable-crash-reporter \ + --crash-dumps-dir=/data/local/tmp/crashes chrome://crash + build/android/adb_run_content_shell + ``` + +## Retrieving the crash dump + +On Linux and Android, we first have to retrieve the crash dump. On Mac and +Windows, this step can be skipped. 
+ +* Linux: + ```bash + components/crash/content/tools/dmp2minidump.py /tmp/crashes/*.dmp /tmp/minidump + ``` +* Android: + ```bash + adb pull $(adb shell ls /data/local/tmp/crashes/*) /tmp/chromium-renderer-minidump.dmp + components/breakpad/tools/dmp2minidump /tmp/chromium-renderer-minidump.dmp /tmp/minidump + ``` + +## Symbolizing the crash dump + +On all platforms except for Windows, we need to convert the debug symbols to a +format that breakpad can understand. + +* Linux: + ```bash + components/crash/content/tools/generate_breakpad_symbols.py \ + --build-dir=out/Default --binary=out/Default/content_shell \ + --symbols-dir=out/Default/content_shell.breakpad.syms --clear --jobs=16 + ``` +* Mac: + ```bash + components/crash/content/tools/generate_breakpad_symbols.py \ + --build-dir=out/Default \ + --binary=out/Default/Content\ Shell.app/Contents/MacOS/Content\ Shell \ + --symbols-dir=out/Default/content_shell.breakpad.syms --clear --jobs=16 + ``` +* Android: + ```bash + components/crash/content/tools/generate_breakpad_symbols.py \ + --build-dir=out/Default \ + --binary=out/Default/lib/libcontent_shell_content_view.so \ + --symbols-dir=out/Default/content_shell.breakpad.syms --clear + ``` + +Now we can generate a stack trace from the crash dump. Assuming the crash dump +is in minidump.dmp: + +* Linux/Android/Mac: + ```bash + out/Default/minidump_stackwalk minidump.dmp out/Debug/content_shell.breakpad.syms + ``` +* Windows: + ```bash + "c:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64\cdb.exe" ^ + -y out\Default -c ".ecxr;k30;q" -z minidump.dmp + ``` diff --git a/chromium/docs/updating_clang.md b/chromium/docs/updating_clang.md index 72e1082b007..e41a61f4bca 100644 --- a/chromium/docs/updating_clang.md +++ b/chromium/docs/updating_clang.md @@ -13,7 +13,7 @@ git cl try -m tryserver.chromium.mac -b mac_chromium_asan_rel_ng && git cl try -m tryserver.chromium.linux -b linux_chromium_chromeos_dbg_ng \ -b linux_chromium_chromeos_asan_rel_ng -b linux_chromium_msan_rel_ng && - git cl try -m tryserver.blink -b linux_precise_blink_rel + git cl try -m tryserver.blink -b linux_trusty_blink_rel ``` 1. Commit roll CL from the first step 1. The bots will now pull the prebuilt binary, and goma will have a matching diff --git a/chromium/docs/user_handle_mapping.md b/chromium/docs/user_handle_mapping.md index 5fce3447d72..38c8608f11d 100644 --- a/chromium/docs/user_handle_mapping.md +++ b/chromium/docs/user_handle_mapping.md @@ -89,6 +89,7 @@ For Chromium contributors that have different nicks on other domains. | satish | satish\_ | satish | | scheglov | | scheglov | | scottbyer | sbyer | scottbyer | +| sdy | sdy, sidney, Sidnicious | sdy | | shans | | shanestephens | | shrike | shrike | shrike | | smut | Sana | smut | diff --git a/chromium/docs/windows_build_instructions.md b/chromium/docs/windows_build_instructions.md index 357a4ab6c11..bccc225889f 100644 --- a/chromium/docs/windows_build_instructions.md +++ b/chromium/docs/windows_build_instructions.md @@ -15,7 +15,7 @@ represented in the current code page." ### Setting up the environment for Visual Studio -You must build with Visual Studio 2015 Update 2; no other version is +You must build with Visual Studio 2015 Update 3; no other version is supported. You must have Windows 7 x64 or later. x86 OSs are unsupported. @@ -28,7 +28,7 @@ Follow the appropriate path below: As of March 11, 2016 Chromium requires Visual Studio 2015 to build. 
-Install Visual Studio 2015 Update 2 or later - Community Edition +Install Visual Studio 2015 Update 3 or later - Community Edition should work if its license is appropriate for you. Use the Custom Install option and select: |