path: root/chromium/docs/website/site/developers/design-documents/network-stack
Diffstat (limited to 'chromium/docs/website/site/developers/design-documents/network-stack')
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/Chromium HTTP Network Request Diagram.svg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/Chromium Network Stack.svg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance-new.tiff.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.jpeg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.pdf.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.jpg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/index.md | 180
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/debugging-net-proxy/index.md | 14
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc2.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-benchmarking/index.md | 93
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-v3/index.md | 475
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files2.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files3.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files4.PNG.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/index.md | 434
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/very-simple-backend/index.md | 151
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/http-authentication-throttling/index.md | 183
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/http-cache/index.md | 174
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/http-cache/t.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/http-pipelining/index.md | 51
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/index.md | 298
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/netlog/NetLog1.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/netlog/index.md | 223
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/index.md | 97
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/salient-bug-list/index.md | 69
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/index.md | 16
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/network-stack-objectives/index.md | 592
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/network-stack-use-in-chromium/index.md | 106
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/preconnect/index.md | 93
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/fox-proxy-settings.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.dot | 9
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.dot | 27
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.dot | 7
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-server-settings.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-settings.png.sha1 | 1
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/index.md | 76
-rw-r--r-- chromium/docs/website/site/developers/design-documents/network-stack/socks-proxy/index.md | 74
45 files changed, 0 insertions, 3465 deletions
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/Chromium HTTP Network Request Diagram.svg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/Chromium HTTP Network Request Diagram.svg.sha1
deleted file mode 100644
index 5ff54a204a6..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/Chromium HTTP Network Request Diagram.svg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0190b16307751004d2d10bd9519d85009c0954fd \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/Chromium Network Stack.svg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/Chromium Network Stack.svg.sha1
deleted file mode 100644
index 3c1f20937f3..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/Chromium Network Stack.svg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-78b5e11ba74f571a46a2fbdae07c259724e11d48 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance-new.tiff.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance-new.tiff.sha1
deleted file mode 100644
index 8b1ee175e79..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance-new.tiff.sha1
+++ /dev/null
@@ -1 +0,0 @@
-b2cd6bf3fc1ad9dfa659887bfb3f03d026f61ede \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.jpeg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.jpeg.sha1
deleted file mode 100644
index acb216a3bb1..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.jpeg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-60028ff1cd422e447b8cf505c215ee82c7136751 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.pdf.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.pdf.sha1
deleted file mode 100644
index a61e799937d..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.pdf.sha1
+++ /dev/null
@@ -1 +0,0 @@
-bce296b04ef9057fa380fcf250d747f44b19045a \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.png.sha1
deleted file mode 100644
index b42dde6101a..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-ec7cc8f382cc015a154f6f6cde2a4e919049e426 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg.sha1
deleted file mode 100644
index 274318f1277..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-6f691a151c0e47248c2a1cac5371b7a0ecb77ec5 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.jpg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.jpg.sha1
deleted file mode 100644
index a5fd0fc3f11..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.jpg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-201687cb5f00526e18fc7f129a2a80f583ec69fe \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg.sha1
deleted file mode 100644
index f08b3c5ade7..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg.sha1
+++ /dev/null
@@ -1 +0,0 @@
-06f01838891f13fd1123152cd5ee92f97f9732de \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/index.md
deleted file mode 100644
index bf6f5afe735..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/cookiemonster/index.md
+++ /dev/null
@@ -1,180 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: cookiemonster
-title: CookieMonster
----
-
-The CookieMonster is the class in Chromium which handles in-browser storage,
-management, retrieval, expiration, and eviction of cookies. It does not handle
-interaction with the user or cookie policy (i.e. which cookies are accepted and
-which are not). The code for the CookieMonster is contained in
-net/base/cookie_monster.{h,cc}.
-
-CookieMonster requirements are, in theory, specified by various RFCs. [RFC
-6265](http://datatracker.ietf.org/doc/rfc6265/) is currently controlling, and
-supersedes [RFC 2965](http://datatracker.ietf.org/doc/rfc2965/). However, most
-browsers do not actually follow those RFCs, and Chromium has compatibility with
-existing browsers as a higher priority than RFC compliance. An RFC that more
-closely describes how browsers normally handle cookies is being considered by
-the IETF; it is available at
-<http://tools.ietf.org/html/draft-ietf-httpstate-cookie>. The various RFCs
-should be examined to understand basic cookie behavior; this document will only
-describe variations from the RFCs.
-
-The CookieMonster has the following responsibilities:
-
-* When a server response is received specifying a cookie, it confirms
- that the cookie is valid, and stores it if so. Tests for validity
- include:
- * The domain of the cookie must be .local or a subdomain of a
- public suffix (see <http://publicsuffix.org/>--also known as an
- extended Top Level Domain).
- * The domain of the cookie must be a suffix of the domain from
- which the response was received.
-
-> We do not enforce the RFC path or port restrictions.
-
-* When a client request is being generated, a cookie in the store is
- included in that request if:
- * The cookie domain is a suffix of the server hostname.
- * The path of the cookie is a prefix of the request path.
-    * The cookie is unexpired.
-* It enforces limits (both per-origin and globally) on how many
- cookies will be stored.
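
The request-matching rules above can be sketched as a simple predicate. This is an illustrative Python sketch with a hypothetical cookie shape; the real logic is C++ in net/base/cookie_monster.cc, and real domain matching is subtler than a plain suffix test:

```python
import time

def cookie_matches_request(cookie, host, path, now=None):
    """Return True if a stored cookie should be sent with a request."""
    if now is None:
        now = time.time()
    # The cookie domain must be a suffix of the server hostname.
    if not host.endswith(cookie["domain"]):
        return False
    # The cookie path must be a prefix of the request path.
    if not path.startswith(cookie["path"]):
        return False
    # The cookie must be unexpired.
    return cookie["expires"] > now

c = {"domain": ".example.com", "path": "/docs", "expires": time.time() + 3600}
assert cookie_matches_request(c, "www.example.com", "/docs/page")
assert not cookie_matches_request(c, "www.example.com", "/other")
assert not cookie_matches_request(c, "www.elsewhere.com", "/docs/page")
```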
-
-**CookieMonster Structure**
-
-The important data structure relationships inside of and including the
-CookieMonster are sketched out in the diagram below.
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg">](/developers/design-documents/network-stack/cookiemonster/CM-inheritance.svg)
-
-In the above diagram, type relationships are represented as follows:
-
-* Reference counted thread safe: Red outline
-* Abstract base class: Dashed outline
-* Inheritance or typedef: Dotted line with arrow, subclass to
- superclass
-* Member variable contained in type: Line with filled diamond, diamond
- on the containing type end.
-* Pointer to object contained in type: Line with open diamond, diamond
- on the containing type end.
-
-The three most important classes in this diagram are:
-
-* CookieStore. This defines the interface to anything that stores
-    cookies (currently only CookieMonster): a set of Set, Get, and
-    Delete operations.
-* CookieMonster. This adds the capability of specifying a backing
- store (PersistentCookieStore) and Delegate which will be notified of
- cookie changes.
-* SQLitePersistentCookieStore. This implements the
- PersistentCookieStore interface, and provides the persistence for
- non-session cookies.
-
-The central data structure of a CookieMonster is the cookies_ member, which is a
-multimap (multiple values allowed for a single key) from a domain to some set of
-cookies. Each cookie is represented by a CanonicalCookie, which contains all
-of the information that can be specified in a cookie (see diagram and RFC 2965).
-When set, cookies are placed into this data structure, and retrieval involves
-searching this data structure. The key to this data structure is the most
-inclusive domain (shortest dot delimited suffix) of the cookie domain that does
-not name a domain registrar (i.e. "google.com" or "bbc.co.uk", but not "co.uk"
-or "com"). This is also known as the Effective Top Level Domain plus one, or
-eTLD+1, for short.
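
As a rough Python sketch of that structure (not the actual C++): the multimap can be modeled as a mapping from an eTLD+1 key to a list of cookies. The `etld_plus_one` helper here is a hypothetical stand-in that keeps the last two labels; the real code consults the public suffix list, so e.g. "news.bbc.co.uk" correctly keys under "bbc.co.uk" rather than "co.uk":

```python
from collections import defaultdict

def etld_plus_one(domain):
    # Hypothetical stand-in for the public-suffix lookup:
    # keep only the last two dot-delimited labels.
    return ".".join(domain.lstrip(".").split(".")[-2:])

# cookies_ sketched as a key -> list-of-cookies mapping (a multimap).
cookies = defaultdict(list)
cookies[etld_plus_one("mail.google.com")].append({"name": "SID"})
cookies[etld_plus_one("www.google.com")].append({"name": "NID"})

# Both cookies land under the same eTLD+1 key.
assert sorted(c["name"] for c in cookies["google.com"]) == ["NID", "SID"]
```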
-
-Persistence is implemented by the SQLitePersistentCookieStore, which mediates
-access to an on-disk SQLite database. On startup, all cookies are read from this
-database into the cookie monster, and this database is kept updated as the
-CookieMonster is modified. The CookieMonster notifies its associated
-PersistentCookieStore of all changes, and the SQLitePersistentCookieStore
-batches those notifications and updates the database every thirty seconds (when
-it has pending changes) or whenever the queue gets too large. It does
-this by queuing an operation to an internal queue when notified of a change by
-the CookieMonster, and posting a task to the database thread to drain that
-queue. The backing database uses the cookie creation time as its primary key,
-which imposes a requirement on the cookie monster not to allow cookies with
-duplicate creation times.
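
The queue-and-drain scheme can be sketched as follows. This is a minimal Python model of the batching idea only: the timer and database-thread machinery are elided, and the queue-size threshold is an assumed value, not the real constant:

```python
FLUSH_INTERVAL_SECS = 30   # flush cadence described above
MAX_PENDING_OPS = 512      # assumed threshold; the real constant differs

class BatchingStore:
    def __init__(self, commit_fn):
        self.pending = []
        self.commit_fn = commit_fn  # would run on the database thread

    def on_cookie_changed(self, op):
        self.pending.append(op)
        if len(self.pending) >= MAX_PENDING_OPS:
            self.flush()  # queue too large: drain immediately

    def flush(self):  # also invoked by the 30-second timer
        ops, self.pending = self.pending, []
        if ops:
            self.commit_fn(ops)

committed = []
store = BatchingStore(committed.extend)
for i in range(MAX_PENDING_OPS):
    store.on_cookie_changed(("add", i))
assert len(committed) == MAX_PENDING_OPS and not store.pending
```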
-
-The internal code of the cookie monster falls into four paths: The setter path,
-the getter path, the deletion path, and the garbage collection path.
-
-The setter path validates its input, creates a canonical cookie, deletes any
-cookies in the store that are identical to the newly created one, and (if the
-cookie is not immediately expired) inserts it into the store. The getter path
-computes the relevant most inclusive domain for the incoming request, and
-searches that section of the multimap for cookies that match. The deletion path
-is similar to the getter path, except the matching cookies are deleted rather
-than returned.
-
-Garbage collection occurs when a cookie is set (this is expected to happen much
-less often than retrieving cookies). Garbage collection makes sure that the
-number of cookies per-eTLD+1 does not exceed some maximum, and similarly for the
-total number of cookies. The algorithm is both in flux and subtle; see the
-comments in cookie_monster.h for details.
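
As a hedged illustration of the per-eTLD+1 quota idea only (the real algorithm is subtler, as noted, and the limit here is invented): when a bucket exceeds its quota, the least recently accessed cookies are purged first.

```python
MAX_COOKIES_PER_KEY = 3  # illustrative; the real per-eTLD+1 limit is larger

def garbage_collect(cookie_list):
    """Trim a single eTLD+1 bucket to its quota, evicting oldest access first."""
    if len(cookie_list) <= MAX_COOKIES_PER_KEY:
        return cookie_list
    # Keep the most recently accessed cookies.
    cookie_list.sort(key=lambda c: c["last_access"], reverse=True)
    return cookie_list[:MAX_COOKIES_PER_KEY]

bucket = [{"name": str(i), "last_access": i} for i in range(5)]
kept = garbage_collect(bucket)
assert [c["last_access"] for c in kept] == [4, 3, 2]
```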
-
-This class is currently used on many threads, so it is reference counted, and
-does all of its operations under a per-instance lock. It is initialized from the
-backing store (if such exists) on first use.
-
-**Non-Obvious Uses of CookieMonster**
-
-In this writeup, the CookieMonster has been spoken of as if it were a singleton
-class within Chrome. In reality, CookieMonster is \*not\* a singleton (and this
-has produced some bugs). Separate CookieMonster instances are created for:
-
-* Use in standard browsing
-* Incognito mode (this version has no backing store)
-* Extensions (this version has its own backing store; map keys are
- extension ids)
-* Incognito for Extensions.
-
-To decide in each case what kinds of URLs it is appropriate to store cookies
-for, a CookieMonster has a notion of "cookieable schemes"--the schemes for which
-cookies will be stored in that monster. That list defaults to "http" and
-"https". For extensions it is set to "chrome-extension".
-
-When file cookies are enabled via --enable-file-cookies, the default list will
-include "file", and file cookies will be stored in the main CookieMonster. For
-file cookies, the key will be either null, or (for UNC names, e.g.
-//host.on.my.network/path/on/that/host) the hostname in the file cookie.
-Currently, the store does not distinguish carefully between file cookies and
-network cookies.
-
-**Implementation Details; Subject to Change Without Notice**
-
-The call graph of the routines used in the CookieMonster is included below. It
-is correct as of revision 59070, but may not apply to later versions.
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg">](/developers/design-documents/network-stack/cookiemonster/CM-method-calls-new.svg)
-
-Key:
-
-* Green fill: CookieMonster/CookieStore Public method.
-* Grey fill: CookieMonster Private method.
-* Red ring: Method takes the instance lock.
-* Dark blue fill: CookieMonster static method.
-* Light blue fill: cookie_monster.cc static function.
-* Yellow fill: CanonicalCookie method.
-
-The CookieMonster is referenced from many areas of the code, including but not
-limited to:
-
-* URLRequestHttpJob: Main path for HTTP request/response.
-* WebSocketJob
-* Automation Provider
-* BrowsingDataRemover: For deleting browsing data.
-* ExtensionDataDeleter
-* CookieTreeModel: Allows the user to examine and delete specific
- cookies.
-* Plugins
-* HttpBridge: For synchronization \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/debugging-net-proxy/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/debugging-net-proxy/index.md
deleted file mode 100644
index eb4f7ee39ab..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/debugging-net-proxy/index.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: debugging-net-proxy
-title: Debugging problems with the network proxy
----
-
-This documentation [has moved
-here](https://chromium.googlesource.com/chromium/src/+/HEAD/net/docs/proxy.md#Capturing-a-Net-Log-for-debugging-proxy-resolution-issues). \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc.PNG.sha1
deleted file mode 100644
index ba662ec469f..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-72d1e91abecf1ce59504ddd5479bd40f2e86dbaf \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc2.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc2.PNG.sha1
deleted file mode 100644
index 3ae0a864094..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/alloc2.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-66ccf9a40ae7938572b56bccbe1abd0574579d3e \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-benchmarking/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-benchmarking/index.md
deleted file mode 100644
index 33a266493d6..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-benchmarking/index.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-- - /developers/design-documents/network-stack/disk-cache
- - Disk Cache
-page_name: disk-cache-benchmarking
-title: Disk Cache Benchmarking & Performance Tracking
----
-
-## Summary
-
-Ongoing backend work in the disk cache demands good tools for continuous disk
-cache performance tracking. The [Very Simple
-Backend](/developers/design-documents/network-stack/disk-cache/very-simple-backend)
-implementation needs continuous A/B testing to show that it is advancing in
-speed. To track progress on this backend, and also to permit
-comparisons between alternative backends provided in net, two ongoing
-methodologies are proposed.
-
-## Proxy Backend with Replay
-
-The Proxy Backend is a simple [Disk Cache
-Backend](/developers/design-documents/network-stack/disk-cache) and Entry
-implementation that passes through to an underlying Entry and Backend, while
-recording a short log with parameter and timing information to allow
-replay.
-
-This log can then be replayed in a standalone replay application which takes the
-log, constructs a backend (perhaps a standard blockfile backend, a [very simple
-backend](/developers/design-documents/network-stack/disk-cache/very-simple-backend)
-or a log structured backend), and performs the same operations at the same
-delays, issuing all calls as if from the IO thread. The average latency of calls, as
-well as system load during the test can then permit A/B comparison between
-backends.
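
The log format is not specified in this document; as a sketch only, each record might capture the call, its parameters, and its offset from the start of the trace (all names here are hypothetical):

```python
import json

def log_record(offset_ms, method, key, **params):
    """Serialize one backend call for later replay at the same offset."""
    return json.dumps({"t": offset_ms, "op": method, "key": key, **params})

rec = json.loads(log_record(120, "OpenEntry", "http://a.test/img.png"))
assert rec["op"] == "OpenEntry" and rec["t"] == 120
```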
-
-Pros:
-
-* Well suited to use on developer workstations.
-* Very simple to collect logs and run.
-* Very fast to run tests.
-* Low level, includes very little noise from outside of the disk
- cache.
-* Using logs with multiple versions of the same backend allows
- tracking progress over time of a particular backend.
-
-Cons:
-
-* Sensitive to evictions: if two different backends have different
- eviction algorithms, the same operations on two backends can result
-    in different logs. For instance, an OpenEntry() on the foo backend
-    could find an entry that is then Read/Written to, while the same log
-    replayed on a bar backend could miss it.
-* Compares all low level operations equally: backend operations in the
- critical path of requests block launching requests. Other backend
- operations (like WriteData) almost never occur in the critical path,
- and so performance may impact rendering less. Without a full
- renderer, this impact is hard to measure.
-* Does not track system resource consumption. Besides answering
- requests, the backend is consuming finite system resources (RAM,
- buffer cache, file handles, etc...), competing with the renderer for
- resources. This impact isn't very well measured in the replay
- benchmark.
-
-## browser_test with corpus
-
-Starting up with a backend, the browser_test loads a large corpus of pages
-(either from a local server, or a web server simulating web latency, or possibly
-even the actual web), with an initially empty cache, and then runs either that
-same corpus or a second corpus immediately afterwards with the warm cache. This
-can be repeated using different backends, to permit A/B comparisons.
-
-The browser test should introspect on UMA; outputs should include HttpCache.\*
-benchmarks, as well as PLT.BeginToFirstPaint/PLT.BeginToFinish.
-
-Pros:
-
-* Well suited to ongoing performance dashboards.
-* Tracks a metric more closely connected to user experience.
-* Allows comparison of backends with wildly different eviction
- behaviour.
-
-Cons:
-
-* Noisier, since it includes much, much more than just the Chrome
- renderer.
-* Slower to run.
-* Because of ongoing renderer changes, comparisons of the same backend
- over time are problematic without a lot of patch cherry picking. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-v3/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-v3/index.md
deleted file mode 100644
index 68b80696bb7..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/disk-cache-v3/index.md
+++ /dev/null
@@ -1,475 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-- - /developers/design-documents/network-stack/disk-cache
- - Disk Cache
-page_name: disk-cache-v3
-title: Disk Cache 3.0
----
-
-[TOC]
-
-## Introduction
-
-This version of the disk cache is an evolution of the previous design, so some
-familiarity with the [previous
-design](/developers/design-documents/network-stack/disk-cache) (implemented in
-versions 1 and 2) is recommended. Functionality specific to the new version is
-implemented by files located under src/net/disk_cache/v3.
-
-## Requirements
-
-From the implementation of version 2 (file format 2.1, V2 from this point on),
-in no particular order:
-
-### Survive system level crashes
-
-The design of V2 assumes that system-level crashes are rare enough to justify
-throwing away everything when corruption caused by a system-level crash happens.
-That assumption turned out to be wrong.
-
-### Allow storing significantly more entries for the same total disk space
-
-One characteristic of Chrome’s disk cache is that it keeps track of entries’
-reuse. It even keeps track of entries for some time after a given entry is
-evicted from the cache, which allows it to correctly identify important content.
-The new design should allow keeping track of a lot of entries (most of them
-already evicted) without having to scale the space used as if all entries were
-present.
-
-### Allow growing the main index
-
-There are multiple users of the disk cache, some of which require very little
-disk space. The cache should be able to start with a small footprint and grow by
-itself as needed.
-
-### Allow memory-only access time to determine a Hit or Miss
-
-The cache should minimize the number of disk operations required to identify if
-an entry is stored or not. It may be too expensive to do that all the time
-(think about the memory footprint required to handle collisions with half a
-million entries), but the probability of requiring a disk access should be much
-lower than what it is today (if V2 were scaled to the same number of entries).
-
-### Keep disk synchronization to a minimum
-
-Disk synchronization (for lack of a better name) refers to actions that attempt
-to synchronize, snapshot or flush the current state of the cache so that we know
-everything is in a consistent state. This is bad because there is no real
-guarantee anywhere that a file flush will actually write to the disk (the OS
-usually lies to the application, and the disk usually lies to the OS), and flush
-operations are basically synchronous operations that induce delays and lack of
-responsiveness.
-
-Another important characteristic to preserve is minimal synchronization at
-shutdown, and almost zero cost to recover from user level crashes.
-
-### Disk space / memory use
-
-This is not really a requirement, but the use of disk and memory is an important
-consideration.
-
-## Detailed Design
-
-### Memory Mapped files
-
-The cache continues using memory mapped files, but to be Linux friendly, the
-mapped section will span the whole file, so parts that are accessed through
-regular IO are stored on a dedicated file. That means that the bitmap of a block
-file is separated from the bulk of the file.
-
-Decent memory mapped files behavior is still considered a requirement of the
-backend, and a platform where memory mapped files are unavailable or
-underperform should consider one of the alternative implementations.
-
-### Batching
-
-As implied by the previous section, the cache will not attempt to batch
-operations to a restricted period of time in an effort to do some
-synchronization between the backup files and the normal files. Operations will
-continue as needed (insertions, deletions, evictions, etc.).
-
-### Reduced address space
-
-The number of entries stored by the cache will be artificially reduced by the
-implementation. A typical cache will have 22-bit addresses, so it will be limited
-to about 4 million entries. This seems more than enough for a regular HTTP cache
-and other current uses of the interface.
-
-However, this limit can be increased in future versions with very little effort.
-In fact, as stated in [section
-2.3](/developers/design-documents/network-stack/disk-cache/disk-cache-v3#TOC-Allow-growing-the-main-index)
-the cache will grow as needed, and the initial address space will be much lower
-than 22 bits. For very small caches the cache will start with a total address
-space of just 16 bits.
-
-If the cache is performing evictions, there is an extra implied addressing bit
-by the fact that evicted entries belong to their own independent address space,
-because they will not be stored alongside live entries.
-
-### Entry hash
-
-An entry is identified by the hash of the key. The purpose of this hash is to
-quickly locate a key on the index table, so this is a hash-table hash, not a
-cryptographic hash. The whole hash is available through the cache index, and
-we’ll use a standard 32-bit hash.
-
-Note that this means that there will be collisions, so the hash doesn’t become
-the unique identifier of an entry either. In fact, there is no unique identifier
-as it is possible to store exactly the same key more than once.
-
-UMA data from Chrome Stable channel indicates that the top 0.05% of users have
-bursts of activity that if sustained would result in about 39000 requests per
-hour. For the users at the 80th percentile mark, the sustained rate would be about
-4800 requests per hour. Assuming a cache that stores about half a million
-entries when full, and uses a 32-bit hash with uniform distribution, the
-expected number of collisions in an hour would be about 4.7 for the 99.5
-percentile and 0.56 for the 80th percentile. This means that the expected time
-between collisions would be a little over 12 minutes for the 99.5 percentile and
-at least 106 minutes for 80% of the users.
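
A back-of-the-envelope check of these figures: with n entries already stored and a uniform 32-bit hash, each new request collides with probability roughly n / 2**32, so the expected collisions per hour are approximately requests_per_hour * n / 2**32, close to the numbers quoted above:

```python
N_ENTRIES = 500_000       # "about half a million entries when full"
HASH_SPACE = 2 ** 32      # uniform 32-bit hash

def expected_collisions_per_hour(requests_per_hour):
    # Each lookup collides with an existing entry w.p. ~N_ENTRIES/HASH_SPACE.
    return requests_per_hour * N_ENTRIES / HASH_SPACE

assert 4 < expected_collisions_per_hour(39_000) < 5     # high-activity users
assert 0.5 < expected_collisions_per_hour(4_800) < 0.6  # 80th percentile
```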
-
-The effect of having a collision would be having to read something from disk in
-order to confirm if the entry is there or not. In other words, a collision goes
-against [requirement
-2.4](#TOC-Allow-memory-only-access-time-to-determine-a-Hit-or-Miss), but
-requiring one extra disk access every hour is well within reason. In order to
-completely eliminate collisions, the hash would have to be increased to at least
-128 bits (and maybe move away from a standard hash table hashing) which goes
-against [requirement 2.6](#TOC-Disk-space-memory-use) by substantially
-increasing the size of the index and it would actually make keeping track of
-in-progress deletions much harder.
-
-### Data backtrace to the entry
-
-All data stored individually somewhere by the cache should contain a way to
-validate that it belongs to a given entry. The identifier would be the hash of
-the key. If the hash matches, it is assumed that everything is fine; if it
-doesn’t, the conclusion would be that this is a record that belonged to a stale
-entry that was not correctly detected before.
-
-### Index
-
-The whole index is memory mapped, has a header with relevant details about the
-whole backend, and the bitmap and header are backed up periodically using only
-regular IO. The actual table is divided into two parts: the main table (stored
-as “index_tb1”), which always grows by a factor of 2, and the overflow or
-extra table (stored as “index_tb2”), which grows by smaller increments. (See
-[buckets](#TOC-Buckets) for more details).
-
-#### Bitmap
-
-Every cell of the hash table has an implied number (the location of the cell).
-The first cell of the first bucket is cell 0, the next one is cell 1, and so
-on. There is a bitmap that identifies used cells, using regular semantics. The
-bitmap is stored right after the header, in a file named “index” (backed up by
-“index_bak”).
-
-#### Control state
-
-Each cell of the index table can be in one of the following states:
-
-Free - State 0 (bitmap 0)
-
-New - State 1 (bitmap 1). An entry was added recently
-
-Open - State 2 (bitmap 1)
-
-Modified - State 3 (bitmap 1)
-
-Deleted - State 4 (bitmap 0)
-
-Used - State 6 (bitmap 1). A stored entry that has not seen recent activity.
-
-The state is represented by three bits. State 7 is invalid, and state 5 is
-used temporarily for entries that are invalid while they are being fixed. Each
-time the state changes, the corresponding bit on the bitmap should be adjusted
-as needed.
-
-##### Insert transitions
-
-Let’s say that the full state of an entry is determined by \[state - bitmap -
-backup\] (see [section 3.10](#TOC-Backups) for information about backups).
-
-Inserting an entry requires the following transitions:
-
-    1. \[free - 0 - 0\]
-
-    2. \[new - 1 - 0\]. Pointers and data added.
-
-    3. \[new - 1 - 1\]
-
-    4. \[used - 1 - 1\]
-
-A system crash between 2 and 3 can result in either 1 or 2 being persisted to
-disk. If it is 1, there’s no information about the new entry anywhere, so in
-fact the entry was not inserted. If it is 2, the mismatch between the bitmap and
-the backup (and the fact that the state is “new”) allows detection of the crash
-impacting this entry, so the actual data should be verified.
-
-A system crash between 3 and 4 may end up in state 3 after restart, and we know
-that the entry was being inserted. If for some reason the relevant page was not
-saved to disk, the state could be \[free - 0 - 1\], in which case we know that
-something went wrong, but we don’t have the address or hash of the entry that
-was being inserted, so effectively nothing was added to the index. In that
-case, we may end up with some data pointing back to some hash that doesn’t
-live in the table, so we can detect the issue if we attempt to use the record.
-
-##### Remove transitions
-
-Removing an entry requires the following transitions (for example):
-
-    1. \[used - 1 - 1\]
-
-    2. \[deleted - 0 - 1\]
-
-    3. \[deleted - 0 - 0\]. Actual data removed at this point.
-
-    4. \[free - 0 - 0\]. Pointers deleted.
-
-A system crash between 2 and 3 can result in either 1 or 2 being persisted to
-disk. If it is 2, we know that something is wrong. If it is 1, to prevent the
-case of an entry pointing to data that was deleted (although the case will be
-recognized when actually reading the data), the deletion should be postponed
-until after the backup is performed. Note that deleting an entry and
-immediately creating it again is not affected by this pattern, because the
-index can store exactly the same hash right away on a different table cell. At
-regular run time it’s easy to know that we are in the process of deleting an
-entry while, at the same time, there’s a replacement in the table.
-
-A system crash between 3 and 4 will either see the entry “deleted”, or “used”
-with the bitmap mismatching the backup, so the problem will be detected.
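-The crash-detection logic implied by both transition tables can be sketched as
-a consistency check on a persisted \[state - bitmap - backup\] triple (a
-simplified model, not the actual recovery code):

```python
# Simplified model of crash detection from a persisted
# [state, bitmap, backup] triple, per the transitions above.

PRESENT = {"new", "open", "modified", "used"}  # states with bitmap bit 1

def needs_verification(state: str, bitmap: int, backup: int) -> bool:
    """True if the cell looks like it was caught mid-transition."""
    expected_bit = 1 if state in PRESENT else 0
    if bitmap != expected_bit:
        return True      # state and bitmap disagree: fix the cell
    if bitmap != backup:
        return True      # crash between a state change and the backup
    # "new" and "deleted" are themselves transient states.
    return state in ("new", "deleted")

# Crash between insert steps 2 and 3: detected via backup mismatch.
assert needs_verification("new", 1, 0)
# A settled entry is clean.
assert not needs_verification("used", 1, 1)
```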
-
-#### Entry
-
-The cache stores enough information to drive evictions and enumeration without
-having to rebuild the whole list in memory. An entry on the hash table has:
-
- Address (22 bits). Points to the actual entry record.
-
- State (3 bits). See [section 3.6.2](#TOC-Control-state)
-
- Group (3 bits). Up to 8 different groups of entries: reused, deleted etc.
- (old Lists).
-
- Reuse (4 bits). Reuse counter for moving entries to a different group.
-
- Hash (18 bits). Enough to recover the whole hash with a big table.
-
- Timestamp (20 bits).
-
- Checksum (2 bits). To verify self consistency of the record.
-
-For a total of 9 bytes.
-
-The limited timestamp deserves a more detailed explanation. There is a running
-timer of 30 seconds that is the basis of a bunch of measurements on the cache.
-Stamping each entry with a value from that counter, and keeping 20 bits,
-provides about one full year of continuous run time before the counter
-overflows.
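-A sketch of the 72-bit cell packing (the field widths come from the list
-above; the field order is an assumption):

```python
# Pack/unpack the 9-byte index cell described above.
# Field widths come from the text; the ordering is an assumption.

FIELDS = [("address", 22), ("state", 3), ("group", 3),
          ("reuse", 4), ("hash", 18), ("timestamp", 20), ("checksum", 2)]

def pack_cell(**values) -> int:
    cell, shift = 0, 0
    for name, width in FIELDS:
        assert values[name] < (1 << width), f"{name} overflows {width} bits"
        cell |= values[name] << shift
        shift += width
    assert shift == 72          # 9 bytes in total
    return cell

def unpack_cell(cell: int) -> dict:
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (cell >> shift) & ((1 << width) - 1)
        shift += width
    return out

# A 20-bit timestamp of 30-second ticks covers about one year.
days = (1 << 20) * 30 / 86400
print(f"timestamp range: {days:.0f} days")
```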
-
-The idea of the timestamp is to track the last time an entry is modified so that
-we can build some form of LRU policy. Ignoring overflow for a second, in order
-to get a set of entries for eviction all we have to do is keep track of the last
-timestamp evicted, and walk the table looking for entries on the next set: all
-entries that have the closest timestamp to the previous one, and on the desired
-Group. This gives us all entries that were modified on the same 30 seconds
-interval, with the desired characteristics.
-
-At some point, when we are getting close to overflowing the timestamp, all we
-have to do is rebase all timestamps on the table. Note that the backend keeps a
-full resolution time counter, so we just have to track the range of entries
-stored by the cache. This, of course, limits the range of last-modified entries
-storable by the cache to about one year, assuming that the browser is never shut
-down. Keeping an entry alive for that long without accessing it sounds like more
-than enough for a cache.
-
-The expected pattern is that most entries tracked by the cache will be “deleted”
-entries. For those entries, the timestamp is not really needed because deletion
-of evicted entries is not controlled by the timestamp but simply by discarding a
-bunch of entries that are stored together.
-
-A small cache (say for 256 buckets) needs to store 24 bits of the hash in order
-to recover the whole value. In that case, the extra 6 bits come from the address
-field that shrinks to 16 bits, enough to cover 64K entries. The overhead for
-very small repositories with 1024 entries minimum would be: 10KB for the table,
-4KB for the header/bitmap + 4KB for the backup.
-
-#### Buckets
-
-A bucket is simply a group of 4 entries that share the same low order bits of
-the hash, plus a few extra bits for bookkeeping.
-
-For example, a hash table with 1024 cells has 256 buckets where each bucket has
-4 entries. In that case, the low order byte of the hash is shared by the four
-entries inside the bucket, but all entries can have anything on the high order
-bits of the hash (even the same value).
-
-When the four entries of a bucket are used, the next entry with a matching hash
-will cause a new bucket to be reserved, and the old bucket will point to the new
-one. New buckets are used on demand, until the number of extra buckets gets
-close to the number of buckets on the main table. At that point the table size
-is doubled, and all extra buckets are assimilated into the new table. Note that
-the process of doubling the size and remapping entries to cells is very simple
-and can be performed even while entries are added or removed from the table,
-which is quite valuable for large tables.
-
-Using the baseline entry described in [section 3.6.3](#TOC-Entry), the
-corresponding bucket will look like:
-
- Entry 1 (9 bytes)
-
- Entry 2 (9 bytes)
-
- Entry 3 (9 bytes)
-
- Entry 4 (9 bytes)
-
- Next bucket (4 bytes, the address of a cell as referenced by the bitmap of
- [section 3.6.1](#TOC-Bitmap))
-
- Hash (up to 4 bytes, for buckets outside of the main table)
-
-For a total size of 44 bytes.
-
-Continuing with the example of a 1024-cell table, this is a representation of
-the buckets:
-
-<img alt="image"
-src="https://lh6.googleusercontent.com/2plliNfEX6wsjVzF8qav6SUR9fMgLZK7TRQ4APc45IUYvfD0XR8Fb48TU1vZe60Cg9Iefa9_NVrSNLJ-I7hgvd2KJ2CgI1ZCJGz2g51D48GjYI-ZJOONTCWG"
-height="389" width="634">
-
-Note that all entries that share the same lower 8 bits belong to the same
-bucket, and that there can be empty slots in the middle of a bucket. The initial
-table size would be 256 buckets, so entry 1024 would be the first cell of an
-extra bucket, not part of the main table.
-
-The presence of chained buckets allows a gradual growth of the table, because
-new buckets get allocated when needed, as more entries are added to the table.
-It also allows keeping multiple instances of the same entry in the table, as
-needed by the system crash correction code.
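-The mapping from hash to main-table bucket in the 1024-cell example can be
-sketched as follows (illustrative only):

```python
# Map a 32-bit entry hash to its main-table bucket for a table with
# 256 buckets of 4 cells each (the 1024-cell example above).

MAIN_BUCKETS = 256  # doubles when extra buckets approach this count

def bucket_for(hash32: int) -> int:
    # All entries sharing the low-order byte land in the same bucket.
    return hash32 % MAIN_BUCKETS

# Same low byte, different high bits: same bucket.
assert bucket_for(0x12345678) == bucket_for(0xABCDEF78)
# Different low byte: different bucket.
assert bucket_for(0xDEADBE01) != bucket_for(0xDEADBE02)
```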
-
-### EntryRecord
-
-A regular entry has (104 bytes in total):
-
- Hash (32 bits). Padded to 64 bits
-
- Reuse count (8 bits)
-
- Refetch count (8 bits)
-
- State (8 bits)
-
- Times (creation, last modified, last access) (64 bits x 3)
-
- Key Length (8 bits)
-
- Flags (32 bits)
-
- Data size (32 bits x 4 streams)
-
- Data address (32 bits x 4)
-
- Data hash (32 bits x 4)
-
- Self hash (32 bits)
-
- Extra padding
-
-Note that the whole key is now part of the first stream, just before user
-provided data:
-
-Stream 0: owner hash (4 bytes) + key + user data.
-
-Stream 1..3: owner hash (4 bytes) + user data.
-
-A small data hash (fast, not cryptographic) is kept for each data stream; it
-allows finding consistency issues when a straight, start-to-end pattern is used
-to write and read the stream.
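-The stream layout and the backtrace check can be sketched as follows (CRC-32
-stands in here for whatever fast, non-cryptographic hash the cache actually
-uses):

```python
import struct
import zlib  # crc32 as a stand-in for the cache's fast data hash

def build_stream0(owner_hash: int, key: bytes, user_data: bytes) -> bytes:
    # Stream 0: owner hash (4 bytes) + key + user data.
    return struct.pack("<I", owner_hash) + key + user_data

def stream_matches_entry(stream: bytes, owner_hash: int) -> bool:
    # Backtrace check: the first 4 bytes must name the owning entry.
    return struct.unpack_from("<I", stream)[0] == owner_hash

def data_hash(stream: bytes) -> int:
    # Whole-stream hash; valid when the stream is written and read
    # in a single start-to-end pass.
    return zlib.crc32(stream) & 0xFFFFFFFF

s = build_stream0(0xCAFEBABE, b"http://example.com/x", b"payload")
assert stream_matches_entry(s, 0xCAFEBABE)
assert not stream_matches_entry(s, 0x12345678)
```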
-
-For an evicted entry the cache keeps (48 bytes in total):
-
- Hash (32 bits)
-
- Reuse count (8 bits)
-
- Refetch count (8 bits)
-
- State (8 bits)
-
- Last access time (64 bits)
-
- Key length (32 bits)
-
- Key hash (160 bits)
-
- Self hash (32 bits)
-
-In this case all the data streams are deleted, and that means that the key
-itself is not preserved. That’s the reason to rely on a much stronger key hash.
-Note that there is no harm if a URL is built to collide with another one: the
-worst that would happen is that the second URL may have better chances of being
-preserved by the cache, as long as the first URL was already evicted.
-
-### Block files
-
-The structure of a block file remains the same. The consistency of the block
-file is handled by detecting stale entries at the index layer and following
-the data records to the block files, while verifying the backtrace hash. The
-only difference is that now a block file is backed by two actual files: the
-header + bitmap (a file named “data_xx”) and the actual data (a file named
-“data_xx_d”).
-
-In terms of number of files, the rankings file is gone and now there is a
-dedicated file for live entries, and another one for deleted entries. External
-files are also tracked by a dedicated block file.
-
-As a future improvement, we can consider moving entries to dedicated files once
-they reach the highly reused group.
-
-### External files
-
-External files will be tracked by a block file that basically just has the
-bitmap of used files and the backtrace data to the owning entry (the hash).
-The actual number of files per directory is limited to 1024.
-
-### Backups
-
-There is a backup file of the index bitmap (+ header), updated using regular,
-buffered IO at a given interval (every 30 seconds), without any flushing or
-double-file-and-replace scheme.
-
-It is assumed that the operating system will flush the memory mapped sections
-from time to time, so there is no explicit action to attempt to write the main
-bitmap to disk. In general, it is expected that memory mapped data will reach
-disk within the refresh interval of the backup, so that a complete write cycle
-may take from one to two backup cycles. Note that this doesn’t imply that the
-memory mapped data write has to be synchronized in any way with the backup
-cycle.
-
-For example, consider a typical cycle for removing an entry from the index (as
-explained in [section 3.6.2.2](#TOC-Remove-transitions)):
-
-    1. \[used - 1 - 1\]
-
-    2. \[deleted - 0 - 1\]
-
-    3. \[deleted - 0 - 0\]. Actual data removed at this point.
-
-    4. \[free - 0 - 0\]. Pointers deleted.
-
-Step 2 happens at a random place during backup cycle n. Step 3 happens at the
-start of backup cycle n+1, but the backup data itself will reach disk at some
-point after that (being buffered by the operating system). Step 4 takes place at
-the start of backup cycle n+2, so roughly a full cycle after the backup file was
-saved and the data deleted from the cache.
-
-If the slot is reused for another entry after step 4, ideally the intermediate
-step (2-3) reaches disk before the slot transitions again to a used state.
-However, if that doesn’t happen we’ll still be able to detect any problem when
-we see a mismatched backtrace hash. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files.PNG.sha1
deleted file mode 100644
index b980e2dbe6c..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-c65be4e92340e725d0d8be6310e2eb97de31bbc9 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files2.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files2.PNG.sha1
deleted file mode 100644
index 16798bbdd43..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files2.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-75485bf5d4e2af1ddb348ee352faa0d338c0151c \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files3.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files3.PNG.sha1
deleted file mode 100644
index 16798bbdd43..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files3.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-75485bf5d4e2af1ddb348ee352faa0d338c0151c \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files4.PNG.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files4.PNG.sha1
deleted file mode 100644
index 72b868fc2b0..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/files4.PNG.sha1
+++ /dev/null
@@ -1 +0,0 @@
-85d7e24fbcfe1069e423cc7de1a2679db7711130 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/index.md
deleted file mode 100644
index 332a3394401..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/index.md
+++ /dev/null
@@ -1,434 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: disk-cache
-title: Disk Cache
----
-
-[TOC]
-
-## Overview
-
-The disk cache stores resources fetched from the web so that they can be
-accessed quickly at a later time if needed. The main characteristics of the
-Chromium disk cache are:
-
-* The cache should not grow unbounded so there must be an algorithm
- for deciding when to remove old entries.
-* While losing some data from the cache is not critical, having to
-    discard the whole cache should be minimized. The current design
-    should be able to gracefully handle application crashes, no matter
-    what is going on at that time, only discarding the resources that
-    were open at that time. However, if the whole computer crashes while
-    we are updating the cache, everything on the cache will probably be
-    discarded.
-* Access to previously stored data should be reasonably efficient, and
- it should be possible to use synchronous or asynchronous operations.
-* We should be able to avoid conflicts that prevent us from storing
-    two given resources simultaneously. In other words, the design
-    should avoid cache thrashing.
-* It should be possible to remove a given entry from the cache, and
-    keep working with a given entry while at the same time making it
-    inaccessible to other requests (as if it were never stored).
-* The cache should not be using explicit multithread synchronization
- because it will always be called from the same thread. However,
- callbacks should avoid reentrancy problems so they must be issued
- through the thread's message loop.
-
-A [new version of the
-cache](/developers/design-documents/network-stack/disk-cache/disk-cache-v3) is
-under development, and some sections of this document will soon be stale. In
-particular, the description of a cache entry and the big picture diagram
-(sections 3.4 and 3.5) are only valid for files saved with version 2.x.
-
-Note that on Android we don't use this implementation; we use the [simple
-cache](/developers/design-documents/network-stack/disk-cache/very-simple-backend)
-instead.
-
-## External Interface
-
-Any implementation of Chromium's cache exposes two interfaces:
-disk_cache::Backend and disk_cache::Entry. (see
-[src/net/disk_cache/disk_cache.h](http://src.chromium.org/viewvc/chrome/trunk/src/net/disk_cache/disk_cache.h?view=markup)).
-The Backend provides methods to enumerate the resources stored on the cache
-(a.k.a. Entries), open old entries, create new ones, etc. Operations specific
-to a given resource are handled with the Entry interface.
-
-An entry is identified by its key, which is just the name of the resource (for
-example http://www.google.com/favicon.ico ). Once an entry is created, the data
-for that particular resource is stored in separate chunks or data streams: one
-for the HTTP headers and another one for the actual resource data, so the index
-for the required stream is an argument to the Entry::ReadData and
-Entry::WriteData methods.
-
-## Disk Structure
-
-All the files that store Chromium’s disk cache live in a single folder (you
-guessed it, it is called cache), and every file inside that folder is considered
-to be part of the cache (so it may be deleted by Chromium at some point!).
-
-Chromium uses at least five files: one index file and four data files. If any of
-those files is missing or corrupt, the whole set of files is recreated. The
-index file contains the main hash table used to locate entries on the cache, and
-the data files contain all sorts of interesting data, from bookkeeping
-information to the actual HTTP headers and data of a given request. These data
-files are also known as block-files, because their file format is optimized to
-store information on fixed-size “blocks”. For instance, a given block-file may
-store blocks of 256 bytes and it will be used to store data that can span from
-one to four such blocks, in other words, data with a total size of 1 KB or less.
-
-When the size of a piece of data is bigger than disk_cache::kMaxBlockSize (16
-KB), it will no longer be stored inside one of our standard block-files. In this
-case, it will be stored in a “separate file”, which is a file that has no
-special headers and contains only the data we want to save. The name of a
-separate file follows the form f_xx, where xx is just the hexadecimal number
-that identifies the file.
-
-### Cache Address
-
-Every piece of data stored by the disk cache has a given “cache address”. The
-cache address is simply a 32-bit number that describes exactly where the data is
-actually located.
-
-A cache entry will have an address; the HTTP headers will have another address,
-the actual request data will have a different address, the entry name (key) may
-have another address and auxiliary information for the entry (such as the
-rankings info for the eviction algorithm) will have another address. This allows
-us to reuse the same infrastructure to efficiently store different types of data
-while at the same time keeping frequently modified data together, so that we can
-leverage the underlying operating system to reduce access latency.
-
-The structure of a cache address is defined on
-[disk_cache/addr.h](http://src.chromium.org/viewvc/chrome/trunk/src/net/disk_cache/addr.h?view=markup),
-and basically tells if the required data is stored inside a block-file or as a
-separate file and the number of the file (block file or otherwise). If the data
-is part of a block-file, the cache address also has the number of the first
-block with the data, the number of blocks used and the type of block file.
-
-These are a few examples of valid addresses:
-
-0x00000000: not initialized
-
-0x8000002A: external file f_00002A
-
-0xA0010003: block-file number 1 (data_1), initial block number 3, 1 block
-long.
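-Based on the layout in disk_cache/addr.h, a decoder for the example addresses
-can be sketched as follows (the bit positions reflect the Chromium source at
-the time of writing and should be treated as illustrative):

```python
# Decode a 32-bit cache address (layout per disk_cache/addr.h):
#   bit 31      initialized flag
#   bits 28-30  file type (0 = separate/external file)
#   external:   bits 0-27  file number (f_xxxxxx)
#   block file: bits 24-25 number of blocks - 1
#               bits 16-23 block-file number (data_n)
#               bits 0-15  starting block

def decode_addr(addr: int) -> str:
    if not addr & 0x80000000:
        return "not initialized"
    file_type = (addr >> 28) & 0x7
    if file_type == 0:
        return f"external file f_{addr & 0x0FFFFFFF:06X}"
    num_blocks = ((addr >> 24) & 0x3) + 1
    file_num = (addr >> 16) & 0xFF
    start = addr & 0xFFFF
    return (f"block-file data_{file_num}, block {start}, "
            f"{num_blocks} block(s)")

print(decode_addr(0x00000000))  # not initialized
print(decode_addr(0x8000002A))  # external file f_00002A
print(decode_addr(0xA0010003))  # block-file data_1, block 3, 1 block(s)
```

-Running it against the three sample addresses reproduces the descriptions
-above.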
-
-### Index File Structure
-
-The index file structure is specified on
-[disk_cache/disk_format.h.](http://src.chromium.org/viewvc/chrome/trunk/src/net/disk_cache/disk_format.h?view=markup)
-Basically, it is just a disk_cache::IndexHeader structure followed by the
-actual hash table. The number of entries in the table is at least
-disk_cache::kIndexTablesize (65536), but the actual size is controlled by the
-table_len member of the header.
-
-The whole file is memory mapped to allow fast translation between the hash of
-the name of a resource (the key), and the cache address that stores the
-resource. The low order bits of the hash are used to index the table, and the
-content of the table is the address of the first stored resource with the same
-low order bits on the hash.
-
-One of the things that must be verified when dealing with the disk cache files
-(the index and every block-file) is that the magic number on the header matches
-the expected value, and that the version is correct. The version has a major
-and a minor part, and the expected behavior is that any change on the major
-number means that the format is now incompatible with older formats.
-
-### Block File Structure
-
-The block-file structure is specified on
-[disk_cache/disk_format.h](http://src.chromium.org/viewvc/chrome/trunk/src/net/disk_cache/disk_format.h?view=markup).
-Basically, it is just a file header (disk_cache::BlockFileHeader) followed by a
-variable number of fixed-size data blocks. Block files are named data_n, where n
-is the decimal file number.
-
-The header of the file (8 KB) is memory mapped to allow efficient creation and
-deletion of elements of the file. The bulk of the header is actually a bitmap
-that identifies used blocks on the file. The maximum number of blocks that can
-be stored on a single file is thus a little less than 64K.
-
-Whenever there are not enough free blocks on a file to store more data, the file
-is grown by 1024 blocks until the maximum number of blocks is reached. At that
-moment, a new block-file of the same type is created, and the two files are
-linked together using the next_file member of the header. The type of the
-block-file is simply the size of the blocks that the file stores, so all files
-that store blocks of the same size are linked together. Keep in mind that even
-if there are multiple block-files chained together, the cache address points
-directly to the file that stores a given record. The chain is only used when
-looking for space to allocate a new record.
-
-To simplify allocation of disk space, it is only possible to store records that
-use from one to four actual blocks. If the total size of the record is bigger
-than that, another type of block-file must be used. For example, to store a
-string of 2420 bytes, three blocks of 1024 bytes are needed, so that particular
-string will go to the block-file that has blocks of 1KB.
-
-Another simplification of the allocation algorithm is that a piece of data is
-not going to cross the four block alignment boundary. In other words, if the
-bitmap says that block 0 is used, and everything else is free (state A), and we
-want to allocate space for four blocks, the new record will use blocks 4 through
-7 (state B), leaving three unused blocks in the middle. However, if after that
-we need to allocate just two blocks instead of four, the new record will use
-blocks 1 and 2 (state C).
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/disk-cache/alloc2.PNG">](/developers/design-documents/network-stack/disk-cache/alloc2.PNG)
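-The allocation rule can be sketched as a bitmap allocator that never lets a
-record cross a four-block boundary (states A, B and C refer to the example
-above; this is a model, not the actual implementation):

```python
# Bitmap allocator sketch: records use 1-4 blocks and never cross
# a four-block alignment boundary, as described above.

def allocate(bitmap: list, n: int):
    """Find n contiguous free blocks within one 4-block group."""
    for group in range(0, len(bitmap), 4):
        for start in range(group, group + 4 - n + 1):
            if all(not bitmap[b] for b in range(start, start + n)):
                for b in range(start, start + n):
                    bitmap[b] = True
                return start
    return None  # caller grows the file or chains a new one

bitmap = [False] * 8
bitmap[0] = True                 # state A: block 0 used
assert allocate(bitmap, 4) == 4  # state B: blocks 4-7
assert allocate(bitmap, 2) == 1  # state C: blocks 1-2
```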
-
-There are a couple of fields on the header to help the process of allocating
-space for a new record. The empty field stores counters of available space per
-block type and hints stores the last scanned location per block type. In this
-context, a block type is the number of blocks requested for the allocation. When
-a file is empty, it can store up to X records of four blocks each (X being close
-to 64K / 4). After a record of one block is allocated, it is able to
-store X-1 records of four blocks, and one record of three blocks. If after that,
-a record of two blocks is allocated, the new capacity is X-1 records of four
-blocks and one record of one block, because the space that was available to
-store the record of three blocks was used to store the new record (two blocks),
-leaving one empty block.
-
-It is important to realize that once a record has been allocated, its size
-cannot be increased. The only way to grow a record that was already saved is to
-read it, then delete it from the file and allocate a new record of the required
-size.
-
-From the reliability point of view, having the header memory mapped allows us to
-detect scenarios when the application crashes while we are in the middle of
-modifying the allocation bitmap. The updating field of the header provides a way
-to signal that we are updating something on the headers, so that if the field is
-set when the file is open, the header must be checked for consistency.
-
-### Cache Entry
-
-An entry is basically a complete entity stored by the cache. It is divided into
-two main parts: the disk_cache::EntryStore stores the part that fully identifies
-the entry and doesn’t change very often, and the disk_cache::RankingsNode stores
-the part that changes often and is used to implement the eviction algorithm.
-The RankingsNode is always the same size (36 bytes), and it is stored on a
-dedicated type of block files (with blocks of 36 bytes). On the other hand, the
-EntryStore can use from one to four blocks of 256 bytes each, depending on the
-actual size of the key (name of the resource). In case the key is too long to be
-stored directly as part of the EntryStore structure, the appropriate storage
-will be allocated and the address of the key will be saved on the long_key
-field, instead of the full key.
-
-The other things stored within EntryStore are addresses of the actual data
-streams associated with this entry, the key’s hash and a pointer to the next
-entry that has the same low-order hash bits (and thus shares the same position
-on the index table).
-
-Whenever an entry is in use, its RankingsNode is marked as in-use so that when a
-new entry is read from disk we can tell if it was properly closed or not.
-
-### The Big Picture
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/disk-cache/files4.PNG">](/developers/design-documents/network-stack/disk-cache/files4.PNG)
-
-This diagram shows a disk cache with 7 files on disk: the index file, 5
-block-files and one separate file. *data_1* and *data_4* are chained together so
-they store blocks of the same size (256 bytes), while *data_2* stores blocks of
-1KB and *data_3* stores blocks of 4 KB. The depicted entry has its key stored
-outside the EntryStore structure, and given that it uses two blocks, it must be
-between one and two kilobytes. This entry also has two data streams, one for the
-HTTP headers (less than 256 bytes) and another one for the actual payload (more
-than 16 KB so it lives on a dedicated file). All blue arrows indicate that a
-cache address is used to locate another piece of data.
-
-## Implementation Notes
-
-Chromium has two different implementations of the cache interfaces: while the
-main one is used to store info on a given disk, there is also a very simple
-implementation that doesn’t use a hard drive at all, and stores everything in
-memory. The in-memory implementation is used for the Incognito mode so that even
-if the application crashes it will be quite difficult to extract any information
-that was accessed while browsing in that mode.
-
-There are a few different types of caches (see
-[net/base/cache_type.h](http://src.chromium.org/viewvc/chrome/trunk/src/net/base/cache_type.h?view=markup)),
-mainly defined by their intended use: there is a media specific cache, the
-general purpose disk cache, and another one that serves as the back end storage
-for AppCache, in addition to the in-memory type already mentioned. All types of
-caches behave in a similar way, with the exception that the eviction algorithm
-used by the general purpose cache is not the same LRU used by the others.
-
-The regular cache implementation is located on disk_cache/backend_impl.cc and
-disk_cache/entry_impl.cc. Most of the files on that folder are actually related
-to the main implementation, except for a few that implement the in-memory cache:
-disk_cache/mem_backend_impl.cc and disk_cache/mem_entry_impl.cc.
-
-### Lower Interface
-
-The lower interface of the disk cache (the one that deals with the OS) is
-handled mostly by two files: disk_cache/file.h and disk_cache/mapped_file.h,
-with separate implementations per operating system. The most notable requirement
-is support for partially memory-mapped files, but asynchronous interfaces and a
-decent file system level cache go a long way towards performance (we don’t want
-to replicate the work of the OS).
-
-To deal with all the details about block-file access, the disk cache keeps a
-single object that deals with all of them: a disk_cache::BlockFiles object. This
-object enables allocation and deletion of disk space, and provides
-disk_cache::File object pointers to callers so that they can access the
-information that they need.
-
-A StorageBlock is a simple template that represents information stored on a
-block-file, and it provides methods to load and store the required data from
-disk (based on the record’s cache address). We have two instantiations of the
-template, one for dealing with the EntryStore structure and another one for
-dealing with the RankingsNode structure. With this template, it is common to
-find code like entry-&gt;rankings()-&gt;Store().
-
-### Eviction
-
-Support for the eviction algorithm of the cache is implemented on
-disk_cache/rankings (and mem_rankings for the in-memory one), and the eviction
-itself is implemented on
-[disk_cache/eviction](http://src.chromium.org/viewvc/chrome/trunk/src/net/disk_cache/eviction.cc?view=markup).
-Right now we have a simple Least Recently Used algorithm that just starts
-deleting old entries once a certain limit is exceeded, and a second algorithm
-that takes reuse and age into account before evicting an entry. We also have the
-concept of a transaction when one of the lists is modified, so that if the
-application crashes in the middle of inserting or removing an entry, next time
-we will roll the change back or forward so that the list is always consistent.
-
-In order to support reuse as a factor for evictions, we keep multiple lists of
-entries depending on their type: not reused, low reuse, and highly reused. We
-also keep a list of recently evicted entries so that if we see them again we
-can adjust their eviction the next time we need the space. There is a time
-target for each list, and we try to avoid evicting entries before they have had
-a chance to be seen again. If the cache uses only LRU, all lists except the
-not-reused one are empty.
-
-### Buffering
-
-When we start writing data for a new entry we allocate a buffer of 16 KB where
-we keep the first part of the data. If the total length is less than the buffer
-size, we only write the information to disk when the entry is closed; however,
-if we receive more than 16 KB, then we start growing that buffer until we reach
-a limit for this stream (1 MB), or for the total size of all the buffers that we
-have. This scheme gives us immediate response when receiving small entries (we
-just copy the data), and works well with the fact that the total record size is
-required in order to create a new cache address for it. It also minimizes the
-number of writes to disk so it improves performance and reduces disk
-fragmentation.
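
The growth policy above can be sketched in a few lines. The 16 KB and 1 MB
figures come from the text; the class and method names are invented for
illustration, and the real code also tracks a global limit across all buffers,
which is omitted here.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

class StreamBuffer {
 public:
  static constexpr size_t kInitialSize = 16 * 1024;  // First allocation.
  static constexpr size_t kMaxSize = 1024 * 1024;    // Per-stream cap.

  StreamBuffer() { buffer_.reserve(kInitialSize); }

  // Returns true while the data still fits in memory; false means the
  // caller has hit the per-stream limit and must write through to disk.
  bool Append(const char* data, size_t len) {
    if (buffer_.size() + len > kMaxSize)
      return false;
    buffer_.insert(buffer_.end(), data, data + len);
    return true;
  }

  // For small entries nothing reaches disk until the entry is closed, at
  // which point the total size is known and a cache address can be created.
  size_t size() const { return buffer_.size(); }

 private:
  std::vector<char> buffer_;
};
```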
-
-### Deleting Entries
-
-To delete entries from the cache, one of the Doom\*() methods can be used. All
-that they do is to mark a given entry to be deleted once all users have closed
-the entry. Of course, this means that it is possible to open a given entry
-multiple times (and read and write to it simultaneously). When an entry is
-doomed (marked for deletion), it is removed from the index table so that any
-attempt to open it again will fail (and creating the entry will succeed), even
-though an already-created Entry object can still be used to read and write the
-old entry.
-
-When two objects are open at the same time, both users will see what the other
-is doing with the entry (there is only one “real” entry, and they see a
-consistent state of it). That’s even true if the entry is doomed after it was
-open twice. However, once the entry is created after it was doomed, we end up
-with basically two separate entries, one for the old, doomed entry, and another
-one for the newly created one.
-
-### Enumerations
-
-A good example of enumerating the entries stored by the cache is located at
-[src/net/url_request/url_request_view_cache_job.cc](http://src.chromium.org/viewvc/chrome/trunk/src/net/url_request/url_request_view_cache_job.cc?view=markup)
-. Note that this interface makes no guarantees about the order in which the
-entries are enumerated, so it is not a good idea to make assumptions about it.
-Also, it could take a long time to go through all the information stored on
-disk.
-
-### Sparse Data
-
-An entry can be used to store sparse data instead of a single, continuous
-stream. In this case, only two streams can be stored by the entry, a regular one
-(the first one), and a sparse one (the second one). Internally, the cache will
-distribute sparse chunks among a set of dedicated entries (child entries) that
-are linked together from the main entry (the parent entry). Each child entry
-will store a particular range of the sparse data, and inside that range we could
-have "holes" that have not been written yet. This design allows the user to
-store files of any length (even bigger than the total size of the cache), while
-the cache is in fact simultaneously evicting parts of that file, according to
-the regular eviction policy. Most of this logic is implemented on
-disk_cache/sparse_control (and disk_cache/mem_entry_impl for the in-memory
-case).
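
To make the parent/child layout concrete, here is a sketch of how a byte offset
in the sparse stream could be mapped to a child entry. The 1 MB child size and
the key format are illustrative assumptions, not the actual values used by
disk_cache/sparse_control.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Illustrative child size: each child entry covers one fixed-size range of
// the parent's sparse stream.
constexpr int64_t kChildSize = 1024 * 1024;

// Which child entry covers a given byte offset of the sparse stream.
int64_t ChildIndexForOffset(int64_t offset) { return offset / kChildSize; }

// Offset within that child entry's own data.
int64_t OffsetWithinChild(int64_t offset) { return offset % kChildSize; }

// A hypothetical key linking a child back to its parent entry.
std::string ChildKey(const std::string& parent_key, int64_t offset) {
  return parent_key + ":" + std::to_string(ChildIndexForOffset(offset));
}
```

Because each child is an ordinary entry, the regular eviction policy can discard
individual children (ranges of the file) without touching the parent.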
-
-### Dedicated Thread
-
-We have a dedicated thread to perform most of the work, while the public API is
-called on the regular network thread (the browser's IO thread).
-
-The reason for this dedicated thread is to be able to remove **any** potentially
-blocking call from the IO thread, because that thread serves the IPC messages
-with all the renderer and plugin processes: even if the browser's UI remains
-responsive when the IO thread is blocked, there is no way to talk to any
-renderer so tabs look unresponsive. On the other hand, if the computer's IO
-subsystem is under heavy load, any disk access can block for a long time.
-
-Note that it may be possible to extend the use of asynchronous IO and just keep
-using the same thread. However, we are not really using asynchronous IO for
-Posix (due to compatibility issues), and even in Windows, not every operation
-can be performed asynchronously; for instance, opening and closing a file are
-always synchronous operations, so they are subject to significant delays under
-the proper set of circumstances.
-
-Another thing to keep in mind is that we tend to perform a large number of IO
-operations, knowing that most of the time they just end up being completed by
-the system's cache. It would have been possible to use asynchronous operations
-all the time, but the code would have been much harder to understand because
-that means a lot of very fragmented state machines. And of course that doesn't
-solve the problem with Open/Close.
-
-As a result, we have a mechanism to post tasks from the main thread (IO thread),
-to a background thread (Cache thread), and back, and we forward most of the API
-to the actual implementation that runs on the background thread. See
-disk_cache/in_flight_io and disk_cache/in_flight_backend_io. There are a few
-methods that are not forwarded to the dedicated thread, mostly because they
-don't interact with the files, and only provide state information. There is no
-locking to access the cache, so these methods can race with actual
-modifications, but the API makes no guarantee of non-racy behavior. For
-example, getting the size of a data stream (*entry::GetDataSize*()) races with
-any pending *WriteData* operation, so it may return the value before or after
-the write completes.
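
The post-and-reply pattern can be modeled with two task queues, one per
direction. The real implementation (disk_cache/in_flight_backend_io) posts
tasks to actual message loops running on separate threads; this single-threaded
model, with invented names, only illustrates the handoff and the completion
callback.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>

// Two queues standing in for two message loops: work flows IO -> cache,
// completion callbacks flow cache -> IO.
struct TwoThreads {
  std::queue<std::function<void()>> cache_tasks;
  std::queue<std::function<void()>> io_replies;

  void PostToCacheThread(std::function<void()> task) {
    cache_tasks.push(std::move(task));
  }
  void PostReplyToIOThread(std::function<void()> reply) {
    io_replies.push(std::move(reply));
  }

  // Drain each "thread's" queue; in real code these loops run concurrently.
  void RunCacheThread() {
    while (!cache_tasks.empty()) {
      cache_tasks.front()();
      cache_tasks.pop();
    }
  }
  void RunIOThread() {
    while (!io_replies.empty()) {
      io_replies.front()();
      io_replies.pop();
    }
  }
};
```

The forwarded API methods follow this shape: the public call posts a task, the
background thread touches the files, and the result comes back as a callback on
the IO thread.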
-
-Note that we have multiple instances of disk-caches, and they all share the
-same background thread.
-
-## Data Integrity
-
-There is a balance to strike between performance and crash resilience. At one
-extreme, every unexpected failure leads to unrecoverably corrupt information;
-at the other, every action has to be flushed to disk before moving on in order
-to guarantee the correct ordering of operations. We didn’t want to add the
-complexity of a journaling system, given that the data stored by the cache is
-by definition not critical, and journaling implies some performance
-degradation.
-
-The current system relies heavily on the presence of an OS-wide file system
-cache that provides adequate performance characteristics at the price of losing
-some deterministic guarantees about when the data actually reaches the disk (we
-just know that at some point, some part of the OS will actually decide that it
-is time to write the information to disk, but not if page X will get to disk
-before page Y).
-
-Some critical parts of the system are directly memory mapped so that, besides
-providing optimum performance, even if the application crashes the latest state
-will be flushed to disk by the system. Of course, if the computer crashes we’ll
-end up in a pretty bad state because we don’t know whether any given part of
-the information reached disk (each memory page can be in a different state).
-
-The most common problem if the system crashes is that the lists used by the
-eviction algorithm will be corrupt because some pages will have reached disk
-while others will effectively be in a “previous” state, still linking to
-entries that were removed, etc. In this case, the corruption will not be
-detected at
-start up (although individual dirty entries will be detected and handled
-correctly), but at some point we’ll find out and proceed to discard the whole
-cache. It would be possible to start “saving” individual good entries from the
-cache, but the benefit is probably not worth the increased complexity. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/very-simple-backend/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/very-simple-backend/index.md
deleted file mode 100644
index 60a49c6a921..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/disk-cache/very-simple-backend/index.md
+++ /dev/null
@@ -1,151 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-- - /developers/design-documents/network-stack/disk-cache
- - Disk Cache
-page_name: very-simple-backend
-title: Very Simple Backend
----
-
-[TOC]
-
-## Summary
-
-Proposed is a new backend for the disk cache, conforming to the interface in
-[Disk Cache](/developers/design-documents/network-stack/disk-cache). The new
-backend is purposefully very simple, using one file per cache entry, plus an
-index file. This backend will be useful as a testing baseline, as well as for
-dealing with IO bottlenecks impairing mobile browsing performance on some
-platforms.
-
-Compared to the standard blockfile cache, this new design has benefits and
-goals:
-
-*It is more resilient under corruption from system crashes.* The new design
-periodically flushes its entire index and swaps it in atomically. This single
-atomic operation, together with cache entries living in separate files that are
-similarly swapped into place, makes the implications of a system crash much
-less serious: after a system crash Chrome will start with a stale cache, rather
-than having to drop its entire cache, as many users experience with the
-blockfile cache.
-
-*It does not delay launching network requests.* The current cache requires
-multiple context switches, and possibly blocking disk IO, before launching
-requests that ultimately use the network. These delays average over 25 ms on
-requests using the network, and about 14 ms averaged over all requests, on
-Windows. Without context switches or blocking disk IO, these delays can be
-entirely eliminated. On the Android platform, slower flash controllers make
-these delays significantly longer, increasing this benefit of a very simple
-backend.
-
-*Lower resident set pressure and fewer IO operations.* Our disk format has
-256-512 bytes of per-entry records, plus rankings and index information of
-about 100 bytes per entry, in resident set pressure. Not all heavily used
-entries are contiguous, so paging pressure is in practice larger. The very
-simple cache stores only SMALLNUM bytes per entry in memory, contiguously, and
-does not normally access the disk in the critical path of a request where not
-required. As a result, the Very Simple Backend should maintain good performance
-even without a good OS buffer cache.
-
-*Simpler.* The very simple cache explicitly avoids implementing a filesystem in
-Chrome. Files are opened and either read or written from beginning to end. No
-reallocations within files take place. The cache thread component of the very
-simple cache should be smaller and easier to maintain than the
-filesystem-in-Chrome approach.
-
-Together with the above benefits and goals, the new design has some non-goals:
-
-*It is not a log structured cache.* While the IO performed by the Very Simple
-Cache is mostly sequential, it is not fundamentally log structured; in
-particular, the filesystem operations it performs are not log structured unless
-used on a filesystem that itself is log structured.
-
-*It is not a filesystem.* The disk cache delegates filesystem operations to the
-filesystem. As filesystems improve, or change for different devices, the disk
-cache will benefit simultaneously.
-
-## Status
-
-See [Bug 173381](https://code.google.com/p/chromium/issues/detail?id=173381) for
-status on this implementation, or [track the Internals-Network-Cache-Simple
-issue in
-crbug](https://code.google.com/p/chromium/issues/list?q=label:Internals-Network-Cache-Simple).
-For related designs covering performance tracking, see [Disk Cache Benchmarking
-& Performance
-Tracking](/developers/design-documents/network-stack/disk-cache/disk-cache-benchmarking).
-
-## External Interface
-
-[See the main disk cache design
-document.](/developers/design-documents/network-stack/disk-cache#TOC-External-Interface)
-
-## Structure on Disk
-
-Like the blockfile disk cache, the disk cache is stored in a single directory.
-There is one index file, and each entry is stored in a single file in that
-directory.
-An Entry Hash is a relatively short hash of the URL, used for storage
-efficiency in index and entry naming. Two entries with the same Entry Hash
-cannot be stored. With a 40-bit per-user-salted SHA-2 of the URL, collisions
-would occur only with about one-in-a-million probability for a million-entry
-cache. Each cache entry is stored in a file named by the Entry Hash in
-hexadecimal, an underscore, and the backend stream number.
-
-The cache directory also contains a file named 00index, holding data used to
-initialize an in-memory index for faster cache performance. The index consists
-of entry hashes for records, together with simple eviction information.
-
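
The naming scheme can be illustrated directly: a 40-bit hash printed as ten hex
digits, an underscore, and the stream number. The formatting below is an
assumption about the scheme as described; the hash itself would be the
per-user-salted SHA-2, which is not shown.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// Builds the on-disk file name for one stream of a cache entry. Only the
// low 40 bits of the hash are significant, printed as 10 hex digits.
std::string EntryFileName(uint64_t entry_hash40, int stream) {
  char buf[32];
  std::snprintf(buf, sizeof(buf), "%010llx_%d",
                static_cast<unsigned long long>(entry_hash40 & 0xffffffffffULL),
                stream);
  return buf;
}
```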
-Format of Entry Files:
-
-* A magic number and version field.
-* The full url, length prefixed.
-* Chunks of data, length prefixed, compressed and checksummed.
-* An EOF record.
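
A sketch of serializing the entry-file layout listed above. The magic number,
the fixed-width little-endian length prefixes, and the omission of compression
and checksums are all simplifications for illustration; the actual format lives
in net/disk_cache/simple.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

constexpr uint32_t kMagic = 0xfcfb6d1b;  // Illustrative value.
constexpr uint32_t kVersion = 1;
constexpr uint32_t kEofMarker = 0xffffffff;

// Appends a 32-bit value in little-endian order.
void AppendU32(std::vector<uint8_t>* out, uint32_t v) {
  for (int i = 0; i < 4; ++i)
    out->push_back((v >> (8 * i)) & 0xff);
}

std::vector<uint8_t> SerializeEntry(const std::string& url,
                                    const std::vector<std::string>& chunks) {
  std::vector<uint8_t> out;
  AppendU32(&out, kMagic);    // Magic number and version field.
  AppendU32(&out, kVersion);
  AppendU32(&out, static_cast<uint32_t>(url.size()));  // Length-prefixed URL.
  out.insert(out.end(), url.begin(), url.end());
  for (const auto& c : chunks) {  // Length-prefixed data chunks.
    AppendU32(&out, static_cast<uint32_t>(c.size()));
    out.insert(out.end(), c.begin(), c.end());
  }
  AppendU32(&out, kEofMarker);  // EOF record.
  return out;
}
```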
-
-## Implementation
-
-### IO Thread Operations
-
-The public API is called on the IO thread, and the cache maintains and updates
-the index data on the IO thread as well. This allows cache misses and cache
-create operations to be performed at low latency.
-
-### Worker Pool Operations
-
-All IO operations in the simple cache are performed asynchronously on a worker
-thread pool.
-
-To reduce the critical-path cost of creating new entries, the simple cache will
-keep a pool of newly created entries ready to move into their final place; this
-permits the IO thread to update its index and provide a new entry at zero
-latency, at the cost of some renames-into-place occurring later in the entry's
-life cycle.
-
-### Index Flushing and Consistency Checking
-
-The index is flushed on shutdown, and periodically while running to guard
-against system crashes. On recovery from a system crash, the browser will have
-a stale index, and thus will need to periodically iterate over the cache
-directory to find entries in the directory not mentioned in the index.
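
The recovery scan reduces to a set difference between what is on disk and what
the index knows about. A sketch, with invented names and std::set standing in
for the directory listing and the in-memory index:

```cpp
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Returns entry names present in the cache directory but absent from the
// (possibly stale) index; these must be re-added or cleaned up.
std::vector<std::string> FindUnindexedEntries(
    const std::set<std::string>& on_disk,
    const std::set<std::string>& in_index) {
  std::vector<std::string> missing;
  for (const auto& name : on_disk) {
    if (!in_index.count(name))
      missing.push_back(name);
  }
  return missing;
}
```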
-
-### Operation without Index
-
-If startup speed and startup IO are too costly, note that the simple backend
-can operate without the IO-thread index by directly opening files in the
-directory.
-
-### Code Location
-
-[src/net/disk_cache/simple](https://chromium.googlesource.com/chromium/src/+/HEAD/net/disk_cache/simple/)
-
-## Potential Improvements
-
-### HTTP Metadata in Index
-
-Request conditionalization, freshness lifetime, and HTTP validator information
-could be stored in the index on the IO thread, greatly reducing latency required
-to serve requests.
-
-### Single File per Entry
-
-It's arguably not a very simple idea, but in practice combining the headers and
-the entry data into the same file makes atomic filesystem operations (renames,
-etc...) easier, while also making writes/reads of a single entry generally more
-sequential. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/http-authentication-throttling/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/http-authentication-throttling/index.md
deleted file mode 100644
index af9a4074fbc..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/http-authentication-throttling/index.md
+++ /dev/null
@@ -1,183 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: http-authentication-throttling
-title: HTTP Authentication Throttling
----
-
-## Problem statement
-
-Some users get locked out of proxies or servers after entering an invalid
-username/password combination because Chrome will immediately reissue all
-pending requests with the same invalid credentials.
-
-To prevent this from happening, only one request should be reissued. If
-authentication succeeds, all other pending requests can be reissued. If it
-fails, the user will be presented with another login prompt. If the request is
-cancelled, another pending request will be reissued.
-
-The downside is that this will introduce an RTT penalty for the other pending
-requests. However, that penalty will likely be dwarfed by the time to enter
-username/password, and the penalty is severe enough that the cost is likely
-worth it.
-
-**Implementation Choices**
-
-There are two general approaches for implementing this.
-a) Do the throttling in the network stack, with no changes to the URLRequest
-delegates. All pending requests are reissued from the perspective of
-ResourceDispatcherHost and LoginHandler, but are queued someplace such as
-HttpAuthController::MaybeGenerateAuthToken \[which can complete
-asynchronously\], with the state residing in the HttpAuthCache::Entry. If the
-authentication request succeeds, all pending requests continue down the network
-stack. If it fails, pretend that all of the other requests also were rejected by
-the server or proxy and send a 401 or 407 back up the network stack using the
-same auth challenge as before.
-
-b) Do the throttling in the LoginHandler, and only restart one URLRequest. To
-make this happen, we'll need to change LoginHandler so there is one per (domain,
-scheme, realm) tuple instead of one per URLRequest, and add a LoginHandlerTable
-to do the discovery/creation. If the outstanding request succeeds, reissue all
-other pending requests. If it fails, re-show all possible dialogs \[one per
-tab\]. If it is cancelled, reissue another pending request or kill the
-LoginHandler if none remain.
-
-Initially a) seemed like a more attractive option. All users of the networking
-stack would be able to take advantage of the behavior without having to
-implement their own throttling mechanism. It's also less of a change: we already
-do grouping of (domain, scheme, realm) tuples in the HttpAuthCache and have a
-natural queuing location at HttpAuthController::MaybeGenerateAuthToken.
-
-However, it has a number of issues which make it seem like the wrong approach:
-
-* If authentication fails, and the user cancels authentication, the
- pending requests will not contain the correct body of the 401 or 407
- response.
-* The NetLog and developer tools may show a number of requests which
- were not actually issued.
-* It's possible that not all consumers of the network stack want this
- behavior.
-* Only does throttling for HTTP, not FTP \[not sure if this is good or
- bad\].
-
-As a result, doing the throttling at the LoginHandler level makes more sense.
-It's also a more natural match for what's actually going on.
-
-**LoginHandler throttling**
-
-Instead of one LoginHandler per request, there will be one LoginHandler per
-(domain, scheme, realm). A LoginHandlerDirectory will maintain a map of
-LoginHandlers and manage their lifetime; the LoginHandlerDirectory will be
-owned by the ResourceDispatcherHost.
-
-The LoginHandler interface will look like
-
-class LoginHandler {
- public:
-  // Adds a request to be handled.
-  void AddRequest(URLRequest\* request);
-  // Removes a request from being handled; done on cancellation.
-  void RemoveRequest(URLRequest\* request);
-  // Called when the user provides auth credentials.
-  void OnCredentialsSupplied();
-  // Called when the user cancels entering auth credentials.
-  void OnUserCancel();
-  // Called when authentication succeeds on a request.
-  void OnAuthSucceeded(URLRequest\* request);
-  // Called when authentication fails on a request.
-  void OnAuthFailed(URLRequest\* request);
-  // Called by LoginHandlerDirectory to see if it should free.
-  bool IsEmpty() const;
-};
-
-The LoginHandler is a state machine with four states:
-
-* WAITING_FOR_USER_INPUT
-* WAITING_FOR_USER_INPUT_COMPLETE
-* TRYING_REQUEST
-* TRYING_REQUEST_COMPLETE
-
-with perhaps a Nil or initialization state.
-
-AddRequest() will always queue the request, but it should not already be in the
-set of requests.
-
-If RemoveRequest() is called while in the WAITING_FOR_USER_INPUT or
-WAITING_FOR_USER_INPUT_COMPLETE state, it will remove the request from the set,
-and remove a dialog if it was the only request for a particular tab. It will
-also remove the LoginHandler if it was the last request. If in the
-TRYING_REQUEST or TRYING_REQUEST_COMPLETE state, the request is simply removed
-from the set if it is not the currently attempted request. If it is the
-currently attempted request, then TRYING_REQUEST is re-entered with a different
-request.
-
-OnCredentialsSupplied() must be called during WAITING_FOR_USER_INPUT and will
-transition to WAITING_FOR_USER_INPUT_COMPLETE. It will also choose a request to
-try, entering TRYING_REQUEST.
-
-OnUserCancel() must be called during WAITING_FOR_USER_INPUT and will transition
-to WAITING_FOR_USER_INPUT_COMPLETE. This will cancel auth on all of the pending
-requests and display the contents of the 401/407 body.
-
-OnAuthSucceeded() must be called during the TRYING_REQUEST state, and will
-transition to TRYING_REQUEST_COMPLETE. This will reissue all other pending
-requests and close out the LoginHandler.
-
-OnAuthFailed() must be called during the TRYING_REQUEST state, will enter
-TRYING_REQUEST_COMPLETE, and will go back to WAITING_FOR_USER_INPUT. The pending
-request is moved back to the main set of pending requests.
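
The transitions spelled out above can be condensed into a small sketch. Only
the state changes described in the text are modeled; request bookkeeping,
dialogs, and the directory are omitted, and the intermediate
WAITING_FOR_USER_INPUT_COMPLETE step is collapsed where the text immediately
moves on.

```cpp
#include <cassert>

enum class State {
  WAITING_FOR_USER_INPUT,
  WAITING_FOR_USER_INPUT_COMPLETE,
  TRYING_REQUEST,
  TRYING_REQUEST_COMPLETE,
};

// Illustrative skeleton; not the real LoginHandler.
struct LoginHandlerStateMachine {
  State state = State::WAITING_FOR_USER_INPUT;

  // User typed credentials: pick one pending request and try it.
  void OnCredentialsSupplied() { state = State::TRYING_REQUEST; }

  // User dismissed the prompt: cancel auth on all pending requests.
  void OnUserCancel() { state = State::WAITING_FOR_USER_INPUT_COMPLETE; }

  // The tried request got through: reissue the rest and close down.
  void OnAuthSucceeded() { state = State::TRYING_REQUEST_COMPLETE; }

  // The tried request was rejected: prompt the user again.
  void OnAuthFailed() { state = State::WAITING_FOR_USER_INPUT; }
};
```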
-
-**Delaying credential entering into HttpAuthCache**
-
-Although the LoginHandler changes described above will throttle most unconfirmed
-authentication requests, there is still the chance that some will get through.
-
-While the LoginHandler is in the TRYING_REQUEST state, the username/password are
-entered into the HttpAuthCache::Entry before hearing back from the server or
-proxy about whether the credentials are valid. Any other URLRequests issued
-during this period of time will use the unconfirmed username/password.
-
-If the credentials are entered only on a successful response, then this problem
-goes away. ResourceDispatcherHost-issued URLRequests will likely fail, get a
-401, and get added to the queue of pending requests in the appropriate
-LoginHandler. Other dispatched requests, such as URLFetcher's, will simply
-fail.
-
-There is a race which might annoy owners, and for which I don't have a good
-answer. Assume that there is an outstanding request A which is
-waiting to hear back from the proxy if the authentication succeeded. A second
-request B comes in, notices no username/password in the HttpAuthCache and so
-does not add preemptive authentication and issues a raw GET. Request A completes
-successfully, fills in the HttpAuthCache with the credentials, goes back to the
-LoginHandler, reissues all requests and closes the LoginHandler. Then, Request B
-returns from the proxy with a 407 response code, since it provided no
-Authorization header. Since the LoginHandler has been destroyed, a new one is
-created and the user is presented with a login prompt again.
-
-One way to fix that case is to add a PENDING state to the HttpAuthCache::Entry
-when Request A is issued, removed when we hear back from the proxy. Request B
-will stall at the MaybeGenerateAuthToken state while the entry is pending, and
-add itself to a list in the entry. When Request A completes, all stalled
-requests are continued, either using preemptive authentication with the
-credentials in the cache entry \[if Request A succeeded\] or issued with no
-credentials if Request A failed. That still may result in a race in the failure
-case, however. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/index.md
deleted file mode 100644
index 13269592e80..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/index.md
+++ /dev/null
@@ -1,174 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: http-cache
-title: HTTP Cache
----
-
-[TOC]
-
-**Overview**
-
-The HTTP Cache is the module that receives HTTP(S) requests and decides when and
-how to fetch data from the [Disk
-Cache](/developers/design-documents/network-stack/disk-cache) or from the
-network. The cache lives in the browser process, as part of the network stack.
-It should not be confused with Blink's in-memory cache, which lives in the
-renderer process and is tightly coupled with the resource loader.
-
-Logically the cache sits between the content-encoding logic and the
-transfer-encoding logic, which means that it deals with transfer-encoding
-properties and stores resources with the content-encoding set by the server.
-
-The cache implements the
-[HttpTransactionFactory](https://chromium.googlesource.com/chromium/src/+/HEAD/net/http/http_transaction_factory.h)
-interface, so an
-[HttpCache::Transaction](https://chromium.googlesource.com/chromium/src/+/HEAD/net/http/http_cache_transaction.h)
-(which is an implementation of
-[HttpTransaction](https://chromium.googlesource.com/chromium/src/+/HEAD/net/http/http_transaction.h))
-will be the transaction associated with the
-[URLRequestJob](https://chromium.googlesource.com/chromium/src/+/HEAD/net/url_request/url_request_job.h)
-used to fetch most
-[URLRequests](https://chromium.googlesource.com/chromium/src/+/HEAD/net/url_request/url_request.h).
-
-There's an instance of an
-[HttpCache](https://chromium.googlesource.com/chromium/src/+/HEAD/net/http/http_cache.h)
-for every profile (and for every isolated app). In fact, a profile may contain
-two instances of the cache: one for regular requests and another one for media
-requests.
-
-Note that because the HttpCache is the one in charge of serving requests either
-from disk or from the network, it actually owns the HttpTransactionFactory that
-creates network transactions, and the
-[disk_cache::Backend](https://chromium.googlesource.com/chromium/src/+/HEAD/net/disk_cache/disk_cache.h)
-that is used to serve requests from disk. When the HttpCache is destroyed
-(usually when the profile data goes away), both the disk backend and the network
-layer (HttpTransactionFactory) go away.
-
-There may be code outside of the cache that keeps a copy of the pointer to the
-disk cache backend. In that case, it is a requirement that the real ownership
-is maintained at all times, which means that such code has to be owned
-transitively by the cache (so that backend destruction happens synchronously
-with the destruction of the code that kept the pointer).
-
-## **Operation**
-
-The cache is responsible for:
-
-* Create and manage the disk cache backend.
-
-> This is mostly an initialization problem. The cache is created without a
-> backend (but with a backend factory), and the backend is created on-demand by
-> the first request that needs one. The HttpCache has all the logic to queue
-> requests until the backend is created.
-
-* Create HttpCache::Transactions.
-
-* Create and manage ActiveEntries that are used by
- HttpCache::Transactions to interact with the disk backend.
-
-> An ActiveEntry is a small object that represents a disk cache entry and all
-> the transactions that have access to it. The Writer, the list of Readers and
-> the list of pending transactions (waiting to become Writer or Readers) are
-> part of the ActiveEntry.
-
-> The cache has the code to create or open disk cache entries and place them on
-> an ActiveEntry. It also has all the logic to attach and remove a transaction
-> to and from ActiveEntry.
-
-* Enforce the cache lock.
-
-> The cache implements a single writer - multiple reader lock so that only one
-> network request for the same resource is in flight at any given time.
-
-> Note that the existence of the cache lock means that no bandwidth is wasted
-> re-fetching the same resource simultaneously. On the other hand, it forces
-> requests to wait until a previous request finishes downloading a resource (the
-> Writer) before they can start reading from it, which is particularly
-> troublesome for long lived requests. Simply bypassing the cache for subsequent
-> requests is not a viable solution as it will introduce consistency problems
-> when a renderer experiences the effect of going back in time, as in receiving
-> a version of the resource that is older than a version that it already
-> received (but which skipped the browser cache).
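
The cache lock can be sketched as per-entry state rather than an actual lock
object: one writer at most, and waiters admitted as readers once the writer
finishes. The names and counters below are illustrative; the real bookkeeping
lives in the ActiveEntry's Writer, Readers, and pending-transaction lists.

```cpp
#include <cassert>

// Single-writer/multiple-reader discipline for one cache entry.
struct ActiveEntryLock {
  bool has_writer = false;
  int readers = 0;
  int pending = 0;

  // The first transaction to arrive becomes the writer (one network fetch
  // in flight per resource); later arrivals queue up.
  bool TryBecomeWriter() {
    if (has_writer || readers > 0) {
      ++pending;
      return false;
    }
    has_writer = true;
    return true;
  }

  // The writer finished downloading: every queued transaction may now read
  // the completed entry from disk.
  void WriterDone() {
    has_writer = false;
    readers += pending;
    pending = 0;
  }
};
```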
-
-The bulk of the logic of the HTTP cache is actually implemented by the cache
-transaction.
-
-## **Sparse Entries**
-
-The HTTP Cache supports using sparse entries for any resource. Sparse entries
-are generally used by media resources (think large video or audio files), and
-the general idea is to be able to store only some parts of the resource, and
-to serve those parts back from disk.
-
-The way to tell the cache that it should create a sparse entry instead of a
-regular entry is for the caller to issue a byte-range request. That tells the
-cache that the caller is prepared to deal with byte
-ranges, so the cache may store byte ranges. Note that if the cache already has a
-resource stored for the requested URL, issuing a byte range request will not
-"upgrade" that resource to be a sparse entry; in fact, in general there is no
-way to transform a regular entry into a sparse entry or vice-versa.
-
-Once the HttpCache creates a sparse entry, the disk cache backend will be in
-charge of storing the byte ranges in an efficient way, and it will be able to
-evict part of a resource without throwing the whole entry away. For example,
-when watching a long video, the backend can discard the first part of the movie
-while still storing the part that is currently being received (and presented to
-the user). If the user goes back a few minutes, content can be served from the
-cache. If the user seeks to a portion that was already evicted, that part of
-the video can be fetched again.
-
-At any given time, it is possible for the cache to have stored a set of sections
-of a resource (which don't necessarily match any actual byte-range requested by
-the user) interspersed with missing data. In order to fulfill a given request,
-the HttpCache may have to issue a series of byte-range network requests for the
-missing parts, while returning data as needed either from disk or from the
-network. In other words, when dealing with sparse entries, the
-HttpCache::Transaction will synthesize network byte-range requests as needed.
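
Fulfilling a request against a sparse entry boils down to interval arithmetic:
subtract the stored ranges from the requested range and fetch what is left. A
sketch of that computation (illustrative, not the actual HttpCache code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Half-open byte range [start, end); "stored" is sorted and disjoint.
using Range = std::pair<int64_t, int64_t>;

// Returns the sub-ranges of |request| not covered by |stored|; each one
// would become a synthesized byte-range network request.
std::vector<Range> MissingRanges(const std::vector<Range>& stored,
                                 Range request) {
  std::vector<Range> missing;
  int64_t pos = request.first;
  for (const auto& r : stored) {
    if (r.second <= pos)
      continue;  // Entirely before the current position.
    if (r.first >= request.second)
      break;     // Entirely after the requested range.
    if (r.first > pos)
      missing.push_back({pos, std::min(r.first, request.second)});
    pos = std::max(pos, r.second);
    if (pos >= request.second)
      break;
  }
  if (pos < request.second)
    missing.push_back({pos, request.second});
  return missing;
}
```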
-
-## **Truncated Entries**
-
-A second scenario where the cache will generate byte-range requests is when a
-regular entry (not sparse) was not completely received before the connection
-was lost (or the caller cancelled the request). In that case, the cache will
-attempt to serve the first part of the resource from disk, and issue a byte
-range request for the remainder of the resource. A large part of the logic
-needed to handle truncated entries is the same logic needed to support sparse
-entries.
-
-## **Byte-Range Requests**
-
-As explained above, byte-range requests are used to trigger the creation of
-sparse entries (if the resource was not previously stored). From the user point
-of view, the cache will transparently fulfill any combination of byte-range
-requests and regular requests either from sparse, truncated or normal entries.
-Needless to say, a client that uses byte-range requests should be prepared to
-deal with the implications of doing so, such as determining when requests can
-be combined together, what a given range applies to (over-the-wire bytes), etc.
-
-## **HttpCache::Transaction**
-
-The bulk of the cache logic is implemented by the cache transaction. At the
-center of the implementation there is a very large state machine (probably the
-most common pattern in the network stack, given the asynchronous nature of the
-problem). Note that there's a block of comments that document the most common
-flow patterns for the state machine, just before the main switch implementation.
-
-This is a general (not exhaustive) diagram of the state machine:
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/http-cache/t.png">](/developers/design-documents/network-stack/http-cache/t.png)
-
-This diagram is not meant to track the latest version of the code, but rather to
-provide a rough overview of what the state machine transitions look like. The
-flow is relatively straightforward for regular entries, but the fact that the
-cache can generate a number of network requests to fulfill a single request
-involving sparse entries makes it so that there is a big loop going back to
-START_PARTIAL_CACHE_VALIDATION. Remember that each individual network request
-can fail, or the server may have a more recent version of the resource...
-although in general, that kind of server behavior while we are working with a
-request will result in an error condition. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/t.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/t.png.sha1
deleted file mode 100644
index 183df4b1ce9..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/http-cache/t.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-0b7ce019fae7eb25fa5d54246b34093128efcd05 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/http-pipelining/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/http-pipelining/index.md
deleted file mode 100644
index 3916011b017..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/http-pipelining/index.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: http-pipelining
-title: HTTP Pipelining
----
-
-## Objective
-
-Speed up Chrome's network stack by enabling HTTP Pipelining. Pipelining issues
-multiple requests over a single connection without waiting for a response.
-
-## Risks
-
-* Broken servers. Servers may ignore pipelined requests or corrupt the
- responses.
-* Broken proxies. May cause the same problems. Some users are behind
- "transparent proxies," where the requests are proxied even though
- the user has not explicitly specified a proxy in their system
- configuration.
-* Front of queue blocking. The first request in a pipeline may block
- other requests in the pipeline. The net result of pipelining may be
- slower page loads.
-
-## Mitigation
-
-Response headers must have the following properties:
-
-* HTTP/1.1
-* Determinable content length, either through explicit Content-Length
- or chunked encoding
-* A keep-alive connection (implicit with HTTP/1.1)
-* No authentication
-
-Pipelining does not begin until these criteria have been met for an origin (host
-and port pair). If at any point one of these fail, the origin is black-listed in
-the client. If an origin has successfully pipelined before, it is remembered and
-pipelining begins immediately on next use.
-
-## Status
-
-The option to enable pipelining has been removed from Chrome, as there are known
-crashing bugs and known front-of-queue blocking issues. There are also a large
-number of servers and middleboxes that behave badly and inconsistently when
-pipelining is enabled. Until these are resolved, it's recommended nobody uses
-pipelining. Doing so currently requires a custom build of Chromium. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/index.md
deleted file mode 100644
index 6143c26d7f3..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/index.md
+++ /dev/null
@@ -1,298 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-page_name: network-stack
-title: Network Stack
----
-
-**Warning:** This document is somewhat outdated. See
-<https://chromium.googlesource.com/chromium/src/+/HEAD/net/docs/life-of-a-url-request.md>
-for more modern information.
-
-[TOC]
-
-## Overview
-
-The network stack is a mostly single-threaded cross-platform library primarily
-for resource fetching. Its main interfaces are `URLRequest` and
-`URLRequestContext`. `URLRequest`, as indicated by its name, represents the
-request for a [URL](http://en.wikipedia.org/wiki/URL). `URLRequestContext`
-contains all the associated context necessary to fulfill the URL request, such
-as [cookies](http://en.wikipedia.org/wiki/HTTP_cookie), host resolver, proxy
-resolver, [cache](/developers/design-documents/network-stack/http-cache), etc.
-Many URLRequest objects may share the same URLRequestContext. Most `net` objects
-are not threadsafe, although the disk cache can use a dedicated thread, and
-several components (host resolution, certificate verification, etc.) may use
-unjoined worker threads. Since it primarily runs on a single network thread, no
-operation on the network thread is allowed to block. Therefore we use
-non-blocking operations with asynchronous callbacks (typically
-`CompletionCallback`). The network stack code also logs most operations to
-`NetLog`, which allows the consumer to record said operations in memory and
-render it in a user-friendly format for debugging purposes.
-
-Chromium developers wrote the network stack in order to:
-
-* Allow coding to cross-platform abstractions
-* Provide greater control than would be available with higher-level
- system networking libraries (e.g. WinHTTP or WinINET)
- * Avoid bugs that may exist in system libraries
- * Enable greater opportunity for performance optimizations
-
-## Code Layout
-
-* net/base - Grab bag of `net` utilities, such as host resolution,
- cookies, network change detection,
- [SSL](http://en.wikipedia.org/wiki/Transport_Layer_Security).
-* net/disk_cache - [Cache for web
- resources](/developers/design-documents/network-stack/disk-cache).
-* net/ftp - [FTP](http://en.wikipedia.org/wiki/File_Transfer_Protocol)
- implementation. Code is primarily based on the old HTTP
- implementation.
-* net/http -
- [HTTP](http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol)
- implementation.
-* net/ocsp -
- [OCSP](http://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol)
- implementation when not using the system libraries or if the system
- does not provide an OCSP implementation. Currently only contains an
- NSS based implementation.
-* net/proxy - Proxy ([SOCKS](http://en.wikipedia.org/wiki/SOCKS) and
- HTTP) configuration, resolution, script fetching, etc.
-* net/quic - [QUIC](/quic) implementation.
-* net/socket - Cross-platform implementations of
- [TCP](http://en.wikipedia.org/wiki/Transmission_Control_Protocol)
- sockets, "SSL sockets", and socket pools.
-* net/socket_stream - socket streams for WebSockets.
-* net/spdy - HTTP2 (and its predecessor) [SPDY](/spdy) implementation.
-* net/url_request - `URLRequest`, `URLRequestContext`, and
- `URLRequestJob` implementations.
-* net/websockets -
- [WebSockets](http://en.wikipedia.org/wiki/WebSockets)
- implementation.
-
-## Anatomy of a Network Request (focused on HTTP)
-
-[<img alt="image"
-src="/developers/design-documents/network-stack/Chromium%20HTTP%20Network%20Request%20Diagram.svg">](/developers/design-documents/network-stack/Chromium%20HTTP%20Network%20Request%20Diagram.svg)
-
-### URLRequest
-
-```none
-class URLRequest {
- public:
-  // Construct a URLRequest for |url|, notifying events to |delegate|.
-  URLRequest(const GURL& url, Delegate* delegate);
-  
-  // Specify the shared state
-  void set_context(URLRequestContext* context);
-  // Start the request. Notifications will be sent to |delegate|.
-  void Start();
-  // Read data from the request.
-  bool Read(IOBuffer* buf, int max_bytes, int* bytes_read);
-};
-class URLRequest::Delegate {
- public:
-  // Called after the response has started coming in or an error occurred.
-  virtual void OnResponseStarted(...) = 0;
-  // Called when Read() calls complete.
-  virtual void OnReadCompleted(...) = 0;
-};
-```
-
-When a `URLRequest` is started, the first thing it does is decide what type of
-`URLRequestJob` to create. The main job type is the `URLRequestHttpJob` which is
-used to fulfill http:// requests. There are a variety of other jobs, such as
-`URLRequestFileJob` (file://), `URLRequestFtpJob` (ftp://), `URLRequestDataJob`
-(data:), and so on. The network stack will determine the appropriate job to
-fulfill the request, but it provides two ways for clients to customize the job
-creation: `URLRequest::Interceptor` and `URLRequest::ProtocolFactory`. These are
-fairly redundant, except that `URLRequest::Interceptor`'s interface is more
-extensive. As the job progresses, it will notify the `URLRequest` which will
-notify the `URLRequest::Delegate` as needed.
-
-### URLRequestHttpJob
-
-URLRequestHttpJob will first identify the cookies to set for the HTTP request,
-which requires querying the `CookieMonster` in the request context. This can be
-asynchronous since the CookieMonster may be backed by an
-[sqlite](http://en.wikipedia.org/wiki/SQLite) database. After doing so, it will
-ask the request context's `HttpTransactionFactory` to create a
-`HttpTransaction`. Typically, the
-[`HttpCache`](/developers/design-documents/network-stack/http-cache) will be
-specified as the `HttpTransactionFactory`. The `HttpCache` will create a
-`HttpCache::Transaction` to handle the HTTP request. The
-`HttpCache::Transaction` will first check the `HttpCache` (which checks the
-[disk cache](/developers/design-documents/network-stack/disk-cache)) to see if
-the cache entry already exists. If so, that means that the response was already
-cached, or a network transaction already exists for this cache entry, so just
-read from that entry. If the cache entry does not exist, then we create it and
-ask the `HttpCache`'s `HttpNetworkLayer` to create a `HttpNetworkTransaction` to
-service the request. The `HttpNetworkTransaction` is given a
-`HttpNetworkSession` which contains the contextual state for performing HTTP
-requests. Some of this state comes from the `URLRequestContext`.
-
-### HttpNetworkTransaction
-
-```none
-class HttpNetworkSession {
- ...
- private:
-  // Shim so we can mock out ClientSockets.
-  ClientSocketFactory* const socket_factory_;
-  // Pointer to URLRequestContext's HostResolver.
-  HostResolver* const host_resolver_;
-  // Reference to URLRequestContext's ProxyService
-  scoped_refptr<ProxyService> proxy_service_;
-  // Contains all the socket pools.
-  ClientSocketPoolManager socket_pool_manager_;
-  // Contains the active SpdySessions.
-  scoped_ptr<SpdySessionPool> spdy_session_pool_;
-  // Handles HttpStream creation.
-  HttpStreamFactory http_stream_factory_;
-};
-```
-
-`HttpNetworkTransaction` asks the `HttpStreamFactory` to create a `HttpStream`.
-The `HttpStreamFactory` returns a `HttpStreamRequest` that is supposed to handle
-all the logic of figuring out how to establish the connection, and once the
-connection is established, wraps it with a HttpStream subclass that mediates
-talking directly to the network.
-
-```none
-class HttpStream {
- public:
-  virtual int SendRequest(...) = 0;
-  virtual int ReadResponseHeaders(...) = 0;
-  virtual int ReadResponseBody(...) = 0;
-  ...
-};
-```
-
-Currently, there are only two main `HttpStream` subclasses: `HttpBasicStream`
-and `SpdyHttpStream`, although we're planning on creating subclasses for [HTTP
-pipelining](http://en.wikipedia.org/wiki/HTTP_pipelining). HttpBasicStream
-assumes it is reading/writing directly to a socket. SpdyHttpStream reads and
-writes to a `SpdyStream`. The network transaction will call methods on the
-stream, and on completion, will invoke callbacks back to the
-`HttpCache::Transaction` which will notify the `URLRequestHttpJob` and
-`URLRequest` as necessary. For the HTTP pathway, the generation and parsing of
-http requests and responses will be handled by the `HttpStreamParser`. For the
-SPDY pathway, request and response parsing are handled by `SpdyStream` and
-`SpdySession`. Based on the HTTP response, the `HttpNetworkTransaction` may need
-to perform [HTTP
-authentication](/developers/design-documents/http-authentication). This may
-involve restarting the network transaction.
-
-### HttpStreamFactory
-
-`HttpStreamFactory` first does proxy resolution to determine whether or not a
-proxy is needed. The endpoint is set to the URL host or the proxy server.
-`HttpStreamFactory` then checks the `SpdySessionPool` to see if we have an
-available `SpdySession` for this endpoint. If not, then the stream factory
-requests a "socket" (TCP/proxy/SSL/etc) from the appropriate pool. If the socket
-is an SSL socket, then it checks to see if
-[NPN](https://tools.ietf.org/id/draft-agl-tls-nextprotoneg-01.txt) indicated a
-protocol (which may be SPDY), and if so, uses the specified protocol. For SPDY,
-we'll check to see if a `SpdySession` already exists and use that if so,
-otherwise we'll create a new `SpdySession` from this SSL socket, and create a
-`SpdyStream` from the `SpdySession`, which we wrap a `SpdyHttpStream` around.
-For HTTP, we'll simply take the socket and wrap it in a `HttpBasicStream`.
-
-#### Proxy Resolution
-
-`HttpStreamFactory` queries the `ProxyService` to return the `ProxyInfo` for the
-GURL. The proxy service first needs to check if it has an up-to-date proxy
-configuration. If not, it uses the `ProxyConfigService` to query the system for
-the current proxy settings. If the proxy settings are set to no proxy or a
-specific proxy, then proxy resolution is simple (we return no proxy or the
-specific proxy). Otherwise, we need to run a [PAC
-script](http://en.wikipedia.org/wiki/Proxy_auto-config) to determine the
-appropriate proxy (or lack thereof). If we don't already have the PAC script,
-then the proxy settings will indicate we're supposed to use [WPAD
-auto-detection](http://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol),
-or a custom PAC url will be specified, and we'll fetch the PAC script with the
-`ProxyScriptFetcher`. Once we have the PAC script, we'll execute it via the
-`ProxyResolver`. Note that we use a shim `MultiThreadedProxyResolver` object to
-dispatch the PAC script execution to threads, which run a `ProxyResolverV8`
-instance. This is because PAC script execution may block on host resolution.
-Therefore, in order to prevent one stalled PAC script execution from blocking
-other proxy resolutions, we allow for executing multiple PAC scripts
-concurrently (caveat: [V8](http://en.wikipedia.org/wiki/V8_(JavaScript_engine))
-is not threadsafe, so we acquire locks for the JavaScript bindings; while one
-V8 instance is blocked on host resolution, it releases the lock so another V8
-instance can execute the PAC script to resolve the proxy for a different URL).
-
-#### Connection Management
-
-After the `HttpStreamRequest` has determined the appropriate endpoint (URL
-endpoint or proxy endpoint), it needs to establish a connection. It does so by
-identifying the appropriate "socket" pool and requesting a socket from it. Note
-that "socket" here basically means something that we can read and write to, to
-send data over the network. An SSL socket is built on top of a transport
-([TCP](http://en.wikipedia.org/wiki/Transmission_Control_Protocol)) socket, and
-encrypts/decrypts the raw TCP data for the user. Different socket types also
-handle different connection setups, for HTTP/SOCKS proxies, SSL handshakes, etc.
-Socket pools are designed to be layered, so the various connection setups can be
-layered on top of other sockets. `HttpStream` can be agnostic of the actual
-underlying socket type, since it just needs to read and write to the socket. The
-socket pools perform a variety of functions: they implement our per-proxy,
-per-host, and per-process connection limits. Currently these are set to 32
-sockets per proxy, 6 sockets per destination host, and 256 sockets per process
-(not implemented exactly correctly, but good enough). Socket pools also abstract the
-socket request from the fulfillment, thereby giving us "late binding" of
-sockets. A socket request can be fulfilled by a newly connected socket or an
-idle socket ([reused from a previous http
-transaction](http://en.wikipedia.org/wiki/HTTP_persistent_connection)).
-
-#### Host Resolution
-
-Note that the connection setup for transport sockets not only requires the
-transport (TCP) handshake, but probably already requires host resolution.
-`HostResolverImpl` uses assorted mechanisms including getaddrinfo() to perform
-host resolutions, which is a blocking call, so the resolver invokes these calls
-on unjoined worker threads. Host resolution typically involves
-[DNS](http://en.wikipedia.org/wiki/Domain_Name_System) resolution, but may
-involve non-DNS namespaces such as
-[NetBIOS](http://en.wikipedia.org/wiki/NetBIOS)/[WINS](http://en.wikipedia.org/wiki/Windows_Internet_Name_Service).
-Note that, as of time of writing, we cap the number of concurrent host
-resolutions to 8, but are looking to optimize this value. `HostResolverImpl`
-also contains a `HostCache` which caches up to 1000 hostnames.
-
-#### SSL/TLS
-
-SSL sockets require performing SSL connection setup as well as certificate
-verification. Except on iOS, Chromium uses
-[BoringSSL](https://boringssl.googlesource.com/boringssl/) to handle the SSL
-connection logic. However, we use platform specific APIs for certificate
-verification. We are moving towards using a certificate verification cache as
-well, which will consolidate multiple requests for certificate verification of
-the same certificate into a single certificate verification job and cache the
-results for a period of time.
-
-*Danger: Outdated*
-
-`SSLClientSocketNSS` roughly follows this sequence of events (ignoring advanced
-features like [Snap
-Start](http://tools.ietf.org/html/draft-agl-tls-snapstart-00) or
-[DNSSEC](http://en.wikipedia.org/wiki/Domain_Name_System_Security_Extensions)
-based certificate verification):
-
-* Connect() is called. We set up NSS's SSL options based on
- `SSLConfig` specified configuration or preprocessor macros. Then we
- kickoff the handshake.
-* Handshake completes. Assuming we didn't hit any errors, we proceed
- to verify the server's certificate using `CertVerifier`. Certificate
- verification may take some amount of time, so `CertVerifier` uses
- the `WorkerPool` to actually call `X509Certificate::Verify()`, which
- is implemented using platform specific APIs.
-
-Note that Chromium has its own NSS patches which support some advanced features
-which aren't necessarily in the system's NSS installation, such as support for
-[NPN](http://tools.ietf.org/html/draft-agl-tls-nextprotoneg-00), [False
-Start](http://tools.ietf.org/search/draft-bmoeller-tls-falsestart-00), Snap
-Start, [OCSP stapling](http://en.wikipedia.org/wiki/OCSP_Stapling), etc.
-
-TODO: talk about network change notifications \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/netlog/NetLog1.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/netlog/NetLog1.png.sha1
deleted file mode 100644
index 2735d9b4906..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/netlog/NetLog1.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-3bd5a144a3671fb4066522f00220016c002a64dc \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/netlog/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/netlog/index.md
deleted file mode 100644
index edcd6c083c8..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/netlog/index.md
+++ /dev/null
@@ -1,223 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: netlog
-title: "NetLog: Chrome\u2019s network logging system"
----
-
-**Eric Roman**
-
-**Matt Menke**
-
-**Overview**
-**NetLog is an event logging mechanism for Chrome’s network stack to help debug problems and analyze performance. It enables a “capture → dump → analyze” style of workflow similar to tools like tcpdump.**
-**Here is a typical use case:**
-
-1. **User enables network logging**
-2. **User reproduces a problem**
-3. **User uploads the log file to a bug report**
-4. **Developer analyzes the log file to see what happened**
-
-**Under the hood Chrome’s network stack has been instrumented to emit events at various interesting trace points. This event stream can be watched by observers to do interesting things with the data. Some of Chrome’s NetLog observers are:**
-
-* **([file_net_log_observer.cc](https://cs.chromium.org/chromium/src/net/log/file_net_log_observer.h))
- Serializes the event stream to a JSON file that can be loaded by the
- [Catapult NetLog
- Viewer](https://chromium.googlesource.com/catapult/+/HEAD/netlog_viewer/).**
-* **(net_internals_ui.cc) \[[Removed in
- 2018](https://bugs.chromium.org/p/chromium/issues/detail?id=678391&desc=2#c18)\]
- ~~Forwards events to the JavaScript application
- chrome://net-internals. This app can visualize the data and also
- export it to a file~~**
-
-**Goals of NetLog**
-
-* **Net Logging is off by default**
-* **Net Logging does not affect performance when off**
-* **Net Logging support ships with official builds of Chrome**
-* **Events are easily serialized/deserialized to disk**
-
-**Non-Goals**
-**NetLog is intended solely for logging.**
-**Features needing reliable network information should never be built on top of NetLog. It may be tempting to add an observer of NetLog as a quick hack to glean some internal state (since you circumvent having to create a proper interface and plumb it deep into the network stack). But that is wrong, and it will break. Instead, you should add an interface to NetworkDelegate, complete with regression tests and documentation.**
-**Think of NetLog as a fancy way of writing:**
-**LOG(INFO) &lt;&lt; “Some network event; bla bla bla”;**
-**It would be silly if any feature broke simply because you moved around such a logging statement or changed its format slightly!**
-**Similarly, we want net logging to be easy to add/edit so developers can instrument problem areas as needed.**
-**Terminology**
-**There are a couple of terms Chrome engineers use when talking about this domain. They sound similar but have slightly different meanings!**
-
-* **“netlog”: The C++ mechanism that exposes the network event stream
- (in particular, net::NetLog).**
-* **“net internals”: This refers specifically to the
- chrome://net-internals application. It *used to be* a visualizer for
- the network event stream. This is also called “about net-internals”
- verbally (in reference to the URL about:net-internals).**
-* **“netinternals dump”, or *"netlog dump"*: A JSON file containing a
- serialized event stream. The usual way to create these dumps is
- using the export functionality on chrome://net-export. However dumps
- can also be generated by launching Chrome with the command line flag
- --log-net-log=*FILENAME*.**
-
-**Structure of a NetLog event**
-**On the C++ side of things, events are represented by a net::NetLog::Entry. This in-memory representation is intended to be easily serialized to JSON.**
-**We try not to serialize events to JSON until it is absolutely necessary (most
-of the time there are no consumers of the events so we don't have to). When we
-do need to serialize the events, this is the JSON format that we use:**
-
-**<table>**
-**<tr>**
-**<td>Field</td>**
-**<td>Type</td>**
-**<td>Description</td>**
-**</tr>**
-**<tr>**
-**<td>time</td>**
-**<td>string</td>**
-**<td>The time in milliseconds when the event occurred.</td>**
-**<td> This is a time tick count and not a unix timestamp. However it is easily convertible given the time tick offset (time ticks are independent of system clock adjustments, guaranteeing that our timestamps don't go backwards)</td>**
-**<td> Another oddity is that despite being a numeric quantity, we encode time as a string. This is to avoid precision loss due to Chrome’s JSON stringifier.</td>**
-**</tr>**
-**<tr>**
-**<td>type</td>**
-**<td>number</td>**
-**<td>The ID of the event type. These are enumerated in <a href="https://cs.chromium.org/chromium/src/net/log/net_log_event_type_list.h">net_log_event_type_list.h</a></td>**
-**</tr>**
-**<tr>**
-**<td>source</td>**
-**<td>object</td>**
-**<td>This field identifies which entity emitted the event. For instance this might identify a particular URLRequest.</td>**
-**<td> The event stream is a flat sequence of events, which has intermixed starts and completions for all sorts of asynchronous events (potentially happening across multiple threads).</td>**
-**<td> The usefulness of source is to be able to group these events into logical blocks with a serial control flow.</td>**
-**<td> The source object itself is comprised of two sub-fields “id” and “type”. The “id” is unique across all types. The type field is included as a convenience so that processing the event stream can be done in a stateless manner.</td>**
-**<td> The list of possible source types is enumerated in <a href="https://cs.chromium.org/chromium/src/net/log/net_log_source_type_list.h?q=net_log_source_type_list.h&sq=package:chromium&dr">net_log_source_type_list.h</a></td>**
-**</tr>**
-**<tr>**
-**<td>phase</td>**
-**<td>number</td>**
-**<td>This enumeration can be one of BEGIN, END, NONE.</td>**
-**<td> Let's say you wanted to log the duration of a URLRequest by logging the start and then the end:</td>**
-**<td> One way to do that would be to define two event types: URL_REQUEST_BEGIN and URL_REQUEST_END. This certainly works, however it is inconvenient when doing automated analysis or hierarchical grouping, since it requires knowing which event pairs complement each other.</td>**
-**<td> To address this, we prefer to define a single event type, say URL_REQUEST, and emit the starting event with phase=BEGIN, and the terminating event with phase=END</td>**
-**<td> (phase=END events are assumed to terminate the most recent phase=BEGIN of the same type, within the same source.)</td>**
-**</tr>**
-**<tr>**
-**<td>params</td>**
-**<td>object</td>**
-**<td>This is an optional field.</td>**
-**<td> When provided, it represents event-specific parameters. For example, when starting a URL request, we smuggle the load flags, URL and priority of the request into the params structure. The visualizer knows how to pretty print the parameters for certain event types. For everything else it just dumps the JSON in a readable fashion.</td>**
-**</tr>**
-**</table>**
-
-**How network events are emitted**
-**net::NetLog is the interface for emitting NetLog events.**
-**Code throughout src/net/\* that needs to emit network events must be passed a
-pointer to a net::NetLog to send the events to.**
-
-**[<img alt="image"
-src="/developers/design-documents/network-stack/netlog/NetLog1.png">](/developers/design-documents/network-stack/netlog/NetLog1.png)**
-
-**Most commonly, this net::NetLog dependency is passed via a net::BoundNetLog parameter rather than directly as a net::NetLog\*. This is a wrapper to “bind” the same source parameter to each event output to the stream. You can think of it like creating a private event stream for a single entity. Read more about this in [net_log.h](https://cs.chromium.org/chromium/src/net/log/net_log.h?type=cs&sq=package:chromium&g=0)**
-**Ultimately though, net::NetLog is just a boring interface and doesn’t actually do anything.**
-**In the Chrome browser, the concrete implementation of net::NetLog used is [ChromeNetLog](https://cs.chromium.org/chromium/src/components/net_log/chrome_net_log.h). We configure things so that all network logging events for all browser profiles flow through a single instance of ChromeNetLog. ChromeNetLog is responsible for forwarding this event stream on to other interested parties via an observer mechanism.**
-**As was alluded to earlier in the overview, [FileNetLogObserver ](https://cs.chromium.org/chromium/src/net/log/file_net_log_observer.h)is implemented as one such observer (which serializes the network events to a JSON file).**
-**How custom parameters are attached to events**
-**Custom parameters are specified by a base::Value\*. Value is used to represent a hierarchy of nodes, that maps directly into JSON; it has all the expected building blocks -- dictionaries, strings, numbers, arrays. See [values.h](https://cs.chromium.org/chromium/src/base/values.h) for more details.**
-**For the sake of efficiency, you do not directly create a base::Value\* when emitting events. Rather, you pass in a Callback which knows how to build the Value\*, in case one is needed.**
-**This decoupling allows deferring the creation of Values until really necessary. This is good since, in the common case when not exporting events, we simply don’t need that data. Creating the custom Value parameters is not free since it involves copying internal state into a new Value\* hierarchy.**
-**You are guaranteed that the parameter callback will only be invoked
-synchronously before the return of the logging function. We use a Callback for
-convenience, since Bind makes it easy to piece things together without needing
-to define helper structures!**
-
-**Here is an example of how to emit an event with custom parameters:**
-
-```none
-net_log_.BeginEvent(
-    NetLog::TYPE_URL_REQUEST_START_JOB,
-    base::Bind(&NetLogURLRequestStartCallback,
-               &url(), &method_, load_flags_, priority_));
-```
-
-**By the time BeginEvent returns, NetLogURLRequestStartCallback() will have
-been invoked if-and-only-if the Value\* parameter was needed. That is why it is
-safe for the callback to take pointers to url() and method_ (pointers are
-preferred in this case to avoid making unnecessary copies).**
-
-**Here is what the bound function might look like:**
-
-```none
-Value* NetLogURLRequestStartCallback(const GURL* url,
-                                     const std::string* method,
-                                     int load_flags,
-                                     RequestPriority priority,
-                                     NetLog::LogLevel) {
-  DictionaryValue* dict = new DictionaryValue();
-  dict->SetString("url", url->possibly_invalid_spec());
-  dict->SetString("method", *method);
-  dict->SetInteger("load_flags", load_flags);
-  dict->SetInteger("priority", static_cast<int>(priority));
-  return dict;
-}
-```
-What does NetLog actually log?**
-The logging is subject to change, but in general we try to log whatever is useful for debugging.**
-Admittedly that description isn’t very helpful.**
-… let me give some examples of what we currently log:**
-
-* Queueing delay to schedule DNS resolves to threads
-* Stalls due to exceeding socket pool limits
-* Attempts to do a TCP connect to an IP address
-* Speculative DNS resolves
-* Proxy resolution
-* Cache hits for DNS resolves
-* Reads/writes from disk cache
-* Network change events
-* Proxy configuration change events
-* Stalls due to Chrome extensions pausing requests
-* Errors
-
-The sorts of events emitted are application specific. NetLog does not aim to replace lower-level network tools like packet captures (e.g. tcpdump). Rather, it focuses on application-level logic and caches which cannot possibly be known at the lower layers. There is some overlap, though, since NetLog can optionally capture bytes sent/received over sockets, as well as the decrypted bytes for secure channels.
-**chrome://net-internals (aka about:net-internals)**
-Until 2018, net-internals was a visualizer for the NetLog event stream. It could be used both in real-time and to load post-mortem NetLog dumps.
-Most of the data displayed by net-internals came from the NetLog event stream. When you loaded about:net-internals, it installed an observer into ChromeNetLog (inside the browser process) which serialized network events to JSON and sent them over IPC to the renderer running the about:net-internals application.
-about:net-internals was itself a JavaScript application, whose code lived under [src/chrome/browser/resources/net_internals/](https://cs.chromium.org/chromium/src/chrome/browser/resources/net_internals/)
-Some of the other data displayed by net-internals came from polling other state
-functions. For instance, the “DNS cache” listed on about:net-internals was
-obtained by polling the browser for the full list, rather than through an
-event-based mechanism.
-
-**In 2018, the NetLog Viewer was removed from Chrome and moved to an external
-repository:**
-<https://github.com/catapult-project/catapult/tree/master/netlog_viewer>. The
-move was made for a number of reasons, including reducing installer size and
-attack surface. Perhaps most importantly, WebUI-run JavaScript is subject to
-many tight restrictions (e.g. no external libraries) which made it hard to
-improve and maintain. As an external application running in an ordinary web
-context, the viewer is no longer subject to those restrictions.
-
-***Limiting Log Size***
-
-**[FileNetLogObserver::FileWriter](https://source.chromium.org/chromium/chromium/src/+/HEAD:net/log/file_net_log_observer.cc;l=238;drc=0afff123401318329000bfe34af0cde12ce3488c;bpv=1;bpt=1) explains the implementation of the basic "circular" log: To accommodate the request for a size-limited JSON file, instead of writing a single JSON output file initially, Chrome instead creates a folder full of event-containing JSON fragment files (10 by default) that are overwritten on a least-recently-written basis. When the capture is complete, Chrome will stitch the partial files together, including a "prefix" and "end" file containing the constants and other non-event data.**
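The rotation scheme described above — a fixed ring of bounded fragment files, overwritten least-recently-written-first and stitched back together at the end — can be sketched in a few lines. This is an illustrative model only, not the actual `FileNetLogObserver` code (which works with on-disk JSON files plus separate prefix/end files); the `CircularEventLog` class and its capacities are hypothetical.

```cpp
// Hedged sketch of a bounded "circular" log: events fill a ring of
// fixed-capacity fragments; once all are full, the oldest fragment is
// cleared and reused, so only the most recent events survive.
#include <string>
#include <vector>

class CircularEventLog {
 public:
  CircularEventLog(size_t num_files, size_t events_per_file)
      : files_(num_files), events_per_file_(events_per_file) {}

  void AddEvent(const std::string& event) {
    if (files_[current_].size() >= events_per_file_) {
      current_ = (current_ + 1) % files_.size();
      files_[current_].clear();  // Overwrite the least-recently-written file.
    }
    files_[current_].push_back(event);
  }

  // "Stitch" the surviving fragments together in write order, starting
  // from the oldest fragment after the one currently being written.
  std::vector<std::string> Stitch() const {
    std::vector<std::string> out;
    size_t oldest = (current_ + 1) % files_.size();
    for (size_t i = 0; i < files_.size(); ++i) {
      const auto& f = files_[(oldest + i) % files_.size()];
      out.insert(out.end(), f.begin(), f.end());
    }
    return out;
  }

 private:
  std::vector<std::vector<std::string>> files_;
  size_t events_per_file_;
  size_t current_ = 0;
};
```

With 3 fragments of 5 events each, adding 25 events leaves only the last 15 recoverable, in original order — the same bounded-size guarantee the real observer provides for its JSON output.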
-
-**NetLogExporter::CreateScratchDir uses |scratch_dir.CreateUniqueTempDir()| to choose the target location.**
-
-**History: the evolution of NetLog/NetInternals**
-
-**version 1: LoadLog**
-**In the beginning, there was LoadLog. This was basically a small per-request log buffer. Each URLRequest had its own LoadLog member field, which was essentially a std::vector of events and their timings. These events got accumulated directly in the browser process. In other words, it was a form of always-on passive logging.**
-My (eroman) original use case for LoadLog was to diagnose performance
-problems. At the time, internal users were complaining to me about sporadic
-performance issues. But of course they would never reproduce them when I was
-around! I wanted a system that could dump the relevant information reliably
-AFTER the problem had already happened.
-
-**To capture the information I added LoadLogs to URLRequests to track the performance of core operations like DNS resolving and proxy resolution. This structure was really just a small set of event timings.**
-**To expose the data, I created a very simple webapp at “about:net-internals”. This page was entirely generated by C++, and believe me when I say it was basic!**
-**Since LoadLogs were attached to particular URLRequests, the visualizer would work by traversing the list of in-progress requests and printing their logs. I also kept a “graveyard” of recently finished requests in the browser, so we would have some data on requests that had recently completed (which ultimately was my original objective). Since about:net-internals was a static page, you had to reload it to view new data. Reloading risked losing the data currently displayed, since the data was backed by the browser process and might have already been evicted from the circular graveyard buffer.**
-**The exchange format for users to send me their logs was ad-hoc. Essentially they would save the HTML page, often by using the “save webpage as” feature, or simply copy-pasting. (Some users would even print-to-pdf and attach that).**
-**Due to the limitations of the C++ generated HTML formatting, this first version of “about:net-internals” used a fixed-width text format for visualizing logs. This is in fact the same cryptic display (with few changes) that is used today. (When I ported that code from C++ to Javascript I left a TODO to improve it later...)**
-**You might be wondering about the URL... why not simply call it “about:net”? Well, at the time there was already a page running at “about:net”! So I added the suffix “-internals” to distinguish them. Moreover, at the time I just wanted to build something quick and dirty and did not want to go through any UI reviews. By calling it “-internals” I was assuring people that it was just a janky developer page that didn’t need to meet our higher standards for end-user content.**
-**Interestingly “about:net” got deprecated when porting Chrome to Mac/Linux (since it relied on native UI which no one wanted to port). We eventually deleted it, but by the time that happened about:net-internals was already a well established URL so I saw no point in changing it. Today, ChromeOS has re-appropriated the url about:net.**
-**version 2: NetLog**
-**I wanted to do more with about:net-internals. However, the fact that it was entirely generated via C++ made it very difficult to work with. Formatting HTML from C++ is no fun, and the code quickly devolves into something terrifying. The last thing I wanted to be adding to Chrome was a pile of ugly C++ code. Since webui pages were all the rage at the time (this was around the time of one of the “new tab page” rewrites), my solution was to re-write about:net-internals in JavaScript.**
-**I started doing this, and created the experimental page “about:net2” for my in-progress work. These two pages coexisted for a while since it was just a side project for me. Eventually, when it was good enough, I renamed about:net2 to about:net-internals.**
-**The requirement to use JavaScript motivated the evolution of LoadLog into NetLog.**
-**The problem with LoadLog was that the data was siloed by request. This meant my webapp couldn’t receive real-time updates. Rather, it would have had to do gnarly polling to see what had changed (and suck down all the data each time). Another issue that worried me was the browser-side accumulation of data done by LoadLogs. When tracing, I wanted to turn on higher logging granularity, but this came at the risk of using up lots of browser memory, which at the extreme might kill the process.**
-**To address this, I decided to generalize the mechanism of per-request events into a single flat event stream. This gave me a nice choke point for transferring updates to the net-internals app. It also meant that the accumulation of data was done in a renderer process, so at worst a memory exhaustion would kill just that single tab.**
-**However, this new approach also created some new problems. First of all, having a flat event stream meant needing to re-build the structure within each consumer of the NetLog. This is why the “source” parameter exists in events today. Since this is inefficient, I tried to keep browser-side interpretation of NetLog events to a minimum, leaving this task to the visualizer.**
-**The second issue was how to maintain feature parity with LoadLog’s ability to do “passive logging.” To support passive logging (i.e. “always on logging”) I introduced the PassiveLogCollector observer.**
-**PassiveLogCollector was an observer that would watch the event stream and save recent events into a circular buffer (using a variety of heuristics to try and keep the most relevant information, clustered by request). That way if something went wrong, we would have a partial log showing what happened most recently. Getting the heuristics for PassiveLogCollector good-enough was non-trivial, since it required re-building and tracking the structure for the events.**
-**Initially PassiveLogCollector worked as promised. But it didn’t scale well over time, and has since been removed.**
-**The problem is that over time the amount of net logging grew substantially (success!), and also the event stream became more fragmented into sources (due to more overlapping async stuff). The tension with PassiveLogCollector was wanting to collect enough useful information to be able to debug problems, while at the same time not having fragile heuristics which could break at any moment leading to browser memory bloat. Ultimately I wasn’t comfortable with the risk of bad heuristics leading to bloat, and how it complicated adding new logging, and hence removed it.**
-**Some of the gaps left by the removal of PassiveLogCollector can be addressed by issue [117026](https://crbug.com/117026).**
-**version 3: The age of mmenke**
-**The current evolution of about:net-internals is what I would call v3. It started when Matt Menke teamed up with me to work on net-internals.**
-**Prior to the golden age of menke, exchanging NetLogs involved copy-pasting the webpage! This was a hassle, since invariably users would paste the useless parts of the log and not the whole thing, or provide screenshots rather than text... It was downright embarrassing.**
-**Matt added the ability to save logs to a JSON file, and to import them back again into about:net-internals, which was a huge step forward.**
-**Other core features of v3 were unittests for the JavaScript frontend, and a timeline visualization to plot network metrics both in real-time and from postmortem dump files.**
-**v3 also got custom views for Sockets, SPDY sessions, Winsock LSPs (SPIs), a slew of other functionality and increased logging sprinkled throughout the network stack.**
-**version 4: The extraction of the viewer**
-**In 2018, the JavaScript of the viewer application was** [removed](https://bugs.chromium.org/p/chromium/issues/detail?id=678391&desc=2#c18) from the net-internals page, extracted to a [new repository](https://github.com/catapult-project/catapult/tree/master/netlog_viewer), and now runs on a [public app server](https://netlog-viewer.appspot.com/#import). The reasons for this extraction are documented in the [NetLog Viewer design document](https://docs.google.com/document/d/1Ll7T5cguj5m2DqkUTad5DWRCqtbQ3L1q9FRvTN5-Y28/).
-version ?:
-**… patches welcome!** \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/index.md
deleted file mode 100644
index a83934adfa8..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/index.md
+++ /dev/null
@@ -1,97 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-- - /developers/design-documents/network-stack/network-bug-triage
- - Network bug triage
-page_name: downloads-bug-triage
-title: Triaging Downloads Bugs
----
-
-[Downloads
-bugs](https://code.google.com/p/chromium/issues/list?can=2&q=cr-ui-browser%3Ddownloads&sort=&groupby=&colspec=ID+Pri+Status+Modified+Owner+Summary&nobtn=Update)
-are automatically cc'd to the mailing list download-bugs@chromium.org. People on
-the bugs rotation should subscribe to that list. The suggested frequency for
-handling incoming bugs is 2-3 days, but that's just a suggestion.
-
-Triagers should
-
-1. Attempt to reproduce and set the correct Status for [untriaged and
- unconfirmed
- bugs](http://crbug.com/?q=cr-ui-browser%3Ddownloads+status%3Auntriaged%2Cunconfirmed+-blocking%3A96604+-needs%3Dfeedback&colspec=ID+Summary+Modified).
- (That search ignores Needs-Feedback and External bugs.)
- * Engage with users as they report bugs to get all the information
- we need for eventual resolution.
- * Set Type-Feature Available for feature requests or mark WontFix
- and explain why this feature is unlikely to be implemented.
- * Crash reports frequently do not contain crash ids. Send
- reporters to [Reporting a Crash
- Bug](http://www.chromium.org/for-testers/bug-reporting-guidelines/reporting-crash-bug)
- and set Needs-Feedback.
- * [Providing Network Details for bug
- reports](/for-testers/providing-network-details)
- * Ensure high priority bugs receive appropriate attention. Consult
- benjhayden or asanka if necessary in order to ensure timely
- resolution. High priority bugs include
- * Regressions: Mark as Type-Bug-Regression Pri-1
- * Possible security problems: Request security review and lock
- down (Restrict-View-Commit). If security review shows
- severity medium or higher, mark Pri-1
- * Crashers: Mark as Pri-1 if it's on the top crash list or
- looks likely to happen in the wild. If it looks like a
- use-after-free that might be reproducible in the wild, it's
- a security issue.
- * [chromecrash
- query](https://chromecrash.corp.google.com/browse?q=int32(extract_regexp(product.version%2C%20r'%5Cd%2B%5C.%5Cd%2B%5C.(%5Cd%2B)%5C.%5Cd%2B'))%20%3E%3D%200%20AND%0Acustom_data.ChromeCrashProto.ptype%3D'browser'%20AND%0A(custom_data.ChromeCrashProto.magic_signature_1.component%20CONTAINS%20'src%2Fcontent%2Fbrowser%2Fdownload'%20OR%0A%20custom_data.ChromeCrashProto.magic_signature_1.component%20CONTAINS%20'src%2Fchrome%2Fbrowser%2Fdownload'%20OR%0A%20custom_data.ChromeCrashProto.magic_signature_1.file_path%20CONTAINS%20'src%2Fchrome%2Fbrowser%2Fextensions%2Fapi%2Fdownloads'%20OR%0A%20custom_data.ChromeCrashProto.magic_signature_1.file_path%20CONTAINS%20'src%2Fchrome%2Fbrowser%2Fui%2Fwebui%2Fdownload')):
- fiddle with the literal 0 to find crashes
- before/after/at a specific branch.
- * Badly broken features
- * Failing or disabled tests
- * Mark duplicates as such. [Frequently duplicated
- bugs](/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/salient-bug-list)
- * Some issues on windows are due to bad shell extensions.
- Point reporters to
- [ShellMenuView](http://www.nirsoft.net/utils/shell_menu_view.html)
-2. Categorize [uncategorized
- bugs](http://crbug.com?q=cr-ui-browser%3Ddownloads+-blocking%3A68191+-blocking%3A68195+-blocking%3A68196+-blocking%3A68197+-blocking%3A68198+-blocking%3A68200+-blocking%3A68201+-blocking%3A68204+-blocking%3A68206+-blocking%3A68208+-blocking%3A68209+-blocking%3A68276+-blocking%3A68356+-blocking%3A68358+-blocking%3A68359+-blocking%3A68361+-blocking%3A69298+-blocking%3A78147+-blocking%3A78148+-blocking%3A96604+-blocking%3A133971+-blocking%3A133960&colspec=ID+Summary+Modified)
- by adding them to the "Blocked on" list of one of [these category
- bugs](http://crbug.com/?q=Cr-Ui-Browser%3DDownloads+blocking%3A133960&colspec=ID+Summary).
-3. Sweep
- [needs=feedback](https://code.google.com/p/chromium/issues/list?can=2&q=cr-ui-browser=downloads%20needs=feedback&sort=modified&colspec=ID%20Status%20Owner%20Summary%20Modified):
- if feedback has been provided, remove needs-feedback and continue
- debugging; if feedback has not been provided after 2 weeks, Archive
- the bug.
-4. When major changes such as file moves happen, either update the
- below documentation or delete it if nobody has found it useful.
-
-FAQ, Bug-hunting tips
-
-Where to begin fixing different kinds of bugs. Please feel free to extend this
-liberally.
-
-* Main entry point from the ResourceDispatcherHost to the downloads
- system:
- [BufferedResourceHandler](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/loader/buffered_resource_handler.cc)
- decides whether RDH should create
- [DownloadResourceHandler](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/download/download_resource_handler.cc)
-* The download state machine is managed on the UI thread in
- [DownloadItemImpl](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/download/download_item_impl.cc),
- managed by
- [DownloadManagerImpl](https://code.google.com/p/chromium/codesearch#chromium/src/content/browser/download/download_manager_impl.cc).
-* chrome://downloads
- [WebUI](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/ui/webui/downloads_dom_handler.cc)
- [HTML/CSS/JS](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/resources/downloads/)
-* [Target filename
- determiner](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/download/download_target_determiner.cc)
-* [DownloadHistory](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/download/download_history.cc)
- observes the DownloadManager and DownloadItems and posts changes to
- [DownloadDatabase](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/history/download_database.cc)
-* chrome.downloads Extension API
- [IDL](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/common/extensions/api/downloads.idl),
- [implementation](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/extensions/api/downloads/downloads_api.cc)
-* Multiple automatic download throttling infobar
- [DownloadRequestLimiter](https://code.google.com/p/chromium/codesearch#chromium/src/chrome/browser/download/download_request_limiter.cc) \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/salient-bug-list/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/salient-bug-list/index.md
deleted file mode 100644
index 7dc992ff165..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage/salient-bug-list/index.md
+++ /dev/null
@@ -1,69 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-- - /developers/design-documents/network-stack/network-bug-triage
- - Network bug triage
-- - /developers/design-documents/network-stack/network-bug-triage/downloads-bug-triage
- - Triaging Downloads Bugs
-page_name: salient-bug-list
-title: Salient Bug List
----
-
-For reference, some bugs that may be useful to have an easy list for (usually
-for duping). Feel free to edit this list as you feel moved.
-
-<table>
-<tr>
-<td> FTP directory listing fails with pt-br locale (Bug reports may consist of a single-line description alluding to an FTP error)</td>
-<td><a href="http://crbug.com/177428">177428</a></td>
-</tr>
-<tr>
-<td> Auto-execution of JNLP files doesn't work</td>
-<td><a href="http://crbug.com/92846">92846</a> </td>
-</tr>
-<tr>
-<td> Open With</td>
-<td><a href="http://crbug.com/333">333</a> </td>
-</tr>
-<tr>
-<td> Downloads Interrupted by Sleep</td>
-<td><a href="http://crbug.com/110497">110497</a> </td>
-</tr>
-<tr>
-<td> Can't download PDF as binary/octet-stream</td>
-<td><a href="http://crbug.com/104331">104331</a> </td>
-</tr>
-<tr>
-<td> Downloading a URL already downloading but paused hangs</td>
-<td><a href="http://crbug.com/100529">100529</a> </td>
-</tr>
-<tr>
-<td> Make MHTML a Save Page As ... format</td>
-<td><a href="http://crbug.com/120416">120416</a> </td>
-</tr>
-<tr>
-<td> Downloads resumption/resume interrupted downloads</td>
-<td><a href="http://crbug.com/7648">7648</a> </td>
-</tr>
-<tr>
-<td> Make pdf a Save Page As ... format</td>
-<td><a href="http://crbug.com/116749">116749</a> </td>
-</tr>
-<tr>
-<td> Make a whole lot of things a Save Page As .. format</td>
-<td><a href="http://crbug.com/113888">113888</a> </td>
-</tr>
-<tr>
-<td> Mac circumvention of downloads warning dialog incorrect for last incognito window close</td>
-<td><a href="http://crbug.com/88419">88419</a></td>
-</tr>
-<tr>
-<td> Downloads fail with 'Insufficient Permissions' error</td>
-<td><a href="https://code.google.com/p/chromium/issues/detail?id=161793">161793</a></td>
-</tr>
-</table> \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/index.md
deleted file mode 100644
index 5f1a1f61d27..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/network-bug-triage/index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: network-bug-triage
-title: Network bug triage
----
-
-This page has moved to the source repository:
-
-[Chrome Network Bug
-Triage](https://chromium.googlesource.com/chromium/src/+/HEAD/net/docs/bug-triage.md) \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-objectives/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-objectives/index.md
deleted file mode 100644
index 174135eaa3e..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-objectives/index.md
+++ /dev/null
@@ -1,592 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: network-stack-objectives
-title: Network Stack Objectives
----
-
-## Q2 2014 Objectives
-
-### Blink
-
-* Make WebSocket scalable
- * Switch WebSocket to new stack in Chromium
- * Ensure that WS/HTTP2 mapping work with HTTP2 spec
- * Revive upgrade success rate experiment
- * Make permessage-compress spec ready for IESG review
-* Extend XMLHttpRequest for streaming use cases
- * Streams API standardization
-* Optimize networking APIs
-* Promises Blink bindings
-
-## Q4 2011 Objectives
-
-#### Performance
-
-* SPDY
-* mobile tuning
-* DNS resolver
-* HTTP pipelining prototype
-
-#### **SSL**
-
-* captive portals support
-* origin-bound certificates
-* DOMCrypt API
-
-#### **Developer productivity**
-
-* analysis view of net-internals logs
-* API cleanup
-
-## Q2 2011 Objectives
-
-#### Improve test coverage
-
-* Add tests of SSL client authentication (wtc)
-* Set up automated test environment for HTTP Negotiate and NTLM
- authentication (asanka, cbentzel)
-* Add drag-n-drop, fine-grained cancels tests to Downloads
- (rdsmith,ahendrickson,asanka)
-
-#### Fix bugs and clean up / refactor code
-
-* Clean up network stack API, threading model, etc. (willchan, wtc)
-* Use base, net, and crypto as DLLs on Windows (rvargas)
-* Refactor Socket classes to support server, UDP, and other transport
- sockets (mbelshe, willchan)
-* Finish Downloads System major refactors (dataflow, file
- determination, state granularity) (ahendrickson, rdsmith)
-* Fix Download incorrect name problems -- see http://crbug.com/78200
- (asanka)
-* Fix Downloads error detection and cache interface (ahendrickson)
-* Substantially reduce downloads crashers. Tentative Goal: halve
- "crashes touching downloads directory / total downloads initiated"
- metric (rdsmith, others)
-
-#### Improve network performance / features
-
-* SPDY (willchan)
-* NSS certificate verification and revocation checking (wtc)
-* SSL client authentication to destination server through HTTPS proxy
- (mattm, wtc)
-* WPAD over DHCP (joi)
-* Roll out Anti-DDoS functionality (joi)
-* \[Stretch\] Add Download resumption after error (ahendrickson)
-
-**Documentation**
-
-* Write design document for HTTP authentication (cbentzel)
-
-## Q1 2011 Objectives
-
-Improve test coverage
-
-* Set up test environment for HTTP Negotiate and NTLM authentication
- (asanka, cbentzel, wtc) - 0.1 Have a manual test environment.
- Started work on automated test environment at the very end of the
- quarter
-* Write new tests, enable and deflake existing ones for the download
- subsystem (rdsmith, ahendrickson) -- 0.8 Existing tests deflaked
- (major accomplishment), some new tests but not many.
-* Add tests of SSL client authentication (wtc) -- 0.0 Did not work on
- it.
-
-Fix bugs and clean up / refactor code
-
-* Fix download subsystem bugs - crashes, corruption, etc. (rdsmith,
- ahendrickson) -- 0.6 Fixed several bugs, but didn't get anywhere
- near as far with this as intended.
-* Clean up download subsystem code (rdsmith, ahendrickson) -- 0.7
- Control flow much cleaner, main path deraced. Two important
- refactors not done last quarter (dataflow, file determination); will
- be highpri this quarter.
-* Refactor safebrowsing code (lzheng)
-* Fix HTTP authentication bugs - background tabs, authentication
- freezes/crashes, Negotiate authentication failures on Unix. (asanka,
- cbentzel) - 0.7 Addressed a lot of key remaining issues, such as
- background tab.
-* Clean up network stack API - URLRequestContext, etc. (willchan)
-* Use base as a DLL, a prerequisite for using net as a DLL (rvargas) -
- 0.7 working on getting projects to compile cleanly
-
-Improve network performance / features
-
-* TLS enhancements - OCSP stapling in NSS and integration with Windows
- CryptoAPI, Snap Start (wtc, agl, rsleevi) -- 0.7 OCSP stapling
- turned on for Linux and Windows, but not Mac OS X. Finished
- implementation of Snap Start.
-* Add extension API for HTTP authentication prompt (stretch) (asanka,
- cbentzel) - 0.0 did not start
-* Make SPDY faster (mbelshe, willchan)
-* Relax single-writer, multi-reader locking of the http cache,
- allowing readers to start reading the parts of a resource that the
- writer has written (rvargas, gavinp) - 0.0, No progress.
-* Add server hint & prefetching support - Link: header and link
- rel=prefetch. (gavinp) - 0.5, link rel=prefetch is supported, link
- header is not.
-* Release binary exploration protection for safebrowsing (lzheng)
-* Continue disk cache performance and reliability experiments
- (rvargas) - 0.8, One is done, the other one is blocked on
- infrastructure.
-* Implement offline (network disconnected) detection for Mac and Linux
- (eroman)
-
-## Q4 2010 Objectives
-
-**Improve test coverage**
-
-* Implement <http://code.google.com/p/web-page-replay/> to provide
- more complete network stack coverage and catch performance
- regressions (tonyg,mbelshe) -- 0.5 lots of good progress; up and
- running, not yet done!
-* [Improve tests for HTTP
- authentication](http://www.chromium.org/developers/design-documents/http-authentication).
- (cbentzel, wtc) - 0.2 Added unit tests and manual system-level
- tests, but still need automated system level tests.
-* [Add tests for SSL client
- authentication](http://www.chromium.org/developers/design-documents/ssl-client-authentication).
- (wtc) -- 0.2. (by rsleevi) Implemented a better way to trust a test
- root CA that doesn't require changing the system certificate store.
- Regenerated test certificates to have long validity periods.
-
-## Fix bugs and clean up / refactor code
-
-* Fix bugs (everyone)
-* Improve network diagnostics (about:net-internals) to help fix bugs
- (mmenke, eroman)
-* Clean up / support previously neglected code (Downloads (rdsmith:
- 0.6), SafeBrowsing(lzheng: 0.6), HTTP Auth, etc) (rdsmith, lzheng,
- ahendrickson, cbentzel)
-* Clean up valgrind reported issues in network tests (everyone) --
- 0.3. Fixed some, but still have plenty more to fix.
-* Better modularize the network stack (willchan,eroman) -- 0.2. Lots
- of discussion, not many changes happened yet. A little work towards
- new URLRequestContexts
-
-## Improve network performance / features
-
-* Continue running cache experiments (request throttling, performance,
- reliability) (rvargas) -- 0.9 Constant monitoring of the experiments
-    and changes made as appropriate.
-* Relax SWMR locking of the http cache (rvargas, gavinp) -- 0.5 Work
- is under way, but nothing checked in yet.
-* Continue supporting SPDY development (mbelshe, etc) -- 0.6 SPDY up
- and running on all google.com. External partners starting to
- experiment.
-* TLS latency enhancements (False Start, Snap Start, etc) (agl, wtc)
- -- 0.6. Added a certificate verification result cache. False Start
- is enabled in M8, thanks to agl's hard work. OCSP stapling works on
- Linux.
-* Better support prefetching mechanisms (Link: and X-Purpose headers,
- link rel=prefetch, resource prediction, preconnection) (gavinp, jar)
-* Continue work towards HTTP pipelining (vandebo) -- 0.0. No progress.
-* Finish user certificate import and native SSL client authentication
- (wtc) -- 0.6. No progress on user certificate import. Finished
- native SSL client authentication (rsleevi wrote the original patch),
- which completed the switchover to NSS for SSL.
-* Detect network disconnectivity and handle it better (eroman)
-
-## Q3 2010 Objectives
-
-Annotations on the status of each objective (at the close of the quarter) shown
-in red.
-
-### High level
-
-* Measure performance.
-* Improve performance.
-* Investigate and fix bugs.
-* Enterprise features.
-
-### Specific items
-
-**Feature work and bug fixes for SSL library / crypto. (wtc, agl, rsleevi,
-davidben)**
-
-* Bring the NSS SSL library to feature parity with Windows Vista's
- SChannel. -- 0. Did not have time to work on this. Postponed to Q1
- 2011. Will work on native SSL client auth for NSS in Q4 2010.
-* Tackle long-standing bugs in Chrome's crypto and certificate code.
- -- 0.3. Fixed some certificate verification bugs in NSS and Chrome.
- Didn't have time to tackle the major items such as thread-safe
- certificate cache and certificate verification result cache.
-* [Certificate enrollment with the HTML &lt;keygen&gt;
- tag](http://code.google.com/p/chromium/issues/detail?id=148). --
- 0.7. davidben added UI and fixed many bugs in certificate
- enrollment. Remaining work is to [support all formats of
- application/x-x509-user-cert
- responses](http://code.google.com/p/chromium/issues/detail?id=37142),
- and then to test with various CAs.
-
-**Feature work on download handling (ahendrickson)**
-
-* Resume partially completed downloads, including across Chrome
- restarts. -- 0.5?; preliminary CL sent out
- (<http://codereview.chromium.org/3127008/show>)
-* Measure Chrome versus IE download performance to see whether it is
- in fact slower in chrome (user reports suggest this is the case). --
- 0
-
-**Improvements to cookie handling (rdsmith)**
-
-* Implement alternate eviction algorithm and measure impact (to reduce
- the cookies evicted while browsing). -- 1
-* (Stretch) [Restrict access of CookieMonster to IO
- Thread](http://code.google.com/p/chromium/issues/detail?id=44083).
- -- 0
-
-**URL Prefetching (gavinp)**
-
-* [Implement link
- rel=prefetch](http://code.google.com/p/chromium/issues/detail?id=13505)
- and measure impact. -- 1.0; implemented, measurement shows 10%
- improvement of PLT
-* Implement link HTTP headers and measure impact. -- 0.5; preliminary
- code reviews sent out.
-
-**HTTP cache (rvargas, gavinp)**
-
-* Simultaneous streaming readers on ranges in a cache entry (to
- support video prefetch for YouTube). -- 0
-* Experiment with [request throttling at the cache
- layer](http://code.google.com/p/chromium/issues/detail?id=10727) --
- 1.0
-
-**HTTP authentication (cbentzel)**
-
-* Integrated Authentication on all platforms. -- 0.9; NTLM on
- Linux/OSX not supported without auth prompt.
-* Add full proxy authentication support to
- [SocketStream](http://code.google.com/p/chromium/issues/detail?id=47069)
- and
- [SPDY](http://code.google.com/p/chromium/issues/detail?id=46620). --
- 0
-* [System level tests for
- NTLM/Negotiate](http://code.google.com/p/chromium/issues/detail?id=35021).
- -- 0
-
-**Simulated Network Tester (cbentzel, klm, tonyg)**
-
-* Implement basic pagecycler test over a DummyNet connection -- 0.7;
- work in progress for webpage replay
- (<http://code.google.com/p/web-page-replay/wiki/GettingStarted>)
-* Record and playback of Alexa 500 rather than static pages from 10
- years ago. -- 0
-* (Stretch) Minimize false positives enough to make this a standard
- builder. -- 0
-
-**Network Diagnostics (rdsmith, mmenke, eroman)**
-
-* Improve error pages to better communicate network error -- 0.7; new
- error codes for proxy and offline, and reworked some other confusing
- ones. Updated text in the works.
-* Improve error page to link to system network configurator -- 0; need
- to figure out sandboxable solution.
-* Improve network diagnostics tool for configuration problems -- 0; no
- changes
-
-**Proxy handling**
-
-* [Extension API for changing proxy
- settings](http://code.google.com/p/chromium/issues/detail?id=48930)
- (pamg) -- 0.5
-* [Execute PAC scripts out of
- process](http://code.google.com/p/chromium/issues/detail?id=11746)
- (eroman) -- 0; punted
-
-**Implement HTTP pipelining (vandebo)**
-
-* [crbug.com/8991](http://crbug.com/8991)
-
-**WebKit/Chrome network integration (tonyg)**
-
-* Support the WebTiming spec. -- 1.0; landed in Chrome 6.
-* [Enable persisting disk cache of pre-parsed
- javascript](http://code.google.com/p/chromium/issues/detail?id=32407).
- -- 0
-* Pass all of the BrowserScope tests -- 0.9; ToT Chromium scores
- 91/100 on the tests
-
-**SafeBrowsing (lzheng)**
-
-* [Add end to end tests for
- safe-browsing](http://code.google.com/p/chromium/issues/detail?id=47318)
- -- 1.0
-* Extract the safe browsing code to its own library that can be
- re-used by other projects -- 0
-
----
-
-## Past objectives
-
-Annotations on the status of each objective (at the close of the quarter) shown
-in red.
-
-### Milestone 6 (branch cut July 19, 2010)
-
-#### Run PAC scripts out of process
-
-[Move the evaluation of proxy auto-config scripts out of the browser
-process](http://code.google.com/p/chromium/issues/detail?id=11746) to a
-sandboxed process for better security. (eroman)
-
-Ended up doing multi-threaded PAC execution instead, to address performance
-problems associated with speculative requests + slow DNS (crbug.com/11079)
-
-#### Cache pre-parsed JavaScript
-
-The work on the HTTP cache side is done. Need to write the code for [WebKit and
-V8 use the interface](http://code.google.com/p/chromium/issues/detail?id=32407)
-and measure the performance impact. (tonyg, rvargas)
-
-Done. M6 has pre-parsed JS caching in the memory cache ON by default. Pre-parsed
-JS caching in the disk cache is OFF by default (--enable-preparsed-js-caching).
-
-#### Switch to NSS for SSL on Windows
-
-Use NSS for SSL on Windows by default. We need to modify NSS to [use Windows
-CryptoAPI for SSL client
-authentication](http://code.google.com/p/chromium/issues/detail?id=37560). (wtc)
-
-Done. NSS is being used for SSL on all platforms.
-
-#### Improve the network error page
-
-The network error page should [help the user diagnose and fix the
-problem](http://code.google.com/p/chromium/issues/detail?id=40431) (see also
-[issue 18673](http://code.google.com/p/chromium/issues/detail?id=18673)), rather
-than merely displaying a network error code. (eroman, jar, jcivelli)
-
-The UI of the error page has not been improved; however, some user-level
-connectivity tests have been added to help diagnose when a chronic network
-error is happening (chrome://net-internals/#tests).
-
-#### Implement SSLClientSocketPool
-
-This allows us to implement [late binding of SSL
-sockets](http://code.google.com/p/chromium/issues/detail?id=30357) and is a
-prerequisite refactor for speculative SSL pre-connection and pipelining.
-(vandebo)
-
-Done.
-
-#### HTTP authentication
-
-* Implement the [Negotiate (SPNEGO) authentication scheme on
- Linux and
- Mac](http://code.google.com/p/chromium/issues/detail?id=33033) using
- GSS-API. (ahendrickson)
- Almost completed.
-* Create [system-level tests for NTLM and Negotiate
- authentication](http://code.google.com/p/chromium/issues/detail?id=35021).
- (cbentzel)
- Hasn't been started yet.
-
-#### HTTP cache improvements
-
-* Improve the coordination between the memory cache (in WebCore)
- and disk cache (in the network stack). For example, memory cache
- accesses should count as HTTP cache accesses so that the HTTP cache
- knows how to better maintain its LRU ordering. (rvargas)
- Still needs investigation.
-* Define good cache performance metrics. Measure HTTP cache's
- hit/miss rates, including "near misses". (rvargas)
- Still needs investigation.
-* Make the [HTTP
- cache](http://code.google.com/p/chromium/issues/detail?id=26729) and
- [disk
- cache](http://code.google.com/p/chromium/issues/detail?id=26730)
- fully asynchronous. Right now the HTTP cache is serving the metadata
- synchronously, which may block the IO thread.
- Done.
-* Throttle the requests.
- This was dependent on making the disk cache fully asynchronous, which
- only just got finished.
-
-#### Network internals instrumentation, logging, and diagnostics
-
-* [Create a chrome://net page for debugging the network
- stack](http://code.google.com/p/chromium/issues/detail?id=37421).
- (eroman)
- * This will replace about:net-internals and about:net.
- * Allow tracing of network requests and their internal states.
- * Diagnosing performance problems.
- * Getting more information from users in bug reports.
- * Exploring and resetting internal caches.
-
-Done. Replaced the defunct about:net with the new about:net-internals, which
-instruments much more tracing information and supports active and passive
-logging, as well as log generation for bug reports.
-
-#### Define Chromium extensions API for networking
-
-Define an API for Chromium extensions to access the network stack. We already
-defined an API that exposes proxy settings to extensions. (willchan)
-
-Some drafts were circulated for network interception APIs, but work hasn't been
-started yet.
-
-The proxy settings API has been revived, and Pam is starting on it.
-
-#### SafeBrowsing
-
-This is a stretch goal because we may not have time to work on this in Q2.
-
-* Refactor SafeBrowsing code into an independent library that can be
- shared with other SafeBrowsing clients.
- Not started, however an owner was found.
-* Integrate with SafeBrowsing test suite.
- Work in progress.
-
-#### IPv6
-
-* The AI_ADDRCONFIG flag for getaddrinfo is ignored on some platforms,
- causing us to issue DNS queries for IPv6 addresses (the AAAA DNS
- records) unnecessarily. AI_ADDRCONFIG also does not work for
- loopback addresses. We should find out when to pass AF_UNSPEC with
- AI_ADDRCONFIG and when to pass AF_INET to getaddrinfo, so we get the
- best host name resolution performance. (jar)
-* Implement IPv6 extensions to
- [FTP](http://code.google.com/p/chromium/issues/detail?id=35050).
- (gavinp)
- Done. Support for EPSV.
-
-#### Speculative TCP pre-connection
-
-Jim Roskind has an incomplete [changelist](http://codereview.chromium.org/38007)
-that shows where the necessary hooks are for TCP pre-connection. (jar)
-
-* First do this for search (pre-connect while user types a query)
-* Eventually pre-connect based on DNS sub-resource history so that we
- pre-connect for sub-resource acquisition before containing page even
- arrives.
-* Preliminary implementation behind a flag will facilitate SPDY
- benchmarking of feature.
-
-Initial implementation has landed; it is off by default, but can be enabled with
-these flags:
-
-* --enable-preconnect
-* --preconnect-despite-proxy
-
-#### Improve WebKit resource loading
-
-Improve resource loading so we can pass all of the [network tests on
-Browserscope](http://www.browserscope.org/?category=network&v=top) (Chromium
-issues [13505](http://code.google.com/p/chromium/issues/detail?id=13505),
-[40014](http://code.google.com/p/chromium/issues/detail?id=40014),
-[40019](http://code.google.com/p/chromium/issues/detail?id=40019) and WebKit
-[bug 20710](https://bugs.webkit.org/show_bug.cgi?id=20710)). Most of the work
-will be in WebKit. (gavinp, tonyg).
-
-Work in progress.
-
-#### Certificate UI
-
-* [Linux certificate management
- UI](http://code.google.com/p/chromium/issues/detail?id=19991).
- (summer intern?)
- Work in progress.
-* UI for [&lt;keygen&gt; certificate
- enrollment](http://code.google.com/p/chromium/issues/detail?id=148)
- on Linux and Windows: right now &lt;keygen&gt; finishes silently.
- (summer intern?)
- Work in progress by summer intern.
-
----
-
-## Future
-
-#### Prioritizing HTTP transactions
-
-* Support loading resources in the background (for example, for
- updating the thumbnails in the New Tab Page) without impacting
- real-time performance if the user is doing something else.
-* Support dynamically adjusting priorities. If the user switches
- tabs, the newly focused tab should get a priority boost for its
- network requests.
-
-#### Other HTTP performance optimizations
-
-* Reuse HTTP keep-alive connections under more conditions
-* Resume SSL sessions under more conditions
-
-#### New unit tests and performance tests
-
-Some parts of the network stack, such as SSL, need more unit tests. Good
-test coverage helps bring up new ports. In addition, any bugs that get fixed
-should get unit tests to prevent regression.
-
-We should [add performance
-tests](http://code.google.com/p/chromium/issues/detail?id=6754) to measure the
-performance of the network stack and track it over time.
-
-#### Fix SSLUITests
-
-All the [SSLUITests are marked as
-flaky](http://code.google.com/p/chromium/issues/detail?id=40932) now.
-
-#### Better histograms
-
-We need better histograms for networking.
-
-#### Integrate loader-specific parts of WebKit into the network stack
-
-Parts of WebKit that throttle and prioritize resource load requests could be
-moved into the network stack. We can disable WebCore's queuing, and get more
-context about requests (flesh out the ResourceType enum).
-
-#### Captive portals
-
-[Avoid certificate name mismatch
-errors](http://code.google.com/p/chromium/issues/detail?id=71736) when visiting
-an HTTPS page through a captive portal.
-
-#### HTTP pipelining
-
-We should implement an [optional pipelining
-mode](http://code.google.com/p/chromium/issues/detail?id=8991).
-
-#### HTTP authentication
-
-* [Support NTLMv2 on Linux and
- Mac](http://code.google.com/p/chromium/issues/detail?id=22532)
-
-We also need to review the interaction between HTTP authentication and the disk
-cache. For example, [cached pages that were downloaded with authentication
-should not be retrieved without
-authentication](http://code.google.com/p/chromium/issues/detail?id=454).
-
-#### FTP
-
-* Reusing control connections.
-* Caching directory listings.
-
-We need to be able to [request FTP URLs through a
-proxy](http://code.google.com/p/chromium/issues/detail?id=11227).
-
-#### Preference service for network settings
-
-We strive to use the system network settings so that users can control the
-network settings of all applications easily. However, there will be some
-configuration settings specific to our network stack, so we need to have our own
-preference service for those settings. See also [issue
-266](http://code.google.com/p/chromium/issues/detail?id=266), in which some
-Firefox users demand that we not use the WinInet proxy settings (the de facto
-system proxy settings) on Windows.
-
-#### Share code between HTTP, SPDY, and WebSocket
-
-A lot of code was copied from net/http to net/socket_stream for WebSocket
-support. We should find out if some code can be shared.
-
-#### WPAD over DHCP
-
-Support [WPAD over
-DHCP](http://code.google.com/p/chromium/issues/detail?id=18575). \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-use-in-chromium/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-use-in-chromium/index.md
deleted file mode 100644
index 8e42da7bc55..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/network-stack-use-in-chromium/index.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: network-stack-use-in-chromium
-title: Network Stack Use in Chromium
----
-
-[TOC]
-
-## Overview
-
-Chromium is the primary consumer of the Chromium network stack. While there are
-other consumers, the main one is the Chromium browser process. For the most
-part, the network stack runs in the browser process's IO thread, and interfaces
-with the Chromium network stack via `ChromeURLRequestContext` and
-`URLRequest`/`URLFetcher`.
-
-### ChromeURLRequestContext
-
-The Chromium browser process uses several instances of
-`ChromeURLRequestContext`, a subclass of
-`[URLRequestContext](/developers/design-documents/network-stack#TOC-Overview)`.
-For a given profile, there are a number of `ChromeURLRequestContext` instances:
-
-* The "main" `ChromeURLRequestContext`. The majority of `URLRequest`s
- are associated with this `ChromeURLRequestContext` instance.
-* The "media" `ChromeURLRequestContext`. This one uses a different
- `[HttpCache](/developers/design-documents/network-stack/disk-cache)`
- instance that is optimized for large media requests (video). It
- shares many objects, such as the `HostResolver`,
- `[CookieMonster](/developers/design-documents/network-stack/cookiemonster)`,
- etc. with the main `ChromeURLRequestContext`.
-* The "extension" `ChromeURLRequestContext`. It shares many objects
- with the main `ChromeURLRequestContext`. This instance helps service
- chrome-extension:// requests.
-
-There are also some other `ChromeURLRequestContext`s, such as sync's
-`HttpBridge::RequestContext`, the `ConnectionTester`'s
-`ChromeURLRequestContext`, etc. These are not tied to the profile, although in a
-multi-profile Chromium, sync's `HttpBridge::RequestContext` would likely have to
-be tied to a profile.
-
-Note that the "off the record" (Incognito) profile uses a special OTR
-`ChromeURLRequestContext`, in addition to the media and extension contexts. It
-shares some objects with the main context, but notably not the `HttpCache`,
-`CookieMonster`, and other state that must be kept separate for an off the
-record profile. The `HostResolver` is shared.
-
-Many of the objects in `ChromeURLRequestContext` use `scoped_refptr`s to share
-ownership. This causes a lot of problems with object destruction ordering and
-reference cycles. Many of these members are being moved towards being owned
-externally, by the `IOThread`, although that choice may have to change to
-`ProfileImpl` or something to support multiple profiles in Chromium.
-
-Note that `ChromeURLRequestContext` currently contains a lot of extra members
-that don't exist in `URLRequestContext`. These are being moved out, because they
-have nothing to do with `URLRequestContext` and were only placed in
-`ChromeURLRequestContext` because it was a convenient per-profile object, and
-originally there weren't many `ChromeURLRequestContext` objects.
-
-### ChromeURLRequestContextGetter
-
-`ChromeURLRequestContext` is constructed and destroyed on the same thread.
-However, sometimes other threads need to hold references to them, even before
-they get constructed. `ChromeURLRequestContextGetter` is a handle for
-`ChromeURLRequestContext` that will lazily construct it on the first attempt to
-use it. Accesses to the contained `ChromeURLRequestContext` are only allowed on
-the IO thread.
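The lazy, thread-affine handle described above can be sketched roughly as follows (a hypothetical Python analogue for illustration, not the actual C++ class; the names `io_thread_ident` and `context_factory` are assumptions):

```python
import threading

class URLRequestContextGetter:
    """Sketch of the getter pattern: any thread may hold the handle,
    but the context is lazily built and accessed only on the IO thread."""

    def __init__(self, io_thread_ident, context_factory):
        self._io_thread_ident = io_thread_ident
        self._context_factory = context_factory  # builds the context on first use
        self._context = None

    def get_url_request_context(self):
        # Accesses are only allowed on the IO thread.
        assert threading.get_ident() == self._io_thread_ident, \
            "context may only be accessed on the IO thread"
        if self._context is None:
            # Lazily construct on the first attempt to use it.
            self._context = self._context_factory()
        return self._context
```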
-
-#### URLRequest usages
-
-* `ResourceDispatcherHost` creates `URLRequests` when the renderer or
- plugin processes issue resource requests. It retains pointers to all
- the `ChromeURLRequestContextGetters`.
-* `URLFetcher` is another frontend for `URLRequest`. It allows any
- thread with access to the appropriate
- `ChromeURLRequestContextGetter` to proxy requests to the IO thread.
- It provides a simplified delegate interface that returns the full
- response body as a single string. There are many users of
- `URLFetcher` in the Chromium browser process, including the omnibox,
- extensions, sync (which uses its own `ChromeURLRequestContext`),
- etc. Usually these objects use the "main"
- ChromeURLRequestContextGetter object.
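The proxy-to-IO-thread pattern `URLFetcher` provides can be sketched as follows (a hypothetical Python analogue, not the real C++ API; the queue-driven IO loop is an illustrative stand-in):

```python
import queue

class URLFetcher:
    """Sketch: any thread may start a fetch; the work is proxied to the
    IO thread; the delegate gets the full response body as one string."""

    def __init__(self, io_task_queue):
        self._io_tasks = io_task_queue  # drained by the IO thread loop

    def start(self, url, fetch_fn, on_complete):
        # Proxy the request (and its completion callback) to the IO thread.
        self._io_tasks.put(lambda: on_complete(url, fetch_fn(url)))

def run_io_thread(io_tasks):
    """Minimal IO-thread loop: run queued tasks until a None sentinel."""
    while True:
        task = io_tasks.get()
        if task is None:
            break
        task()
```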
-
-#### NetworkChangeNotifier
-
-* IntranetRedirectDetector - Lets us probe the network to see if ISPs
- are trying to redirect requests to non-existent hosts.
-* GoogleURLTracker - Lets us probe Google to see the appropriate
- domain for the search engine.
-
-#### Network predictor
-
-It uses `URLRequest::Interceptor` to hook into all `URLRequests` to watch them
-come and go, so it can analyze the referers to learn the subdomains used in
-subsequent requests. It uses this knowledge to predict network requests. For
-likely requests, it keeps around the pointer to the "main"
-`ChromeURLRequestContext`'s `HostResolver`, so it can perform DNS prefetching.
-For requests that are almost guaranteed to happen, it goes even further and uses
-the "main" `ChromeURLRequestContext`'s `HttpStreamFactory` to preconnect a
-socket. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/preconnect/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/preconnect/index.md
deleted file mode 100644
index b915e0e8439..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/preconnect/index.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: preconnect
-title: Preconnect
----
-
-[TOC]
-
-## Overview
-
-The networking stack implements the blink interface for the [W3C Resource Hints
-preconnect specification](http://www.w3.org/TR/resource-hints/#preconnect) to
-allow site developers to request connections to servers in anticipation of
-future requests. The predominant use cases are:
-
-1. Sub-resources for the current page that the preload scanner is not
- capable of identifying. This can include resources discovered as
- styles are applied (background images, fonts) as well as resources
- initiated by scripts (analytics beacons, ads domains, etc).
-2. Top-level domains used in future navigations. For example, clicking
- on a link to a URL shortener or click tracker that then redirects
- to the actual page. If the page knows the resulting final
- destination, it can start initiating the connection in parallel with
- the navigation to the redirector.
-
-[Issue 450682](https://code.google.com/p/chromium/issues/detail?id=450682)
-tracks the implementation as well as issues that came up during implementation
-and experimentation.
-
-## Implementation Notes
-
-### Connection Pools (not all connections are the same)
-
-Chrome maintains separate connection pools for the various protocols (http,
-https, ftp, etc). Additionally, separate connection pools are maintained for
-connections where cookies can be transmitted and connections where they cannot,
-in order to prevent tracking of users when a request is determined to be private
-(for example, when blocking cookies to third-party domains). The determination
-of which connection pool a request will use is made at request time and depends
-on the context in which the request is made. Top-level navigations go in the
-cookie-allowed pool, while sub-resource requests get assigned a pool based on
-the domain of the request, the domain of the page, and the user's privacy
-settings.
-
-Connections cannot move from one pool to another, so the determination of which
-pool a connection will be assigned to must be made when the connection is being
-established. In the case of TLS connections, the cost of not using a
-preconnected connection can be quite high, particularly on mobile devices with
-limited resources.
-
-The initial implementation for preconnect did not take private mode into account
-and all connections were pre-connected in the cookies-allowed connection pool
-leading to
-[situations](https://code.google.com/p/chromium/issues/detail?id=468005) where
-the preconnected connection was not leveraged for the future request.
-
-### Preconnect Connection Pool Selection Heuristic (proposed)
-
-The two different use cases for preconnect require different treatment for
-determining which connection pool a connection should belong to. In the case of
-a sub-resource request, the document URL should be considered and the relevant
-privacy settings should be evaluated. In the case of a top-level navigation, the
-document URL should NOT be considered. There are no hints that the site owner
-can provide to indicate what kind of request will be needed, so it is up to the
-browser to guess correctly.
-
-For preconnect requests that are initiated while the document is being loaded
-(before the onload event while the HTML is being parsed and scripts are being
-executed), Chrome will assume that preconnect requests are going to be for
-sub-resources that are going to be requested in the context of the current
-document and the connection pool will be selected by taking the owning
-document's URL into account.
-
-For preconnect requests that are initiated after the document has finished
-loading, Chrome will assume that preconnect requests will be used for future
-page navigations and the connections will not take the current document's URL
-into account when selecting a connection pool (which effectively means the
-connection will be established in the allows-cookies pool).
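The proposed heuristic above can be sketched as follows (illustrative Python, not Chromium code; the parameter names are assumptions):

```python
from enum import Enum

class Pool(Enum):
    ALLOWS_COOKIES = "allows-cookies"
    BLOCKS_COOKIES = "blocks-cookies"

def select_preconnect_pool(document_loaded, target_origin, document_origin,
                           third_party_cookies_blocked):
    """Guess the connection pool for a preconnect hint."""
    if document_loaded:
        # After onload: assume a future top-level navigation, so the
        # current document is ignored (effectively allows-cookies).
        return Pool.ALLOWS_COOKIES
    # Before onload: assume a sub-resource fetch for the current document,
    # so the document's origin and privacy settings are taken into account.
    if target_origin != document_origin and third_party_cookies_blocked:
        return Pool.BLOCKS_COOKIES
    return Pool.ALLOWS_COOKIES
```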
-
-This does leave a few possible cases for mis-guessing but should handle the vast
-majority of cases. Specifically:
-
-1. Top-level navigations that are preconnected before the page has
- finished loading may end up in the wrong connection pool as the
- document's URL will be considered.
-2. Sub-resource requests that are preconnected after the page has
- finished loading may end up in the wrong pool if the request
- requires a private connection. \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/fox-proxy-settings.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/fox-proxy-settings.png.sha1
deleted file mode 100644
index b1ddec044ff..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/fox-proxy-settings.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fba9dfa833a4210f919ab35cc4f9c0183fd32745 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.dot b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.dot
deleted file mode 100644
index 1275d3249d4..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.dot
+++ /dev/null
@@ -1,9 +0,0 @@
-digraph Fallback {
-
- Automatic [label="Automatic (PAC) configuration"];
- Direct;
- Error [color="red", style=bold];
-
- Automatic -> Direct [label="(On Javascript runtime error)"];
- Automatic -> Error [label="(On exhaustion of proxy list)"];
-}
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.png.sha1
deleted file mode 100644
index 50dc6ea6765..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-fa2d3758aed6f415e3109d58e5fdef6b97e9d066 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.dot b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.dot
deleted file mode 100644
index f5f2e7cd519..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.dot
+++ /dev/null
@@ -1,27 +0,0 @@
-digraph Fallback {
-
- subgraph cluster_automatic {
- label="Automatic settings";
- style=dotted;
-
- WPAD [label="WPAD (Automatically detect)"];
- CustomPac [label="Custom PAC script"];
- }
-
- subgraph cluster_manual {
- label="Manual settings";
- style=dotted;
-
- ProxyForAllSchemes [label="Single proxy server for all schemes"];
- SchemeSpecific [label="Proxy server per URL scheme"];
- Socks [label="SOCKS proxy server"];
- }
-
- Direct;
-
- WPAD -> CustomPac [style=bold, color="blue"];
- CustomPac -> ProxyForAllSchemes [style=bold, color="blue"];
- ProxyForAllSchemes -> SchemeSpecific;
- SchemeSpecific -> Socks;
- Socks -> Direct;
-}
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.png.sha1
deleted file mode 100644
index 1897a02a2fa..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-6f9bbc74751b42425aa60aafff220e9ce5317acf \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.dot b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.dot
deleted file mode 100644
index 95809842f8c..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.dot
+++ /dev/null
@@ -1,7 +0,0 @@
-digraph Fallback {
-
- Manual [label="Manual configuration"];
- Error [color="red", style=bold];
-
- Manual -> Error [label="(On any error)"];
-}
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.png.sha1
deleted file mode 100644
index 5474bdf7395..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-eea1f06bd7d27340cda669c0ed8e79bcc2f9ca2f \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-server-settings.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-server-settings.png.sha1
deleted file mode 100644
index 34d4364681b..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-server-settings.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-ca9e740f8f35de78ce45e5aee53ce1180dda2652 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-settings.png.sha1 b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-settings.png.sha1
deleted file mode 100644
index 9fe486d1be1..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-settings.png.sha1
+++ /dev/null
@@ -1 +0,0 @@
-88ac9351c0c3fbc7cc73a1506a3c2db3ecc90a79 \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/index.md
deleted file mode 100644
index c58b436b204..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/proxy-settings-fallback/index.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: proxy-settings-fallback
-title: Proxy settings and fallback
----
-
-On Windows, Chromium uses WinInet's proxy settings.
-
-Consequently, it is important that Chromium interpret and apply these proxy
-settings in the same manner as WinInet. Otherwise the same proxy settings may
-give different results in Chromium than in other WinInet-based applications
-(like Internet Explorer).
-
-In Firefox, the proxy settings are divided into four different modes using radio
-buttons. This modal approach makes it pretty easy to understand which proxy
-settings will be used, since there is only one set of choices.
-
-<img alt="image"
-src="/developers/design-documents/network-stack/proxy-settings-fallback/fox-proxy-settings.png">
-
-However in Internet Explorer, the settings are more complex.
-
-All of the various settings are presented in the UI as optional checkboxes.
-
-This makes it unclear what is supposed to happen when conflicting choices are
-given.
-
-Screenshot of IE's settings dialog:
-
-<table>
-<tr>
-<td> <img alt="image" src="/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-settings.png"> </td>
-<td> + </td>
-<td> <img alt="image" src="/developers/design-documents/network-stack/proxy-settings-fallback/ie-proxy-server-settings.png"> </td>
-</tr>
-</table>
-
-## How WinInet resolves the ambiguity
-
-\[The following was determined experimentally using Internet Explorer 8 on
-Windows XP. (Couldn't find an official explanation of the steps to link to).\]
-
-Internet Explorer applies these settings using a fallback scheme during
-initialization:
-
-<img alt="image"
-src="/developers/design-documents/network-stack/proxy-settings-fallback/ie-fallback.png">
-
-* Fallback between the automatic settings is represented with a **blue
- arrow**, and occurs whenever:
- * The setting is not specified.
- * The underlying PAC script failed to be downloaded.
- * The underlying PAC script failed to be parsed.
-* Fallback between the manual settings is represented by a **black
- arrow**, and occurs whenever:
- * The setting is not specified.
-* The bypass list is applied ONLY within the manual settings.
-
-TODO(eroman): haven't verified fallback for SOCKS.
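The initialization-time fallback above can be sketched as follows (assumed semantics inferred from the experiments; the dict keys and `pac_usable` hook are hypothetical):

```python
def choose_initial_proxy_config(settings, pac_usable=lambda source: True):
    """settings: dict with optional keys 'wpad', 'pac_url', 'single_proxy',
    'per_scheme_proxies', 'socks_proxy'. A PAC source is skipped when the
    script cannot be downloaded or parsed (pac_usable returns False)."""
    # Automatic settings first (blue arrows): WPAD, then a custom PAC script.
    for key in ("wpad", "pac_url"):
        if settings.get(key) and pac_usable(settings[key]):
            return ("pac", settings[key])
    # Then the manual settings (black arrows), in order.
    for key in ("single_proxy", "per_scheme_proxies", "socks_proxy"):
        if settings.get(key):
            return (key, settings[key])
    return ("direct", None)
```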
-
-There is a secondary fallback mechanism at runtime:
-
-<img alt="image"
-src="/developers/design-documents/network-stack/proxy-settings-fallback/ie-auto-fallback.png">
-<img alt="image"
-src="/developers/design-documents/network-stack/proxy-settings-fallback/ie-manual-fallback.png">
-
-So for example, if auto-detect was chosen during the initialization sequence,
-but the PAC script fails during execution of FindProxyForURL(), it will fall
-back to direct (regardless of whether there are manual proxy settings). \ No newline at end of file
diff --git a/chromium/docs/website/site/developers/design-documents/network-stack/socks-proxy/index.md b/chromium/docs/website/site/developers/design-documents/network-stack/socks-proxy/index.md
deleted file mode 100644
index 95d23cd782b..00000000000
--- a/chromium/docs/website/site/developers/design-documents/network-stack/socks-proxy/index.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-breadcrumbs:
-- - /developers
- - For Developers
-- - /developers/design-documents
- - Design Documents
-- - /developers/design-documents/network-stack
- - Network Stack
-page_name: socks-proxy
-title: Configuring a SOCKS proxy server in Chrome
----
-
-To configure Chrome to proxy traffic through the SOCKS v5 proxy server
-***myproxy:8080***, launch chrome with these two command-line flags:
-
---proxy-server="socks5://***myproxy:8080***"
-
---host-resolver-rules="MAP \* ~NOTFOUND , EXCLUDE ***myproxy***"
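-
-For illustration, here is a minimal sketch of the full invocation (assuming a
-Linux system where the binary is named google-chrome; "myproxy:8080" is a
-placeholder for your actual proxy host and port):
-
-```shell
-# Hypothetical launch command; the binary name varies by platform
-# (e.g. chromium, google-chrome, or the full path to chrome.exe).
-google-chrome \
-  --proxy-server="socks5://myproxy:8080" \
-  --host-resolver-rules="MAP * ~NOTFOUND , EXCLUDE myproxy"
-```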
-
-## Explanation
-
-The --proxy-server="socks5://myproxy:8080" flag tells Chrome to send all http://
-and https:// URL requests through the SOCKS proxy server "myproxy:8080", using
-version 5 of the SOCKS protocol. The hostname for these URLs will be resolved by
-the *proxy server*, and not locally by Chrome.
-
-* NOTE: proxying of ftp:// URLs through a SOCKS proxy is not yet
- implemented.
-
-The --proxy-server flag applies to URL loads only. Other components of Chrome
-may issue DNS resolves *directly* and hence bypass this proxy server. The most
-notable such component is the "DNS prefetcher". Hence, if DNS prefetching is
-not disabled, you will still see local DNS requests being issued by Chrome
-despite having specified a SOCKS v5 proxy server.
-
-Disabling DNS prefetching would solve this problem, but it is a fragile
-solution, since one needs to be aware of every area in Chrome that issues raw
-DNS requests. To address this, the second flag, --host-resolver-rules="MAP \*
-~NOTFOUND , EXCLUDE myproxy", is a catch-all that prevents Chrome from sending
-any DNS requests over the network: it causes all DNS resolves to fail
-immediately with a "not found" result. The "EXCLUDE" clause makes an exception
-for "myproxy", because otherwise Chrome would be unable to resolve the address
-of the SOCKS proxy server itself, and all requests would necessarily fail with
-PROXY_CONNECTION_FAILED.
-
-## Debugging
-
-There are many intricacies to getting proxy settings configured as you intend:
-
-* Different profiles can use different proxy settings
-* Extensions can modify the proxy settings
-* If using the system setting, other applications can change them, and
- there can be per-connection settings.
-* The proxy settings might include fallbacks to other proxies, or
- direct connections
-* Plugins (for instance Flash and Java applets) can bypass the Chrome
- proxy settings altogether
-* Other third-party components in Chrome might issue DNS resolves
- directly, or bypass Chrome's proxy settings.
-
-The first thing to check when debugging is to look at the Proxy tab of
-about:net-internals, and verify what the effective proxy settings are:
-
-chrome://net-internals/#proxy
-
-Next, take a look at the DNS tab of about:net-internals to make sure Chrome
-isn't issuing local DNS resolves:
-
-chrome://net-internals/#dns
-
-Next, to trace the proxy logic for individual requests in Chrome take a look at
-the Events tab of about:net-internals:
-
-chrome://net-internals/#events \ No newline at end of file