Diffstat (limited to 'doc/administration')
-rw-r--r--  doc/administration/container_registry.md  7
-rw-r--r--  doc/administration/geo/disaster_recovery/bring_primary_back.md  2
-rw-r--r--  doc/administration/geo/disaster_recovery/planned_failover.md  2
-rw-r--r--  doc/administration/geo/replication/database.md  34
-rw-r--r--  doc/administration/geo/replication/remove_geo_node.md  12
-rw-r--r--  doc/administration/high_availability/README.md  9
-rw-r--r--  doc/administration/integration/terminal.md  4
-rw-r--r--  doc/administration/raketasks/geo.md  4
-rw-r--r--  doc/administration/raketasks/project_import_export.md  16
-rw-r--r--  doc/administration/raketasks/storage.md  29
10 files changed, 58 insertions, 61 deletions
diff --git a/doc/administration/container_registry.md b/doc/administration/container_registry.md
index 04f52783d22..3e3054af509 100644
--- a/doc/administration/container_registry.md
+++ b/doc/administration/container_registry.md
@@ -696,12 +696,11 @@ project or branch name. Special characters can include:
- Trailing hyphen/dash
- Double hyphen/dash
-To get around this, you can [change the group path](../user/group/index.md#changing-a-groups-path),
-[change the project path](../user/project/settings/index.md#renaming-a-repository) or change the
-branch name. Another option is to create a [push rule](../push_rules/push_rules.html) to prevent
+To get around this, you can [change the group path](../user/group/index.md#changing-a-groups-path),
+[change the project path](../user/project/settings/index.md#renaming-a-repository) or change the
+branch name. Another option is to create a [push rule](../push_rules/push_rules.html) to prevent
this at the instance level.
-
[ce-18239]: https://gitlab.com/gitlab-org/gitlab-ce/issues/18239
[docker-insecure-self-signed]: https://docs.docker.com/registry/insecure/#use-self-signed-certificates
[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure
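As a concrete illustration of the branch-rename workaround mentioned above, a branch whose name the registry rejects can be recreated under a safe name (the branch names below are made up for the example):

```bash
# Rename a branch with a leading underscore to a registry-safe name,
# push the new name, and remove the old one from the remote.
git branch -m _feature-cache feature-cache
git push origin feature-cache
git push origin --delete _feature-cache
```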
diff --git a/doc/administration/geo/disaster_recovery/bring_primary_back.md b/doc/administration/geo/disaster_recovery/bring_primary_back.md
index 30d35df25c7..64d7ef2d609 100644
--- a/doc/administration/geo/disaster_recovery/bring_primary_back.md
+++ b/doc/administration/geo/disaster_recovery/bring_primary_back.md
@@ -30,7 +30,7 @@ To bring the former **primary** node up to date:
`sudo systemctl enable gitlab-runsvdir`. For CentOS 6, you need to install
the GitLab instance from scratch and set it up as a **secondary** node by
following [Setup instructions][setup-geo]. In this case, you don't need to follow the next step.
-
+
NOTE: **Note:** If you [changed the DNS records](index.md#step-4-optional-updating-the-primary-domain-dns-record)
for this node during the disaster recovery procedure, you may need to [block
all the writes to this node](planned_failover.md#prevent-updates-to-the-primary-node)
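For reference, re-enabling the init system supervision mentioned in the step above is the usual systemd pair (the explicit start command is an assumption here, not quoted from the hunk):

```bash
# Re-enable GitLab's runit supervision at boot, then start it now.
sudo systemctl enable gitlab-runsvdir
sudo systemctl start gitlab-runsvdir
```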
diff --git a/doc/administration/geo/disaster_recovery/planned_failover.md b/doc/administration/geo/disaster_recovery/planned_failover.md
index 393326c3347..0c2d48854f0 100644
--- a/doc/administration/geo/disaster_recovery/planned_failover.md
+++ b/doc/administration/geo/disaster_recovery/planned_failover.md
@@ -187,7 +187,7 @@ access to the **primary** node during the maintenance window.
before it is completed will cause the work to be lost.
1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
following conditions to be true of the **secondary** node you are failing over to:
-
+
- All replication meters reach 100% replicated, 0% failures.
- All verification meters reach 100% verified, 0% failures.
- Database replication lag is 0ms.
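If you prefer to watch these conditions from a shell rather than the Admin Area, the Geo status Rake task (assumed here to be available on Omnibus installs) reports the same figures on the **secondary** node:

```bash
# Run on the **secondary** node; prints repository and verification
# percentages and the database replication lag.
sudo gitlab-rake geo:status
```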
diff --git a/doc/administration/geo/replication/database.md b/doc/administration/geo/replication/database.md
index 71a1ce87833..9272287a4a7 100644
--- a/doc/administration/geo/replication/database.md
+++ b/doc/administration/geo/replication/database.md
@@ -116,7 +116,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
by default. However, Geo requires the **secondary** node to be able to
connect to the **primary** node's database. For this reason, we need the address of
each node.
-
+
NOTE: **Note:** For external PostgreSQL instances, see [additional instructions](external_database.md).
If you are using a cloud provider, you can look up the addresses for each
@@ -424,22 +424,22 @@ data before running `pg_basebackup`.
- If PostgreSQL is listening on a non-standard port, add `--port=` as well.
- If your database is too large to be transferred in 30 minutes, you will need
- to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
- initial replication to take under an hour.
- - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
- (e.g., you know the network path is secure, or you are using a site-to-site
- VPN). This is **not** safe over the public Internet!
- - You can read more details about each `sslmode` in the
- [PostgreSQL documentation][pg-docs-ssl];
- the instructions above are carefully written to ensure protection against
- both passive eavesdroppers and active "man-in-the-middle" attackers.
- - Change the `--slot-name` to the name of the replication slot
- to be used on the **primary** database. The script will attempt to create the
- replication slot automatically if it does not exist.
- - If you're repurposing an old server into a Geo **secondary** node, you'll need to
- add `--force` to the command line.
- - When not in a production machine you can disable backup step if you
- really sure this is what you want by adding `--skip-backup`
+ to increase the timeout, e.g., `--backup-timeout=3600` if you expect the
+ initial replication to take under an hour.
+ - Pass `--sslmode=disable` to skip PostgreSQL TLS authentication altogether
+ (e.g., you know the network path is secure, or you are using a site-to-site
+ VPN). This is **not** safe over the public Internet!
+ - You can read more details about each `sslmode` in the
+ [PostgreSQL documentation][pg-docs-ssl];
+ the instructions above are carefully written to ensure protection against
+ both passive eavesdroppers and active "man-in-the-middle" attackers.
+ - Change the `--slot-name` to the name of the replication slot
+ to be used on the **primary** database. The script will attempt to create the
+ replication slot automatically if it does not exist.
+ - If you're repurposing an old server into a Geo **secondary** node, you'll need to
+ add `--force` to the command line.
+ - When not on a production machine, you can disable the backup step if you
+ are really sure this is what you want by adding `--skip-backup`.
The replication process is now complete.
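For orientation, the flags discussed in this hunk belong to the replication script invoked earlier in that guide; a minimal sketch with placeholder host and slot values (command name and defaults assumed from the surrounding document):

```bash
# Placeholder values: adjust the slot name, primary host, and timeout.
sudo gitlab-ctl replicate-geo-database \
  --slot-name=geo_secondary_example \
  --host=primary.geo.example.com \
  --backup-timeout=3600
```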
diff --git a/doc/administration/geo/replication/remove_geo_node.md b/doc/administration/geo/replication/remove_geo_node.md
index 4f64e21f8ef..e24eb2bd428 100644
--- a/doc/administration/geo/replication/remove_geo_node.md
+++ b/doc/administration/geo/replication/remove_geo_node.md
@@ -19,10 +19,10 @@ Once removed from the Geo admin page, you must stop and uninstall the **secondar
```bash
# Stop gitlab and remove its supervision process
sudo gitlab-ctl uninstall
-
+
# Debian/Ubuntu
sudo dpkg --remove gitlab-ee
-
+
# Redhat/Centos
sudo rpm --erase gitlab-ee
```
@@ -32,9 +32,9 @@ Once GitLab has been uninstalled from the **secondary** node, the replication sl
1. On the **primary** node, start a PostgreSQL console session:
```bash
- sudo gitlab-psql
+ sudo gitlab-psql
```
-
+
NOTE: **Note:**
Using `gitlab-rails dbconsole` will not work, because managing replication slots requires superuser permissions.
@@ -43,9 +43,9 @@ Once GitLab has been uninstalled from the **secondary** node, the replication sl
```sql
SELECT * FROM pg_replication_slots;
```
-
+
1. Remove the replication slot for the **secondary** node:
```sql
SELECT pg_drop_replication_slot('<name_of_slot>');
- ```
+ ```
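If the slot name is not known in advance, the two SQL steps above can be combined into a single pass that drops only inactive slots (a sketch, assuming `gitlab-psql` forwards `-c` to `psql`; review the first query's output before dropping anything):

```bash
# List slots left behind by the removed secondary, then drop the inactive ones.
sudo gitlab-psql -c "SELECT slot_name, active FROM pg_replication_slots;"
sudo gitlab-psql -c "SELECT pg_drop_replication_slot(slot_name) FROM pg_replication_slots WHERE NOT active;"
```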
diff --git a/doc/administration/high_availability/README.md b/doc/administration/high_availability/README.md
index e81d2741082..4317d14ba68 100644
--- a/doc/administration/high_availability/README.md
+++ b/doc/administration/high_availability/README.md
@@ -162,14 +162,14 @@ contention due to certain workloads.
- **Status:** Work-in-progress
- **Supported Users (approximate):** 10,000
-- **Related Issues:** [gitlab-com/support/support-team-meta#1513](https://gitlab.com/gitlab-com/support/support-team-meta/issues/1513),
+- **Related Issues:** [gitlab-com/support/support-team-meta#1513](https://gitlab.com/gitlab-com/support/support-team-meta/issues/1513),
[gitlab-org/quality/team-tasks#110](https://gitlab.com/gitlab-org/quality/team-tasks/issues/110)
The Support and Quality teams are in the process of building and performance testing
an environment that will support about 10,000 users. The specifications below
-are a work-in-progress representation of the work so far. Quality will be
-certifying this environment in FY20-Q2. The specifications may be adjusted
-prior to certification based on performance testing.
+are a work-in-progress representation of the work so far. Quality will be
+certifying this environment in FY20-Q2. The specifications may be adjusted
+prior to certification based on performance testing.
- 3 PostgreSQL - 4 CPU, 8GB RAM per node
- 1 PgBouncer - 2 CPU, 4GB RAM
@@ -211,4 +211,3 @@ separately:
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
1. [Monitoring node (Prometheus and Grafana)](monitoring_node.md)
-
diff --git a/doc/administration/integration/terminal.md b/doc/administration/integration/terminal.md
index c34858cd0db..24c9cc0bea9 100644
--- a/doc/administration/integration/terminal.md
+++ b/doc/administration/integration/terminal.md
@@ -41,9 +41,9 @@ detail below.
- Every session URL that is created has an authorization header that needs to be sent, to establish a `wss` connection.
- The session URL is not exposed to the users in any way. GitLab holds all the state internally and proxies accordingly.
-## Enabling and disabling terminal support
+## Enabling and disabling terminal support
-NOTE: **Note:** AWS Elastic Load Balancers (ELBs) do not support web sockets.
+NOTE: **Note:** AWS Elastic Load Balancers (ELBs) do not support web sockets.
AWS Application Load Balancers (ALBs) must be used if you want web terminals
to work. See [AWS Elastic Load Balancing Product Comparison](https://aws.amazon.com/elasticloadbalancing/features/#compare)
for more information.
diff --git a/doc/administration/raketasks/geo.md b/doc/administration/raketasks/geo.md
index 435aed8c413..691d34ab7fa 100644
--- a/doc/administration/raketasks/geo.md
+++ b/doc/administration/raketasks/geo.md
@@ -2,7 +2,7 @@
## Git housekeeping
-There are few tasks you can run to schedule a git housekeeping to start at the
+There are a few tasks you can run to schedule Git housekeeping to start at the
next repository sync in a **Secondary node**:
### Incremental Repack
@@ -23,7 +23,7 @@ sudo -u git -H bundle exec rake geo:git:housekeeping:incremental_repack RAILS_EN
### Full Repack
-This is equivalent of running `git repack -d -A --pack-kept-objects` on a
+This is the equivalent of running `git repack -d -A --pack-kept-objects` on a
_bare_ repository, which will optionally write a reachability bitmap index
when this is enabled in GitLab.
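As with the incremental repack shown in the hunk above, the full repack is scheduled through its own Rake task; a sketch of both invocation styles (the `full_repack` task name is assumed to mirror the incremental one):

```bash
# Omnibus installation
sudo gitlab-rake geo:git:housekeeping:full_repack

# Source installation
sudo -u git -H bundle exec rake geo:git:housekeeping:full_repack RAILS_ENV=production
```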
diff --git a/doc/administration/raketasks/project_import_export.md b/doc/administration/raketasks/project_import_export.md
index 0599e12b913..138db4dfbb1 100644
--- a/doc/administration/raketasks/project_import_export.md
+++ b/doc/administration/raketasks/project_import_export.md
@@ -2,14 +2,14 @@
>**Note:**
>
-> - [Introduced][ce-3050] in GitLab 8.9.
-> - Importing will not be possible if the import instance version is lower
-> than that of the exporter.
-> - For existing installations, the project import option has to be enabled in
-> application settings (`/admin/application_settings`) under 'Import sources'.
-> - The exports are stored in a temporary [shared directory][tmp] and are deleted
-> every 24 hours by a specific worker.
-> - ImportExport can use object storage automatically starting from GitLab 11.3
+> - [Introduced][ce-3050] in GitLab 8.9.
+> - Importing will not be possible if the import instance version is lower
+> than that of the exporter.
+> - For existing installations, the project import option has to be enabled in
+> application settings (`/admin/application_settings`) under 'Import sources'.
+> - The exports are stored in a temporary [shared directory][tmp] and are deleted
+> every 24 hours by a specific worker.
+> - ImportExport can use object storage automatically starting from GitLab 11.3
The GitLab Import/Export version can be checked by using:
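The hunk ends just before the command that this sentence introduces; for reference, the version is typically checked as follows (a sketch, assuming the standard `gitlab:import_export:version` task):

```bash
# Omnibus installation
sudo gitlab-rake gitlab:import_export:version

# Source installation
bundle exec rake gitlab:import_export:version RAILS_ENV=production
```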
diff --git a/doc/administration/raketasks/storage.md b/doc/administration/raketasks/storage.md
index 42a1a1c2e60..2f83dd17d9f 100644
--- a/doc/administration/raketasks/storage.md
+++ b/doc/administration/raketasks/storage.md
@@ -1,17 +1,17 @@
# Repository Storage Rake Tasks
-This is a collection of rake tasks you can use to help you list and migrate
-existing projects and attachments associated with it from Legacy storage to
+This is a collection of rake tasks you can use to help you list and migrate
+existing projects and their associated attachments from Legacy storage to
the new Hashed storage type.
You can read more about the storage types [here][storage-types].
## Migrate existing projects to Hashed storage
-Before migrating your existing projects, you should
+Before migrating your existing projects, you should
[enable hashed storage][storage-migration] for the new projects as well.
-This task will schedule all your existing projects and attachments associated with it to be migrated to the
+This task will schedule all your existing projects and their associated attachments to be migrated to the
**Hashed** storage type:
**Omnibus Installation**
@@ -30,15 +30,15 @@ They both also accept a range as environment variable:
```bash
# to migrate any non migrated project from ID 20 to 50.
-export ID_FROM=20
+export ID_FROM=20
export ID_TO=50
```
You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
-There is a specific Queue you can watch to see how long it will take to finish:
+There is a specific Queue you can watch to see how long it will take to finish:
`hashed_storage:hashed_storage_project_migrate`
-After it reaches zero, you can confirm every project has been migrated by running the commands bellow.
+After it reaches zero, you can confirm every project has been migrated by running the commands below.
If you find it necessary, you can run this migration script again to schedule missing projects.
Any error or warning will be logged in Sidekiq's log file.
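One way to watch the queue named above from a shell, rather than the Admin Area, is to ask Sidekiq for its size directly (a sketch; the runner invocation and queue name handling are assumptions, and the Admin Area remains the documented route):

```bash
# Prints the number of jobs still waiting in the migration queue.
sudo gitlab-rails runner 'puts Sidekiq::Queue.new("hashed_storage:hashed_storage_project_migrate").size'
```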
@@ -55,7 +55,7 @@ If you need to rollback the storage migration for any reason, you can follow the
NOTE: **Note:** Hashed Storage will be required in a future version of GitLab.
-To prevent new projects from being created in the Hashed storage,
+To prevent new projects from being created in the Hashed storage,
you need to undo the [enable hashed storage][storage-migration] changes.
This task will schedule all your existing projects and associated attachments to be rolled back to the
@@ -77,15 +77,14 @@ Both commands accept a range as environment variable:
```bash
# to rollback any migrated project from ID 20 to 50.
-export ID_FROM=20
+export ID_FROM=20
export ID_TO=50
```
You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
On the **Queues** tab, you can watch the `hashed_storage:hashed_storage_project_rollback` queue to see how long the process will take to finish.
-
-After it reaches zero, you can confirm every project has been rolled back by running the commands bellow.
+After it reaches zero, you can confirm every project has been rolled back by running the commands below.
If some projects weren't rolled back, you can run this rollback script again to schedule further rollbacks.
Any error or warning will be logged in Sidekiq's log file.
@@ -106,7 +105,7 @@ sudo gitlab-rake gitlab:storage:legacy_projects
sudo -u git -H bundle exec rake gitlab:storage:legacy_projects RAILS_ENV=production
```
-------
+---
To list projects using **Legacy** storage:
@@ -139,7 +138,7 @@ sudo gitlab-rake gitlab:storage:hashed_projects
sudo -u git -H bundle exec rake gitlab:storage:hashed_projects RAILS_ENV=production
```
-------
+---
To list projects using **Hashed** storage:
@@ -171,7 +170,7 @@ sudo gitlab-rake gitlab:storage:legacy_attachments
sudo -u git -H bundle exec rake gitlab:storage:legacy_attachments RAILS_ENV=production
```
-------
+---
To list project attachments using **Legacy** storage:
@@ -203,7 +202,7 @@ sudo gitlab-rake gitlab:storage:hashed_attachments
sudo -u git -H bundle exec rake gitlab:storage:hashed_attachments RAILS_ENV=production
```
-------
+---
To list project attachments using **Hashed** storage:
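The final hunk is cut off after this prompt; for completeness, the listing tasks referenced throughout that page follow a common pattern (a sketch, assuming the task name mirrors the `list_*` form used for the other storage listings):

```bash
# Omnibus installation
sudo gitlab-rake gitlab:storage:list_hashed_attachments

# Source installation
sudo -u git -H bundle exec rake gitlab:storage:list_hashed_attachments RAILS_ENV=production
```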