author | GitLab Bot <gitlab-bot@gitlab.com> | 2022-08-09 21:09:21 +0000
committer | GitLab Bot <gitlab-bot@gitlab.com> | 2022-08-09 21:09:21 +0000
commit | c03dce2dc9f0f257faac4d43d208d96320ca5c0e (patch)
tree | 3da57da8f1526935326a10f538bac15797e5f638 /doc
parent | 283318c20561cc040b62397060771efa74db0d90 (diff)
download | gitlab-ce-c03dce2dc9f0f257faac4d43d208d96320ca5c0e.tar.gz
Add latest changes from gitlab-org/gitlab@master
Diffstat (limited to 'doc')
-rw-r--r-- | doc/administration/geo/replication/version_specific_upgrades.md | 14
-rw-r--r-- | doc/api/users.md | 2
-rw-r--r-- | doc/development/documentation/site_architecture/folder_structure.md | 2
-rw-r--r-- | doc/development/documentation/site_architecture/global_nav.md | 2
-rw-r--r-- | doc/development/documentation/site_architecture/index.md | 236
-rw-r--r-- | doc/development/fe_guide/graphql.md | 45
-rw-r--r-- | doc/development/uploads/working_with_uploads.md | 375
-rw-r--r-- | doc/integration/omniauth.md | 59
-rw-r--r-- | doc/user/admin_area/index.md | 4
-rw-r--r-- | doc/user/analytics/index.md | 2
-rw-r--r-- | doc/user/project/insights/index.md | 75
-rw-r--r-- | doc/user/project/releases/index.md | 89
-rw-r--r-- | doc/user/project/releases/release_cicd_examples.md | 100
-rw-r--r-- | doc/user/project/wiki/index.md | 12
14 files changed, 597 insertions, 420 deletions
diff --git a/doc/administration/geo/replication/version_specific_upgrades.md b/doc/administration/geo/replication/version_specific_upgrades.md index 91d87f093c5..350310c7076 100644 --- a/doc/administration/geo/replication/version_specific_upgrades.md +++ b/doc/administration/geo/replication/version_specific_upgrades.md @@ -178,11 +178,15 @@ GitLab 13.9 through GitLab 14.3 are affected by a bug in which enabling [GitLab ## Upgrading to GitLab 13.7 -We've detected an issue with the `FetchRemove` call used by Geo secondaries. -This causes performance issues as we execute reference transaction hooks for -each upgraded reference. Delay any upgrade attempts until this is in the -[13.7.5 patch release.](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3002). -More details are available [in this issue](https://gitlab.com/gitlab-org/git/-/issues/79). +- We've detected an issue with the `FetchRemove` call used by Geo secondaries. + This causes performance issues as we execute reference transaction hooks for + each upgraded reference. Delay any upgrade attempts until this is fixed in the + [13.7.5 patch release](https://gitlab.com/gitlab-org/gitaly/-/merge_requests/3002). + More details are available [in this issue](https://gitlab.com/gitlab-org/git/-/issues/79). +- A new secret is generated in `/etc/gitlab/gitlab-secrets.json`. + In an HA GitLab or GitLab Geo environment, secrets must be the same on all nodes. + Ensure this new secret is also accounted for if you are manually syncing the file across + nodes, or manually specifying secrets in `/etc/gitlab/gitlab.rb`. 
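Since the added text requires `/etc/gitlab/gitlab-secrets.json` to be identical on every node, one quick way to spot drift is to compare a digest of the file's parsed contents per node. A minimal sketch, assuming hypothetical helper and sample data (this is not GitLab tooling):

```ruby
require 'digest'
require 'json'

# Hypothetical drift check: two nodes agree if their gitlab-secrets.json
# contents produce the same digest after parsing.
def secrets_digest(json_text)
  # Parse and re-serialize so whitespace differences don't matter.
  Digest::SHA256.hexdigest(JSON.generate(JSON.parse(json_text)))
end

node_a = '{"gitlab_rails":{"db_key_base":"abc"}}'
node_b = "{\n  \"gitlab_rails\": {\n    \"db_key_base\": \"abc\"\n  }\n}"
puts secrets_digest(node_a) == secrets_digest(node_b) # => true
```

Run the same digest on each node; any mismatch means the secrets file was not synced correctly.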
## Upgrading to GitLab 13.5 diff --git a/doc/api/users.md b/doc/api/users.md index e9247a436ca..7f6851fe6df 100644 --- a/doc/api/users.md +++ b/doc/api/users.md @@ -753,7 +753,7 @@ PUT /user/status | Attribute | Type | Required | Description | | -------------------- | ------ | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `emoji` | string | no | Name of the emoji to use as status. If omitted `speech_balloon` is used. Emoji name can be one of the specified names in the [Gemojione index](https://github.com/bonusly/gemojione/blob/master/config/index.json). | -| `message` | string | no | Message to set as a status. It can also contain emoji codes. | +| `message` | string | no | Message to set as a status. It can also contain emoji codes. Cannot exceed 100 characters. | | `clear_status_after` | string | no | Automatically clean up the status after a given time interval, allowed values: `30_minutes`, `3_hours`, `8_hours`, `1_day`, `3_days`, `7_days`, `30_days` When both parameters `emoji` and `message` are empty, the status is cleared. When the `clear_status_after` parameter is missing from the request, the previously set value for `"clear_status_after` is cleared. diff --git a/doc/development/documentation/site_architecture/folder_structure.md b/doc/development/documentation/site_architecture/folder_structure.md index 0e8065d794f..7f29d3fba9e 100644 --- a/doc/development/documentation/site_architecture/folder_structure.md +++ b/doc/development/documentation/site_architecture/folder_structure.md @@ -85,7 +85,7 @@ place for it. Do not include the same information in multiple places. 
[Link to a single source of truth instead.](../styleguide/index.md#link-instead-of-repeating-text) -For example, if you have code in a repository other than the [primary repositories](index.md#architecture), +For example, if you have code in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md), and documentation in the same repository, you can keep the documentation in that repository. Then you can either: diff --git a/doc/development/documentation/site_architecture/global_nav.md b/doc/development/documentation/site_architecture/global_nav.md index d1cb65dd68b..05e697869b9 100644 --- a/doc/development/documentation/site_architecture/global_nav.md +++ b/doc/development/documentation/site_architecture/global_nav.md @@ -299,7 +299,7 @@ The [layout](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/global_ is fed by the [data file](#data-file), builds the global nav, and is rendered by the [default](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/layouts/default.html) layout. -The global nav contains links from all [four upstream projects](index.md#architecture). +The global nav contains links from all [four upstream projects](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md). The [global nav URL](#urls) has a different prefix depending on the documentation file you change. | Repository | Link prefix | Final URL | diff --git a/doc/development/documentation/site_architecture/index.md b/doc/development/documentation/site_architecture/index.md index d11bea86698..2864bbe7404 100644 --- a/doc/development/documentation/site_architecture/index.md +++ b/doc/development/documentation/site_architecture/index.md @@ -11,57 +11,12 @@ the repository which is used to generate the GitLab documentation website and is deployed to <https://docs.gitlab.com>. It uses the [Nanoc](https://nanoc.app/) static site generator. 
-## Architecture +View the [`gitlab-docs` architecture page](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md) +for more information. -While the source of the documentation content is stored in the repositories for -each GitLab product, the source that is used to build the documentation -site _from that content_ is located at <https://gitlab.com/gitlab-org/gitlab-docs>. +## Documentation in other repositories -The following diagram illustrates the relationship between the repositories -from where content is sourced, the `gitlab-docs` project, and the published output. - -```mermaid - graph LR - A[gitlab-org/gitlab/doc] - B[gitlab-org/gitlab-runner/docs] - C[gitlab-org/omnibus-gitlab/doc] - D[gitlab-org/charts/gitlab/doc] - E[gitlab-org/cloud-native/gitlab-operator/doc] - Y[gitlab-org/gitlab-docs] - A --> Y - B --> Y - C --> Y - D --> Y - E --> Y - Y -- Build pipeline --> Z - Z[docs.gitlab.com] - M[//ee/] - N[//runner/] - O[//omnibus/] - P[//charts/] - Q[//operator/] - Z --> M - Z --> N - Z --> O - Z --> P - Z --> Q -``` - -GitLab docs content isn't kept in the `gitlab-docs` repository. -All documentation files are hosted in the respective repository of each -product, and all together are pulled to generate the docs website: - -- [GitLab](https://gitlab.com/gitlab-org/gitlab/-/tree/master/doc) -- [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab/-/tree/master/doc) -- [GitLab Runner](https://gitlab.com/gitlab-org/gitlab-runner/-/tree/main/docs) -- [GitLab Chart](https://gitlab.com/gitlab-org/charts/gitlab/-/tree/master/doc) -- [GitLab Operator](https://gitlab.com/gitlab-org/cloud-native/gitlab-operator/-/tree/master/doc) - -Learn more about [the docs folder structure](folder_structure.md). 
- -### Documentation in other repositories - -If you have code and documentation in a repository other than the [primary repositories](#architecture), +If you have code and documentation in a repository other than the [primary repositories](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/architecture.md), you should keep the documentation with the code in that repository. Then you can use one of these approaches: @@ -81,187 +36,6 @@ Then you can use one of these approaches: We do not encourage the use of [pages with lists of links](../structure.md#topics-and-resources-pages), so only use this option if the recommended options are not feasible. -## Assets - -To provide an optimized site structure, design, and a search-engine friendly -website, along with a discoverable documentation, we use a few assets for -the GitLab Documentation website. - -### External libraries - -GitLab Docs is built with a combination of external: - -- [JavaScript libraries](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/package.json). -- [Ruby libraries](https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/Gemfile). - -### SEO - -- [Schema.org](https://schema.org/) -- [Google Analytics](https://marketingplatform.google.com/about/analytics/) -- [Google Tag Manager](https://developers.google.com/tag-platform/tag-manager) - -## Global navigation - -Read through [the global navigation documentation](global_nav.md) to understand: - -- How the global navigation is built. -- How to add new navigation items. - -<!-- -## Helpers - -TBA ---> - -## Pipelines - -The pipeline in the `gitlab-docs` project: - -- Tests changes to the docs site code. -- Builds the Docker images used in various pipeline jobs. -- Builds and deploys the docs site itself. -- Generates the review apps when the `review-docs-deploy` job is triggered. 
- -### Rebuild the docs site Docker images - -Once a week on Mondays, a scheduled pipeline runs and rebuilds the Docker images -used in various pipeline jobs, like `docs-lint`. The Docker image configuration files are -located in the [Dockerfiles directory](https://gitlab.com/gitlab-org/gitlab-docs/-/tree/main/dockerfiles). - -If you need to rebuild the Docker images immediately (must have maintainer level permissions): - -WARNING: -If you change the Dockerfile configuration and rebuild the images, you can break the main -pipeline in the main `gitlab` repository as well as in `gitlab-docs`. Create an image with -a different name first and test it to ensure you do not break the pipelines. - -1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI/CD > Pipelines**. -1. Select **Run pipeline**. -1. See that a new pipeline is running. The jobs that build the images are in the first - stage, `build-images`. You can select the pipeline number to see the larger pipeline - graph, or select the first (`build-images`) stage in the mini pipeline graph to - expose the jobs that build the images. -1. Select the **play** (**{play}**) button next to the images you want to rebuild. - - Normally, you do not need to rebuild the `image:gitlab-docs-base` image, as it - rarely changes. If it does need to be rebuilt, be sure to only run `image:docs-lint` - after it is finished rebuilding. - -### Deploy the docs site - -Every four hours a scheduled pipeline builds and deploys the docs site. The pipeline -fetches the current docs from the main project's main branch, builds it with Nanoc -and deploys it to <https://docs.gitlab.com>. - -To build and deploy the site immediately (must have the Maintainer role): - -1. In [`gitlab-docs`](https://gitlab.com/gitlab-org/gitlab-docs), go to **{rocket}** **CI/CD > Schedules**. -1. For the `Build docs.gitlab.com every 4 hours` scheduled pipeline, select the **play** (**{play}**) button. 
- -Read more about [documentation deployments](deployment_process.md). - -## Using YAML data files - -The easiest way to achieve something similar to -[Jekyll's data files](https://jekyllrb.com/docs/datafiles/) in Nanoc is by -using the [`@items`](https://nanoc.app/doc/reference/variables/#items-and-layouts) -variable. - -The data file must be placed inside the `content/` directory and then it can -be referenced in an ERB template. - -Suppose we have the `content/_data/versions.yaml` file with the content: - -```yaml -versions: - - 10.6 - - 10.5 - - 10.4 -``` - -We can then loop over the `versions` array with something like: - -```erb -<% @items['/_data/versions.yaml'][:versions].each do | version | %> - -<h3><%= version %></h3> - -<% end &> -``` - -Note that the data file must have the `yaml` extension (not `yml`) and that -we reference the array with a symbol (`:versions`). - -## Archived documentation banner - -A banner is displayed on archived documentation pages with the text `This is archived documentation for -GitLab. Go to the latest.` when either: - -- The version of the documentation displayed is not the first version entry in `online` in - `content/_data/versions.yaml`. -- The documentation was built from the default branch (`main`). - -For example, if the `online` entries for `content/_data/versions.yaml` are: - -```yaml -online: - - "14.4" - - "14.3" - - "14.2" -``` - -In this case, the archived documentation banner isn't displayed: - -- For 14.4, the docs built from the `14.4` branch. The branch name is the first entry in `online`. -- For 14.5-pre, the docs built from the default project branch (`main`). - -The archived documentation banner is displayed: - -- For 14.3. -- For 14.2. -- For any other version. - -## Bumping versions of CSS and JavaScript - -Whenever the custom CSS and JavaScript files under `content/assets/` change, -make sure to bump their version in the front matter. 
This method guarantees that -your changes take effect by clearing the cache of previous files. - -Always use Nanoc's way of including those files, do not hardcode them in the -layouts. For example use: - -```erb -<script async type="application/javascript" src="<%= @items['/assets/javascripts/badges.*'].path %>"></script> - -<link rel="stylesheet" href="<%= @items['/assets/stylesheets/toc.*'].path %>"> -``` - -The links pointing to the files should be similar to: - -```erb -<%= @items['/path/to/assets/file.*'].path %> -``` - -Nanoc then builds and renders those links correctly according with what's -defined in [`Rules`](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/Rules). - -## Linking to source files - -A helper called [`edit_on_gitlab`](https://gitlab.com/gitlab-org/gitlab-docs/blob/main/lib/helpers/edit_on_gitlab.rb) can be used -to link to a page's source file. We can link to both the simple editor and the -web IDE. Here's how you can use it in a Nanoc layout: - -- Default editor: `<a href="<%= edit_on_gitlab(@item, editor: :simple) %>">Simple editor</a>` -- Web IDE: `<a href="<%= edit_on_gitlab(@item, editor: :webide) %>">Web IDE</a>` - -If you don't specify `editor:`, the simple one is used by default. - -## Algolia search engine - -The docs site uses [Algolia DocSearch](https://docsearch.algolia.com/) -for its search function. - -Learn more in <https://gitlab.com/gitlab-org/gitlab-docs/-/blob/main/doc/docsearch.md>. - ## Monthly release process (versions) The docs website supports versions and each month we add the latest one to the list. @@ -269,5 +43,5 @@ For more information, read about the [monthly release process](https://gitlab.co ## Review Apps for documentation merge requests -If you are contributing to GitLab docs read how to +If you are contributing to GitLab docs read how to [create a Review App with each merge request](../index.md#previewing-the-changes-live). 
diff --git a/doc/development/fe_guide/graphql.md b/doc/development/fe_guide/graphql.md index 3c7280b8e5a..2dcc7290e87 100644 --- a/doc/development/fe_guide/graphql.md +++ b/doc/development/fe_guide/graphql.md @@ -896,6 +896,51 @@ export default new VueApollo({ This is similar to the `DesignCollection` example above as new page results are appended to the previous ones. +For some cases, it's hard to define the correct `keyArgs` for the field because all +the fields are updated. In this case, we can set `keyArgs` to `false`. This instructs +Apollo Client to not perform any automatic merge, and fully rely on the logic we +put into the `merge` function. + +For example, we have a query like this: + +```javascript +query searchGroupsWhereUserCanTransfer { + currentUser { + id + groups { + nodes { + id + fullName + } + pageInfo { + ...PageInfo + } + } + } +} +``` + +Here, the `groups` field doesn't have a good candidate for `keyArgs`: both +`nodes` and `pageInfo` will be updated when we're fetching a second page. +Setting `keyArgs` to `false` makes the update work as intended: + +```javascript +typePolicies: { + UserCore: { + fields: { + groups: { + keyArgs: false, + }, + }, + }, + GroupConnection: { + fields: { + nodes: concatPagination(), + }, + }, +} +``` + #### Using a recursive query in components When it is necessary to fetch all paginated data initially an Apollo query can do the trick for us. diff --git a/doc/development/uploads/working_with_uploads.md b/doc/development/uploads/working_with_uploads.md index d44f2f69168..5a5f987c37c 100644 --- a/doc/development/uploads/working_with_uploads.md +++ b/doc/development/uploads/working_with_uploads.md @@ -6,92 +6,295 @@ info: To determine the technical writer assigned to the Stage/Group associated w # Uploads guide: Adding new uploads -Here, we describe how to add a new upload route [accelerated](index.md#workhorse-assisted-uploads) by Workhorse. - -Upload routes belong to one of these categories: - -1. 
Rails controllers: uploads handled by Rails controllers. -1. Grape API: uploads handled by a Grape API endpoint. -1. GraphQL API: uploads handled by a GraphQL resolve function. - -WARNING: -GraphQL uploads do not support [direct upload](index.md#direct-upload). Depending on the use case, the feature may not work on installations without NFS (like GitLab.com or Kubernetes installations). Uploading to object storage inside the GraphQL resolve function may result in timeout errors. For more details, follow [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819). - -## Update Workhorse for the new route - -For both the Rails controller and Grape API uploads, Workhorse must be updated to get the -support for the new upload route. - -1. Open a new issue in the [Workhorse tracker](https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/new) describing precisely the new upload route: - - The route's URL. - - The upload encoding. - - If possible, provide a dump of the upload request. -1. Implement and get the MR merged for this issue above. -1. Ask the Maintainers of [Workhorse](https://gitlab.com/gitlab-org/gitlab-workhorse) to create a new release. You can do that in the merge request - directly during the maintainer review, or ask for it in the `#workhorse` Slack channel. -1. Bump the [Workhorse version file](https://gitlab.com/gitlab-org/gitlab/-/blob/master/GITLAB_WORKHORSE_VERSION) - to the version you have from the previous points, or bump it in the same merge request that contains - the Rails changes. Refer to [Implementing the new route with a Rails controller](#implementing-the-new-route-with-a-rails-controller) or [Implementing the new route with a Grape API endpoint](#implementing-the-new-route-with-a-grape-api-endpoint) below. - -## Implementing the new route with a Rails controller - -For a Rails controller upload, we usually have a `multipart/form-data` upload and there are a -few things to do: - -1. 
The upload is available under the parameter name you're using. For example, it could be an `artifact` - or a nested parameter such as `user[avatar]`. If you have the upload under the - `file` parameter, reading `params[:file]` should get you an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) instance. -1. Generally speaking, it's a good idea to check if the instance is from the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) class. For example, see how we checked -[that the parameter is indeed an `UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/commit/ea30fe8a71bf16ba07f1050ab4820607b5658719#51c0cc7a17b7f12c32bc41cfab3649ff2739b0eb_79_77). - -WARNING: -**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) -instance using `UploadedFile#from_params`! This method can be unsafe to use depending on the `params` -passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) -instance that [`multipart.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/middleware/multipart.rb) -builds automatically for you. - -## Implementing the new route with a Grape API endpoint - -For a Grape API upload, we can have a body or multipart upload. Things are slightly more complicated: two endpoints are needed. One for the -Workhorse pre-upload authorization and one for accepting the upload metadata from Workhorse: - -1. Implement an endpoint with the URL + `/authorize` suffix that will: - - Check that the request is coming from Workhorse with the `require_gitlab_workhorse!` from the [API helpers](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/helpers.rb). - - Check user permissions. - - Set the status to `200` with `status 200`. - - Set the content type with `content_type Gitlab::Workhorse::INTERNAL_API_CONTENT_TYPE`. 
- - Use your dedicated `Uploader` class (let's say that it's `FileUploader`) to build the response with `FileUploader.workhorse_authorize(params)`. -1. Implement the endpoint for the upload request that will: - - Require all the `UploadedFile` objects as parameters. - - For example, if we expect a single parameter `file` to be an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) instance, -use `requires :file, type: ::API::Validations::Types::WorkhorseFile`. - - Body upload requests have their upload available under the parameter `file`. - - Check that the request is coming from Workhorse with the `require_gitlab_workhorse!` from the -[API helpers](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/api/helpers.rb). - - Check the user permissions. - - The remaining code of the processing. In this step, the code must read the parameter. For -our example, it would be `params[:file]`. - -WARNING: -**Do not** call `UploadedFile#from_params` directly! Do not build an [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) -object using `UploadedFile#from_params`! This method can be unsafe to use depending on the `params` -passed. Instead, use the [`UploadedFile`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/uploaded_file.rb) -object that [`multipart.rb`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/middleware/multipart.rb) -builds automatically for you. - -## Document Object Storage buckets and CarrierWave integration - -When using Object Storage, GitLab expects each kind of upload to maintain its own bucket in the respective -Object Storage destination. Moreover, the integration with CarrierWave is not used all the time. 
-The [Object Storage Working Group](https://about.gitlab.com/company/team/structure/working-groups/object-storage/) -is investigating an approach that unifies Object Storage buckets into a single one and removes CarrierWave -so as to simplify implementation and administration of uploads. - -Therefore, document new uploads here by slotting them into the following tables: - -- [Feature bucket details](#feature-bucket-details) -- [CarrierWave integration](#carrierwave-integration) +## Recommendations + +- When creating an uploader, [make it a subclass](#where-should-i-store-my-files) of `AttachmentUploader` +- Add your uploader to the [tables](#tables) in this document +- Do not add [new object storage buckets](#where-should-i-store-my-files) +- Implement [direct upload](#implementing-direct-upload-support) +- If you need to process your uploads, decide [where to do that](#processing-uploads) + +## Background information + +- [CarrierWave Uploaders](#carrierwave-uploaders) +- [GitLab modifications to CarrierWave](#gitlab-modifications-to-carrierwave) + +## Where should I store my files? + +CarrierWave Uploaders determine where files get +stored. When you create a new Uploader class you are deciding where to store the files of your new +feature. + +First of all, ask yourself if you need a new Uploader class. It is OK +to use the same Uploader class for different mountpoints or different +models. + +If you do want or need your own Uploader class then you should make it +a **subclass of `AttachmentUploader`**. You then inherit the storage +location and directory scheme from that class. The directory scheme +is: + +```ruby +File.join(model.class.underscore, mounted_as.to_s, model.id.to_s) +``` + +If you look around in the GitLab code base you will find quite a few +Uploaders that have their own storage location. For object storage, +this means Uploaders have their own buckets. 
We now **discourage** +adding new buckets for the following reasons: + +- Using a new bucket adds to development time because you need to make downstream changes in [GDK](https://gitlab.com/gitlab-org/gitlab-development-kit), [Omnibus GitLab](https://gitlab.com/gitlab-org/omnibus-gitlab) and [CNG](https://gitlab.com/gitlab-org/build/CNG). +- Using a new bucket requires GitLab.com Infrastructure changes, which slows down the roll-out of your new feature. +- Using a new bucket slows down adoption of your new feature for self-managed GitLab installations: people cannot start using your new feature until their local GitLab administrator has configured the new bucket. + +By using an existing bucket you avoid all this extra work +and friction. The `Gitlab.config.uploads` storage location, which is what +`AttachmentUploader` uses, is guaranteed to already be configured. + +## Implementing Direct Upload support + +Below we will outline how to implement [direct upload](#direct-upload-via-workhorse) support. + +Using direct upload is not always necessary but it is usually a good +idea. Unless the uploads handled by your feature are both infrequent +and small, you probably want to implement direct upload. An example of +a feature with small and infrequent uploads is project avatars: these +rarely change and the application imposes strict size limits on them. + +If your feature handles uploads that are not both infrequent and small, +then not implementing direct upload support means that you are taking on +technical debt. At the very least, you should make sure that you _can_ +add direct upload support later. + +To support Direct Upload you need two things: + +1. A pre-authorization endpoint in Rails +1. A Workhorse routing rule + +Workhorse does not know where to store your upload. To find out, it +makes a pre-authorization request. It also does not know whether or +where to make a pre-authorization request. For that you need the +routing rule. 
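Conceptually, the pre-authorization step is Rails answering Workhorse with a small JSON body that says where to put the bytes. A toy sketch, not GitLab's real response format (the real payload is built by the Uploader's `workhorse_authorize`; the field name here is only illustrative):

```ruby
require 'json'

# Toy pre-authorization response: Rails answers Workhorse's authorize
# request with a location for the incoming upload. The field name is
# illustrative, not the real Workhorse API contract.
def toy_authorize_response(base_dir)
  JSON.generate('TempPath' => File.join(base_dir, 'tmp', 'uploads'))
end

puts toy_authorize_response('/var/opt/gitlab') # => {"TempPath":"/var/opt/gitlab/tmp/uploads"}
```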
+ +A note to those of us who remember, +[Workhorse used to be a separate project](https://gitlab.com/groups/gitlab-org/-/epics/4826): +it is not necessary anymore to break these two steps into separate merge +requests. In fact it is probably easier to do both in one merge +request. + +### Adding a Workhorse routing rule + +Routing rules are defined in +[workhorse/internal/upstream/routes.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/internal/upstream/routes.go). +They consist of: + +- An HTTP verb (usually "POST" or "PUT") +- A path regular expression +- An upload type: MIME multipart or "full request body" +- Optionally, you can also match on HTTP headers like `Content-Type` + +Example: + +```golang +u.route("PUT", apiProjectPattern+`packages/nuget/`, mimeMultipartUploader), +``` + +You should add a test for your routing rule to `TestAcceleratedUpload` +in +[workhorse/upload_test.go](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/workhorse/upload_test.go). + +You should also manually verify that when you perform an upload +request for your new feature, Workhorse makes a pre-authorization +request. You can check this by looking at the Rails access logs. This +is necessary because if you make a mistake in your routing rule you +won't get a hard failure: you just end up using the less efficient +default path. + +### Adding a pre-authorization endpoint + +We distinguish three cases: Rails controllers, Grape API endpoints and +GraphQL resources. + +To start with the bad news: direct upload for GraphQL is currently not +supported. The reason for this is that Workhorse does not parse +GraphQL queries. Also see [issue #280819](https://gitlab.com/gitlab-org/gitlab/-/issues/280819). +Consider accepting your file upload via Grape instead. + +For Grape pre-authorization endpoints, look for existing examples that +implement `/authorize` routes. 
One example is the +[POST `:id/uploads/authorize` endpoint](https://gitlab.com/gitlab-org/gitlab/-/blob/9ad53d623eecebb799ce89eada951e4f4a59c116/lib/api/projects.rb#L642-651). +Note that this particular example is using FileUploader, which means +that the upload will be stored in the storage location (bucket) of +that Uploader class. + +For Rails endpoints you can use the +[WorkhorseAuthorization concern](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/controllers/concerns/workhorse_authorization.rb). + +## Processing uploads + +Some features require us to process uploads, for example to extract +metadata from the uploaded file. There are a couple of different ways +you can implement this. The main choice is _where_ to implement the +processing, or "who is the processor". + +|Processor|Direct Upload possible?|Can reject HTTP request?|Implementation| +|---|---|---|---| +|Sidekiq|yes|no|Straightforward| +|Workhorse|yes|yes|Complex| +|Rails|no|yes|Easy| + +Processing in Rails looks appealing but it tends to lead to scaling +problems down the road because you cannot use direct upload. You are +then forced to rebuild your feature with processing in Workhorse. So +if the requirements of your feature allow it, doing the processing in +Sidekiq strikes a good balance between complexity and the ability to +scale. + +## CarrierWave Uploaders + +GitLab uses a modified version of +[CarrierWave](https://github.com/carrierwaveuploader/carrierwave) to +manage uploads. Below we will describe how we use CarrierWave and how +we modified it. + +The central concept of CarrierWave is the **Uploader** class. The +Uploader defines where files get stored, and optionally contains +validation and processing logic. To use an Uploader you must associate +it with a text column on an ActiveRecord model. This is called "mounting" +and the column is called the "mountpoint". 
For example: + +```ruby +class Project < ApplicationRecord + mount_uploader :avatar, AttachmentUploader +end +``` + +Now if I upload an avatar called `tanuki.png` the idea is that in the +`projects.avatar` column for my project, CarrierWave stores the string +`tanuki.png`, and that the AttachmentUploader class contains the +configuration data and directory schema. For example if the project ID +is 123, the actual file may be in +`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png`. +The directory +`/var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/` +was chosen by the Uploader using, among other things, the configuration +(`/var/opt/gitlab/gitlab-rails/uploads`), the model name (`project`), +the model ID (`123`) and the mountpoint (`avatar`). + +> The Uploader determines the individual storage directory of your +> upload. The mountpoint column in your model contains the filename. + +You never access the mountpoint column directly because CarrierWave +defines a getter and setter on your model that operates on file handle +objects. + +### Optional Uploader behaviors + +Besides determining the storage directory for your upload, a +CarrierWave Uploader can implement several other behaviors via +callbacks. Not all of these behaviors are usable in GitLab. In +particular, you currently cannot use the `version` mechanism of +CarrierWave. Things you can do include: + +- Filename validation +- **Incompatible with direct upload:** One-time pre-processing of file contents, e.g. image resizing +- **Incompatible with direct upload:** Encryption at rest + +Note that CarrierWave pre-processing behaviors such as image resizing +or encryption require local access to the uploaded file. This forces +you to upload the processed file from Ruby. This runs counter to direct +upload, which is all about _not_ doing the upload in Ruby. 
If you use
+direct upload with an Uploader that has pre-processing behaviors, the
+pre-processing behaviors are silently skipped.
+
+### CarrierWave Storage engines
+
+CarrierWave has two storage engines:
+
+|CarrierWave class|GitLab name|Description|
+|---|---|---|
+|`CarrierWave::Storage::File`|`ObjectStorage::Store::LOCAL`|Local files, accessed through the Ruby stdlib|
+|`CarrierWave::Storage::Fog`|`ObjectStorage::Store::REMOTE`|Cloud files, accessed through the [Fog gem](https://github.com/fog/fog)|
+
+GitLab uses both of these engines, depending on configuration.
+
+The normal way to choose a storage engine in CarrierWave is to use the
+`Uploader.storage` class method. In GitLab we do not do this; we have
+overridden `Uploader#storage` instead. This allows us to vary the
+storage engine file by file.
+
+### CarrierWave file lifecycle
+
+An Uploader is associated with two storage areas: regular storage and
+cache storage. Each has its own storage engine. If you assign a file
+to a mountpoint setter (`project.avatar =
+File.open('/tmp/tanuki.png')`) you copy/move the file to cache
+storage as a side effect via the `cache!` method. To persist the file
+you must somehow call the `store!` method. This either happens via
+[ActiveRecord callbacks](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/orm/activerecord.rb#L55)
+or by calling `store!` on an Uploader instance.
+
+Normally you do not need to interact with `cache!` and `store!`, but if
+you need to debug GitLab CarrierWave modifications it is useful to
+know that they are there and that they always get called.
+Specifically, it is good to know that CarrierWave pre-processing
+behaviors (`process` and so on) are implemented as `before :cache` hooks,
+and in the case of direct upload, these hooks are ignored and do not
+run.
+
+> Direct upload skips all CarrierWave `before :cache` hooks.
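The directory schema from the avatar example above can be sketched in plain Ruby. This is an illustration only: `FakeAvatarUploader` and its methods are made-up names, not the real CarrierWave or GitLab uploader code.

```ruby
# Illustrative sketch only: shows how an Uploader composes a storage
# directory from configuration, model name, model ID, and mountpoint,
# while the mountpoint column stores just the filename.
class FakeAvatarUploader
  BASE_DIR = '/var/opt/gitlab/gitlab-rails/uploads'.freeze # configuration

  def initialize(model_name, model_id, mountpoint)
    @model_name = model_name
    @model_id = model_id
    @mountpoint = mountpoint
  end

  # The Uploader determines the storage directory of the upload...
  def store_dir
    File.join(BASE_DIR, '-', 'system', @model_name, @mountpoint, @model_id.to_s)
  end

  # ...and the mountpoint column contributes only the filename.
  def full_path(filename)
    File.join(store_dir, filename)
  end
end

uploader = FakeAvatarUploader.new('project', 123, 'avatar')
puts uploader.full_path('tanuki.png')
# => /var/opt/gitlab/gitlab-rails/uploads/-/system/project/avatar/123/tanuki.png
```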
+
+## GitLab modifications to CarrierWave
+
+GitLab uses a modified version of CarrierWave to make a number of things possible.
+
+### Migrating data between storage engines
+
+In
+[app/uploaders/object_storage.rb](https://gitlab.com/gitlab-org/gitlab/-/blob/adf99b5327700cf34a845626481d7d6fcc454e57/app/uploaders/object_storage.rb)
+there is code for migrating user data between local storage and object
+storage. This code exists because for a long time, GitLab.com stored
+uploads on local storage via NFS. This changed when, as part of an
+infrastructure migration, we had to move the uploads to object storage.
+
+This is why the CarrierWave `storage` varies from upload to upload in
+GitLab, and why we have database columns like `uploads.store` or
+`ci_job_artifacts.file_store`.
+
+### Direct Upload via Workhorse
+
+Workhorse direct upload is a mechanism that lets us accept large
+uploads without spending a lot of Ruby CPU time. Workhorse is written
+in Go, and goroutines have a much lower resource footprint than Ruby
+threads.
+
+Direct upload works as follows:
+
+1. Workhorse accepts a user upload request.
+1. Workhorse pre-authenticates the request with Rails, and receives a temporary upload location.
+1. Workhorse stores the file upload in the user's request to the temporary upload location.
+1. Workhorse propagates the request to Rails.
+1. Rails issues a remote copy operation to copy the uploaded file from its temporary location to the final location.
+1. Rails deletes the temporary upload.
+1. Workhorse deletes the temporary upload a second time in case Rails timed out.
+
+Normally, `cache!` returns an instance of
+`CarrierWave::SanitizedFile`, and `store!` then
+[uploads that file using Fog](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L327-L335).
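The tail of the direct upload sequence above (Rails copies the upload to its final location, deletes the temporary upload, and Workhorse deletes it once more in case Rails timed out) can be simulated with local files standing in for object storage. None of the names below are real Workhorse or Rails APIs; this is a simplified sketch.

```ruby
require 'fileutils'
require 'tmpdir'

# Illustrative simulation only: local files stand in for object storage,
# and finalize_direct_upload is a made-up helper, not a real GitLab API.
def finalize_direct_upload(tmp_path, final_path)
  FileUtils.mkdir_p(File.dirname(final_path))
  FileUtils.cp(tmp_path, final_path) # Rails: remote copy to the final location
  FileUtils.rm_f(tmp_path)           # Rails deletes the temporary upload
  FileUtils.rm_f(tmp_path)           # Workhorse's second delete is a safe no-op
  final_path
end

Dir.mktmpdir do |dir|
  tmp = File.join(dir, 'upload.tmp')
  File.write(tmp, 'file body stored by Workhorse')
  final = finalize_direct_upload(tmp, File.join(dir, 'final', 'upload.bin'))
  puts File.read(final)   # the upload survives in its final location
  puts File.exist?(tmp)   # false: the temporary upload is gone
end
```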
+
+In the case of object storage, with the modifications specific to GitLab, the
+copying from the temporary location to the final location is
+implemented by Rails fooling CarrierWave. When CarrierWave tries to
+`cache!` the upload, we
+[return](https://gitlab.com/gitlab-org/gitlab/-/blob/59b441d578e41cb177406a9799639e7a5aa9c7e1/app/uploaders/object_storage.rb#L367)
+a `CarrierWave::Storage::Fog::File` file handle which points to the
+temporary file. During the `store!` phase, CarrierWave then
+[copies](https://github.com/carrierwaveuploader/carrierwave/blob/v1.3.2/lib/carrierwave/storage/fog.rb#L325)
+this file to its intended location.
+
+## Tables
+
+The Scalability::Frameworks team is going to make object storage and uploads easier to use and more robust. If you add or change uploaders, it helps us if you update this table too, so we can keep an overview of where and how uploaders are used.

 ### Feature bucket details
diff --git a/doc/integration/omniauth.md b/doc/integration/omniauth.md
index 9ea6c614687..1c398ad6a8e 100644
--- a/doc/integration/omniauth.md
+++ b/doc/integration/omniauth.md
@@ -107,6 +107,65 @@ To change these settings:

After configuring these settings, you can configure your chosen
[provider](#supported-providers).

+### Per-provider configuration
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/89379) in GitLab 15.3.
+
+If `allow_single_sign_on` is set, GitLab uses one of the following fields returned in the OmniAuth `auth_hash` to establish a username in GitLab for the user signing in,
+choosing the first that exists:
+
+- `username`.
+- `nickname`.
+- `email`.
+
+You can create GitLab configuration on a per-provider basis, which is supplied to the [provider](#supported-providers) using `args`. If you set the `gitlab_username_claim`
+variable in `args` for a provider, you can select another claim to use for the GitLab username. The chosen claim must be unique to avoid collisions.
+
+- **For Omnibus installations**

  ```ruby
  gitlab_rails['omniauth_providers'] = [

    # The generic pattern for configuring a provider with name PROVIDER_NAME

    {
      name: "PROVIDER_NAME",
      ...
      args: { gitlab_username_claim: 'sub' } # For users signing in with the provider you configure, the GitLab username will be set to the "sub" received from the provider
    },

    # Here are examples using GitHub and Crowd

    {
      name: "github",
      ...
      args: { gitlab_username_claim: 'name' } # For users signing in with GitHub, the GitLab username will be set to the "name" received from GitHub
    },
    {
      name: "crowd",
      ...
      args: { gitlab_username_claim: 'uid' } # For users signing in with Crowd, the GitLab username will be set to the "uid" received from Crowd
    },
  ]
  ```

- **For installations from source**

  ```yaml
  - { name: 'PROVIDER_NAME',
    ...
    args: { gitlab_username_claim: 'sub' }
  }
  - { name: 'github',
    ...
    args: { gitlab_username_claim: 'name' }
  }
  - { name: 'crowd',
    ...
    args: { gitlab_username_claim: 'uid' }
  }
  ```

### Passwords for users created via OmniAuth

The [Generated passwords for users created through integrated authentication](../security/passwords_for_integrated_authentication_methods.md)
diff --git a/doc/user/admin_area/index.md b/doc/user/admin_area/index.md
index c5a345d0197..326ad268546 100644
--- a/doc/user/admin_area/index.md
+++ b/doc/user/admin_area/index.md
@@ -221,7 +221,7 @@ The [Cohorts](user_cohorts.md) tab displays the monthly cohorts of new users and

### Prevent a user from creating groups

-By default, users can create groups. To prevent a user from creating groups:
+By default, users can create groups. To prevent a user from creating a top-level group:

1. On the top bar, select **Menu > Admin**.
1. On the left sidebar, select **Overview > Users** (`/admin/users`).
@@ -230,6 +230,8 @@ By default, users can create groups.
To prevent a user from creating groups: 1. Clear the **Can create group** checkbox. 1. Select **Save changes**. +It is also possible to [limit which roles can create a subgroup within a group](../group/subgroups/index.md#change-who-can-create-subgroups). + ### Administering Groups You can administer all groups in the GitLab instance from the Admin Area's Groups page. diff --git a/doc/user/analytics/index.md b/doc/user/analytics/index.md index 3c06fcce2e2..41547430e88 100644 --- a/doc/user/analytics/index.md +++ b/doc/user/analytics/index.md @@ -47,7 +47,7 @@ You can use GitLab to review analytics at the project level. Some of these featu The following analytics features are available for users to create personalized views: -- [Application Security](../application_security/security_dashboard/#security-center) +- [Application Security](../application_security/security_dashboard/index.md#security-center) Be sure to review the documentation page for this feature for GitLab tier requirements. diff --git a/doc/user/project/insights/index.md b/doc/user/project/insights/index.md index 72af822d7b8..ff95945a64a 100644 --- a/doc/user/project/insights/index.md +++ b/doc/user/project/insights/index.md @@ -209,7 +209,12 @@ monthlyBugsCreated: > [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/725) in GitLab 15.3. -The `data_source` parameter was introduced to allow visualizing data from different data sources. Currently `issuable` is the only supported value. +The `data_source` parameter was introduced to allow visualizing data from different data sources. + +Supported values are: + +- `issuables`: Exposes merge request or issue data. +- `dora`: Exposes DORA metrics data. #### `Issuable` query parameters @@ -259,7 +264,7 @@ monthlyBugsCreated: - regression ``` -#### `query.params.collection_labels` +##### `query.params.collection_labels` Group "issuable" by the configured labels. 
@@ -286,7 +291,7 @@ weeklyBugsBySeverity:
    - S4
 ```

-#### `query.group_by`
+##### `query.group_by`

Define the X-axis of your chart.

@@ -296,7 +301,7 @@ Supported values are:
- `week`: Group data per week.
- `month`: Group data per month.

-#### `query.period_limit`
+##### `query.period_limit`

Define how far "issuables" are queried in the past (using the `query.period_field`).

@@ -333,6 +338,68 @@ NOTE:
Until [this bug](https://gitlab.com/gitlab-org/gitlab/-/issues/26911) is resolved,
you may see `created_at` in place of `merged_at`. `created_at` is used instead.

+#### `DORA` query parameters
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/367248) in GitLab 15.3.
+
+An example DORA chart definition:
+
+```yaml
+dora:
+  title: "DORA charts"
+  charts:
+    - title: "DORA deployment frequency"
+      type: bar
+      query:
+        data_source: dora
+        params:
+          metric: deployment_frequency
+          group_by: day
+          period_limit: 10
+        projects:
+          only:
+            - 38
+    - title: "DORA lead time for changes"
+      description: "DORA lead time for changes"
+      type: bar
+      query:
+        data_source: dora
+        params:
+          metric: lead_time_for_changes
+          group_by: day
+          environment_tiers:
+            - staging
+          period_limit: 30
+```
+
+##### `query.metric`
+
+Defines which DORA metric to query. The available values are:
+
+- `deployment_frequency` (default)
+- `lead_time_for_changes`
+- `time_to_restore_service`
+- `change_failure_rate`
+
+The metrics are described on the [DORA API](../../../api/dora/metrics.md#the-value-field) page.
+
+##### `query.group_by`
+
+Define the X-axis of your chart.
+
+Supported values are:
+
+- `day` (default): Group data per day.
+- `month`: Group data per month.
+
+##### `query.period_limit`
+
+Define how far the metrics are queried in the past (default: 15). The maximum lookback period is 180 days, or six months.
+
+##### `query.environment_tiers`
+
+An array of environments to include in the calculation (default: production).
+ ### `projects` > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10904) in GitLab 12.4. diff --git a/doc/user/project/releases/index.md b/doc/user/project/releases/index.md index c7d18353c47..93a808b1706 100644 --- a/doc/user/project/releases/index.md +++ b/doc/user/project/releases/index.md @@ -129,95 +129,6 @@ Methods for creating a release using a CI/CD job include: - Create a release when a Git tag is created. - Create a release when a commit is merged to the default branch. -#### Create a release when a Git tag is created - -In this CI/CD example, pushing a Git tag to the repository, or creating a Git tag in the UI triggers -the release. You can use this method if you prefer to create the Git tag manually, and create a -release as a result. - -NOTE: -Do not provide Release notes when you create the Git tag in the UI. Providing release notes -creates a release, resulting in the pipeline failing. - -Key points in the following _extract_ of an example `.gitlab-ci.yml` file: - -- The `rules` stanza defines when the job is added to the pipeline. -- The Git tag is used in the release's name and description. - -```yaml -release_job: - stage: release - image: registry.gitlab.com/gitlab-org/release-cli:latest - rules: - - if: $CI_COMMIT_TAG # Run this job when a tag is created - script: - - echo "running release_job" - release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties - tag_name: '$CI_COMMIT_TAG' - description: '$CI_COMMIT_TAG' -``` - -#### Create a release when a commit is merged to the default branch - -In this CI/CD example, merging a commit to the default branch triggers the pipeline. You can use -this method if your release workflow does not create a tag manually. - -Key points in the following _extract_ of an example `.gitlab-ci.yml` file: - -- The Git tag, description, and reference are created automatically in the pipeline. -- If you manually create a tag, the `release_job` job does not run. 
- -```yaml -release_job: - stage: release - image: registry.gitlab.com/gitlab-org/release-cli:latest - rules: - - if: $CI_COMMIT_TAG - when: never # Do not run this job when a tag is created manually - - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch - script: - - echo "running release_job for $TAG" - release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties - tag_name: 'v0.$CI_PIPELINE_IID' # The version is incremented per pipeline. - description: 'v0.$CI_PIPELINE_IID' - ref: '$CI_COMMIT_SHA' # The tag is created from the pipeline SHA. -``` - -NOTE: -Environment variables set in `before_script` or `script` are not available for expanding -in the same job. Read more about -[potentially making variables available for expanding](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/6400). - -#### Skip multiple pipelines when creating a release - -Creating a release using a CI/CD job could potentially trigger multiple pipelines if the associated tag does not exist already. To understand how this might happen, consider the following workflows: - -- Tag first, release second: - 1. A tag is created via UI or pushed. - 1. A tag pipeline is triggered, and runs `release` job. - 1. A release is created. - -- Release first, tag second: - 1. A pipeline is triggered when commits are pushed or merged to default branch. The pipeline runs `release` job. - 1. A release is created. - 1. A tag is created. - 1. A tag pipeline is triggered. The pipeline also runs `release` job. - -In the second workflow, the `release` job runs in multiple pipelines. 
To prevent this, you can use the [`workflow:rules` keyword](../../../ci/yaml/index.md#workflowrules) to determine if a release job should run in a tag pipeline: - -```yaml -release_job: - rules: - - if: $CI_COMMIT_TAG - when: never # Do not run this job in a tag pipeline - - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch - script: - - echo "Create release" - release: - name: 'My awesome release' - tag_name: '$CI_COMMIT_TAG' -``` - ### Use a custom SSL CA certificate authority You can use the `ADDITIONAL_CA_CERT_BUNDLE` CI/CD variable to configure a custom SSL CA certificate authority, diff --git a/doc/user/project/releases/release_cicd_examples.md b/doc/user/project/releases/release_cicd_examples.md new file mode 100644 index 00000000000..f1d3e55a707 --- /dev/null +++ b/doc/user/project/releases/release_cicd_examples.md @@ -0,0 +1,100 @@ +--- +stage: Release +group: Release +info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#assignments +--- + +# Release CI/CD examples + +GitLab release functionality is flexible, able to be configured to match your workflow. This page +features example CI/CD release jobs. Each example demonstrates a method of creating a release in a +CI/CD pipeline. + +## Create a release when a Git tag is created + +In this CI/CD example, pushing a Git tag to the repository, or creating a Git tag in the UI triggers +the release. You can use this method if you prefer to create the Git tag manually, and create a +release as a result. + +NOTE: +Do not provide Release notes when you create the Git tag in the UI. Providing release notes +creates a release, resulting in the pipeline failing. + +Key points in the following _extract_ of an example `.gitlab-ci.yml` file: + +- The `rules` stanza defines when the job is added to the pipeline. 
+- The Git tag is used in the release's name and description. + +```yaml +release_job: + stage: release + image: registry.gitlab.com/gitlab-org/release-cli:latest + rules: + - if: $CI_COMMIT_TAG # Run this job when a tag is created + script: + - echo "running release_job" + release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties + tag_name: '$CI_COMMIT_TAG' + description: '$CI_COMMIT_TAG' +``` + +## Create a release when a commit is merged to the default branch + +In this CI/CD example, merging a commit to the default branch triggers the pipeline. You can use +this method if your release workflow does not create a tag manually. + +Key points in the following _extract_ of an example `.gitlab-ci.yml` file: + +- The Git tag, description, and reference are created automatically in the pipeline. +- If you manually create a tag, the `release_job` job does not run. + +```yaml +release_job: + stage: release + image: registry.gitlab.com/gitlab-org/release-cli:latest + rules: + - if: $CI_COMMIT_TAG + when: never # Do not run this job when a tag is created manually + - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # Run this job when commits are pushed or merged to the default branch + script: + - echo "running release_job for $TAG" + release: # See https://docs.gitlab.com/ee/ci/yaml/#release for available properties + tag_name: 'v0.$CI_PIPELINE_IID' # The version is incremented per pipeline. + description: 'v0.$CI_PIPELINE_IID' + ref: '$CI_COMMIT_SHA' # The tag is created from the pipeline SHA. +``` + +NOTE: +Environment variables set in `before_script` or `script` are not available for expanding +in the same job. Read more about +[potentially making variables available for expanding](https://gitlab.com/gitlab-org/gitlab-runner/-/issues/6400). + +## Skip multiple pipelines when creating a release + +Creating a release using a CI/CD job could potentially trigger multiple pipelines if the associated tag does not exist already. 
To understand how this might happen, consider the following workflows:
+
+- Tag first, release second:
+  1. A tag is created via the UI or pushed.
+  1. A tag pipeline is triggered, and runs the `release` job.
+  1. A release is created.
+
+- Release first, tag second:
+  1. A pipeline is triggered when commits are pushed or merged to the default branch. The pipeline runs the `release` job.
+  1. A release is created.
+  1. A tag is created.
+  1. A tag pipeline is triggered. The pipeline also runs the `release` job.
+
+In the second workflow, the `release` job runs in multiple pipelines. To prevent this, you can use the [`workflow:rules` keyword](../../../ci/yaml/index.md#workflowrules) to determine if a release job should run in a tag pipeline:
+
+```yaml
+release_job:
+  rules:
+    - if: $CI_COMMIT_TAG
+      when: never                                  # Do not run this job in a tag pipeline
+    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH  # Run this job when commits are pushed or merged to the default branch
+  script:
+    - echo "Create release"
+  release:
+    name: 'My awesome release'
+    tag_name: '$CI_COMMIT_TAG'
+```
diff --git a/doc/user/project/wiki/index.md b/doc/user/project/wiki/index.md
index c1f7436f716..e8870e2b028 100644
--- a/doc/user/project/wiki/index.md
+++ b/doc/user/project/wiki/index.md
@@ -6,6 +6,9 @@ info: To determine the technical writer assigned to the Stage/Group associated w

# Wiki **(FREE)**

+> - Page loading [changed](https://gitlab.com/gitlab-org/gitlab/-/issues/336792) to asynchronous in GitLab 14.9.
+> - Page slug encoding method [changed](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/71753) to `ERB::Util.url_encode` in GitLab 14.9.
+
If you don't want to keep your documentation in your repository, but you want
to keep it in the same project as your code, you can use the wiki GitLab
provides in each GitLab project.
Every wiki is a separate Git repository, so you can create
@@ -370,3 +373,12 @@ For the status of the ongoing development for CommonMark and GitLab Flavored Mar
- [Group repository storage moves API](../../../api/group_repository_storage_moves.md)
- [Group wikis API](../../../api/group_wikis.md)
- [Wiki keyboard shortcuts](../../shortcuts.md#wiki-pages)
+
+## Troubleshooting
+
+### Page slug rendering with Apache reverse proxy
+
+In GitLab 14.9 and later, page slugs are encoded using the
+[`ERB::Util.url_encode`](https://www.rubydoc.info/stdlib/erb/ERB%2FUtil.url_encode) method.
+If you use an Apache reverse proxy, you can add a `nocanon` argument to the `ProxyPass`
+line of your Apache configuration to ensure your page slugs render correctly.
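As a sketch, `nocanon` is appended to the existing `ProxyPass` directive so Apache forwards encoded page slugs unmodified instead of re-canonicalizing them. The path and backend address below are placeholders; adjust them to match your installation:

```plaintext
# Before: Apache re-canonicalizes encoded characters in the URL path.
ProxyPass / http://127.0.0.1:8080/

# After: nocanon passes the raw URL through to GitLab unmodified.
ProxyPass / http://127.0.0.1:8080/ nocanon
```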