author | Bob Van Landuyt <bob@vanlanduyt.co> | 2017-11-13 16:52:07 +0100
committer | Bob Van Landuyt <bob@vanlanduyt.co> | 2017-12-08 09:11:39 +0100
commit | f1ae1e39ce6b7578c5697c977bc3b52b119301ab (patch)
tree | 1d01033287e4e15e505c7b8b3f69ced4e6cf21c8 /spec/controllers
parent | 12d33b883adda7093f0f4b838532871036af3925 (diff)
download | gitlab-ce-f1ae1e39ce6b7578c5697c977bc3b52b119301ab.tar.gz
Move the circuitbreaker check out into a separate process (branch: bvl-circuitbreaker-process)
Moving the check out of the general request cycle makes sure regular
requests are not slowed down by it.
To keep the process performing these checks small, the check is still
performed inside Unicorn, but it is triggered by a separate process
running on the same server.
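
Not part of this change, but as a rough sketch of what such a separate check
process could look like. The local port, the `/-/storage_check` URL and the
fallback interval are assumptions for illustration; the endpoint itself is
the `HealthController#storage_check` action added in this commit:

```ruby
require 'json'
require 'net/http'

# Fallback until the endpoint reports the configured interval.
interval = 10

loop do
  # The check itself still runs inside Unicorn; this process only triggers it
  # over local HTTP, so regular user requests never wait on storage checks.
  response = Net::HTTP.post(URI('http://localhost:8080/-/storage_check'), '')
  body = JSON.parse(response.body)

  # The endpoint reports the configured `circuitbreaker_check_interval`.
  interval = body.fetch('check_interval', interval)

  sleep interval
end
```

Reading `check_interval` back from the response keeps the checking process
free of any application configuration of its own.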
Because the checks are now done outside of normal requests, we can use a
simpler failure strategy:
The check is now performed in the background every
`circuitbreaker_check_interval` seconds. Failures are logged in Redis and
are reset when a check succeeds. Per check we will try
`circuitbreaker_access_retries` times within
`circuitbreaker_storage_timeout` seconds.
When the number of failures exceeds
`circuitbreaker_failure_count_threshold`, we will block access to the
storage.
After `failure_reset_time` seconds without any checks, the stored failures
are cleared. This can happen when the process that performs the checks
is not running.
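
For illustration only, a sketch of this bookkeeping with one Redis counter
per storage. The `StorageFailureTracker` name and the key layout are made up
for this example; the change itself tracks failures through
`Gitlab::Git::Storage::FailureInfo` (see the diff below):

```ruby
require 'redis'

# Hypothetical helper, not the class used by this commit.
class StorageFailureTracker
  def initialize(storage, redis: Redis.new)
    @storage = storage
    @redis = redis
    @key = "storage_check:failures:#{storage}"
  end

  # Record one failed check. Expiring the key implements `failure_reset_time`:
  # when no process performs checks any more, the failures clear themselves.
  def record_failure(failure_reset_time)
    @redis.multi do |transaction|
      transaction.incr(@key)
      transaction.expire(@key, failure_reset_time)
    end
  end

  # A successful check resets the stored failures for this storage.
  def reset!
    @redis.del(@key)
  end

  # Access to the storage is blocked once the number of failures exceeds
  # `circuitbreaker_failure_count_threshold`.
  def circuit_broken?(failure_count_threshold)
    @redis.get(@key).to_i > failure_count_threshold
  end
end
```

Using a key expiry for `failure_reset_time` means stale failures disappear on
their own once the checking process stops.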
Diffstat (limited to 'spec/controllers')
-rw-r--r-- | spec/controllers/admin/health_check_controller_spec.rb | 4
-rw-r--r-- | spec/controllers/health_controller_spec.rb | 42
2 files changed, 44 insertions, 2 deletions
diff --git a/spec/controllers/admin/health_check_controller_spec.rb b/spec/controllers/admin/health_check_controller_spec.rb
index 0b8e0c8a065..d15ee0021d9 100644
--- a/spec/controllers/admin/health_check_controller_spec.rb
+++ b/spec/controllers/admin/health_check_controller_spec.rb
@@ -1,6 +1,6 @@
 require 'spec_helper'
 
-describe Admin::HealthCheckController, broken_storage: true do
+describe Admin::HealthCheckController do
   let(:admin) { create(:admin) }
 
   before do
@@ -17,7 +17,7 @@ describe Admin::HealthCheckController, broken_storage: true do
 
   describe 'POST reset_storage_health' do
     it 'resets all storage health information' do
-      expect(Gitlab::Git::Storage::CircuitBreaker).to receive(:reset_all!)
+      expect(Gitlab::Git::Storage::FailureInfo).to receive(:reset_all!)
 
       post :reset_storage_health
     end
diff --git a/spec/controllers/health_controller_spec.rb b/spec/controllers/health_controller_spec.rb
index 9e9cf4f2c1f..95946def5f9 100644
--- a/spec/controllers/health_controller_spec.rb
+++ b/spec/controllers/health_controller_spec.rb
@@ -14,6 +14,48 @@ describe HealthController do
     stub_env('IN_MEMORY_APPLICATION_SETTINGS', 'false')
   end
 
+  describe '#storage_check' do
+    before do
+      allow(Gitlab::RequestContext).to receive(:client_ip).and_return(whitelisted_ip)
+    end
+
+    subject { post :storage_check }
+
+    it 'checks all the configured storages' do
+      expect(Gitlab::Git::Storage::Checker).to receive(:check_all).and_call_original
+
+      subject
+    end
+
+    it 'returns the check interval' do
+      stub_env('IN_MEMORY_APPLICATION_SETTINGS', 'true')
+      stub_application_setting(circuitbreaker_check_interval: 10)
+
+      subject
+
+      expect(json_response['check_interval']).to eq(10)
+    end
+
+    context 'with failing storages', :broken_storage do
+      before do
+        stub_storage_settings(
+          broken: { path: 'tmp/tests/non-existent-repositories' }
+        )
+      end
+
+      it 'includes the failure information' do
+        subject
+
+        expected_results = [
+          { 'storage' => 'broken', 'success' => false },
+          { 'storage' => 'default', 'success' => true }
+        ]
+
+        expect(json_response['results']).to eq(expected_results)
+      end
+    end
+  end
+
   describe '#readiness' do
     shared_context 'endpoint responding with readiness data' do
       let(:request_params) { {} }