Gitlab's disk full again

Bryan Richter b at
Sun Aug 11 08:50:24 UTC 2019

Hi Artem,

I would say it's standard operations practice to keep N>1 backups of a system
as insurance against a corrupted backup. But maybe they could be stored on
another server or service?

Other suggestions:

GitLab stores both artifacts and caches for the CI pipelines. By default,
artifacts are stored on the same machine as the GitLab service itself,
creating a risk of disk contention. There is, however, an option to store
them in an object storage service such as S3.
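For Omnibus installs, this is a /etc/gitlab/gitlab.rb change, roughly as
below (a sketch from memory of the object-storage settings of that era;
the bucket name, region, and credentials are placeholders, and the exact
keys should be checked against the docs for the installed GitLab version):

```ruby
# /etc/gitlab/gitlab.rb -- sketch: move CI artifacts to S3-style storage.
# All values below are placeholders, not settings from our server.
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = "gitlab-artifacts"
gitlab_rails['artifacts_object_store_connection'] = {
  'provider'              => 'AWS',
  'region'                => 'eu-west-1',
  'aws_access_key_id'     => 'REPLACE_ME',
  'aws_secret_access_key' => 'REPLACE_ME'
}
```

After editing, `gitlab-ctl reconfigure` would need to be run for the change
to take effect.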

The same goes for caches, though I think those are stored on the CI runner
machine by default (is it separate from the GitLab machine?). Also, caches
are shared across many jobs while artifacts are unique to a job, so there
are far fewer caches than artifacts.

Still, it might be valuable to audit the use of both artifacts and caches.
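As a first pass at such an audit, something like the following would show
how much space each is taking (the paths are the Omnibus defaults and are
an assumption about this server's layout):

```shell
# Omnibus-default locations for CI artifacts and the GitLab-managed cache;
# these paths are assumptions and may differ on the actual server.
ARTIFACTS_DIR=/var/opt/gitlab/gitlab-rails/shared/artifacts
CACHE_DIR=/var/opt/gitlab/gitlab-rails/shared/cache

# Largest consumers first; 2>/dev/null hides errors if a path is absent.
du -sh "$ARTIFACTS_DIR" "$CACHE_DIR" 2>/dev/null | sort -rh
```

Running `du -sh` one level deeper (per-project subdirectories) would show
which projects are responsible for most of the usage.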

On Sun, 11 Aug 2019, 1.23 Artem Pelenitsyn, <a.pelenitsyn at> wrote:

> Hello,
> Is there a reason to keep more than one backup of GitLab ever?
> --
> Best, Artem
> On Sat, Aug 10, 2019, 4:49 AM Ömer Sinan Ağacan <omeragacan at>
> wrote:
>> Hi,
>> Just yesterday Gitlab was giving 500 because the disk was full. Ben
>> deleted some
>> files, but in less than 24h it's full again. This started happening
>> regularly, I
>> wonder if we could do something about this.
>> The reason this time seems to be that GitLab started generating 22G-large
>> backups daily since the 7th. I'm not sure how important those backups are,
>> so I'm not deleting them.
>> There's also a large docker-registry directory (101G).
>> I think it might be good to set up some kind of downtime monitoring, or
>> maybe something on the GitLab server that sends an email when the disk is
>> nearly full. It could email the people who have access to the server.
>> It'd also be good to come up with an action plan for when this happens. I
>> have access to the server, but I have no idea which files are important.
>> Documenting the GitLab setup (and the server details) in more detail might
>> be helpful.
>> Does anyone have any other ideas to keep the server running?
>> Ömer
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at
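A minimal sketch of the disk alert Ömer suggests, suitable for a cron
entry; the threshold, mount point, and recipient address are placeholders,
and `mail` assumes a working MTA on the host:

```shell
#!/bin/sh
# Cron-able disk check: mail the admins when the filesystem holding
# GitLab's data crosses a usage threshold. All names are placeholders.
THRESHOLD=90                   # percent full that triggers an alert
MOUNT=/                        # filesystem GitLab's data lives on
RECIPIENTS=admins@example.org  # the people with access to the server

# Use% for $MOUNT as a bare number, e.g. "87" (POSIX df -P output).
pct=$(df -P "$MOUNT" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

# Only try to send if an MTA front-end is actually installed.
if [ "$pct" -ge "$THRESHOLD" ] && command -v mail >/dev/null; then
    printf 'Disk usage on %s is at %s%% for %s\n' \
        "$(hostname)" "$pct" "$MOUNT" \
        | mail -s "[gitlab] disk nearly full" "$RECIPIENTS"
fi
```

Dropped into /etc/cron.hourly (or an equivalent systemd timer), this would
have flagged the backup growth days before the disk actually filled.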
