Last week, an incident involving a staging server at GitLab resulted in the deletion of the production database, causing six hours of data loss and some server downtime.
The official blog post about the incident starts like this:
Yesterday we had a serious incident with one of our databases. We lost six hours of database data (issues, merge requests, users, comments, snippets, etc.) for GitLab.com. Git/wiki repositories and self-hosted installations were not affected. Losing production data is unacceptable and in a few days we’ll publish a post on why this happened and a list of measures we will implement to prevent it happening again.
As we all know, data loss is a major nightmare for any product out there, but it’s even worse if you’re a cloud code repository with a massive number of daily users like GitLab. Although these things were not supposed to happen in 2017, organizations are made of humans (at least for now…), and making mistakes is part of being human. But it’s the way the organization deals with the problem that makes the difference.