When you have a start-up with $32.9 million in funding and handle valuable customer data, it's probably not the best look to go offline because an employee accidentally deleted your database.
This is what happened to high-profile Silicon Valley start-up GitLab - a virtual workspace for programmers to merge individual projects.
The problems started when spammers hammered the database, making it unstable.
To fix the slowdown, a system administrator attempted to tidy up the backup database and restart the replication process. That would have been fine had the employee not run the command to delete the directory on the production database.
We accidentally deleted production data and might have to restore from backup. Google Doc with live notes https://t.co/EVRbHzYlk8 — GitLab.com Status (@gitlabstatus) February 1, 2017
"After a second or two he notices ... terminates the removal, but it's too late. Of around 300GB only about 4.5GB is left," the company wrote in a blog.
Following the incident, GitLab took the site down for emergency maintenance, keeping all of its customers informed on social media, a move that saw the company praised for its transparency.
@gitlabstatus Extremely impressed with the level of transparency - good luck getting it cleaned up. — Adam Caudill (@adamcaudill) February 1, 2017
Although the error was noticed quickly, the start-up was unable to fully restore all of the data.
"Out of five backup/replication techniques deployed, none are working reliably or set up in the first place. We ended up restoring a six hours old backup," the company wrote.
Thankfully, the database in question contained only comments and bug reports, not code, so no one's code was lost in the missing six-hour window.