Managing MongoDB backups

If you are looking for a backup solution for a small MongoDB environment and want full control over your backups, the Mongodump and Mongorestore tools are, so far, your best options.

With Mongodump you can manually back up individual databases, specific collections or the whole MongoDB server. If, on the other hand, you are trying to manage backups of large MongoDB databases (larger than 100 GB), Mongodump and Mongorestore can become a serious problem.
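
To make those three scopes concrete, here is a minimal sketch of driving mongodump from Python via subprocess; the host, port, database, collection and output folders are placeholder values, not our real setup.

    import subprocess

    def run_mongodump(extra_args, out_dir):
        # Invoke the mongodump command-line tool; raise if it exits with an error.
        cmd = ["mongodump", "--host", "localhost", "--port", "27017", "--out", out_dir]
        subprocess.run(cmd + extra_args, check=True)

    # Whole server: every database on the instance.
    run_mongodump([], "backups/full")

    # A single database.
    run_mongodump(["--db", "cms"], "backups/cms")

    # A single collection within a database.
    run_mongodump(["--db", "cms", "--collection", "articles"], "backups/cms_articles")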

When we began migrating from MSSQL Server to MongoDB at Grouptree, we looked for the best options for backing up our databases. There were a couple of options: copying the database files manually, using Mongodump/Mongorestore, or using MMS. Taking snapshots of the drives wasn't an option for us since we were running Windows Servers. I must admit we thought about running our MongoDB instances on UNIX/Linux servers many times, but we usually work with Windows servers, so we decided to stick with what we know best and where our expertise is.

We settled on MMS, and with only a couple of databases on our live servers at the time, everything was easy to manage. Over the past few months we have migrated our CMS from MSSQL to MongoDB and started using MongoDB on lots of projects, and that is where things started to get more challenging.

MMS was handling everything perfectly, but it was no longer cost effective. We had our Staging and Live environments backing up to MMS, but we were not backing up some of the databases that hold a large amount of test data.

So we are currently writing our own backup environment, with which we can back up any database on our servers, both manually and on a schedule. We use Mongodump to back up all the files to our Amazon servers and Mongorestore to restore them. That works perfectly for small databases (less than 10-20 GB).
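
The actual tool is more involved, but the core idea looks roughly like the sketch below, assuming the dumps are pushed to Amazon S3 with boto3; the bucket name, database name and paths are placeholders rather than our real configuration.

    import datetime
    import os
    import shutil
    import subprocess

    import boto3

    def backup_database(db_name, out_root="backups", bucket="example-mongo-backups"):
        stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
        dump_dir = os.path.join(out_root, f"{db_name}-{stamp}")

        # 1. Dump the database to a local folder with mongodump.
        subprocess.run(["mongodump", "--db", db_name, "--out", dump_dir], check=True)

        # 2. Compress the dump folder into a single zip archive.
        archive = shutil.make_archive(dump_dir, "zip", dump_dir)

        # 3. Upload the archive to S3 so the backup lives off the server.
        boto3.client("s3").upload_file(archive, bucket, f"{db_name}/{stamp}.zip")

    backup_database("cms")

Restoring is the reverse: pull the archive down, extract it and point Mongorestore at the extracted folder.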

As I mentioned in the introduction, Mongodump has a serious performance problem with large databases. As an example, we tried to back up a database of more than 250 GB, and it took more than 6 hours to dump all the files. Granted, the database holds lots of images in GridFS (a topic for a future blog post) and the test machine wasn't powerful, but the result still wasn't good enough for us. In some other tests with the same database, Mongodump didn't even create all the collections we wanted and simply quit without reporting any errors.
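
One thing that would have caught those silent failures earlier is a post-dump sanity check. The sketch below is hypothetical: it uses pymongo to compare the collections in the source database with the .bson files mongodump wrote, and the connection string, database name and dump path are placeholders.

    import os

    from pymongo import MongoClient

    def verify_dump(db_name, dump_dir, uri="mongodb://localhost:27017"):
        # Collections that exist in the live database.
        expected = set(MongoClient(uri)[db_name].list_collection_names())
        # Collections that actually made it into the dump folder.
        dumped = {
            name[:-len(".bson")]
            for name in os.listdir(os.path.join(dump_dir, db_name))
            if name.endswith(".bson")
        }
        missing = expected - dumped
        if missing:
            raise RuntimeError("Dump is missing collections: %s" % sorted(missing))

    verify_dump("cms", os.path.join("backups", "cms-20150101-000000"))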

As I mentioned in the intro of this post, if you are trying to build a healthy backup environment, you should either keep your databases small or forget that Mongodump/Mongorestore exist and go for other options such as MMS, snapshots, or point-in-time backups using MongoDB's oplog (which I will discuss in my next blog post).
