
November 7, 2011


My Experiences with MongoDB in production over the last year

by e1ven


First off, a disclaimer. I’m writing this article in my off hours, on my personal machine.
I don’t speak for my employer, although my use on the job as well as several projects at home do flavor my impressions.

I’ve been using MongoDB in production for various projects for the last year and a half, and found that there are some things it does incredibly well, and other areas where it tends to fall flat on its face.

I’ve seen a lot of people writing up their Mongo Experiences lately, and I’d had this document in the Drafts window for a while, so I thought I’d share my own thoughts on the DB.

Mongo is Fast

First off, Mongo is Fast. Like, seriously, screamingly, How-is-that-possible fast.
When you have a workflow that maps well to what Mongo prefers, Mongo is substantially faster than Couch, Riak, or Postgres.

The key, though, is ensuring your workflow maps to what MongoDB is good at.

Lots of people have written about Mongo’s Write Lock, but I think one point has become somewhat lost in the noise– When MongoDB is writing, the entire DB is locked. No writes can be performed, and no reads can be performed either.
This is normally fine, since writes are very, very fast. In fact, they’ve gotten much faster in Mongo 2.0, since the DB now yields the lock when it sends the command to write to disk, rather than waiting for the disk to return.

The fact remains, however, that when you have a lot of writes, your DB will halt, and reads will block.
Again, to be very, very clear- when MongoDB is writing something, ALL reads to ALL collections on ALL slaves will be blocked.

In the future, 10gen hopes to make this lock collection-level, which is a start, but other databases such as Couch have versioned objects, which means that even individual objects aren’t locked!
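
For what it’s worth, you can watch this lock from the driver yourself. Here’s a minimal sketch using pymongo against a local mongod; the exact globalLock fields have moved around between Mongo versions, so treat the field names as approximate:

    # Peek at the global lock counters reported by serverStatus.
    # Host/port are hypothetical; field layout varies by MongoDB version.
    from pymongo import MongoClient

    client = MongoClient("localhost", 27017)
    status = client.admin.command("serverStatus")

    global_lock = status.get("globalLock", {})
    print("Queued readers/writers:", global_lock.get("currentQueue"))
    print("Time spent holding the lock:", global_lock.get("lockTime"))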

Replica Sets are straightforward

One way to try to solve this is to use Read Slaves.
Replication in Mongo is dead-simple to set up; a competent admin can have a working 3-node replica set in an hour or two.
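
To give a sense of how little is involved, here’s a rough sketch of initiating a three-node set from pymongo (you can just as easily run rs.initiate() from the mongo shell; the hostnames and set name here are made up):

    # Initiate a 3-member replica set via the replSetInitiate command.
    # Each mongod must already be running with --replSet rs0.
    # Depending on your driver version, you may need directConnection=True
    # to talk to a member that hasn't been initiated yet.
    from pymongo import MongoClient

    client = MongoClient("mongo1.example.com", 27017)
    config = {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "mongo1.example.com:27017"},
            {"_id": 1, "host": "mongo2.example.com:27017"},
            {"_id": 2, "host": "mongo3.example.com:27017"},
        ],
    }
    client.admin.command("replSetInitiate", config)
    # Confirm the members come up; electing a primary can take a few seconds.
    print(client.admin.command("replSetGetStatus"))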

The intended deployment for MongoDB is that in production, you will always run multiple servers, and thus, always have multiple copies of your data.
This is why, until 1.8, any single server might become corrupt if shut down uncleanly.
This wasn’t a problem– if you were only running on one server, it was argued, you were doing it wrong!

Essentially Mongo was designed to have Cluster-level safety, rather than server-level safety.

If a machine goes down, no worries, one of the other cluster members will take over for it.

Having multiple read slaves, and using SlaveOK queries means that you can redirect your queries onto these read-only slave replicas, even while the master is blocked by a write.
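
In driver terms, that usually means setting a read preference (older drivers expose it as a slave_okay flag instead). A minimal pymongo sketch, with hypothetical hostnames and collection names:

    # Route reads to secondaries when slightly stale data is acceptable.
    from pymongo import MongoClient, ReadPreference

    client = MongoClient(
        "mongodb://mongo1.example.com,mongo2.example.com,mongo3.example.com/?replicaSet=rs0"
    )
    emails = client.maildb.get_collection(
        "emails", read_preference=ReadPreference.SECONDARY_PREFERRED
    )
    for msg in emails.find({"headers.X-Mailer": "Thunderbird"}).limit(10):
        print(msg["_id"])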

This generally works well enough, but there are some caveats you should keep in mind:

1) Not all drivers route SlaveOK automatically.
Because Mongo puts a certain degree of logic into the client drivers, you need to be careful to ensure that the driver for your language-of-choice supports the mongo features you want to use.
In some cases you might find that while the core Mongo server supports a feature, such as SlaveOK queries, your particular language driver might not. YMMV.

2) If replication backs up, SlaveOK queries might have bad/old data.
This ought to be obvious, but it’s important to keep in mind. If you store Session data in Mongo, make sure you don’t use SlaveOK queries to retrieve it, or logins will mysteriously fail when replication falls behind (see the lag-check sketch below).

3) Replicas get blocked on writes also!
This was not immediately obvious to me, but it’s important to note.
Writes to MongoDB block reads. This is due to the aforementioned Global Lock.
Less intuitive is that reads to the slaves are also blocked during writes.
This is because during each write, Mongo will replicate the write to each slave.
While this write is being applied to the slave, as part of normal replication, the slave will be blocked.

There aren’t any good workarounds here. 10Gen is working to make the Global Lock suck less, but for now, it’s pain on a stick.
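
On the replication-lag point (caveat 2 above), here’s the kind of check I mean. A sketch using replSetGetStatus, where the member field names are as I understand them for this era of Mongo and may differ between versions:

    # Estimate how far each secondary is behind the primary, to decide
    # whether SlaveOK/secondary reads are safe right now.
    from pymongo import MongoClient

    client = MongoClient("mongo1.example.com", 27017)  # hypothetical host
    status = client.admin.command("replSetGetStatus")

    primary = next((m for m in status["members"] if m["stateStr"] == "PRIMARY"), None)
    if primary:
        for member in status["members"]:
            if member["stateStr"] == "SECONDARY":
                lag = (primary["optimeDate"] - member["optimeDate"]).total_seconds()
                print(member["name"], "is roughly", lag, "seconds behind")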

Sharding is expensive

If you ask 10Gen about these problems, they’ll patiently explain that they don’t intend for ReplicaSets to be a speed boost; they’re intended to provide data safety.

At the end of the day, if you want to get past the blocking writes, you need to shard your data. Note, however, that this is not a panacea– If you have 12 Shards, and you send in a blocking write, 1/12th of your DB will still be blocked!

So here’s my rant about Mongo Sharding. I’m sure if 10Gen reads this, they will say it functions as designed, and I recognize that the way they did it is a valid tradeoff, I just don’t personally agree. Take that for what it is.

Sharding properly requires too many servers.

If you have a database that is getting to be more write-heavy than Mongo can easily keep up with, and you want to divide it up into Shards, you need a lot of machines. Probably 3X as many machines as you think you need.

The reason for this is that each shard needs its own replica set; they don’t share, and you can’t re-use servers between shards.

That means if you want to divide your DB into 10 partitions, you will need at least 30 servers.

  • Replicaset1
    • mongo1
    • mongo2
    • mongo3
  • Replicaset2
    • mongo1
    • mongo2
    • mongo3
  • Replicaset3
    • mongo1
    • mongo2
    • mongo3
  • …
  • Replicaset10
    • mongo1
    • mongo2
    • mongo3

You’re also supposed to have 3 config servers for the sharded cluster, although these are lightweight enough that you can probably run them on shared hardware.

None of this is unbearable, but it’s something to keep in mind in your design considerations.
If you’re writing a lot, even VERY SMALL writes, you’re going to need a lot of servers.
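
For reference, once the machines, mongos routers, and config servers are up, the sharding setup itself is just a couple of admin commands. A rough sketch, with a hypothetical database, collection, and shard key:

    # Enable sharding for a database and shard one collection on a key.
    # Assumes you're connected to a mongos router, not a mongod directly.
    from pymongo import MongoClient

    mongos = MongoClient("mongos1.example.com", 27017)
    mongos.admin.command("enableSharding", "maildb")
    mongos.admin.command("shardCollection", "maildb.emails", key={"received_at": 1})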

To be fair, it’s also possible to try to offset this by running multiple Mongo Instances on the same HW.
Beyond adding mental and config overhead (which is bearable), since you’re generally sharding in situations where you’re write-limited, this won’t help as much as you might hope.
Essentially, you’re sharing I/O, including replication to all the different servers, across fewer spindles.

What I’d really like to see is a more RAID-6-style deployment, wherein each node added to a MongoDB cluster held 1/nth of the parity data, as well as its own data.
This would let you distribute requests to each node in the deployment, and still survive if a server fails.

This is more conceptually similar to how Riak handles adding nodes. Each node you add to a Riak cluster adds both redundancy AND speed.

Development is easy

Finally, Developing against MongoDB is about as straightforward as I could possibly ask for.
Getting it running on my Macbook was trivial, and development is fast, with little learning curve.

In Mongo, I can insert an arbitrary JSON document, such as inserting all email received from the outside world. After this, I can query on any element of the JSON.

This makes it VERY simple to pull all the emails sent from Thunderbird, or any message received in a date range.
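
To make that concrete, here’s roughly what it looks like from pymongo; the collection and field names are my own, not anything Mongo prescribes (and older drivers spell insert_one as insert):

    # Store an arbitrary email-ish document, then query on any field,
    # including nested headers and a date range.
    from datetime import datetime
    from pymongo import MongoClient

    emails = MongoClient("localhost", 27017).maildb.emails

    emails.insert_one({
        "from": "alice@example.com",
        "received_at": datetime(2011, 11, 1, 9, 30),
        "headers": {"X-Mailer": "Thunderbird", "X-Spam-Score": "0.1"},
        "body": "Hi there...",
    })

    # Everything sent from Thunderbird:
    thunderbird = emails.find({"headers.X-Mailer": "Thunderbird"})

    # Everything received in November:
    november = emails.find({
        "received_at": {"$gte": datetime(2011, 11, 1), "$lt": datetime(2011, 12, 1)}
    })
    for msg in november:
        print(msg["from"], msg["received_at"])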

I could do these same things with SQL, but I’d generally need to fit everything into a defined schema; For things like Email, or HTML pages, which might have any number of arbitrary headers, a schema-less design is a godsend.

Another thing I really enjoyed is that I can query in familiar styles. For example, to retrieve all messages from a certain sender, I can use the built-in JSON query format, which, if you squint hard enough, isn’t THAT different from SQL.
Contrast this with my impression of Couch/Riak, where I believe I’d need to use MapReduce for everything. (This was my impression from going through basic tutorials and building mini toy apps; I accept I could be way off.)

I like MapReduce for a certain class of problem, but sometimes it’s nice to be able to reach for a simple query, and have it just work. According to 10Gen, these queries are far and away the most popular way of requesting data from MongoDB, and they are going to be adding new forms of queries (such as Group By) in upcoming versions.

Support is available

I have to say that 10Gen has always been very helpful in providing support.
In addition, their new MMS tool is very helpful for seeing where things are having trouble.

Whether through paid (Tickets) or free (IRC/Mailing List) means, they’ve almost always come through with solid answers to almost any problem that comes up.
I have a lot of respect for the company for bringing in senior level people on a regular basis to help answer problems, rather than throwing us to people working off a script.

The only negative I have to say on the support is that too often, the answer is “It’s fixed in the next version”.

This seems to be the default answer to any problem that comes up:

  • “MongoS instances are not routing to all mongoD servers”
    Fixed in the next version
  • “Our MongoD server failed with this error overnight”
    Fixed in the next version
  • “Our replica set isn’t sure who the master should be”
    Fixed in the next version

Don’t get me wrong, it’s great that things are being fixed!
Just know going in that if you’re using MongoDB as your primary DB, you’ll be on a constant upgrade treadmill.

10Gen is still deciding how much to backport, and when, but generally it seems like problems with known workarounds only get fixed in trunk, and problems without usable workarounds will get a backport one version back. Don’t expect to use the same version for a year; if you want to avoid major bugs, it’s just not going to happen.

Summary

In summary, I like MongoDB quite a bit, but like any software, it’s important to know its warts.
In a lot of ways, Mongo reminds me of MySQL on MyISAM a decade ago:
it’s fast, has easy replication, and it’s used by everyone, so you can get help when you need it.
But if you’re not careful, it will blow up in unexpected ways.

If you understand the tradeoffs going in, and you have a relatively read-heavy DB with relatively infrequent writes, MongoDB can be a godsend. It’s dramatically faster than the alternatives, and we’ve never had a problem resulting in data loss. But like any package, you do need to design with its limitations in mind.

3 Comments
  1. Chandra
    Nov 20 2011

    Hello, thoroughly enjoyed reading through your article. Very informative and to the point. You haven’t mentioned what version you are on and what configuration you’re using. What’s the volume, and is it write-intensive or read-intensive? I am working on Mongo as part of our ETL process for an OLTP solution and am very much interested in these stats.

    • e1ven
      Nov 20 2011

      Thanks! I’ve used Mongo 1.4, 1.6 (for a long time), 1.8.4, and now 2.0.1.

  2. usha
    Jan 29 2013

    Hi,
    Could anyone please help me out in finding a solution for scalability of MongoDB in the cloud? If the volume gets filled up with data and I don’t want to replace it, how can I get a new volume? Should I connect it to a new volume manually, or does it automatically get connected? Please reply as soon as possible.

