Sharded_eredis

A while ago, antirez wrote a great post about Redis presharding. I hadn’t read it until recently, but it strikes me as the ideal way to go about distributing your data. You should definitely read it before continuing with this post. First, let’s talk about why people bother with distributed systems at all.

Why do I (or you) want a distributed database?

Many people think it’s a no-brainer, but if you haven’t thought it through, I encourage you to think hard about the how and why. The most common reasons are:

  1. My throughput is too high
  2. I want to scale effortlessly (just hit a button)

My throughput is too high!

When you say this, do you mean that your throughput per key is high, or that your throughput per machine is high? In the former case, yes, you should probably get a distributed database and replicate your keys to increase read/write availability (at the cost of strong consistency).

Most of us are in the latter camp, though: our throughput exceeds our infrastructure. This can be solved very well with sharded, non-distributed databases!
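To make that concrete, here is a minimal sketch of client-side sharding with eredis. The module name, the instance list, and the one-connection-per-query approach are all illustrative, not how you’d run it in production:

    -module(simple_shard).
    -export([q/1]).

    %% Hypothetical fixed list of {Host, Port} pairs, one independent
    %% (non-clustered) Redis instance per entry.
    instances() ->
        [{"127.0.0.1", 6379},
         {"127.0.0.1", 6380},
         {"127.0.0.1", 6381}].

    %% Route a command to the instance that owns its key. Opening a
    %% connection per query keeps the sketch short; real code would hold
    %% a connection pool per instance instead.
    q([_Cmd, Key | _] = Command) ->
        Instances = instances(),
        Index = erlang:phash2(Key, length(Instances)) + 1,
        {Host, Port} = lists:nth(Index, Instances),
        {ok, Client} = eredis:start_link(Host, Port),
        Reply = eredis:q(Client, Command),
        eredis:stop(Client),
        Reply.

Each Redis instance knows nothing about the others; the routing lives entirely in the client.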

I want to scale effortlessly

This is a legitimate concern. No developer wants to deal with the headache of moving data around, ensuring that no data is lost, and making sure migrations run smoothly without corruption. So we do what comes to mind first: throw a distributed database at the problem. Distributed databases do in fact solve this problem, but perhaps at a higher cost than people realize.

The “issues” with distributed databases

These aren’t issues so much as concerns that the developer must address when migrating to a distributed data model. The developer must realize that transactions are gone, and so consistency is harder to enforce. Distributed databases that assign a master node to each key (RethinkDB, Couchbase) have slightly easier consistency problems but sacrifice some availability, especially in the presence of node failure.

To top it off, one of the worst problems that a maintainer of a distributed database needs to deal with is the actual scaling part. Each addition or removal of a node requires a rebalancing operation of some sort and sacrifices availability during the change.

Given that not all applications require strong consistency and transactions, many developers might be ok with this. However …

There is another way

Presharding. By allocating all the shards in advance, one never needs to worry about transferring data between nodes or rebalancing. By using built-in replication (handled well by most non-distributed databases), shards can be moved to larger machines or split off (if they started on the same machine) with zero downtime or degraded performance.
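A rough sketch of what this looks like in code (the shard count, module name, and addresses are made up): keys hash into a fixed number of shards, and a separate table maps each shard to whichever machine currently hosts it. Moving a shard means updating that table once its replica has caught up; the key-to-shard hash never changes.

    -module(preshard).
    -export([node_for_key/1]).

    %% Fixed shard count, chosen generously up front; it never changes.
    -define(NUM_SHARDS, 128).

    %% Key -> shard index, stable for the lifetime of the deployment.
    shard_for_key(Key) ->
        erlang:phash2(Key, ?NUM_SHARDS).

    %% Shard index -> the Redis instance currently hosting that shard.
    %% Initially every shard can live on one box (one redis process per
    %% shard); scaling out means repointing entries here, not rehashing.
    node_for_shard(Shard) when Shard < 64 -> {"10.0.0.1", 6379 + Shard};
    node_for_shard(Shard)                 -> {"10.0.0.2", 6379 + Shard}.

    node_for_key(Key) ->
        node_for_shard(shard_for_key(Key)).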

The sharded_eredis project puts this into practice using eredis process pools. Given a Redis command, it automatically finds the pool name associated with the correct shard and passes the command along to the correct node.
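In a nutshell, usage looks something like this (see the project’s README for the exact API): commands mirror eredis minus the explicit client argument, and the key is hashed behind the scenes to pick the right pool.

    %% Assumes the sharded_eredis application and its pools are already
    %% started; the exact startup call may differ from this sketch.
    {ok, <<"OK">>}  = sharded_eredis:q(["SET", <<"foo">>, <<"bar">>]),
    {ok, <<"bar">>} = sharded_eredis:q(["GET", <<"foo">>]).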

By tacking on administrative tools to spin up many Redis instances, set up replication, and handle failover, we can leverage Erlang’s code swapping to point existing db pools to the correct locations seamlessly.
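As a rough illustration of the idea (not necessarily how the tooling ends up working), if the shard-to-host table lives in its own module like the preshard sketch above, a migration or failover can be rolled out by hot-loading a recompiled version of that module on each node:

    -module(shard_admin).
    -export([reload_shard_map/0]).

    %% Hypothetical helper: after promoting a replica on the new host and
    %% recompiling the shard map with updated addresses, hot-load it so
    %% subsequent lookups route to the new machine with no restart.
    reload_shard_map() ->
        code:purge(preshard),
        {module, preshard} = code:load_file(preshard),
        ok.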
