eatonphil 10 days ago

Link to the paper [0]. Interesting stuff.

> This paper presents Amazon MemoryDB, a fast and durable in-memory storage cloud-based service. A core design behind MemoryDB is to decouple durability from the in-memory execution engine by leveraging an internal AWS transaction log service. In doing so, MemoryDB is able to separate consistency and durability concerns away from the engine allowing to independently scale performance and availability. To achieve that, a key challenge was ensuring strong consistency across all failure modes while maintaining the performance and full compatibility with Redis. MemoryDB solves this by intercepting the Redis replication stream, redirecting it to the transaction log, and converting it into synchronous replication. MemoryDB built a leadership mechanism atop the transaction log which enforces strong consistency. MemoryDB unlocks new capabilities for customers that do not want to trade consistency or performance while using Redis API, one of the most popular data stores of the past decade.

[0] https://assets.amazon.science/e0/1b/ba6c28034babbc1b18f54aa8...

  • vlovich123 10 days ago

    I don’t get this part:

    > MemoryDB solves this by intercepting the Redis replication stream, redirecting it to the transaction log, and converting it into synchronous replication

    Replication is eventually consistent in Redis - is it saying that it’s intercepting the stream at the source and blocking the write from completing until replication completes? Because intercepting it at the point it’s going out (which is what the word interception implies to me) wouldn’t get you strong consistency, I would think.

    • alucard055 10 days ago

      From section 3.2 in the paper:

      "Due to our choice of using passive replication, mutations are executed on a primary node before being committed into the trans- action log. If a commit fails, for example due to network isolation, the change must not be acknowledged and must not become visible. Other database engines use isolation mechanisms like Multi-Version Concurrency Control (MVCC) to achieve this, but Redis data struc- tures do not support this functionality, and it cannot be readily decoupled from the database engine itself. Instead, MemoryDB adds a layer of client blocking. After a client sends a mutation, the reply from the mutation operation is stored in a tracker until the transaction log acknowledges persistence and only then sent to the client. Meanwhile, the Redis workloop can process other operations. Non-mutating operations can be executed immediately but must consult the tracker to determine if their results must also be delayed until a particular log write completes. Hazards are detected at the key level. If the value or data-structure in a key has been modified by an operation which is not yet persisted, the responses to read operations on that key are delayed until all data in that response is persisted. Replica nodes do not require blocking as mutations are only visible once committed to three AZs"

      • panarky 10 days ago

        > the reply from the mutation operation is stored in a tracker until the transaction log acknowledges persistence

        What are the consistency and durability properties of the tracker datastore?

        Are replies from tracker mutations stored in a tracker-tracker until the tracker-transaction-log acknowledges persistence?

        Is it trackers all the way down?

        • Onavo 9 days ago

          Strong consistency at the expense of availability.

        • Bjartr 9 days ago

          Maybe it could be done round-robin?

      • swasheck 10 days ago

        this sounds mostly like how MS implemented synchronous availability groups in sql server

      • convolvatron 10 days ago

        this is kind of a strange design. in order to support this we build an external model of the database for all potentially conflicting state. wouldn't it be easier to make a redis lookalike that supported high isolation levels?

        • jitl 10 days ago

          The external model is much simpler to do for Redis than a database with complex cross-key queries like an SQL database. Redis has scans but no queries or “real” transactions with rollback. To me it sounds like more work to implement MemoryDB and then additionally re-implement the Redis interior than to implement MemoryDB and use the off-the-shelf Redis.

          I also think that the decoupled design is kind of elegant, it allows the logical implementation to be developed independently of the durability bits. It’s open-core but someone else is building the core.

          • zadokshi 9 days ago

            Yes, it does seem like all that effort to wrap Redis is wasted, and that it’d be easier to just create their own. It isn’t clear to me why this approach is better than building from scratch. Does anyone know why they went the route of wrapping something around Redis rather than replacing it with something that has a Redis-compatible API?

          • ec109685 9 days ago

            They also likely have technology / libraries / expertise that make building the tracker straightforward.

        • jchrisa 9 days ago

          It's smart because you have deep control of bugwards compatibility, and can swap parts of the stack later.

flakiness 10 days ago

FYI MemoryDB is a Redis-compatible managed DB. (For anyone who's not familiar with AWS offerings.)

https://aws.amazon.com/memorydb/

  • husam212 10 days ago

    It's compatible with Redis cluster mode only, which is not always supported.

  • posix86 10 days ago

    it says so in the link posted!! read before writing

    • flakiness 10 days ago

      There are people who check comments before the linked page ;-)

vasco 10 days ago

We've used it at work for a specific use case where paying for a more expensive, no-frills Redis that survives downtime made sense. It's quite expensive but super easy to use.

  • refset 10 days ago

    Considering the functionality on offer, the service does look particularly easy to work with, even though there's still a notion of a stateful cluster with per-node sizing & pricing. Perhaps the MemoryDB team will offer something more 'serverless' eventually.

supportengineer 10 days ago

If we had truly orthogonal systems, you could set up a RAM disk and run SQLite with the backing store file on the RAM disk, without any custom software needed at all.
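
Something like this, as a minimal sketch (assuming a tmpfs mount already exists at /mnt/ramdisk, e.g. created with mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk; the path and table are made up for illustration):

  # SQLite with its database file on a RAM disk: fast, but nothing here is
  # durable; the data disappears on reboot unless something copies it elsewhere.
  import sqlite3

  conn = sqlite3.connect("/mnt/ramdisk/cache.db")
  conn.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")
  conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("user:1", "alice"))
  conn.commit()
  print(conn.execute("SELECT value FROM kv WHERE key = ?", ("user:1",)).fetchone())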

  • zbentley 9 days ago

    Are you saying durability is "orthogonal" in that it should be managed outside of the database (in your example, perhaps by copying sqlite files to durable storage)?

    If not, then your proposed design seems pretty different from MemoryDB; yours doesn't persist data in the event of machine loss or reboot.

  • rmbyrro 10 days ago

    It depends on a few more factors, no?

    For instance: it's hard to scale concurrent writes with SQLite. I read they have a paid enterprise version with higher write concurrency support, but I have no idea how it works or how it'd compare with Redis or MemoryDB's write concurrency.

zxt_tzx 9 days ago

Interesting stuff. We use MemoryDB as the underlying service for BullMQ, a Node.js queue that’s built on top of Redis. We trade off a bit of speed and cost (MemoryDB costs more than ElastiCache) for persistence and BullMQ’s many features, which is a good tradeoff for most apps.

spxneo 10 days ago

> 11 9s of durability

so that's 99.(eleven 9s)?

Where would this sort of database be used? Streaming financial instrument ticks? Do you point Kinesis at it and it's able to write/read super quickly?

  • coxley 10 days ago

    Minor nit: it'd be 99.(nine 9s), since two of the eleven nines sit before the decimal point.

    • pwarner 10 days ago

      That just says they are using S3 to persist to disk.

dangoodmanUT 10 days ago

This feels too high level. They just sort of explain that they're durable via a log (e.g. Redpanda) and store things in memory.

It'd be more interesting if they talked about what log they used (Kinesis? Something on another DB?), what they used for a locking service, how they handled failure cases, etc.

  • dangoodmanUT 10 days ago

    It also seems like a strange choice to use gossip when they have a shared log

flanked-evergl 10 days ago

MemoryDB launched in 2021.

  • richwater 10 days ago

    Yes, that's why the abstract says that.

klaussilveira 10 days ago

The fact that there's no source anywhere shows how important GPL is. What use is this to the rest of the community?

  • throwaway918274 10 days ago

    Even if it were GPL you wouldn't be entitled to the source, since it lives behind a managed service.

  • abigail95 10 days ago

    I don't care about the source code of anything Amazon runs. The source code doesn't get me 11 9's. That's what people are paying for.

    Why is the community better off without this option?

    • shrubble 10 days ago

      So basically, if you can't run it anywhere other than AWS, it's no more newsworthy than talking about a COBOL compiler for an IBM mainframe, which is also equally capable of giving you 11 9's.

      BTW, AWS can't possibly have been delivering on 11 9's given their previous outages; 9 9's is about 0.03s of downtime over a year.
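
      A quick back-of-the-envelope, treating the nines as an availability figure (illustrative Python, numbers rounded):

        # Allowed downtime per year for N nines of availability.
        SECONDS_PER_YEAR = 365 * 24 * 3600
        for nines in (4, 9, 11):
            downtime = SECONDS_PER_YEAR * 10 ** -nines
            print(f"{nines} nines -> {downtime:.4g} seconds of downtime per year")
        # 4 nines  -> 3154 s (about 53 minutes)
        # 9 nines  -> 0.03154 s
        # 11 nines -> 0.0003154 s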

      • markfive 10 days ago

        Pedantic clarification incoming: 11 9's is the Durability guarantee for S3. It doesn't refer to Availability.

        • anonzzzies 10 days ago

          But for this MemoryDB it seems it is? Either way, it's indeed not a figure for all of AWS.

          • 0x457 10 days ago

            No, it specifically says 11 9s for durability in the first sentence.

    • refset 10 days ago

      Anything that helps turn RESP into more of a commodity protocol seems like a good thing in the long run. It seems to be simple enough that workloads can legitimately migrate around and users can vote with their wallets for whoever operates the most secure/available/portable (OSS) platform. By contrast, SQL tech is in a far more precarious state.

    • didip 10 days ago

      In this case, that's right. I don't care about the source since the client side is widely documented, and as for the server side, there's still the Redis source code.

    • nitwit005 10 days ago

      > The source code doesn't get me 11 9's

      You mean four nines:

      > AWS will use commercially reasonable efforts to make Amazon EC2 available for each AWS region with a Monthly Uptime Percentage of at least 99.99%, in each case during any monthly billing cycle

      https://aws.amazon.com/compute/sla/

  • refset 10 days ago

    If nothing else, MemoryDB demonstrates that Valkey could be extended with much richer durability/HA guarantees.

  • otterley 10 days ago

    MemoryDB isn't software that's being distributed to customers. It's a service.

  • rmbyrro 10 days ago

    I don't agree, but I also don't think this should be downvoted. It's a valid point of view and can spark a worthwhile discussion that enlightens people who might have a somewhat limited view of open source vs. cloud-based offerings.

  • borplk 10 days ago

    Companies can have private commercial things, get over it.

random3 10 days ago

It's a bit disingenuous to slap 11 9s into the first sentence of something called MemoryDB and only attach "durability" to it afterwards, when that durability is really just inherited from the otherwise boring durability of S3.

That said, it's not bad. I'd keep in mind that the paper is one thing and putting your money where your mouth is means having an SLA for latency. So far, Google's Bigtable is the only service with a read-latency SLA:

- 99.99% availability

- 3ms p50 / 6ms p99 latency (read-only!)

  • deanCommie 9 days ago

    SLAs with public cloud providers don't mean what people think they mean (not saying you do, but I bet a lot of other readers do).

    SLAs just mean "if you can prove that we didn't meet our SLA, we'll give you a refund, and by refund we mean some % of your bill for some duration".

    It's not nothing - it's obviously $$s, and so teams get measured and have goals about improving their availability and latency.

    But most customers don't seek out those refunds, so there's no real pressure connecting the SLA to their true performance (which is often much BETTER than the SLAs, but not because of the SLAs).

    • ak217 9 days ago

      > there is no real pressure connected between the SLA and their true performance

      I'm not sure I understand this. Regardless of the refund, if a provider cuts enough corners with SLAs, won't the customers eventually raise a stink about it and make use of the (thankfully robust) competition? Plus there's the support overhead of tracking the performance and issuing the refund, which might exceed the cost of the refund itself.

      I think to most people an SLA is an indicator that a company is serious about this aspect of the product's performance. Serious enough to write it into long term contracts and align its incentives to fulfill it.