C++ vs Rust: an async Thread-per-Core story

Edit: due to a trademark issue, the project formerly known as Scipio was renamed to “Glommio”. The article was edited to match.

I have recently released a new Rust library aimed at easing the task of writing asynchronous Thread-per-Core applications: Glommio. I intend to use it to power the new generation of storage intensive systems I am writing for my current employer, Datadog.

But I am no novice to such systems: for the past 7+ years I have worked for ScyllaDB, a NoSQL database that managed to consistently post 5 to 10x performance improvements over its rivals, in large part by leveraging a Thread-per-Core architecture based on the Seastar asynchronous Thread-per-Core framework for C++.

In part because I had the luxury of starting later with many lessons learned, Glommio differs from Seastar in some aspects. To briefly touch on some: it is less opinionated about its applications, as I am trying to position it as a library rather than a framework. It allows applications to change their latency needs dynamically rather than statically at startup, etc. But by and large, Seastar and Glommio are very similar. (Well, another difference is that Seastar is a 7-year-old mature framework, and Glommio barely works enough for an initial release)

The biggest difference, really, is that Seastar is written in C++ and Glommio is written in Rust, so it is impossible to compare them without mostly talking about differences between the languages. Comparing languages sounds like a futile exercise in exchanging personal taste preferences, and I am sure many have written on Rust and C++ before. But I believe it is still interesting: while there are many articles out there talking about C++ vs Rust in general terms, it is oftentimes valuable to narrow the focus to how the comparison applies to a particular use case.

Writing an article is hard, in the sense that you never know who is reading and in which context. So I want to start by making two things very clear:

  1. I love Seastar and I poured years of work into it. For a variety of reasons that I won’t go into here, C++ was not an option for me. But if you like C++ and want to write a Thread-per-Core application, take a look at Seastar. I guarantee you will love it. If you get the idea that I am bashing it, you are mistaken.
  2. The craziest part of this experience for me is that when I started this, I didn’t know Rust at all. And to be honest, how much can you really know after a couple of months? So in a sense I am still learning. If you are an experienced Rustacean, keep this in mind. Some things that may look totally obvious to you were a total and complete surprise to me. But those are exactly the things I will mostly be talking about here.

From the get go, there were some things I absolutely loved about Rust in comparison with my C++ experience:

  • Rust is opinionated about style. I never liked the fact that people would waste time discussing code style, but at the same time I do agree that a consistent code base is easier to read. Rust comes with a tool (rustfmt) that formats the code for you, and while you don’t have to use it, by using it you are coding in the same style as the rest of the community, end of story.
  • Rust has a really nice package and module system. Cargo is just fantastic, and in my experience there is little I would change about it.

But let’s face it, that’s all quite superficial. To take this discussion a little deeper, I’d like to frame it with a specific example.

Let’s take a look at the following issue from the Scylla repository:

I will try to guide those uninitiated in Seastar through some of that complexity:

  • There is a file open for read, somewhere inside “entries_reader”.
  • This code is trying to consume this file.
  • “then” is how Seastar consumes the result of a future, but it doesn’t handle exceptions. “then_wrapped” gives you an opportunity to handle the exception.
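For readers more familiar with Rust than with Seastar, the distinction roughly maps onto seeing only the success value versus seeing the whole result. Here is a minimal, hypothetical sketch of the idea in Rust terms (the names `read_len`, `then_style`, and `then_wrapped_style` are mine, not Seastar’s):

```rust
use std::io;

// A fallible operation, standing in for a future that can fail.
fn read_len(path: &str) -> io::Result<u64> {
    std::fs::metadata(path).map(|m| m.len())
}

// "then"-style: the continuation only sees the success value;
// a failure flies right past it.
fn then_style(path: &str) -> io::Result<u64> {
    read_len(path).map(|len| len * 2)
}

// "then_wrapped"-style: the continuation sees the whole Result,
// so the failure path gets a chance to clean up (close files, log, etc.).
fn then_wrapped_style(path: &str) -> u64 {
    match read_len(path) {
        Ok(len) => len * 2,
        Err(_) => 0, // here is where resources would be released
    }
}

fn main() {
    // a path that should not exist
    assert!(then_style("/no/such/path/hopefully").is_err());
    assert_eq!(then_wrapped_style("/no/such/path/hopefully"), 0);
}
```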

Prior to this fix, the code would not handle exceptions correctly, and the file object would be destroyed without being closed. RAII is of limited use for things that need to be destroyed asynchronously, a limitation that Rust shares. This could have been just another bug, were it not for the fact that Seastar files have a read-ahead mechanism. Even after the file was closed, some of those requests were still in flight. As they returned, they would write to memory areas that were now deallocated.

Triggering this bug was extremely rare. As a matter of fact, as far as I know it only ever happened to one user, and to this day I still have no idea what was special about them. It had been lurking there pretty much forever. However, once the bug did find the right set of circumstances to manifest itself, it would happen every couple of hours.

Unfortunately I can’t go into the details of the situation, but this deceptively simple bug was one of the hardest, if not the hardest, I ever had to deal with. Nobody aside from this user could reproduce it, and even though it happened a lot, every time we tried to instrument it, it happened a little differently. It was essentially impossible to reason about any code path, because by definition a corruption leads the code to do things nobody expected. And after days of looking at coredumps we did find out that a piece of memory that was set to 0 in one place would magically flip to 1 one line below. But if you think that helps, ponder the follow-up question: ok, who flipped it?

Finding and fixing it was a week-long multi-person job, and I only claim partial if not minimal credit for it. Most of the credit goes to Avi. In case you don’t know Avi (creator of both ScyllaDB and the KVM hypervisor), he is such a brutally good engineer I sometimes even doubt he is really human. And he struggled a lot with this one.

But given the circumstances around it, a bug essentially impossible to reproduce and happening under production load, I can safely say this was the worst week of my professional life. And don’t get me wrong: those issues in Seastar are extremely rare. In fact, how rare they are given how easy it is to make these mistakes is a testament to how talented the team at Scylla is. But hey, mistakes happen, we all write bugs, and asynchronous code is just hard… right?

Coming from that tradition, my first weeks with Rust were incredibly frustrating, to the point of desperation. As a person who has wandered in performance-oriented systems my whole life, I am not about to start copying objects around needlessly. And Rust has all those nice borrow-checker rules to guarantee safety, so I want to use them. The minimal cost I have to pay to share data is a reference, so a reference is what I want to use!

In Rust, all of your references are scrutinized by the borrow checker according to a simple rule: at any given time there can be either a single mutable reference to a piece of data or any number of immutable references to it, but never both.
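A minimal sketch of that rule in ordinary synchronous code (everything here is illustrative, not from Glommio):

```rust
fn borrow_rules_demo() -> i32 {
    let mut x = 42;
    let r1 = &x;
    let r2 = &x; // any number of shared (immutable) borrows is fine
    assert_eq!(*r1 + *r2, 84);
    // A mutable borrow is only allowed once r1 and r2 are no longer used;
    // taking it while they were still live would be a compile error.
    let m = &mut x;
    *m += 1;
    x
}

fn main() {
    assert_eq!(borrow_rules_demo(), 43);
}
```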

The borrow checker works well for things that happen at predictable times. But once you start implementing things like an asynchronous call, or a queue of requests stored in an on-heap array that will be consumed by io_uring at a later time, things get more complex. Take the following code as a toy example:

[Image: a simple piece of Rust code, taking a reference asynchronously]

It creates an Executor, then drives its future to completion. The future is the return value of a closure that spawns a new asynchronous task that just fires a timer. The parameter to that timer is behind a reference b. This looks totally safe to me, because a, the owner of the data referenced by b, should only go out of scope when the program finishes, since the call to run() is blocking (let’s assume it can’t be static for whatever reason).

But Rust doesn’t like it:

[Image: the compiler error — the original owner doesn’t live long enough]

Although we can reason about the lifetimes of a and b, because there is an asynchronous task involved, and we have no control over when it executes, the borrow checker cannot guarantee the data will live long enough.
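The same tension can be reproduced without any async runtime at all: std::thread::spawn demands a 'static closure for exactly the same reason a spawned task does. A sketch, using std threads as a stand-in for the executor above:

```rust
use std::thread;

fn spawn_with_ownership() -> usize {
    let a = vec![1, 2, 3];
    let _b = &a;
    // thread::spawn (like spawning an async task) requires a 'static
    // closure: the compiler cannot prove that a borrow of `a` outlives
    // the new task. The line below would NOT compile:
    // thread::spawn(move || _b.len());
    let owned = a.clone(); // handing over ownership (or cloning) is one way out
    let handle = thread::spawn(move || owned.len());
    handle.join().unwrap()
}

fn main() {
    assert_eq!(spawn_with_ownership(), 3);
}
```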

In my mind, there had to be something I could do to just let the borrow checker know that what I was doing was okay. Discovering lifetime annotations, refreshing at first, was the worst thing that could have happened to me in those first weeks. All it did was fill my heart with false hope only to crush my dreams later. Ah! Once I finally master those annotations I will find a way to let the borrow checker know it’s all fine… (spoiler alert: I didn’t)

And that’s when I remembered the story above and realized: is my code really safe? What about the corner cases? What if the program ends sooner than I expected with a crash I didn’t predict, then there is some weird destructor-ordering rule in the standard I am not aware of, and that memory ends up writing garbage to an important file? What if there is a file in a subfunction I am calling that I don’t even remember about?

Being an engineer is (or at least should be) a humbling experience: all of the things I thought I knew for sure… all of the cases I thought I had handled… but I didn’t.

We can rewrite this code in a way that makes the (compile time) borrow checker not even play a part:

[Image: with the interior mutability pattern, it all works]

Rc is Rust’s parlance for a reference-counted pointer. The data is guaranteed to be alive long enough because it is now reference counted. However, reference-counted objects are immutable, so that nobody sees their value changing under their noses. To be able to change the contents (which we don’t in this particular example, but alas…), we use a RefCell. The RefCell doesn’t excuse you from the borrow checker, but it moves the check to runtime. In Rust, this pattern is called interior mutability.
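The core of the pattern, stripped of the executor and the timer, looks roughly like this (a hypothetical, synchronous sketch, not the article’s exact code):

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn interior_mutability_demo() -> u64 {
    // Rc gives shared ownership; RefCell moves the borrow check to runtime.
    let shared = Rc::new(RefCell::new(0u64));
    let clone = Rc::clone(&shared); // bumps the reference count, no deep copy
    *clone.borrow_mut() += 5;       // mutate through one handle...
    let v = *shared.borrow();       // ...observe through the other
    v
}

fn main() {
    assert_eq!(interior_mutability_demo(), 5);
}
```

In the async version, the clone is what gets moved into the spawned task, so the data is guaranteed to stay alive until the last handle is dropped.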

Even if this code were to fail at runtime because the borrow rules are violated, we are still at an advantage: we would be failing in a predictable, reproducible manner, instead of with a subtle data corruption.
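That predictability is easy to demonstrate: RefCell rejects a conflicting borrow at the exact line it happens. A small sketch using try_borrow_mut so we can observe the check without panicking:

```rust
use std::cell::RefCell;

fn runtime_borrow_check() -> (bool, bool) {
    let cell = RefCell::new(0i32);
    let guard = cell.borrow_mut();
    // A second mutable borrow is rejected here and now, deterministically,
    // on this line — not as corruption three hours later.
    let denied = cell.try_borrow_mut().is_err();
    drop(guard);
    // Once the first borrow is released, borrowing works again.
    let allowed = cell.try_borrow_mut().is_ok();
    (denied, allowed)
}

fn main() {
    assert_eq!(runtime_borrow_check(), (true, true));
}
```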

But here is where the asynchronous + Thread-per-Core model really shines: because this data is thread-local, and there is only one thread, absolutely nothing else is happening at the same time. It is just impossible. We only ever yield control at very well-defined points, where a future can defer if it is not ready (the calls to .await in this code).

So long as we never cross an .await (called a suspension point) with anything borrowed, this will always work. The #1 item on my Rust wish list is to move this to a compile-time check, where the compiler can guarantee that my RefCell borrows never cross a suspension point, and then the runtime cost can be avoided. Pretty please?

You can argue that I am now paying the cost of bumping reference counts (and of runtime-checking the RefCells, which we will hopefully be able to avoid in the future). In fact, I spent a long time trying to find ways around it exactly because I wanted to avoid this cost for maximum performance.

But my mind flipped a switch when I stopped viewing it as a cost and started viewing it as an investment: if I could go back in time, would I be willing to pay the performance cost of bumping reference counts to avoid all the pain of memory corruption issues? Hell yeah. Some say that taxes are the price we pay to live in a civilized society. Maybe you agree with that, maybe you don’t. But I will claim that reference counting is the price we pay to live in a civilized asynchronous world.

A bit more expensive than the reference count is the fact that, to allow for it, my data moved to the heap instead of living on the stack. But that’s only a concern in this toy example. If you have data that you are passing through asynchronous functions, I doubt it is living on the stack. Maybe it is living inside other structures, in which case Rust would likely make you pass the entire outer structure. Ugly. But worth it.

Is Rust 100% safe? Obviously not. This is real life, not just fantasy. We’re all caught in a landslide and there is no escape from reality: the compiler itself could be broken, and there are some things for which you really need to invoke Rust’s unsafe keyword. But in practice I found that this works well. Uses of unsafe are mostly encapsulated in small parts of the program, which makes it easier to reason about sources of problems. In the Rust ecosystem they are usually only present in core libraries and infrastructure (like Glommio), while applications themselves stay away from unsafe, preferring the abstractions the libraries built on top of it.
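A toy illustration of that encapsulation: an unsafe operation hidden behind a safe function whose own checks uphold the invariant (the function `first_byte` is hypothetical, mine, not from any library):

```rust
// A safe public API wrapping an unchecked internal operation.
fn first_byte(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        None
    } else {
        // SAFETY: the emptiness check above guarantees index 0 is in bounds.
        Some(unsafe { *v.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(b"abc"), Some(b'a'));
    assert_eq!(first_byte(b""), None);
}
```

Callers never see the unsafe block; they only see a total, safe function, which is exactly how libraries keep the reasoning burden contained.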

Finally, one obviously can use reference counting in C++. You can do essentially anything in C++ (which is part of the problem with it). But in Rust you kind of have to for any moderately complex asynchronous project.

And for some days I hated it. Now I embrace it.

Veteran infrastructure engineer with decades of experience in low-level systems. Previously Linux Kernel and ScyllaDB. Now at Datadog.
