
What C++ can’t borrow from Rust

I enjoy learning new things, such as taking drum lessons and attempting to learn the ukulele. I’ve been improving my skills in games like Dark Souls and Elden Ring. Additionally, I manage a custom network and a home server running on Arch Linux. Recently, I created presentation slides for the Game Industry Conference using Markdown in Obsidian and exported them to HTML. I’m always open to trying new and exciting things.

I also love programming. I started with functional programming in F#, but later moved to game development at PixelAnt Games. While working with various engines and frameworks is satisfying and can be a lot of fun, I realized I kept looking back at the clean, pure functional style of programming—and I think I've found it again…

Have you ever tried writing Rust? I'll be comparing it mostly to C++, but these points should apply to many other languages too. Even if you aren't a game developer, Rust is a good language to pick up. I code in C++ every day, yet even with all the features that C++ offers, I believe there are certain things C++ can't take from Rust.

Let's begin with what Rust and C++ have in common. Unlike Java or C#, Rust compiles to native code, meaning it doesn't run in a VM and doesn't have a garbage collector. Memory management is deterministic and fully controlled by the programmer. However, Rust feels more like a high-level language: functions are the main building blocks of your program, and the type system is rich and well-designed. Rust claims to be fully type-safe and memory-safe, meaning all references must always be valid—no null pointers or dangling references.

I decided to test Rust's claims with a game-development-oriented, real-life example: a game physics engine. How hard could that be? By coding two of them—one in Rust and one in C++—I got a one-to-one comparison between the two languages.

Throughout this article, the physics engine will serve more as an example. I don’t expect you to have any prior knowledge of physics, nor should you expect to learn exactly how to write a physics engine. However, I believe a basic overview will be helpful.


When you think about a physics engine, you probably imagine the most obvious aspects, like simulating movement. The engine is responsible for gathering all the forces, calculating velocities, and estimating the positions of bodies. This process is called position integration.

Additionally, there’s collision detection, which is usually split into two parts: the broad phase and the narrow phase. The broad phase is fast but coarse. Its main job is to quickly discard objects that definitely don’t collide. Then there’s the narrow phase, which precisely calculates exact collisions but can be much slower.

Finally, there’s constraint resolution. This specifies how bodies move. Collisions are constraints, as are ropes or joints between bones. They all limit and dictate how a body moves.

For my example, I chose a relatively simple yet fitting solution for games called position-based dynamics. The algorithm fixes broken constraints one by one in a loop, eventually converging on a correct solution.
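The idea can be sketched in miniature. The following is a hypothetical 1D sketch of mine (all names are illustrative, not taken from the engine): each pass visits every distance constraint and nudges the two offending positions toward a valid configuration, so repeated passes converge.

```rust
// Hypothetical 1D sketch of position-based dynamics: repeatedly visit each
// constraint and move both bodies halfway toward satisfying it.
struct DistanceConstraint {
    a: usize,  // index of the first body
    b: usize,  // index of the second body
    rest: f32, // distance the constraint wants to maintain
}

fn solve(positions: &mut [f32], constraints: &[DistanceConstraint], iterations: usize) {
    for _ in 0..iterations {
        for c in constraints {
            let delta = positions[c.b] - positions[c.a];
            if delta == 0.0 {
                continue; // coincident bodies: no defined push direction
            }
            let error = delta.abs() - c.rest; // positive when too far apart
            let correction = 0.5 * error * delta.signum();
            positions[c.a] += correction; // split the fix between both bodies
            positions[c.b] -= correction;
        }
    }
}
```

With positions `[0.0, 3.0]` and a rest distance of `1.0`, a few iterations settle the pair at exactly that distance.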

Let’s look at a simple C++ implementation of this loop. The code should be self-explanatory; we integrate the positions of bodies, detect collisions between them, and resolve constraints. I’m skipping rendering steps or player input, focusing solely on the physics side.

while (running) {
    integratePositions(bodies);
    detectCollisions(bodies);
    resolveConstraints(bodies);
}

In Rust, the code doesn’t change much; the syntax is basically the same.

while running {
    integrate_positions(bodies);
    detect_collisions(bodies);
    resolve_constraints(bodies);
}

But here's where we make our first decision: how to pass arguments into functions. In C++, the simplest way is by value, but this introduces an implicit copy. The next call won't see the updated positions, which isn't what we want.

void integratePositions(Bodies bodies);
Bodies bodies;
integratePositions(bodies); // implicit copy
detectCollisions(bodies); // positions NOT integrated!

We could pass by pointer, allowing us to update bodies in place, but then we risk the pointer being null, and we don’t want to clutter the code with unnecessary checks.

void integratePositions(Bodies* bodies);
Bodies bodies;
integratePositions(nullptr); // What if?

So, obviously, a reference is the simplest and safest solution in C++, as references cannot be null; the compiler will check for it.

void integratePositions(Bodies& bodies);
Bodies bodies;
integratePositions(nullptr); // ERROR: Good!

In Rust, parameters are consumed by the function by default. The function is free to do anything with the consumed parameter, but its value cannot be used after the call; it’s gone.

fn integrate_positions(bodies: Bodies) { /*...*/ }

We want to somehow share the collection of bodies between functions, so we have to borrow them. Borrowing in Rust is very similar to a reference in C++; they even share the same syntax. However, Rust makes it clear on the caller side as well that the borrow occurred.

fn integrate_positions(bodies: &Bodies) { /*...*/ }
integrate_positions(&bodies); // borrow

There’s one more thing: in Rust, everything is constant by default, so we have to explicitly state that we want to mutate the bodies.

fn integrate_positions(bodies: &mut Bodies) { /*...*/ }
integrate_positions(&bodies); // ERROR

The `mut` specifier has to be visible on the caller side too. Just by glancing at the code, we can already tell that `integrate_positions` will mutate the bodies, but `detect_collisions` will not.

fn integrate_positions(bodies: &mut Bodies) { /*...*/ }
integrate_positions(&mut bodies);

As you've hopefully seen, Rust is explicit where it matters, but not verbose. Everything it deems unsafe has to be visible, including borrows and mutations. Clones are also explicit, as they may involve expensive copying. You just saw that parameters are consumed by the function by default: in Rust, moves are the default, and there are no implicit copies or C++-style move constructors. Error handling is also performed explicitly through the return value of a function. Rust typically treats exceptions as bugs, which aligns well with game development, where exceptions are rarely used.
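A tiny sketch of that explicitness (the function name is just for illustration): assigning a `String` moves it, and a copy only happens through a visible `.clone()` call.

```rust
// Moves are the default; copies of non-trivial types must be spelled out.
fn explicit_clone_demo() -> (String, String) {
    let a = String::from("bodies");
    let b = a;         // `a` is moved; using `a` after this line won't compile
    let c = b.clone(); // an explicit clone; the potential cost is visible here
    (b, c)
}
```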


For the simplest collision detection, we can brute force it. First, we get all pairs of bodies. If there are multiple collisions at once, we split them and deal with pairs. We find those pairs that collide, and if they do, we resolve each collision.


Starting with simple C++ code, we use a loop over indices into the bodies collection. The nested loop starts from the next index to avoid duplicate pairs and objects colliding with themselves. Inside, we check for a collision between the two bodies, and if there is one, we run the resolution code.

for (int i = 0; i < bodies.size(); ++i) {
    for (int j = i + 1; j < bodies.size(); ++j) {
        if (bodies[i].collides(bodies[j])) {
            resolve(bodies[i], bodies[j]);
        }
    }
}

However, this code feels a bit like the raw-pointer example: we can make mistakes with the index arithmetic. I think we can do better.


C++ has iterators, which nicely abstract away the raw index and are usually the preferred option. However, you’ll notice that the structure of the code didn’t change much. That’s because C++ reflects almost one-to-one how the CPU runs instructions. You have to think like a machine when writing your code.

for (auto a = bodies.begin(); a != bodies.end(); ++a) {
    for (auto b = std::next(a); b != bodies.end(); ++b) {
        if (a->collides(*b)) {
            resolve(*a, *b);
        }
    }
}

In Rust, the situation is a bit different. `bodies` is the container we want to iterate over, so first we create an iterator. Notice that we don't have to deal with a beginning or an end; the iterator itself holds all the required information. We then turn it into an iterator over all pairs—this is called an iterator adaptor: it takes one iterator and transforms it into another. Then we use another adaptor, a filter, which takes a function as a parameter; `Body::collides` checks for a collision between two bodies. The result is yet another iterator that visits only the colliding pairs. Finally, for each collision, we run some resolution code.

Notice that the iterators we've used are immutable, so we have to borrow the bodies again, mutably, to actually update the collection. Let's read the code again: `bodies.iter().into_pairs().filter(body_collides).for_each(resolve_collision)`, with a mutable borrow of the bodies. Rust code reads almost like English. It's just much more natural to think in a general, abstract way: we've built layer upon layer of computation in the direction of the data flow.

bodies.iter()
    .into_pairs()
    .filter(|(a, b)| a.collides(b))
    .for_each(|collision| resolve(collision, &mut bodies));

error[E0500]: closure requires unique access to `bodies`
              but it is already borrowed

However, the code doesn't compile. Let's read the error: "closure requires unique access to bodies but it is already borrowed." The borrow checker is enforcing its rules: each value can be borrowed immutably any number of times, or mutably exactly once, but never both at the same time. And this is exactly what we are doing wrong.

bodies.iter()                       // immutable borrow of `bodies`…
    .into_pairs()
    .filter(|(a, b)| a.collides(b))
    .for_each(|collision| resolve(collision, &mut bodies)); // …and a mutable one

The iterators are borrowing the bodies collection immutably, so it’s fine. But then we want to borrow again mutably. The compiler won’t compile this code.


We could introduce an intermediate value. Let’s get all of the collisions and resolve them at once.

let collisions = bodies.iter()
    .into_pairs()
    .filter(|(a, b)| a.collides(b));
collisions.for_each(|collision| resolve(collision, &mut bodies));

This still doesn’t work because iterators in Rust are lazy. They iterate only on as many items as needed, and they will do nothing unless requested. So, we just declared the intent of iteration but did not perform any action.
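Laziness is easy to observe directly. In this small sketch of mine (standard library only, names hypothetical), a counter inside `map` shows that nothing runs until the iterator is consumed by `collect`:

```rust
use std::cell::Cell;

// Returns how many times the `map` closure ran before and after consumption.
fn lazy_demo() -> (usize, usize) {
    let calls = Cell::new(0usize);
    let iter = (1..=3).map(|x| {
        calls.set(calls.get() + 1); // side effect, just to observe the laziness
        x * 2
    });
    let before = calls.get();                // still 0: only intent was declared
    let _doubled: Vec<i32> = iter.collect(); // consumption drives the work
    (before, calls.get())
}
```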


The iterator will do nothing and just hold the borrow, so we have to consume the iterator and collect the results into a vector. This frees the original borrow, so it is safe to borrow again mutably.

let collisions: Vec<_> = bodies.iter()
    .into_pairs()
    .filter(|(a, b)| a.collides(b))
    .collect(); // the iterator is consumed here; the borrow ends
collisions.into_iter()
    .for_each(|collision| resolve(collision, &mut bodies));

The borrow checker operates on lifetimes. The compiler makes sure that borrows are always valid, so it extends a borrow's lifetime until its very last use. You also saw that a borrow is preserved through multiple iterator adaptors, so lifetimes can span multiple function calls.

If we didn't collect the results, the iterators would be needed until the very last collision was resolved, so the lifetimes would overlap—and that is exactly what the borrow checker will not allow.


The engine would be especially boring if it only dealt with a single body type. In games, we have a wide variety of objects: characters running, cars driving, buildings collapsing, and they all have to interact with each other in an engine.


To model this in C++, we could use inheritance – nice object-oriented design. First, we declare an interface of “Body” that provides some collision detection. Then, we create classes like “Sphere”, “Cube”, and “Car”, all inheriting from the common base class. With this approach, we can even store all of our bodies in a single container.

struct Body {
    virtual bool collides(Body* other) = 0;
};
struct Sphere : Body { /*...*/ };
struct Cube : Body { /*...*/ };
struct Car : Body { /*...*/ };

std::vector<Body*> bodies;
bodies.push_back(new Sphere());

However, inheritance in C++ introduces virtual call tables and forces objects onto the heap, which can cause memory fragmentation—neither of which is particularly performance-friendly.


So, we could use a union. Now, our body has a “kind” and storage for all of its variants.

struct Body {
    BodyKind kind;
    union {
        Sphere sphere;
        Cube cube;
        Car car;
    };
};

However, even though it doesn’t suffer from performance issues, it’s much more error-prone.

Body body;
body.kind = BodyKind::Cube;
body.sphere.radius = 5.0f; // Ooops!

Here, I specified that the body kind is a cube and then straight away set its radius as if it were a sphere. Oops!

The key takeaway from that is that the C++ compiler trusts you. You have to know what you’re doing because the compiler won’t get in the way, but it won’t offer much help either if you don’t know what you’re doing or just forgot how to. The compiler often won’t stop you from making mistakes.


Rust deals with this problem with its algebraic type system. In Rust, enums are supercharged: they represent the logical "or" of whole types. So a body can either be a sphere, or a cube, or a car. Notice that this information is not stored in the value but in the type, so the compiler will check it.

struct SphereBody { /*...*/ }
struct CubeBody { /*...*/ }
struct CarBody { /*...*/ }

enum Body {
    Sphere(SphereBody),
    Cube(CubeBody),
    Car(CarBody),
}

To deal with enums in Rust, we use a match statement, which is very similar to the C++ switch statement, but again, it’s supercharged. It allows us to deconstruct the inner structure of the enum.

Here, we're accessing the radius of the sphere:

match body {
    Sphere(sphere) => sphere.radius,
    /* other variants */
}

but if we try to access the radius of a cube, we get an error.

match body {
    Sphere(sphere) => sphere.radius,
    Cube(cube) => cube.radius, // ERROR: cube has no radius
}

It also checks for missing cases; here, we forgot to check for a car.

match body {
    Sphere(sphere) => sphere.radius,
    Cube(cube) => cube.size,
} // ERROR: missing Car

What’s more, it even allows matching against much more complex patterns. For example, in the narrow phase collision detection, we can match against every single possible pair of bodies and run specific code for them. So, for example, a sphere and a cube colliding, or a cube and a car colliding, and the compiler will run all of its checks.

match (a, b) {
    (Sphere(sphere), Cube(cube)) | (Cube(cube), Sphere(sphere))
        => sphere_cube(sphere, cube),
    (Cube(cube), Car(car)) | (Car(car), Cube(cube))
        => cube_car(cube, car),
    /* ... */
}

Speaking of collisions, defining one is a surprisingly hard problem. We could try to calculate the exact volume of overlap between the two bodies, but that can get really complex really quickly, and it’s not strictly needed.

A simpler solution is to compute just the deepest penetration. In physics engines, we usually split that into a collision normal vector and a depth. The way to find out whether there is a collision is to actually compute it—it's the last step of collision detection. So if the depth is negative, we know there wasn't a collision in the first place.


In C++, the code looks something like this:

auto collision = a.collides(b);
if (collision.depth > 0.0) {
    Vector d = collision.normal * collision.depth;
    a.position += d / 2.0;
    b.position -= d / 2.0;
}

We calculate the collision between `a` and `b`, check the depth, and if it's positive, we compute the displacement vector and move the bodies in opposite directions. This way, we know that after running this code, the bodies will no longer collide. However, this code is a bit problematic: what if we don't check the depth? Is the normal even meaningful if there isn't a collision?

In Rust, we don’t have that problem. The normal and depth exist only in the inner structure of one of the enum variants. If there isn’t a collision, we get back a completely different type. There is no room for a mistake.

match a.collides(&b) {
    Some((normal, depth)) => {
        let d = normal * depth;
        a.position += d / 2.0;
        b.position -= d / 2.0;
    }
    None => (),
}

This pattern of encoding state in types is super common in Rust. You just saw it used for functions that may or may not return a value—they return an Option that either has some value or none.

It is also the backbone of error handling: a function that can fail either returns a success with a value or reports an error.
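A minimal sketch of that pattern (the error type and function here are hypothetical): the signature alone forces the caller to handle failure.

```rust
#[derive(Debug, PartialEq)]
enum PhysicsError {
    EmptyScene, // hypothetical failure case
}

// A fallible function returns Result: Ok with a value, or Err with a report.
fn average_mass(masses: &[f32]) -> Result<f32, PhysicsError> {
    if masses.is_empty() {
        return Err(PhysicsError::EmptyScene);
    }
    Ok(masses.iter().sum::<f32>() / masses.len() as f32)
}
```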

The takeaway is that you can trust the Rust compiler much more. You can be sure that borrows are always valid and mutations are always exclusive. The compiler promises to optimize iterators away, so they are zero-cost abstractions. The types are statically checked, obviously, but if you encode your state in types, the compiler will check that for you too. Rust does a lot of the boring bookkeeping for you, so you can rest assured that your code is correct.


I’ve prepared a small demo using Bevy, which is a Rust-based engine. In the demo, the player can control a red sphere and create other objects. The sphere can move and interact with the world, allowing the player to move objects around and be moved by them as well.

You might notice in the top corner that there’s an algorithm being used. Right now, it’s running the Brute Force approach that we coded for, but I can switch it to a hash grid. A hash grid is similar to a hash map but in three dimensions. It runs much faster and allows for handling a lot more objects at once. Finally, there’s the sweep and prune algorithm, which is even faster.
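As a rough illustration of the idea (a sketch of mine, not the demo's actual code): bodies are bucketed by integer cell coordinates, so collision candidates come only from the same cell and, in a full implementation, its neighbors.

```rust
use std::collections::HashMap;

// Map a position to its integer grid cell.
fn cell(pos: (f32, f32, f32), cell_size: f32) -> (i32, i32, i32) {
    ((pos.0 / cell_size).floor() as i32,
     (pos.1 / cell_size).floor() as i32,
     (pos.2 / cell_size).floor() as i32)
}

// Bucket body indices by the cell their position falls into.
fn bucket(positions: &[(f32, f32, f32)], cell_size: f32)
    -> HashMap<(i32, i32, i32), Vec<usize>>
{
    let mut grid: HashMap<_, Vec<usize>> = HashMap::new();
    for (i, &p) in positions.iter().enumerate() {
        grid.entry(cell(p, cell_size)).or_default().push(i);
    }
    grid
}
```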

The whole engine was compiled to a static library, and the best part is that it provides the same interface as a C++ implementation of the same algorithms, so I can switch between them at runtime. Right now it's running the C++ code—again, the brute-force algorithm—then the hash grid, and next the sweep and prune. You might notice that at some point I ran out of time, and the C++ implementation is a little buggy: it doesn't actually remove objects.


Let’s dive a bit deeper into some of the most interesting issues. I won’t explain everything in full detail, but I want you to get a taste of what Rust brings to the table. Let’s start with a bit of theory – the separating axis theorem states that if you can stick a piece of paper between two objects, they don’t collide.

How a computer actually computes that is by casting a shadow. If the first shadow ends before the other one begins, the objects don’t collide.
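In one dimension, that check is a two-line function (a sketch of mine): each shadow is a `(min, max)` interval, and two intervals overlap unless one ends before the other begins.

```rust
// Interval overlap test behind the separating axis theorem.
fn shadows_overlap(a: (f32, f32), b: (f32, f32)) -> bool {
    a.0 <= b.1 && b.0 <= a.1
}
```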


What we can do now is pick a couple of axes, say the primary axes, cast a shadow of every object in our scene onto them, and sort the results. Now we have a nice broad-phase algorithm called sweep and prune: if the shadows don't overlap, the objects definitely don't collide.

Here’s a simple implementation in Rust: for each axis, we cast a shadow and get the minimum and maximum value, the starting and stopping points of the shadow. We can store them in an array and then sort the array. Pretty simple stuff.

for axis in axes {
    let (min, max) = cast_shadow(body, axis);
    shadows.push((min, max));
}
shadows.sort_by(|a, b| a.0.partial_cmp(&b.0).unwrap());

Casting a shadow of a sphere is a rather trivial task. A bit more involved example is casting a shadow of a cube.

pub fn cast_shadow(cube: &Cube, axis: Vector) -> (f32, f32) {
    cube.vertices()
        .map(|v| v.project(axis))
        .min_max()
        .unwrap()
}

For each vertex of the cube, we project it onto the axis and then take the minimum and maximum of all the projections. This gives us the starting and stopping points of the shadow. But a combined min-max function is not defined in the standard library, and it would be really convenient to have one, as it is used in multiple places across multiple algorithms.


Fortunately, in Rust, we can extend existing types using extension traits.

use std::cmp::{min, max};

pub trait MinMaxExt: Iterator {
    fn min_max(mut self) -> Option<(Self::Item, Self::Item)>
    where
        Self: Sized,
        Self::Item: Ord + Copy,
    {
        self.next().map(|first| {
            self.fold((first, first), |(prev_min, prev_max), v| {
                (min(prev_min, v), max(prev_max, v))
            })
        })
    }
}

impl<I: Iterator> MinMaxExt for I {}

It's an interface defined for any iterator whose items can be ordered and copied. It provides a method that consumes the original iterator and outputs two items. The result is wrapped in an Option because the iterator might be empty—we can't know that in advance—and in that case we output None. Inside, we use the standard min and max functions to track the smallest and largest values.

Here is the amazing part, here’s where the magic happens. We tell the compiler to implement our trait for all iterators, no matter where they come from. They can be our own iterators, third-party iterators, or iterators from the standard library. Even better, one library can provide a trait, the other can provide the type, and they can all work seamlessly together without even knowing of each other’s existence. There’s no need for upstream changes or for ugly hooks.


Now, another optimization that we cover is based on the following observation: between frames, the objects usually don’t move by a lot. So, it would be really wasteful to recalculate everything from scratch. What we can do instead is to store the sorted array of shadows and update it whenever it’s needed. This provides a significant speedup, but we may run into a big problem.

The borrow checker ensures that access to bodies is exclusive. Right now, it’s split evenly into position integration, collision detection, and constraint resolution, and it’s spread across multiple frames.

What we want, however, is to store some results for a long time. Clearly, we can’t use a borrow for this; the lifetime will overlap, and the borrow checker will stop us from doing that.


So, we have to introduce a proxy system. When we create a new body, what we get back is a proxy. It’s like an index or a key to the value stored.

let proxy = bodies.insert(body);
// ...
let mut body = proxy.upgrade_mut(&mut bodies)?;
body.position += displacement;

The proxy doesn’t hold the borrow, so it is safe to store the proxy for a long time. You may think that this proxy system works against the borrow checker. In my opinion, however, it works with it because to actually access the stored value, we have to borrow the entire collection again. What we are essentially doing is limiting how long we have to borrow it for.


To implement this proxy system, I borrowed a nice idea, an arena, from Catherine West, who gave a great talk about ECS in Rust.

// Credit: Catherine West
pub struct Arena<T> {
    entries: Vec<Entry<T>>,
    first_free: Option<usize>,
}

pub enum Entry<T> {
    Taken { generation: usize, value: T },
    Free { generation: usize, next_free: Option<usize> },
}

pub struct Proxy {
    generation: usize,
    index: usize,
}

The arena is a vector of entries. Each entry is either taken, containing a value, or free, forming a linked list of slots to be reused. The proxy also holds a generation counter: each time an entry is reused, we increment the entry's counter, so if the counters don't match, we have detected a stale proxy—its original value was removed and replaced with a new one.
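The stale-proxy check itself boils down to a one-line comparison. This simplified sketch (names hypothetical, value storage stripped out) shows just the generation test:

```rust
struct Slot { generation: usize }                  // one arena entry, value elided
struct SlotProxy { index: usize, generation: usize }

// A proxy is stale if its slot has been reused since the proxy was created.
fn is_stale(proxy: &SlotProxy, slots: &[Slot]) -> bool {
    slots[proxy.index].generation != proxy.generation
}
```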


In my opinion, it's much more convenient in Rust to compose complex algorithms from smaller pieces. In C++, this is much harder: you either pay the cost of runtime polymorphism, or you deal with templates, which at a bigger scale are just hell to work with.

You just saw the `min_max` function defined for all iterators, and the arena is completely generic. They don't even know about each other, yet they come together to form a nice algorithm.


You may be wondering: what's the price of all of this? What do all the checks and guarantees the compiler gives us cost? I needed to test that, so I created a bunch of synthetic benchmarks. They run the Rust and C++ versions of each broad-phase algorithm, and I also validated that they produce the same results.

And here is the result: at least for brute force, Rust is a bit faster than C++. It turns out the Rust compiler was able to vectorize the operation, even though it was written with iterators and no raw loops.

The results don't look as good for the hash grid: there, C++ wins. However, Rust is only slightly slower—not by a whole order of magnitude, as you might expect from a high-level language.

The sweep and prune results are just fascinating to me: Rust and C++ are two different languages, yet they exhibit exactly the same behavior—the same exponential growth as the input size increases—though C++ was still a little faster.


Now, let’s wrap up with a summary. I intentionally left out some details about Rust, such as multi-threading. The borrow checker essentially guarantees exclusive access to values, similar to a mutex. This means you get multi-threading capabilities built-in for free in Rust. It could be an interesting topic for another post.

Rust comes with a comprehensive ecosystem through its cargo toolchain. It includes dependency management, in-code documentation, unit testing, and built-in formatting and linting, all readily available from the start of your project.
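For example, documentation and tests live right next to the code (`clamp01` here is just a hypothetical example function): `cargo doc` renders the doc comment, and `cargo test` discovers and runs the test module, with no extra setup.

```rust
/// Clamps a value to the [0, 1] range.
/// This doc comment is rendered by `cargo doc`.
pub fn clamp01(x: f32) -> f32 {
    x.max(0.0).min(1.0)
}

#[cfg(test)]
mod tests {
    use super::clamp01;

    #[test] // picked up automatically by `cargo test`
    fn clamps_both_ends() {
        assert_eq!(clamp01(2.0), 1.0);
        assert_eq!(clamp01(-1.0), 0.0);
    }
}
```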

Rust is rapidly gaining popularity. Firefox is one of the earliest and most widely used projects utilizing Rust, and Rust is the only language besides C and assembly to be accepted into the Linux kernel. Rust is also expanding into cloud computing, though still to a small extent. Furthermore, JetBrains recently introduced a Rust-exclusive IDE called RustRover.

However, according to statistics tracked by arewegameyet.rs, the situation is peculiar: there are over 40 Rust-based game engines, but only 12 games have been released on such engines. In my view, this represents an untapped niche that could be explored. If creating a AAA game isn’t feasible, integrating Rust with C or using Rust bindings with existing engines like Godot could be a viable option. Alternatively, Rust can be utilized for developing smaller helper tools, which are frequently seen in everyday command-line use.

Exploring Rust in the context of a game jam can also lead to positive results, as demonstrated by my personal experience at PixelAnt Games. I won an internal game jam using Rust, with the program not crashing throughout the entire development process.

Rapid integration and prototyping are simplified when dealing with Rust’s strict compiler, which helps eliminate runtime errors. While many ideas presented here are not revolutionary and can be implemented in other languages, what sets Rust apart, particularly from C++, is its compiler, which actively encourages and enforces good coding practices.


Mateusz Resterny

Software Engineer

Cheerful and eager computer fan who loves diving into all sorts of activities. While programming professionally takes up a big chunk of his time, he also enjoys working on his own projects and learning about different things in his free time. He's particularly into the Linux system and experimenting with simulations, whether it's about physics, intelligence, or biology.

