A Critical Look at Rust: Advantages and Trade-offs
※ Disclaimer: This article contains the author’s personal critical analysis of a specific programming language’s design philosophy and its real-world trade-offs. The evaluation of technology can be approached from various perspectives, and this article represents just one of them.
Rust, through its powerful community and marketing, has cultivated a certain perception that can sometimes influence an objective assessment of its pros and cons. This article aims to re-examine Rust’s core features from a critical viewpoint.
Zero-Cost Abstractions: Closer to a Marketing Term?
The C language also efficiently supports ‘Zero-Cost Abstractions’. In fact, few languages provide abstractions as cost-effectively as C. Yet, Rust tends to promote this concept at the forefront of its marketing. While it’s one of the key logics used to explain Rust’s advantages, there is a view that it is a slogan emphasized for marketing purposes.
Rust’s ‘Zero-Cost Abstractions’ refers to the idea of providing high-level constructs without runtime overhead, backed by its ownership system and borrow checker. However, this comes at the cost of the developer having to learn and adhere to the compiler’s strict rules. Where a C developer exercises direct, manual control to maximize efficiency, Rust enforces that discipline at the language level, which presents a steep learning curve for beginners.
In conclusion, Rust’s ‘Zero-Cost Abstractions’ can be seen as a reinterpretation of what C already provides, but in a different way and with more constraints. Emphasizing this as a unique feature of Rust could be perceived as a strategy to reinforce the perception of it being a ‘modern technology’ by giving new meaning to an existing concept. True cost-efficiency comes from a developer’s competence and flexible control, not just from the strict enforcement by a language.
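To pin down what the term refers to in practice, here is a minimal sketch (written for this discussion, not taken from Rust’s documentation): the iterator chain below is an abstraction, but with optimizations enabled it is expected to compile down to roughly the same machine code as the hand-written index loop, so the abstraction itself should add no runtime cost.

// Two ways to sum the squares of the even numbers in a slice.
// In a --release build, the iterator version is expected to optimize
// down to essentially the same code as the explicit loop.
fn sum_even_squares_iter(data: &[u64]) -> u64 {
    data.iter()
        .filter(|&&x| x % 2 == 0) // abstraction: closures + iterator adapters
        .map(|&x| x * x)
        .sum()
}

fn sum_even_squares_loop(data: &[u64]) -> u64 {
    let mut total = 0;
    let mut i = 0;
    while i < data.len() {
        if data[i] % 2 == 0 {
            total += data[i] * data[i];
        }
        i += 1;
    }
    total
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_even_squares_iter(&data), sum_even_squares_loop(&data));
    println!("{}", sum_even_squares_iter(&data));
}

Whether this guarantee is unique to Rust, or simply a restatement of what a good C compiler already does with well-written code, is exactly the question raised above.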
Rust Has No Segmentation Faults?
It is widely known that applications written in Rust do not experience segmentation faults. This is because Rust strongly guarantees memory safety. However, instead of a segmentation fault, the program can terminate by raising a panic.
Of course, compared to the worst-case scenario in C, where a program malfunctions without a segmentation fault and opens the door to severe security vulnerabilities, Rust’s panic can be seen as a clear improvement in safety: a panic leads to a controlled program termination instead of unpredictable memory errors.
However, the slogan ‘Rust has no segmentation faults’ can create the misconception that the program will not terminate due to any error at all. From an end-user’s perspective, whether the program suddenly closes due to a segmentation fault or terminates due to a panic, the result is the same: the application stops. While Rust prevents a specific class of memory error, we must be wary of creating the impression that a program is perfectly ‘crash-proof’.
Rust Panic: A Flaw in Error Handling, or a Philosophy?
I compiled the official example source code.
https://doc.rust-lang.org/book/ch12-01-accepting-command-line-arguments.html
use std::env;

fn main() {
    let args: Vec<String> = env::args().collect();

    let query = &args[1];
    let file_path = &args[2];

    println!("Searching for {query}");
    println!("In file {file_path}");
}
When run without arguments, it terminates like this:
$ ./main
thread 'main' panicked at main.rs:6:22:
index out of bounds: the len is 1 but the index is 1
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
In the code example above, the Rust program terminating with a panic due to insufficient arguments is expected behavior in Rust. The Rust community explains this as a mechanism to halt the program safely, preventing it from sliding into undefined behavior. This aligns with Rust’s core philosophy of preventing the unpredictable memory errors and security vulnerabilities common in C/C++.
However, while Rust catches certain classes of errors at compile time, panics can still occur in a real application’s operating environment due to unexpected input or external environmental changes.
Even in Rust, if a programmer is negligent in handling panics or in properly using the Result type, the application can still terminate abruptly. While Rust’s ‘safety’ clearly shows its strength in the domain of specific memory errors, it does not fully guarantee the overall robustness of the program, that is, the ability to operate stably and recover from errors under any circumstances.
Ultimately, Rust’s panic has the positive aspect of clearly exposing dangerous errors and aiding debugging. However, for the end-user, it delivers the same result as a segmentation fault: an abrupt program termination. While Rust ‘enforces’ safety at the language level, it is crucial to recognize that this does not absolve the programmer of all responsibility or eliminate all types of runtime errors.
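For reference, here is a minimal sketch (not part of the official example) of how the same argument handling can avoid the panic entirely, using Option-returning indexing and an explicit usage message:

use std::env;
use std::process;

fn main() {
    let args: Vec<String> = env::args().collect();

    // Vec::get returns Option<&String> instead of panicking on an out-of-range index.
    let (query, file_path) = match (args.get(1), args.get(2)) {
        (Some(query), Some(file_path)) => (query, file_path),
        _ => {
            eprintln!("usage: <program> <query> <file_path>");
            process::exit(1);
        }
    };

    println!("Searching for {query}");
    println!("In file {file_path}");
}

Whether this counts as better error handling or merely as extra ceremony is, of course, exactly the trade-off discussed in the next section.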
A Critical Evaluation of Rust’s Result: Is Clarity the Same as Inconvenience?
Rust’s Result type is explained in detail in the official documents below.
https://doc.rust-lang.org/book/ch09-02-recoverable-errors-with-result.html
https://doc.rust-lang.org/std/result/
As the official documentation explains, Rust’s Result type is a powerful tool that explicitly enforces error handling. However, this enforcement is a double-edged sword, trading off developer convenience and productivity. While Result attempts to solve the problems of error handling in existing languages, it is not free from the criticism that it has created a new form of ‘cost’: the inconvenience of the development flow.
For example, the try...catch syntax in JavaScript (try...except in Python) has the clear advantage of separating main logic from error-handling logic, improving code readability and helping developers focus on core logic. However, this structure often carries the instability of not knowing where an unchecked exception might erupt at runtime.
The errno global variable approach, common in C, is unparalleled in its conciseness. It allows checking and handling errors with just a few lines of code, without complex error-propagation logic.
const char *err_msg = strerror (errno);
printf ("failed: %s\n", err_msg);
It is precisely at this juncture that Rust’s Result was born. Result is an attempt to solve the fundamental problems of errno and try...catch, namely ‘forgettable errors’ and ‘hidden control flow’, at the language and compiler level. It radically enhances program reliability by stating every error possibility explicitly in the type system and by having the compiler force the developer to handle them.
However, this solution demands a significant price. Forcing the explicit handling of every potential error inevitably leads to more boilerplate code and complex matching patterns. This can act as a factor that significantly hinders development productivity in environments where rapid prototyping or swift implementation of business logic is crucial.
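As a rough illustration of that boilerplate (a sketch written for this article; the file name and function names are hypothetical), the same file read is shown below once with an explicit match and once with the ? operator, which shortens the code but still surfaces the error type in every signature along the propagation path:

use std::fs;
use std::io;

// Explicit handling: every failure path is spelled out with match.
fn read_config_verbose(path: &str) -> String {
    match fs::read_to_string(path) {
        Ok(contents) => contents,
        Err(err) => {
            eprintln!("failed to read {path}: {err}");
            String::new()
        }
    }
}

// Shorter with ?, but the error still appears in the signature,
// and some caller must eventually deal with it.
fn read_config(path: &str) -> Result<String, io::Error> {
    let contents = fs::read_to_string(path)?;
    Ok(contents)
}

fn main() {
    let _ = read_config_verbose("app.toml");
    if let Err(err) = read_config("app.toml") {
        eprintln!("failed: {err}");
    }
}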
From this perspective, Rust’s choice appears to be an extreme trade-off, willingly paying the ‘cost’ of development productivity for the value of ‘safety’. While the explicitness and reliability provided by Result are undeniably strong advantages, they come at the expense of giving up the philosophical benefits of other languages that prioritize conciseness and development speed. This can feel like an over-correction, akin to installing unnecessarily complex safety devices at every turn to prevent every minor risk, ultimately making the act of traveling the road itself a painful experience.
The ‘Bloat’ of Rust Binaries: Cause, Evidence, and the Truth of the Rebuttal
When creating applications with Rust, the question about the size of the generated binary often goes beyond a mere ‘increase’. Especially when compared to a traditional compiled language like C, a Rust binary can leave a shockingly ‘huge’ impression. This issue stems from Rust’s specific design philosophy and distribution model, making it an unavoidable reality.
1. The Root Cause: Unstable ABI and Forced Static Linking
When you build something with Rust, you inevitably use the standard library (libstd). The core of the problem is that this libstd currently offers no ABI (Application Binary Interface) stability guarantee whatsoever.
libstd is distributed with a filename that includes a hash value for version identification, like so:
$ pkg list rust | grep libstd
...
/usr/local/lib/rustlib/x86_64-unknown-freebsd/lib/libstd-441959e578fbfa8d.so
...
Why this is a problem becomes clear through an experiment of dynamically linking a “Hello World!” app.
fn main() {
    println!("Hello World!");
}
If you compile this code with dynamic-linking options and use the strip command to remove unnecessary information, the binary size can be reduced to 5960 bytes.
rustc -C opt-level=s -C prefer-dynamic -C target-feature=-crt-static hello.rs
But checking the dependencies with the ldd command reveals a critical problem.
$ LD_LIBRARY_PATH=/usr/local/lib/rustlib/x86_64-unknown-freebsd/lib ldd ./hello
./hello:
libstd-441959e578fbfa8d.so => /usr/local/lib/rustlib/x86_64-unknown-freebsd/lib/libstd-441959e578fbfa8d.so (0x25f114c46000)
libc.so.7 => /lib/libc.so.7 (0x25f115e0f000)
libthr.so.3 => /lib/libthr.so.3 (0x25f114e6b000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x25f11756d000)
[vdso] (0x25f1130d4000)
The ldd command confirms that libstd-441959e578fbfa8d.so is linked.
As the filename libstd-441959e578fbfa8d.so suggests, the internal structure of libstd can change with every new version of the Rust compiler. This fundamentally blocks backward compatibility, the greatest advantage of traditional dynamic linking. If an app is dynamically linked against libstd, it is very likely to stop working whenever the system’s Rust version is updated. Furthermore, if the rust package is removed from the system, libstd is removed along with it, crippling all related apps.
Due to this fundamental ABI instability in Rust, static linking, which embeds libstd directly into the binary, becomes the only viable and effectively mandatory option for stable operation in a real deployment environment. And this is the single biggest reason why binary sizes become ‘bloated’.
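For comparison, here is a sketch of the default, statically linked build of the same hello.rs (commands only; exact sizes depend on platform and toolchain). Without -C prefer-dynamic, libstd is embedded into the binary, which removes the dependency on the hash-named shared library but inflates the file accordingly; the Cargo-based measurement later in this article puts an optimized, stripped ‘Hello World’ at roughly 277KB.

$ rustc -C opt-level=s hello.rs
$ strip hello
$ ldd ./hello    # libstd no longer appears as a shared-library dependency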
2. Real-World Evidence: Comparing Actual Applications
This ‘bloat’ is not just theoretical; it becomes clear through comparisons of real applications.
Case 1: rg (Rust) vs grep (C)
Let’s compare the binary size of rg (ripgrep), a grep alternative made with Rust, to grep itself.
debian:~$ ls -l /bin/grep /bin/rg
-rwxr-xr-x 1 root root 203072 Nov 10 2020 /bin/grep
-rwxr-xr-x 1 root root 4345184 Jan 19 2021 /bin/rg
rg has a binary size about 21 times larger than grep. Of course, one could argue that rg offers more features, such as parallel processing, and that argument is valid. However, even considering the functional differences, a size difference of over 20 times is not easy to accept. So, what about a case with far fewer features?
Case 2: kime (Rust) vs nimf (C) Input Method Comparison
$ ls -al kime_ubuntu-22.04_v3.1.1_amd64.deb nimf_2023.01.26-bookworm_amd64.deb
-rw-r--r-- 1 user user 3197276 Jun 27 07:38 kime_ubuntu-22.04_v3.1.1_amd64.deb
-rw-r--r-- 1 user user 275728 Jun 27 07:37 nimf_2023.01.26-bookworm_amd64.deb
The Rust-based kime package is about 3.2MB, while the C-based nimf package is only about 0.28MB, a difference of more than 11 times. Functionally, kime focuses on Korean input, whereas nimf is a framework supporting dozens of languages including Korean, Chinese, and Japanese.
The fact that nimf, which is functionally far more extensive than the language-specific kime, has a package size less than one-tenth of kime’s suggests that the ‘bloat’ of Rust binaries is a fundamental problem that cannot be explained by feature additions alone.
3. The Community’s Explanation and Its Other Side: The Inconvenient Truth of min-sized-rust (2025 Edition)
In response to such criticism, the Rust community argues that binary size can be reduced by following documents like min-sized-rust. However, most of the methods suggested in this document are either unrealistic tricks or require giving up Rust’s advantages.
Setting a Baseline: C “Hello World”
First, we need to establish a clear baseline for comparison. Here is a basic C “Hello World” code.
#include <stdio.h>

int main (void)
{
    puts ("hello world");
    return 0;
}
Compiling this with cc -O2 -o hello hello.c and then stripping it with strip hello results in a final binary size of 4,960 bytes (approx. 5KB). This is not a special ‘trick’ but a standard process for optimizing a binary in C. We will evaluate Rust’s results against this 5KB baseline.
The Normative Way and Its Limits
The min-sized-rust guide first suggests the ‘normative way’ of adding a few settings to Cargo.toml.
[profile.release]
strip = true # Automatically strip symbols
opt-level = "z" # Optimize for size over speed
lto = true # Link-Time Optimization
codegen-units = 1 # Maximize optimization opportunities
Even with all these standard optimizations applied, a “Hello World” binary remains at a size of around 277KB. Compared to C’s 5KB, this is still a disappointing result, over 50 times larger.
Behavior-Changing Optimization: panic = "abort"
The guide’s next step is to suggest the panic = "abort" option. This reduces binary size by immediately aborting the program on panic instead of unwinding the stack. However, the guide itself warns that this "will affect your program’s behavior". This is the first step in the trade-off: giving up part of the program’s normal behavior in exchange for a smaller binary.
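Concretely, it is a single extra line in the same release profile (as the guide suggests):

[profile.release]
panic = "abort"   # abort immediately on panic: no stack unwinding, so no cleanup and no catch_unwind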
The Realm of ‘Tricks’: Nightly and Unstable Features
To achieve any significant size reduction, one must ultimately rely on nightly-only features not available in the stable version. The build-std feature, which shows the most dramatic size reduction in the guide, is a prime example. This method requires installing the nightly toolchain and the rust-src component with rustup, and then using a very long and complex command.
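The prerequisite setup alone already amounts to something like the following (standard rustup commands; exact toolchain and component names may vary by platform):

$ rustup toolchain install nightly
$ rustup component add rust-src --toolchain nightly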
# Example for macOS
$ RUSTFLAGS="-Zlocation-detail=none -Zfmt-debug=none" cargo +nightly build \
-Z build-std=std,panic_abort \
-Z build-std-features="optimize_for_size,panic_immediate_abort" \
--target x86_64-apple-darwin --release
The guide claims that after all these steps, the final binary size can be reduced to 30KB. While 30KB is an impressive number, it hides the following facts:
- Unrealistic: It’s impossible on the stable version and requires numerous unstable (-Z) flags and a complex build process. This is not a standard development practice.
- Still Large: Even the 30KB size achieved through such ‘tricks’ is still 6 times larger than C’s 5KB.
Extreme Choices That Forfeit Rust’s Advantages
The guide goes even further, suggesting extreme methods like no_main and no_std.
- no_main: The guide itself warns, "Expect the code to be hacky and unportable."
- no_std: It admits you "will lose access to most of the Rust crates," meaning you have to give up the advantages of the Rust ecosystem.
At this stage, the developer has to abandon almost all the benefits of Rust’s safety and ecosystem, effectively coding in a manner no different from C. This is a contradiction that eliminates the very reason for using Rust to reduce binary size.
The Inconvenient Truth Unchanged in 2025
Through this verification, we can see that the essence of the min-sized-rust guide has not changed even in 2025. The guide shows the ‘possibility’ of making Rust binaries small, but its methods are mostly impractical, unstable, or require abandoning Rust’s core values.
Therefore, before claiming that ‘Rust binaries can be small too’ based on the min-sized-rust guide, it is necessary to clearly recognize the realistic constraints and trade-offs of those methods. The reality is that binary size remains large under standard development practices, and a more balanced discussion on this point is needed.
Does ‘Memory Safety’ Equal ‘Application Stability’?: A Case Study of librsvg
The expectation that using Rust, which guarantees memory safety, will make applications safer overall is a reasonable one. However, let’s critically examine whether ‘memory safety’ is truly synonymous with ‘overall application stability’ through librsvg, a representative case of a successful port from C to Rust.
librsvg is a library for rendering SVG (Scalable Vector Graphics) files, and its transition from C to Rust appears to have started around October 2016.
- Estimated Porting Start Commit: https://gitlab.gnome.org/GNOME/librsvg/-/commit/f27a8c908ac23f5b7560d2279acec44e41b91a25
As of June 2025, it can be confirmed that the librsvg codebase is mostly composed of Rust:
Rust 87.0%
C 7.7%
Python 2.1%
Meson 1.8%
Shell 1.1%
Batchfile 0.3%
So, has librsvg, now largely rewritten in Rust, become perfectly ‘safe’? A look at the project’s issue tracker reveals the reality.
- Issue search for ‘panic’: https://gitlab.gnome.org/GNOME/librsvg/-/issues/?search=panic&sort=created_date&state=opened&first_page_size=20
- Issue search for ‘faults’: https://gitlab.gnome.org/GNOME/librsvg/-/issues/?search=faults&sort=created_date&state=opened&first_page_size=20
The reality of the issue tracker shows that despite achieving ‘memory safety’, abnormal terminations due to panics or faults are still occurring. This means that while Rust may prevent a specific type of memory error at its source, it does not prevent all potential application crashes. ‘Memory safety’ is not a panacea that guarantees ‘flawless’ or ‘uninterrupted’ operation.
Furthermore, this language transition has caused another practical problem: an explosion in binary size. Let’s compare the package sizes of the past C-based version and the current Rust-rewritten version.
$ ls -l librsvg*~*
-rw-r--r-- 1 root wheel 201832 Jun 25 05:13 librsvg2-2.40.21_4~f605eba4b2.pkg
-rw-r--r-- 1 root wheel 3202515 Jun 24 10:28 librsvg2-rust-2.60.0~c96f7b1992.pkg
The C-based librsvg 2.40 package was about 0.2MB, but the Rust-ported 2.60 package is about 3.2MB, an increase of nearly 16 times. Of course, there is a significant version gap between the two packages, and it cannot be denied that feature additions contributed to the size increase. However, considering the earlier kime case, where the package size differed by more than 10 times despite fewer features, it is difficult to explain the entire increase by feature improvements alone.
Thus, the librsvg case clearly shows that the transition to Rust, in exchange for gaining ‘memory safety’, has created new trade-offs in terms of ‘application stability’ and ‘deployment efficiency’. When adopting a technology in a practical environment, it is imperative to have a critical attitude, directly checking the issue trackers of various projects and comprehensively examining potential problems, rather than relying on internet reviews or marketing slogans.
Is Rust’s Competitor C++?
Rust promotes ‘memory safety’ at the forefront, but in reality, many mainstream languages within the TIOBE index top 20, such as Java, C#, Python, JavaScript, Go, Swift, Ada, Kotlin, Ruby, and PHP, also support memory safety by default. The fact that Rust uniquely highlights this as a special advantage can be seen as a tendency to overemphasize the value of ‘safety’ over the complex trade-offs of the technology.
Considering all these points, it is difficult for Rust to replace C/C++ in all domains. There are realistic trade-offs such as the aforementioned unstable ABI, relatively large binary sizes, the possibility of program termination due to panic, complex C ABI interoperability, a steep learning curve, and coding productivity.
Of course, suitable use cases for Rust certainly exist, namely the system programming areas that demand extreme performance and security at the same time. Consider, for example, servers handling customer information or cloud infrastructure, where information leakage caused by an incorrect memory reference in C/C++ could lead to severe losses. In such environments, Rust, which can be expected to be faster than Java while providing Java-level stability, is a worthy alternative to consider, even at the cost of higher development expenses.
However, generalizing this to claim that “C/C++ is unsafe, so everything should be replaced with Rust” could be a simplistic logic that ignores reality. It is like arguing to replace all cars on the road with top-safety-rated armored vehicles. Every tool has its own trade-offs and appropriate use cases.
When memory leaks and abrupt program terminations (panics) still remain the programmer’s responsibility, it is hard to say that ‘memory safety’ alone solves everything.
Rust’s Use Case: Don’t Mistake the Growth of a ‘Niche Market’ for a ‘Mainstream Trend’
It is true that Rust has shown remarkable growth in recent years, consistently being selected as the ‘most loved language’ in developer surveys and being adopted by major big tech companies. However, it is necessary to take a sober look at the other side of this growth.
Rust’s growth is mostly concentrated in the niche market where it shines the brightest: system programming, including operating systems, cloud infrastructure, and blockchain. In the vast field of business application development, which accounts for the majority of jobs in the software industry, Rust’s adoption rate does not live up to its reputation.
Therefore, one must face the reality that the title ‘most loved language’ is not synonymous with ‘the language with the most jobs’. Before getting swept up in the enthusiastic atmosphere around Rust and expecting a ‘rosy future’, one must distinguish whether this is the growth of a niche market for a few experts or a change in the mainstream market that can provide practical career opportunities.
For the majority of aspiring developers, unless they are targeting a specific use case for Rust, building a foundation in computer science fundamentals such as C/C++, data structures, and networking still provides much more stable and broader opportunities. It is a wiser long-term investment to accumulate universal and important knowledge applicable to any language, rather than jumping on the bandwagon of a particular language’s trend.
Conclusion: The Clear Value Rust Holds Nevertheless
Despite all this criticism, we must also clearly acknowledge the reasons why Rust receives such strong support in certain areas.
In C/C++, an incorrect memory reference can sometimes go undetected, causing program malfunctions or, in the worst case, leading to severe security vulnerabilities that expose sensitive information.
It is precisely at this point that Rust provides uncompromising value. The ownership system and the borrow checker at compile time prevent numerous potential memory errors at their source. Even if an unexpected memory access error occurs at runtime, it results in an immediate panic instead of Undefined Behavior, stopping the program to prevent a greater disaster.
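As a minimal illustration (a sketch written for this article, not taken from any of the sources above), the kind of dangling reference that a C compiler would accept without complaint is rejected by the borrow checker before the program ever runs:

fn main() {
    let reference;
    {
        let data = String::from("sensitive data");
        reference = &data; // error[E0597]: `data` does not live long enough
    } // `data` is dropped here; `reference` would dangle, so this program is refused at compile time
    println!("{reference}");
}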
Ultimately, these two powerful mechanisms combine to fulfill Rust’s core value: ‘enhanced reliability and security’. The numerous trade-offs pointed out in this article are the price paid for this core value.