Let’s talk about something I find truly fascinating about Rust: its ability to do work before your program even runs. This isn’t about clever tricks; it’s a core part of the language designed to make your final program faster, safer, and often easier to use. When the compiler does the heavy lifting, your application starts up quicker and has fewer runtime surprises.
I want to walk through several ways Rust lets you move computations from runtime to compile time. Think of it as preparing everything in the kitchen before the guests arrive, so the dinner service goes smoothly.
The most straightforward tool is the const fn, or constant function. You mark a function with const, and it becomes something the compiler can execute while it’s building your program. The result is baked directly into the binary. I use this for things that are known upfront and never change, like mathematical constants, lookup tables, or configuration that’s fixed for a release.
const fn bytes_to_kib(bytes: usize) -> usize {
    bytes / 1024
}

const FILE_SIZE_LIMIT: usize = bytes_to_kib(10_485_760); // 10 MiB in KiB

fn main() {
    // This value was computed when I compiled the program.
    println!("The limit is {} KiB.", FILE_SIZE_LIMIT);
}
The beauty here is the guarantee. FILE_SIZE_LIMIT isn’t just a variable; it’s a constant. It lives in the program’s read-only data, and its calculation cost was paid once, by me, on my machine, not by every user every time they run the app.
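And because it is a true constant, it can be used anywhere the compiler demands a compile-time value. A minimal sketch building on the example above:

// Constants are legal in positions that require compile-time values,
// such as array lengths; this is a 10240-byte static buffer.
static SCRATCH: [u8; FILE_SIZE_LIMIT] = [0; FILE_SIZE_LIMIT];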
Sometimes, you need more than simple calculations. This is where constant evaluation with traits and impl blocks comes in. You can perform logic on types themselves during compilation. A common use is ensuring data structures have certain properties, like being a power-of-two size, which is crucial for some low-level memory operations.
// A trait with a constant associated value.
trait CheckSize {
    const IS_VALID: bool;
}

// Implement it for specific types, defining the constant value.
impl CheckSize for [u8; 256] {
    const IS_VALID: bool = true; // 256 is a power of two.
}

impl CheckSize for [u8; 300] {
    const IS_VALID: bool = false; // 300 is not.
}

// (A bound like `T: CheckSize<IS_VALID = true>` would need the unstable
// `associated_const_equality` feature, so we enforce the constant with
// a const assertion instead.)
struct Buffer<T: CheckSize> {
    data: T,
}

impl<T: CheckSize> Buffer<T> {
    // Evaluated at compile time for each concrete T the program uses.
    const SIZE_OK: () = assert!(T::IS_VALID, "buffer size is not valid");

    fn new(data: T) -> Self {
        let _ = Self::SIZE_OK; // Forces the compile-time check.
        Buffer { data }
    }
}

fn main() {
    // This compiles fine.
    let _good = Buffer::new([0u8; 256]);
    // This line would cause a compile-time error.
    // let _bad = Buffer::new([0u8; 300]);
}
The compiler stops me if I try to create a Buffer with an invalid size. The error appears right there in my editor, not in a log file from a user’s machine halfway across the world. This is a powerful form of validation.
Now, let’s consider generics, a concept many languages have. Rust’s generics are incredibly efficient because they use monomorphization. That’s a complex word for a simple idea: the compiler creates a separate, concrete copy of your generic function for every type you use it with.
fn add<T: std::ops::Add<Output = T>>(a: T, b: T) -> T {
    a + b
}

fn main() {
    let x = add(5, 10);     // This creates `fn add_i32(i32, i32) -> i32`
    let y = add(5.5, 10.2); // This creates `fn add_f64(f64, f64) -> f64`
}
There’s no runtime type checking or pointer indirection. The call add(5, 10) becomes a direct call to a function that only works with integers. The performance is identical to writing two separate, typed functions by hand, but I only had to write the logic once.
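To make that concrete, here is roughly what the compiler produces for those two calls, written out by hand (the real symbol names are mangled; `add_i32` and `add_f64` are just illustrative):

// Hand-written equivalents of the two monomorphized copies of `add`.
fn add_i32(a: i32, b: i32) -> i32 { a + b }
fn add_f64(a: f64, b: f64) -> f64 { a + b }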
Const generics take this further by letting types depend on constant values, not just other types. For years, a major pain point in Rust was dealing with arrays of different sizes. With const generics, a [u8; 32] and a [u8; 64] are distinct, usable types.
struct PixelGrid<const W: usize, const H: usize> {
    // H rows of W RGB pixels. (A flat `[[u8; 3]; W * H]` would need the
    // unstable `generic_const_exprs` feature, so we nest fixed-size rows.)
    pixels: [[[u8; 3]; W]; H],
}

impl<const W: usize, const H: usize> PixelGrid<W, H> {
    fn get_pixel(&self, x: usize, y: usize) -> [u8; 3] {
        self.pixels[y][x]
    }
}
// I can now work with specific resolutions as types.
fn process_4k(frame: &PixelGrid<3840, 2160>) {
    // Process ultra-high definition.
}

fn process_720p(frame: &PixelGrid<1280, 720>) {
    // Process high definition.
}
This is a game-changer for libraries dealing with matrices, cryptography, or any domain where dimensions are critical to correctness. The compiler can check that I’m not accidentally mixing a 4x4 matrix with a 3x3 matrix operation.
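As a sketch of that idea, with a hypothetical Matrix type of my own (not from any particular library), the dimension rules can be encoded directly in a function signature:

// A hypothetical const-generic matrix; its dimensions are part of the type.
struct Matrix<const R: usize, const C: usize> {
    cells: [[f64; C]; R],
}

// Multiplying an R x N matrix by an N x C matrix yields an R x C matrix.
// Passing a 3x3 where a 4x4 is required is a type error at compile time.
fn multiply<const R: usize, const N: usize, const C: usize>(
    a: &Matrix<R, N>,
    b: &Matrix<N, C>,
) -> Matrix<R, C> {
    let mut out = Matrix { cells: [[0.0; C]; R] };
    for r in 0..R {
        for c in 0..C {
            for n in 0..N {
                out.cells[r][c] += a.cells[r][n] * b.cells[n][c];
            }
        }
    }
    out
}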
Declarative macros, created with macro_rules!, are the first step into code generation. They work by pattern matching. I think of them as a sophisticated “find and replace” that happens during compilation. They’re perfect for eliminating repetitive boilerplate.
macro_rules! create_enums {
    ($name:ident { $($variant:ident = $val:expr),* $(,)? }) => {
        enum $name {
            $($variant = $val),*
        }

        impl $name {
            fn describe(&self) -> &'static str {
                match self {
                    $(Self::$variant => stringify!($variant)),*
                }
            }
        }
    };
}

// Using the macro.
create_enums! { Status {
    Ok = 0,
    Warning = 1,
    Error = 2,
}}

fn main() {
    let s = Status::Warning;
    println!("{} is value {}", s.describe(), s as i32);
}
I wrote the match logic once in the macro, and it was expanded for the Status enum. If I need a similar enum for HttpCode, I just use the macro again. It keeps my code DRY and consistent.
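For example, a hypothetical HttpCode enum (the variants here are just for illustration) takes one more invocation:

create_enums! { HttpCode {
    Ok = 200,
    NotFound = 404,
    ServerError = 500,
}}

Both enums now get identical describe logic, generated from the same pattern.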
When macro_rules! isn’t powerful enough, I turn to procedural macros. These are full Rust programs that take your code as input and produce new code as output. They operate on the abstract syntax tree (AST), which is the compiler’s structured understanding of your program.
The most common kind is the derive macro. You’ve seen #[derive(Debug, Clone)]. Writing my own lets me generate code for my types automatically; the same machinery powers trait derives, though the simple example below generates an inherent method.
// In a separate crate named `my_derive`
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

#[proc_macro_derive(Greeter)]
pub fn greeter_derive(input: TokenStream) -> TokenStream {
    let ast = parse_macro_input!(input as DeriveInput);
    let name = &ast.ident;
    // (`gen` is a reserved keyword in the 2024 edition, so avoid it
    // as a variable name.)
    let expanded = quote! {
        impl #name {
            fn hello() {
                println!("Hello, I am a {}!", stringify!(#name));
            }
        }
    };
    expanded.into()
}
Then, in my application code, I can use it like this:
use my_derive::Greeter;

#[derive(Greeter)]
struct Robot;

#[derive(Greeter)]
struct Satellite;

fn main() {
    Robot::hello();     // Prints: "Hello, I am a Robot!"
    Satellite::hello(); // Prints: "Hello, I am a Satellite!"
}
The derive macro saved me from writing the same trivial impl block for each struct. For complex traits involving serialization or validation, this is indispensable.
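serde is the canonical real-world case; assuming the serde dependency (with its derive feature) is declared in Cargo.toml, a single attribute replaces what would otherwise be a long hand-written implementation:

use serde::Serialize;

// serde's derive macro generates the entire Serialize implementation
// for this struct at compile time.
#[derive(Serialize)]
struct Config {
    name: String,
    retries: u32,
}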
Attribute macros are a more flexible cousin. They can attach to any item (like a function, struct, or module) and transform it. I often use these for lightweight frameworks, like adding logging or routing information.
// In the macro crate
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro_attribute]
pub fn with_logging(_attr: TokenStream, item: TokenStream) -> TokenStream {
    let input_fn: syn::ItemFn = syn::parse(item).unwrap();
    let fn_name = &input_fn.sig.ident;
    // quote! does not interpolate inside string literals, so convert
    // the identifier to a string first.
    let fn_name_str = fn_name.to_string();
    let fn_block = &input_fn.block;
    // Simplification: this assumes a function with no arguments or
    // return value; a robust macro would reuse the full signature.
    let output = quote! {
        fn #fn_name() {
            println!("[LOG] >> Starting {}", #fn_name_str);
            #fn_block
            println!("[LOG] << Finished {}", #fn_name_str);
        }
    };
    output.into()
}
Applying the macro is clean and declarative.
#[with_logging]
fn perform_calculation() {
    println!("Computing...");
    // Complex logic here.
}

fn main() {
    perform_calculation();
    // Output:
    // [LOG] >> Starting perform_calculation
    // Computing...
    // [LOG] << Finished perform_calculation
}
The function I wrote is wrapped with logging code automatically. This separation of concerns keeps my core logic clean.
Function-like procedural macros let me define my own syntax. They look like regular macro calls, custom!(...), but can parse that content in any way I choose. I use these to create small, domain-specific languages embedded in Rust.
// In the macro crate
use proc_macro::TokenStream;
use quote::quote;

#[proc_macro]
pub fn def_commands(input: TokenStream) -> TokenStream {
    // The input is everything between the delimiters, e.g. `Start, Stop`
    // for `def_commands! { Start, Stop }`; the braces themselves are
    // not part of the token stream.
    let input_str = input.to_string();
    let commands: Vec<&str> = input_str
        .split(',')
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .collect();
    let expansions: Vec<_> = commands.iter().map(|cmd| {
        let cmd_ident = syn::Ident::new(cmd, proc_macro2::Span::call_site());
        quote! {
            Command::new(stringify!(#cmd_ident), |args| {
                println!("Executing {} with {:?}", stringify!(#cmd_ident), args);
            })
        }
    }).collect();
    let output = quote! {
        vec![ #(#expansions),* ]
    };
    output.into()
}
In my main code, this allows for a very clear definition.
let command_list = def_commands! { Start, Stop, Pause, Resume };
// Expands to a vector of pre-configured Command objects.
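For that expansion to compile, a Command type with a matching constructor must be in scope at the call site. A minimal stand-in for what the macro assumes (my own sketch, not a library type):

// A stand-in matching the shape the macro's expansion expects.
struct Command {
    name: &'static str,
    handler: fn(&[&str]),
}

impl Command {
    fn new(name: &'static str, handler: fn(&[&str])) -> Self {
        Command { name, handler }
    }
}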
Finally, there’s the build script, a build.rs file. This is a separate program that runs before the main compilation. I use it when my code depends on something external, like a data file, a protocol schema, or even the version of a system library.
// build.rs
use std::{env, fs, path::Path};

fn main() {
    // Tell Cargo to re-run this script only if this file changes.
    println!("cargo:rerun-if-changed=config/features.toml");
    let out_dir = env::var_os("OUT_DIR").unwrap();
    let dest_path = Path::new(&out_dir).join("features.rs");
    let config = fs::read_to_string("config/features.toml").unwrap();
    // A naive stand-in for real TOML parsing (the `toml` crate would be
    // the robust choice): treat each `name = true` line as a feature.
    let parsed_features = config.lines().filter_map(|line| {
        let (name, value) = line.split_once('=')?;
        (value.trim() == "true").then(|| name.trim().to_string())
    });
    let mut generated_code = String::new();
    generated_code.push_str("pub const ENABLED_FEATURES: &[&str] = &[\n");
    for feature in parsed_features {
        generated_code.push_str(&format!("    \"{}\",\n", feature));
    }
    generated_code.push_str("];\n");
    fs::write(&dest_path, generated_code).unwrap();
}
Then, in my lib.rs or main.rs, I include that generated file. It becomes a normal part of my program.
// Pull in the file that build.rs generated.
include!(concat!(env!("OUT_DIR"), "/features.rs"));

fn main() {
    for feat in ENABLED_FEATURES {
        println!("Feature active: {}", feat);
    }
}
The build script stage is my last chance to do complex, perhaps even I/O-heavy, preparation before the rigorous world of Rust compilation begins.
Each of these techniques shifts work left in the development timeline. The cost is paid during compilation, resulting in a binary that is faster and more correct. It makes me think differently about program structure. I ask myself: “Does this value need to be computed at runtime, or can I know it now?” “Is this boilerplate I can generate?” “Can I use the type system to rule out invalid states?” By leveraging these tools, I spend more time solving unique problems and less time writing repetitive, error-prone code. The compiler becomes my most active collaborator.