Abstraction Costs

Every abstraction has a cost. The question is whether the compiler can eliminate it. Mojo's design goal is zero-cost abstractions — high-level code that compiles to the same machine code as hand-written low-level code. But this only works when you understand what the compiler can and cannot optimize away.

Code

Comparing abstraction levels for the same operation:

from memory import UnsafePointer

# High-level: Pythonic, readable
fn sum_pythonic(data: List[Int]) -> Int:
    var total = 0
    for val in data:
        total += val  # note: some older Mojo versions yield references here (val[])
    return total

# Low-level: explicit memory access
fn sum_lowlevel(ptr: UnsafePointer[Int], n: Int) -> Int:
    var total = 0
    for i in range(n):
        total += ptr[i]
    return total

# Both should compile to the same machine code
# if the compiler can see through the List abstraction

When to Stay Pythonic

  • Data pipeline code: I/O, parsing, configuration — not performance-critical
  • Prototyping: Get correctness first, optimize later
  • Non-hot paths: Code that runs once, not in a loop

When to Go Low-Level

  • Inner loops: The 1% of code that takes 99% of runtime
  • Kernel code: SIMD, tiling, explicit memory management
  • When the compiler can't see through: Dynamic dispatch, Python interop
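The kernel style named above can be sketched as an explicitly vectorized sum. This is a minimal sketch, assuming a recent Mojo standard library (`simdwidthof`, `UnsafePointer.load`, `SIMD.reduce_add`); the exact API has shifted between Mojo versions:

```mojo
from memory import UnsafePointer
from sys import simdwidthof

# Low-level kernel style: load `width` elements per iteration
# with SIMD, then handle the remainder with a scalar tail loop.
fn sum_simd(ptr: UnsafePointer[Float32], n: Int) -> Float32:
    alias width = simdwidthof[DType.float32]()
    var acc = SIMD[DType.float32, width](0)
    var i = 0
    while i + width <= n:
        acc += ptr.load[width=width](i)
        i += width
    var total = acc.reduce_add()
    while i < n:
        total += ptr[i]
        i += 1
    return total
```

Lanes are accumulated element-wise and reduced once at the end; the scalar tail covers lengths that are not a multiple of the SIMD width. This is the kind of code that belongs only in the 1% hot path.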

Constraint

Write the same function two ways: once using high-level Mojo constructs, once using raw pointers and manual indexing. Reason about whether the compiler can generate identical machine code for both.

Why It Matters

Premature optimization wastes engineering time. Late optimization wastes compute. The skill is knowing which abstractions are free (inlined functions, value-type structs) and which are not (dynamic dispatch, heap allocation, Python interop). Profile first, then optimize the hot path.
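As an illustration of a free abstraction, here is a sketch of a value-type wrapper whose operator should be inlined away entirely (assuming Mojo's `@value` and `@always_inline` decorators behave as documented):

```mojo
@value
struct Meters:
    var value: Float64

    @always_inline
    fn __add__(self, other: Meters) -> Meters:
        return Meters(self.value + other.value)

fn demo() -> Float64:
    var a = Meters(1.5)
    var b = Meters(2.5)
    # After inlining, this should be a single Float64 add --
    # the struct wrapper leaves no trace in the machine code.
    return (a + b).value
```

By contrast, anything that routes through an indirection the compiler cannot resolve at compile time — Python interop, dynamic dispatch — cannot be inlined, and its cost survives into the generated code.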