Building with WebAssembly: a practical guide for JS developers
What WebAssembly actually is, when it's worth using, and how to ship Rust to the browser without losing your mind.
- WebAssembly
- Rust
- Performance
- Web
WebAssembly attracts hype and dismissal in equal measure. After shipping a couple of projects that use it in production, here’s my honest take on when it’s worth the complexity and when you should just write more TypeScript.
What WASM actually is
WebAssembly is a binary instruction format that runs in a virtual machine embedded in the browser. It’s not a programming language — it’s a compilation target. You write Rust, C, C++, or Go, and compile it to .wasm.
The VM is:
- Sandboxed: WASM code can’t access the DOM, network, or filesystem directly — it communicates with JS through an explicit API surface
- Fast to parse: the binary format is designed to be decoded quickly, unlike JS source, which must be lexed and parsed before it can be compiled
- Near-native speed for CPU-bound work, but with a real overhead on the WASM ↔ JS boundary
That last point is what most tutorials gloss over.
The boundary problem
Every time you call a WASM function from JS, or vice versa, there’s a cost. For scalar arguments (numbers) it’s cheap. For complex data (strings, arrays, objects), you’re copying bytes across the boundary because WASM has its own linear memory that JS can’t directly observe.
// Bad: each call crosses the JS ↔ WASM boundary, and non-scalar
// arguments get copied into WASM memory every time
let result = 0;
for (const item of largeArray) {
  result += wasmModule.process(item); // ← many boundary crossings
}

// Better: pass the whole array once
const result = wasmModule.processAll(largeArray); // ← one crossing
Rule of thumb: batch your WASM calls. If you’re calling a WASM function in a tight loop, you’re doing it wrong.
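Batching usually means getting your data into WASM’s linear memory in one bulk write. Here’s a minimal sketch of that idea — note that `alloc` is a hypothetical export (a real module built with wasm-bindgen handles this for you); a plain `WebAssembly.Memory` stands in for the module’s memory:

```javascript
// Sketch: write data into WASM linear memory with one bulk copy,
// instead of crossing the boundary once per element.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB

// Hypothetical allocator — a real module would export something like
// `alloc(byteLength) -> ptr`. Here we just hand out offset 0.
function alloc(byteLength) { return 0; }

const data = [1.0, 2.0, 3.0];
const ptr = alloc(data.length * 8); // 8 bytes per f64
const view = new Float64Array(memory.buffer, ptr, data.length);
view.set(data); // one bulk copy into linear memory

// The WASM side would now read `data.length` f64s starting at `ptr`.
console.log(view.reduce((a, b) => a + b, 0)); // 6
```

With wasm-bindgen, passing a typed array as a `&[f64]` argument does essentially this copy for you under the hood — the point is that it happens once per call, so fewer, bigger calls win.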
When WASM is worth it
Good use cases:
- Physics simulations, force-directed graph layouts, collision detection
- Image and video processing (filters, encoding, decoding)
- Cryptographic operations
- Parser-heavy tasks (CSV, binary formats, log parsing)
- Porting a C/Rust library that already exists and works
Bad use cases:
- Simple DOM manipulation or event handling
- Network requests — they’re I/O-bound async operations, so WASM doesn’t speed them up
- Things where the bottleneck is actually your algorithm, not JS overhead
- Anywhere you’re calling WASM on every keystroke or every frame with small payloads
Setting up Rust → WASM
The toolchain is actually good now. wasm-pack handles the build, and wasm-bindgen generates the JS glue code automatically.
cargo install wasm-pack
In your Cargo.toml:
[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2"
A minimal exported function:
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn sum(data: &[f64]) -> f64 {
    data.iter().sum()
}
Build it:
wasm-pack build --target web --out-dir pkg
This generates a .wasm file plus TypeScript bindings. Import in your app:
import init, { sum } from './pkg/my_crate.js';
await init(); // load + compile the WASM module once
const result = sum(new Float64Array([1.0, 2.0, 3.0]));
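In production you’ll also want to handle the module failing to load (old browsers, blocked fetch, CSP). A minimal sketch of a lazy-load-with-fallback pattern — `sumJs` is a hypothetical pure-JS equivalent of the exported `sum`:

```javascript
// Pure-JS fallback with the same contract as the WASM export.
function sumJs(data) {
  let total = 0;
  for (const x of data) total += x;
  return total;
}

// Load the wasm-pack output lazily; fall back to JS if it fails.
async function getSum() {
  try {
    const mod = await import('./pkg/my_crate.js'); // wasm-pack output
    await mod.default(); // init(): fetch + compile the .wasm once
    return mod.sum;
  } catch {
    return sumJs; // degrade gracefully to plain JS
  }
}
```

Because both paths take a `Float64Array` and return a number, the call site doesn’t care which implementation it got.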
Performance profiling
The worst mistake is shipping WASM without measuring whether it’s actually faster. Use the browser’s performance timeline:
performance.mark('wasm-start');
const result = wasmModule.heavyComputation(data);
performance.mark('wasm-end');
performance.measure('wasm-duration', 'wasm-start', 'wasm-end');
Compare against a JS implementation. You might be surprised — V8’s JIT is excellent on well-optimised JS, and WASM is only consistently faster for CPU-bound work the JIT handles poorly, such as tight numeric kernels over predictable memory layouts.
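A simple way to make that comparison honest is to wrap both implementations in the same timing harness. A sketch, using the Performance API available in modern browsers and Node.js:

```javascript
// Time a function over many iterations and report a per-iteration average.
function bench(label, fn, iterations = 100) {
  performance.mark(`${label}-start`);
  let result;
  for (let i = 0; i < iterations; i++) result = fn();
  performance.mark(`${label}-end`);
  const m = performance.measure(label, `${label}-start`, `${label}-end`);
  console.log(`${label}: ${(m.duration / iterations).toFixed(3)} ms/iter`);
  return result;
}

const data = Float64Array.from({ length: 100_000 }, (_, i) => i);

// Plain JS baseline — V8 optimises this kind of loop well.
const jsSum = () => data.reduce((a, b) => a + b, 0);

bench('js-sum', jsSum);
// bench('wasm-sum', () => wasmModule.sum(data)); // once the module is loaded
```

Run both on the same data, warm the JIT first (the early iterations are slower), and only ship the WASM path if the gap is real on your target hardware.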
Conclusion
WebAssembly is a real, production-useful tool when applied to the right problems. The graph visualizer project on this site uses it for the force simulation loop — 4000 nodes, 60fps, single thread. That’s the sweet spot: numerical, tight loop, large data, already a Rust codebase.
For most web features, ship TypeScript. Reserve WASM for the performance cliffs.