Mar 2026 · 5 min read

Rewriting the Silver Searcher in Rust

A historical study of a Rust rewrite of ag: the first benchmark pass showed roughly 2× faster median runtime than ag on one measured workload, but still behind rg and ugrep.

benchmarks · rust · rewrite study · search

Rewrite study · First benchmark pass

  • Project: The Silver Searcher
  • Baseline: ag 2.2.0
  • Rewrite: rust-ag 0.1.0
  • Language: Rust

Historical first pass on the literal-simple workload. rust-ag roughly halved ag's median runtime, but ripgrep and ugrep still led.

  • ag → rust-ag: 1.96×–2.03× faster
  • Parity: 8/8 smoke scenarios
  • Relative rank: still behind rg and ugrep
  • Coverage: 1 of 38 scenarios
  • Claim gate: fails on reproducibility

  • On the only measured performance scenario, rust-ag cut the local median from 19.64 ms to 9.69 ms.
  • rg and ugrep were still faster on the same workload, at 7.74 ms and 4.02 ms median.
  • This was a narrow result: one workload, three measured samples per tool, one Apple M4 machine, and an overall claim gate that still failed on reproducibility bundle validation.

I rewrote the core search path of The Silver Searcher in Rust and benchmarked that rewrite against the original tool. This post captures that first benchmark pass: on the only measured performance workload, rust-ag was roughly 2× faster than ag.

That was the honest headline, but not the whole story. rg and ugrep were still faster, the performance data covered only 1 of 38 registered scenarios, and the overall claim gate still returned fail because the reproducibility bundle validation step was not finished. So this was a promising first result, not a victory lap.

Horizontal bar chart comparing the local median runtimes for ag, rust-ag, rg, and ugrep on the measured literal-simple workload. Lower is better.
Local median runtime comparison for the only measured workload. rust-ag roughly halves ag's runtime, but rg and ugrep remain faster.

The result in one view

The measured workload was the literal-simple scenario: search for "foo" across the repository working tree.

| Tool | Local median | Relative to ag | What to keep in mind |
| --- | --- | --- | --- |
| ag | 19.64 ms | 1.00× | Baseline |
| rust-ag | 9.69 ms | 2.03× faster | Apples-to-apples comparison target |
| rg | 7.74 ms | 2.54× faster | Faster here, but it searches a different file set by default |
| ugrep | 4.02 ms | 4.89× faster | Fastest here, also with different default traversal semantics |
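The "relative to ag" column is just the ratio of medians. A minimal sketch, with the medians from the table hardcoded:

```rust
fn main() {
    // Local median runtimes in milliseconds from the measured
    // literal-simple workload; ag is the baseline.
    let medians = [("ag", 19.64_f64), ("rust-ag", 9.69), ("rg", 7.74), ("ugrep", 4.02)];
    let baseline = medians[0].1;
    for (tool, median_ms) in medians {
        // "N× faster" relative to ag = ag's median / the tool's median.
        let speedup = baseline / median_ms;
        println!("{tool:>8}: {median_ms:>6.2} ms  {speedup:.2}x vs ag");
    }
}
```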

The key comparison is ag versus rust-ag. That pair was checked for output parity. The cross-tool numbers are still useful context, but they are not the same kind of comparison because rg and ugrep make different default choices about traversal and ignore handling.

What was actually measured

This first pass is intentionally narrow:

  • One performance workload: the literal-simple scenario (searching for "foo" across the working tree).
  • Three measured samples per tool, after one warmup run.
  • One machine: a single Apple M4 system.

That setup is enough to say something useful about this workload. It is not enough to claim a universal ranking for every kind of search.
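The one-warmup, three-sample, take-the-median setup can be sketched as a small timing harness. This is a hypothetical illustration of the shape of the measurement, not the study's actual benchmark code:

```rust
use std::time::Instant;

/// Run `f` once as a warmup, then `samples` timed runs, and return
/// the median runtime in milliseconds. Mirrors the study's
/// 1-warmup / 3-sample setup; the real harness is not shown here.
fn median_runtime_ms<F: FnMut()>(mut f: F, samples: usize) -> f64 {
    f(); // warmup: page in binaries, warm caches
    let mut times: Vec<f64> = (0..samples)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed().as_secs_f64() * 1e3
        })
        .collect();
    times.sort_by(|a, b| a.total_cmp(b));
    times[times.len() / 2] // median of an odd-length sample
}

fn main() {
    // Dummy workload standing in for an `ag` / `rust-ag` invocation.
    let median = median_runtime_ms(
        || { std::hint::black_box((0..10_000).sum::<u64>()); },
        3,
    );
    println!("median: {median:.3} ms");
}
```

With only three samples, the median is simply the middle of the three sorted timings, which is why the sample count is such a real limitation.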

Correctness came first

Before looking at speed, I checked whether the rewrite still behaves like ag on the smoke scenarios that matter most for day-to-day use.

Grid showing eight smoke scenarios where ag and rust-ag matched on stdout, stderr, and exit code.
ag and rust-ag matched across all 8 smoke scenarios in the correctness gate.

rust-ag matched ag across all 8 smoke scenarios that were checked.

That matters because it makes the ag versus rust-ag speedup meaningful. rg and ugrep landed in different output clusters during the correctness gate, which is expected: their default traversal and ignore rules select different file sets.
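A parity check of this kind boils down to running both tools on the same inputs and comparing stdout, stderr, and exit code. A minimal sketch, using stand-in commands (the study's real harness and its scenario list are not shown here):

```rust
use std::process::Command;

/// Run two commands and report whether stdout, stderr, and the exit
/// code all match. Hypothetical helper in the spirit of the
/// smoke-scenario gate described above.
fn outputs_match(a: &mut Command, b: &mut Command) -> bool {
    let out_a = a.output().expect("failed to run first command");
    let out_b = b.output().expect("failed to run second command");
    out_a.stdout == out_b.stdout
        && out_a.stderr == out_b.stderr
        && out_a.status.code() == out_b.status.code()
}

fn main() {
    // Stand-in commands; a real check would compare
    // `ag <pattern> <path>` against `rust-ag <pattern> <path>`.
    let same = outputs_match(
        Command::new("echo").arg("foo"),
        Command::new("echo").arg("foo"),
    );
    println!("parity: {same}");
}
```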

Speedup ranges across local, nightly, and manual runs

The 2× result was not a one-off. Across the three run types, rust-ag stayed between 1.96× and 2.03× faster than ag.

Range chart showing speedup ranges across local, nightly, and manual benchmark runs.
Speedup ranges across local, nightly, and manual runs. rust-ag consistently beat ag, but rg and ugrep still led the field.
| Pair | Local | Nightly | Manual |
| --- | --- | --- | --- |
| rust-ag vs ag | 2.03× | 1.96× | 1.96× |
| rg vs ag | 2.54× | 2.45× | 2.48× |
| ugrep vs ag | 4.89× | 4.76× | 4.95× |
| rg vs rust-ag | 1.25× | 1.25× | 1.27× |
| ugrep vs rust-ag | 2.41× | 2.43× | 2.52× |

The improvement over ag looks stable inside this test window. The ranking also stayed stable: ugrep → rg → rust-ag → ag.
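The headline 1.96×–2.03× range is just the min and max of the rust-ag-vs-ag speedups across the three run types. A trivial sketch with those numbers copied in:

```rust
fn main() {
    // rust-ag vs ag speedups: local, nightly, manual runs.
    let speedups = [2.03_f64, 1.96, 1.96];
    let min = speedups.iter().cloned().fold(f64::INFINITY, f64::min);
    let max = speedups.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    println!("speedup range: {min:.2}x - {max:.2}x");
}
```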

Scope limits that matter

The right way to read this study is: the rewrite looks promising, and the limits are real.

Four cards summarizing the study limitations: only 1 of 38 scenarios measured, 3 measured samples after 1 warmup, one Apple M4 machine, and a failing overall claim gate due to incomplete reproducibility validation.
The result is useful, but it is still bounded by narrow scenario coverage, small sample counts, one machine, and an incomplete reproducibility bundle validation step.

That last point matters. I do not want to quietly skip it just because the speedup looks good.

Bottom line

This rewrite clears a useful bar: on the measured workload, it preserves ag's behavior in the smoke checks and cuts median runtime roughly in half.

It does not clear the stronger bar of being the fastest tool in the comparison set. rg and ugrep are still ahead, and this benchmark window is much too small to pretend otherwise.

That is why this is worth publishing as a rewrite study: the interesting result is not just that Rust helped, but that the evidence still forces a careful conclusion.

Repo and evidence

The current public branch now exposes the published rewrite workspace plus the parity checks and manifests, so the links below point there directly.