Vera replaces variable names with typed De Bruijn indices, mandates Z3-verified contracts, and compiles to WebAssembly so LLM-generated code is structurally checkable rather than semantically trusted.
Key Takeaways
No variable names: @Int.0 refers to the most recent Int binding and @Int.1 to the one before it, a design intended to eliminate naming-related LLM errors such as shadowing and misidentification.
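A minimal sketch of how positional references read in practice; the @Type.N form is from the article, while the fn/let binding syntax here is assumed for illustration:

```
fn sum_pair() -> Int {
    let Int = 10      // older binding: addressed as @Int.1 after the next let
    let Int = 32      // most recent Int binding: @Int.0
    @Int.0 + @Int.1   // evaluates to 42; references are positional per type,
                      // so there are no names to shadow or misidentify
}
```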
Every function must declare requires(), ensures(), and effects() clauses; the Z3 SMT solver statically proves Tier 1 contracts, while a runtime fallback handles the undecidable cases.
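A hypothetical contract, assuming parameters bind left to right (so the second Int parameter is @Int.0) and a result keyword in postconditions; only the three clause names and the Tier 1 / runtime split are from the article:

```
fn safe_div(Int, Int) -> Int
    requires(@Int.0 != 0)               // divisor is nonzero; simple enough
                                        // for Z3 to prove statically (Tier 1)
    ensures(result == @Int.1 / @Int.0)  // hypothetical postcondition form
    effects()                           // pure: no capabilities declared
{
    @Int.1 / @Int.0
}
```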
Effect types are structural capability proofs: a function that calls an LLM must declare effects(<Inference>), and a caller whose own effect set doesn't grant that capability is rejected at compile time.
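A sketch of the capability check; effects(<Inference>) is from the article, while the surrounding syntax and the llm_complete builtin are assumed for illustration:

```
fn summarize(String) -> String
    effects(<Inference>)      // declares that this function may call an LLM
{
    llm_complete(@String.0)   // hypothetical builtin that performs inference
}

fn pure_transform(String) -> String
    effects()                 // empty effect set: no capabilities granted
{
    summarize(@String.0)      // rejected at compile time: the caller's
                              // effects() clause does not permit <Inference>
}
```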
Compiler diagnostics are machine-readable: stable error codes E001-E702, JSON output via --json, and each error carries a concrete fix example aimed at the model rather than a human reader.
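A hypothetical --json diagnostic for the effect violation above; the stable code range and the flag are from the article, but the specific code and field names here are invented for illustration:

```json
{
  "code": "E412",
  "message": "call requires effect <Inference>, but caller declares effects()",
  "span": { "line": 10, "column": 5 },
  "fix": "change the caller's clause to effects(<Inference>)"
}
```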
Early VeraBench results: Kimi K2.5 hits 100% run_correct on Vera vs. 86% on Python and 91% on TypeScript, and three models beat their own TypeScript scores on Vera across the benchmark's 50 problems and 5 tiers.
Hacker News Comment Review
Commenters are split on whether nameless syntax helps or hurts: one camp argues that LLMs perform best on verbose, explicit languages like Go or Java, where everything is on the page, rather than on languages that demand a new mental model.
The effect type system drew the most agreement as genuinely useful: typed capability proofs at compile time let you verify that agent-generated code cannot do IO or network calls before running it, a sandboxing benefit independent of the naming debate.
The empirical claim that structural references reduce LLM errors is questioned directly, with at least one commenter calling it unsubstantiated obfuscation absent a controlled benchmark showing improvement.
Notable Comments
@unignorant: effect systems enable pre-runtime capability proofs – “you know it can’t do IO” – making agent code trustworthy enough to sandbox without inspection.
@danpalmer: argues that nameless syntax may backfire, since LLMs degrade on Haskell-like abstractions that require holding a mental model rather than just reading the page.
@ginko: asks for evidence that structural references outperform named variables for LLM code generation; calls the current rationale unconvincing.