Agentic localization for your codebase
l10n translates your content with LLMs, keeps everything in-repo, validates output locally, and generates drafts your team can review without the round-trip of external platforms.
Why Tuist built this
We tried Crowdin and Weblate. The indirection of exporting and importing content slowed us down, and we could not run our own validation in the loop. We wanted the same workflow we use for code: change, test, review, ship.
Now translations are generated in place, checked by your tooling, and reviewed by humans only when it matters.
How it works
Add L10N.md context files alongside your content. l10n tracks these dependencies, so when context or content changes, only the affected translations are regenerated.
Pick one model to coordinate the agentic session and another to translate accurately. Use any OpenAI-compatible endpoint, Vertex AI, or your own hosted model.
Agents have access to built-in syntax checks and custom commands you define. They run validation after each translation and retry on errors, so output is correct before you review.
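The translate-validate-retry loop can be sketched as follows. This is illustrative, not l10n's actual internals: `translate` stands in for a call to the translator model, and the check commands are arbitrary shell validators.

```python
import subprocess

def translate_with_validation(translate, check_cmds, max_retries=3):
    """Run a translation, then every check command; on failure, feed
    the error output back to the translator and retry."""
    feedback = None
    for _ in range(max_retries):
        text = translate(feedback)  # ask the translator model, with prior errors as context
        for cmd in check_cmds:
            result = subprocess.run(cmd, input=text, shell=True,
                                    capture_output=True, text=True)
            if result.returncode != 0:
                feedback = result.stderr  # the agent sees the validator's error
                break
        else:
            return text  # every check passed
    raise RuntimeError("validation still failing after retries")
```

The key design point is that validator output is not just pass/fail: the error text is fed back into the next attempt, the same way a developer reads a compiler error before fixing code.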
Tools
Agents use tools to automate verification after every translation.
Syntax validators
Parse output to ensure it is valid before it is saved.
- JSON
- YAML
- PO
- Markdown frontmatter
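A syntax validator is conceptually simple. Here is a minimal JSON check of the kind an agent can run after each translation (an illustrative sketch, not l10n's implementation; the YAML and PO checks are analogous):

```python
import json

def check_json(text):
    """Return None if text parses as JSON, else the parser's error message."""
    try:
        json.loads(text)
        return None
    except json.JSONDecodeError as exc:
        return str(exc)  # e.g. "Expecting value: line 1 column 2 (char 1)"
```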
Preserve checks
Guarantee critical tokens survive translation.
- Code blocks
- Inline code
- URLs
- Placeholders
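A preserve check can be approximated by extracting critical tokens from the source and confirming each one survives verbatim in the translation. The regex and function below are a simplified sketch, not l10n's actual rules:

```python
import re

# Inline code spans, URLs, and {placeholder} tokens (illustrative pattern)
TOKEN_RE = re.compile(r"`[^`]+`|https?://\S+|\{[^}]+\}")

def missing_tokens(source, translated):
    """Return the critical tokens from source that do not appear
    verbatim in the translated text."""
    return [tok for tok in TOKEN_RE.findall(source) if tok not in translated]
```

An empty result means the translation kept every token; a non-empty result is exactly the kind of error fed back to the agent for a retry.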
Custom commands
Bring your own validators with check_cmd and check_cmds.
- Linters
- Compilers
- Schema validators
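As an example, a custom validator might be wired into the frontmatter like this. The keys check_cmd and check_cmds come from the documentation above; the placement inside the translate table and the command shown are assumptions for illustration:

```toml
+++
[[translate]]
source = "docs/guide.md"
targets = ["es", "ja"]
output = "docs/{lang}/{basename}.{ext}"
# Hypothetical: run a Markdown linter over the generated output
check_cmd = "npx markdownlint docs"
+++
```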
Tool failures trigger retries until the output is valid.
Configuration lives in your repo
L10N.md
Define translation sources, targets, and output patterns in TOML frontmatter.
+++
[[translate]]
source = "site/src/_data/home.json"
targets = ["es", "de", "ko", "ja", "zh-Hans", "zh-Hant"]
output = "site/src/_data/i18n/{lang}/{basename}.{ext}"
+++
# Context for the translating agent...
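Reading the example above, the {lang}, {basename}, and {ext} placeholders expand once per target language. A sketch of that expansion, as we interpret it (not l10n's code):

```python
from pathlib import Path

def expand_output(pattern, source, lang):
    """Expand an output pattern for one source file and target language."""
    p = Path(source)
    return pattern.format(lang=lang, basename=p.stem,
                          ext=p.suffix.lstrip("."))

expand_output("site/src/_data/i18n/{lang}/{basename}.{ext}",
              "site/src/_data/home.json", "es")
# "site/src/_data/i18n/es/home.json"
```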
CLI commands
Translate, validate, and track what needs updating.
- l10n init: Interactive project setup
- l10n translate: Generate translations
- l10n status: Report missing or stale outputs
- l10n check: Validate output syntax
- l10n clean: Remove stale translation outputs
Use --force to re-translate everything.
Built for real workflows
Context-aware hashing
Translations update when source content or ancestor L10N.md context changes.
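Context-aware hashing can be sketched as hashing the source file together with its ancestor context files, so a change to either one invalidates the cached translation. A minimal illustration, assuming the context files are already resolved to a list:

```python
import hashlib
from pathlib import Path

def content_hash(source, context_files):
    """Combined digest of a source file and its ancestor L10N.md files.
    If any of them changes, the digest changes and the translation
    is considered stale."""
    digest = hashlib.sha256()
    for path in [source, *context_files]:
        digest.update(Path(path).read_bytes())
    return digest.hexdigest()
```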
Validation hooks
JSON, YAML, and PO syntax checks plus optional external lint commands.
Agent pipeline
Separate coordinator and translator models with retry on validation errors.
Human review ready
Generate drafts fast, review when needed, and keep everything in Git.
FAQ
How is this different from traditional translation tools?
Traditional tools rely on translation memories: static databases of past translations matched by similarity. l10n replaces that with LLM context as memory: context files that agents read, learn from, and that your team can iterate on over time. Instead of looking up a fuzzy match, agents understand your product's tone, terminology, and conventions. And just like developers validate code changes by compiling or linting, agents validate their translations using the same tools in your environment. Run it in CI or locally, and agents will use your linters, compilers, and validators to correct their own output.
Can I run it with my own models and infrastructure?
Yes. l10n is a CLI tool, not a hosted service. You point it at any OpenAI-compatible endpoint, Vertex AI, or your own model. You control the cost, the data, and the quality.
Where do human reviewers fit in?
Today, reviewers check translated content through pull requests and diffs, and can update context files to force re-translation when needed. In the future, we expect them to become part of the loop by running l10n locally, the same way developers already work with coding agents like Codex.