Here is the approach I plan to take. Start with three languages: Simplified Chinese, Japanese, and Korean. These represent the highest-demand languages based on community requests, and they're also where the documentation gap most visibly limits participation. Use AI to generate initial translations of all documentation pages, then set up Crowdin (a translation management platform with free support for open-source projects) so that native speakers can review, correct, and improve those AI-generated drafts over time. The AI translations will not be perfect. They will be serviceable—and, importantly, they will give native-speaking community members something concrete to improve, rather than asking them to translate 120 pages from scratch.
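To make that bootstrapping step concrete, here is a minimal sketch of what the draft-generation pass could look like. It assumes a Docusaurus-style `i18n/` directory layout and uses a placeholder `translateMdx()` helper standing in for whatever AI model ends up doing the translation; none of these paths or names are taken from the OpenClaw repository.

```ts
// bootstrap-drafts.ts -- walk the English docs tree and write AI draft
// translations for each launch locale. The i18n/<locale>/... layout follows
// Docusaurus conventions; whether OpenClaw's docs site uses it is an assumption.
import { promises as fs } from "fs";
import * as path from "path";

const LOCALES = ["zh-Hans", "ja", "ko"];
const SOURCE_DIR = "docs";
const TARGET_ROOT = "i18n";

// Placeholder for the actual AI call -- returns the source unchanged for now.
async function translateMdx(source: string, locale: string): Promise<string> {
  return source; // a real implementation would send the prose to the model for `locale`
}

// Recursively yield every .md/.mdx file under `dir`.
async function* walk(dir: string): AsyncGenerator<string> {
  for (const entry of await fs.readdir(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) yield* walk(full);
    else if (/\.mdx?$/.test(entry.name)) yield full;
  }
}

async function main(): Promise<void> {
  for await (const file of walk(SOURCE_DIR)) {
    const english = await fs.readFile(file, "utf8");
    const relative = path.relative(SOURCE_DIR, file);
    for (const locale of LOCALES) {
      const draft = await translateMdx(english, locale);
      const target = path.join(
        TARGET_ROOT,
        locale,
        "docusaurus-plugin-content-docs",
        "current",
        relative,
      );
      await fs.mkdir(path.dirname(target), { recursive: true });
      await fs.writeFile(target, draft, "utf8");
    }
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The point is not the script itself but where the drafts land: in the same per-locale locations the site already knows how to build, so reviewers end up correcting real pages rather than detached text files.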
This workflow mirrors what other major open-source projects have done successfully. Docusaurus itself uses Crowdin for translations. So does Jest. The pattern is well established: English source files live in the repository, an automated sync keeps the translation platform up to date when the English docs change, translators work through a web interface that preserves formatting and syntax, and completed translations flow back into the repository through pull requests. The result is a documentation site that serves multiple languages from the same codebase, with each language improving incrementally as the community contributes.
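For a sense of how little configuration the multi-language build itself needs, here is a hedged sketch of the Docusaurus side, assuming the docs site is (or becomes) a Docusaurus v3 site; the title and URL are placeholders, not values from any real config.

```ts
// docusaurus.config.ts (excerpt) -- declare which locales the site builds.
// Locale codes are BCP 47, which is what Docusaurus expects.
import type { Config } from "@docusaurus/types";

const config: Config = {
  title: "OpenClaw",               // placeholder
  url: "https://docs.example.org", // placeholder, not the real docs URL
  baseUrl: "/",
  i18n: {
    defaultLocale: "en",
    locales: ["en", "zh-Hans", "ja", "ko"],
    localeConfigs: {
      "zh-Hans": { label: "简体中文" },
      ja: { label: "日本語" },
      ko: { label: "한국어" },
    },
  },
};

export default config;
```

The "automated sync" part of the workflow is typically a crowdin.yml mapping source paths to translation paths, plus a CI job that runs the Crowdin CLI (`crowdin upload sources` to push English changes, `crowdin download` to pull reviewed translations back into a pull request), though the exact setup will depend on what the maintainers prefer.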
There is a particular technical challenge worth calling out. OpenClaw's documentation uses MDX, which is Markdown extended with React components. Translating MDX means preserving JSX syntax, internal links, code blocks, and frontmatter metadata exactly as-is, while translating only the prose. This is the kind of task where AI translation can perform reasonably well, because "translate the natural language, leave the code untouched" is a rule you can state explicitly. But it also means translations must be tested by building the site for each locale and verifying that nothing broke. A broken JSX component in a translated page is worse than an untranslated page.
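To make the "translate the natural language, leave the code untouched" rule concrete, here is a minimal sketch of one way to shield non-prose segments before a file goes to the model and restore them afterwards. It is deliberately crude: it protects only frontmatter and fenced code blocks, the placeholder scheme is my own invention, and a production pipeline would want a real MDX parser (the remark/MDX toolchain) to handle JSX nodes and inline code as well.

````ts
// protect-mdx.ts -- swap frontmatter and fenced code blocks for opaque
// placeholders before translation, then swap them back afterwards, so the
// model only ever sees (and rewrites) prose. Crude by design: JSX components,
// inline code, and link targets would need a real MDX parser to handle safely.

interface Masked {
  text: string;             // prose with placeholders where protected spans were
  protectedSpans: string[]; // the original spans, indexed by placeholder number
}

// Matches YAML frontmatter at a line start, or any fenced code block.
const PROTECTED = /^---\n[\s\S]*?\n---\n|```[\s\S]*?```/gm;

export function mask(mdx: string): Masked {
  const protectedSpans: string[] = [];
  const text = mdx.replace(PROTECTED, (span) => {
    protectedSpans.push(span);
    return `@@PROTECTED_${protectedSpans.length - 1}@@`;
  });
  return { text, protectedSpans };
}

export function unmask(translated: string, protectedSpans: string[]): string {
  // The placeholders themselves must survive translation untouched -- another
  // thing the per-locale build check needs to verify.
  return translated.replace(/@@PROTECTED_(\d+)@@/g, (_m, i) => protectedSpans[Number(i)]);
}
````

Even with non-prose spans protected, every locale still needs that build check; Docusaurus can build one locale at a time (`docusaurus build --locale ja`), which makes a cheap CI gate for catching a broken component or link before a translated page ships.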
The phased approach matters here. Getting the infrastructure in place and bootstrapping the first three languages is the immediate goal. Once the workflow exists and the community can see how to participate, expanding to additional languages becomes much simpler. Each new language is essentially the same playbook: generate AI drafts, upload them to the platform, recruit reviewers. The nine languages already requested could plausibly reach "initial coverage" quickly after the first three are launched.
There is also a strategic reason to start with documentation localization rather than jumping directly to translating the application's user interface. You cannot recruit native-speaking contributors to help localize the UI if those contributors cannot read the project documentation. The docs are the on-ramp for every other kind of contribution. Localize the docs first, and you create the conditions for a broader, more diverse contributor community to help with everything else.
The practical question, then, is whether the OpenClaw community is ready for this. The GitHub issue suggests there is demand. The application already has UI translations in dozens of languages (managed through i18next in the main repository), which suggests the project has accepted the principle that localization matters. What's missing is the plumbing on the docs side. I'm planning to set that up and get the first translations into the pipeline. It isn't glamorous work. It's the kind of infrastructure investment that pays off slowly, and then all at once, the day a developer in Osaka reads the getting-started guide in Japanese and decides to become a contributor.
OpenClaw has the code and, increasingly, the community. Making the documentation accessible to that community, in their own languages, is one of those investments that costs relatively little upfront and compounds over time. The tools are available. The demand is documented. All that remains is to pick the work up and start.

