AI discoverability vs AI training permissions on WordPress
A site may want to remain visible in search and answer systems while taking a more cautious posture toward AI training. Those are related questions, but they are not the same question. This page turns that distinction into a practical WordPress model.
Three layers to separate
Keep at least three layers apart: crawl discoverability (can a crawler fetch and index your pages), answer-surface or retrieval posture (can an answer engine quote or ground responses on them), and training-related posture (can the content be used to train models). When all three are collapsed into one rule, teams usually either over-open the site or over-block it; the sketch below shows how the layers can map to different crawler tokens.
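As a rough illustration only: the token-to-purpose mapping shifts over time and should be confirmed against each vendor's current documentation, and the sitemap URL is a placeholder. A robots.txt that keeps the three layers separate by crawler family might look like this:

```
# Layer 1: crawl discoverability — keep classic search crawlers open
User-agent: Googlebot
Allow: /

User-agent: Bingbot
Allow: /

# Layer 2: answer-surface / retrieval posture — decide per crawler family
# (example: leave a search-oriented AI crawler open so pages can still be cited)
User-agent: OAI-SearchBot
Allow: /

# Layer 3: training-related posture — signal a more cautious stance
# (these tokens are commonly associated with training or bulk-collection pipelines)
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
```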
What Better Robots.txt can help publish
The plugin can help publish a clearer public policy surface: the robots.txt itself, crawler-family separation within it, AI usage signals, and an optional llms.txt file. That improves coherence even though hard enforcement still belongs elsewhere.
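The llms.txt convention is still an emerging proposal rather than a settled standard. A minimal sketch, assuming the commonly cited structure (an H1 title, a short blockquote summary, then sections of links) and using placeholder URLs:

```
# Example Site

> A short, plain-language summary of what this site publishes and how
> AI systems are asked to use it.

## Policies

- [Editorial policy](https://example.com/editorial-policy): how content is produced
- [AI usage policy](https://example.com/ai-policy): requested posture for retrieval and training

## Optional

- [Archive](https://example.com/archive): older material, lower priority for answer engines
```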
What still belongs outside the plugin
Runtime verification, signed-agent allowlisting, WAF rules, and environment-level anti-abuse logic still belong outside a WordPress publishing plugin. A better public policy is valuable, but it is not the same thing as hard enforcement.
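For contrast, a minimal sketch of what server-level enforcement looks like, shown here as a hypothetical nginx rule. User-agent strings are trivially spoofed, so a real deployment would pair this with IP-range or reverse-DNS verification, and the crawler names are examples only:

```
# Hypothetical nginx sketch: flag selected AI crawler user agents.
# The "map" block belongs in the http context of nginx.conf.
map $http_user_agent $blocked_ai_crawler {
    default      0;
    ~*GPTBot     1;
    ~*CCBot      1;
}

server {
    # ... existing server configuration ...

    # Refuse flagged requests outright instead of merely asking them to stay away.
    if ($blocked_ai_crawler) {
        return 403;
    }
}
```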
What Better Robots.txt is not
Better Robots.txt is not a WAF, not a signed-agent verification system, not a legal enforcement layer, and not a guarantee that every crawler will comply. It publishes a clearer WordPress policy surface and a safer workflow for the parts you can actually govern.