Executive Summary
AI models generate code at remarkable speed. The bottleneck isn't generation; it's trust.
Consider a typical scenario: you ask an AI assistant to add rate limiting to your API. It produces 200 lines of plausible-looking code in seconds. The syntax is correct. The types check. But buried in that code are three critical problems: it uses a naive implementation that doesn't distribute correctly across servers, it introduces a blocking call in your async handler, and it logs sensitive user data in violation of your privacy policies. You discover these issues days later, during code review or, worse, in production.
The fundamental problem isn't that AI writes bad code. It's that AI doesn't know what "good" means in your context. It optimizes for local plausibility without understanding your system's invariants: the performance budgets you've learned from production incidents, the architecture boundaries that maintain system coherence, the protocol contracts that ensure compatibility across services, the security patterns that defend against real attacks.
You can't prompt your way to "this handler must complete in 200ms" or "these repositories must maintain protocol compatibility." Those constraints live outside any context window, encoded in how your system actually behaves, in why past incidents happened, and in what your team has learned the hard way.
Ananke transforms this implicit knowledge into explicit, enforceable constraints during code generation. Named for the Greek goddess of necessity and inevitability, Ananke treats code generation not as text completion but as constrained search through the space of programs that are syntactically valid, type correct, semantically coherent, architecturally sound, and aligned with explicit user intent.
The system comprises three subsystems working in concert:
Clew extracts constraints from your existing codebase, production telemetry, and organizational knowledge. It discovers the patterns you actually follow, the performance boundaries you actually respect, the security invariants you actually enforce. This offline extraction transforms implicit team knowledge into machine-actionable specifications.
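The document does not specify Clew's output format, but a machine-actionable constraint needs at least an identifier, a human-readable description, a severity, and a record of where it came from. A minimal sketch, with the schema and example values assumed for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    BLOCK = "block"  # generation must not violate this constraint
    WARN = "warn"    # violations are surfaced but do not reject the edit

@dataclass(frozen=True)
class Constraint:
    id: str           # stable identifier, e.g. "perf.checkout.p99"
    description: str  # human-readable statement of the invariant
    severity: Severity
    source: str       # provenance of the extraction: code pattern,
                      # telemetry, incident report, policy document

# Illustrative examples of what extraction might yield:
latency_budget = Constraint(
    id="perf.checkout.p99",
    description="checkout handler must complete in 200ms at p99",
    severity=Severity.BLOCK,
    source="production telemetry",
)

no_pii_logging = Constraint(
    id="sec.no-pii-logging",
    description="request payloads must not be logged verbatim",
    severity=Severity.BLOCK,
    source="privacy policy",
)
```

Recording the source alongside each constraint matters: it is what lets a developer later ask *why* a generation was rejected.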
Braid compiles constraints just-in-time for each specific generation task. It receives both comprehensive baseline constraints from Clew and immediate task context from the calling system: your IDE knows what file you're editing, your CI system knows what changed in the PR, your agent knows the conversation history. Braid synthesizes all of this into a structured program of constraints tailored to this specific edit, for this specific user, at this specific moment.
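One way to picture the just-in-time step is as a function from the baseline constraint set plus task context to an ordered program of constraints. The shapes below (plain dicts, an `applies_to` field, a `file` key in the context) are assumptions for illustration, not Braid's actual interface:

```python
def compile_constraints(baseline: list[dict], context: dict) -> list[dict]:
    """Select and order the baseline constraints relevant to one edit.

    `baseline` is the comprehensive constraint set from Clew;
    `context` carries the immediate task signals (here just the file
    being edited; a real system would also see the diff, the user,
    and the conversation history).
    """
    relevant = [
        c for c in baseline
        if context["file"] in c.get("applies_to", [])
    ]
    # Blocking constraints first, so generation can be pruned early.
    return sorted(relevant, key=lambda c: c["severity"] != "block")

baseline = [
    {"id": "sec.no-pii-logging", "severity": "warn",
     "applies_to": ["api/handler.py"]},
    {"id": "perf.checkout.p99", "severity": "block",
     "applies_to": ["api/handler.py"]},
    {"id": "arch.repo-layering", "severity": "block",
     "applies_to": ["db/repo.py"]},
]
program = compile_constraints(baseline, {"file": "api/handler.py"})
```

The key property is that the compiled program is specific to this edit: the layering constraint on `db/repo.py` never reaches the generator when the edit touches only the handler.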
Maze orchestrates AI models to generate code under these compiled constraints. It's not itself a model; it's an orchestration layer that directs autoregressive models for interactive edits and diffusion models for structural transformations, applying constraints during generation rather than hoping for compliance afterward. Every generation produces rich provenance showing exactly which constraints were applied, which were violated, and why specific decisions were made.
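A provenance record of the kind described might carry the applied constraints, any violations with their reasons, and the decisions taken during generation. The field names and values below are illustrative, not a documented Maze schema:

```python
# Hypothetical provenance attached to one generation.
provenance = {
    "generation_id": "gen-01",
    "strategy": "autoregressive",  # vs. "diffusion" for structural edits
    "constraints_applied": [
        "perf.checkout.p99",
        "sec.no-pii-logging",
    ],
    "constraints_violated": [
        {
            "id": "arch.no-db-in-handler",
            "reason": "candidate imported the db layer directly; "
                      "the edit was rejected and regenerated",
        },
    ],
    "decisions": [
        "chose a token-bucket limiter to satisfy the p99 budget",
    ],
}
```

Because constraints are applied during generation, a record like this can answer "why does the merged code look this way" without reconstructing the model's behavior after the fact.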
Together, these subsystems reframe the code generation problem: from "generate plausible code and hope it's correct" to "search the space of valid programs shaped by explicit constraints." The result is code you can merge with confidence, generated at AI speed but bounded by your system's actual requirements.