Trust
Owned data. Explicit AI boundary. Auditable changes.
Lorvex should feel trustworthy because the system boundary is clear, not because the product makes vague brand promises.
The trust model is simple: your planning data is yours, the AI boundary is explicit, the code is open source, and assistant-initiated operations stay inspectable.
Owned data
Core planning state is stored on device by default, with export or sync only through paths that are explicitly enabled.
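To make that default concrete, here is a minimal sketch of what such a policy could look like. Every name in it (`StoragePolicy`, `ExportPath`, `canExport`, the path ids) is hypothetical and illustrative, not Lorvex's actual code.

```ts
// Hypothetical sketch of an on-device-first storage policy.
// None of these names come from the Lorvex codebase; they only
// illustrate the boundary described above.

interface ExportPath {
  id: string;          // e.g. "icalendar-export"
  enabled: boolean;    // false until the user turns it on
  description: string;
}

interface StoragePolicy {
  location: "on-device";     // core planning state defaults to the device
  exportPaths: ExportPath[]; // data leaves only through enabled entries
}

const defaultPolicy: StoragePolicy = {
  location: "on-device",
  exportPaths: [
    { id: "icalendar-export", enabled: false, description: "Manual .ics export" },
    { id: "device-sync", enabled: false, description: "Optional cross-device sync" },
  ],
};

// A path is usable only when it has been explicitly enabled.
function canExport(policy: StoragePolicy, pathId: string): boolean {
  return policy.exportPaths.some((p) => p.id === pathId && p.enabled);
}
```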
Open source
The core project is open source and public. Behavior claims should map to code, architecture, and visible product boundaries.
Explicit AI boundary
Lorvex does not hide an always-on model inside the app. Assistants operate through an explicit client boundary and explicit tools.
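As one way to picture that boundary, the sketch below shows a minimal explicit tool registry: the assistant can only call operations that have been registered by name. The names here (`ToolSpec`, `registerTool`, `invokeTool`, `create_task`) are hypothetical and come from neither the Lorvex codebase nor the MCP SDK.

```ts
// Hypothetical sketch of an explicit tool boundary. These names are
// illustrative only; they are not the Lorvex or MCP SDK surface.

type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

interface ToolSpec {
  name: string;        // tool id the assistant must call by name
  description: string; // what the assistant is told the tool does
  handler: ToolHandler;
}

// The only operations an assistant can perform are the ones
// registered here; there is no hidden in-app model runtime.
const tools = new Map<string, ToolSpec>();

function registerTool(spec: ToolSpec): void {
  tools.set(spec.name, spec);
}

async function invokeTool(name: string, args: Record<string, unknown>): Promise<unknown> {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`); // nothing implicit
  return tool.handler(args);
}

registerTool({
  name: "create_task",
  description: "Create a planning task from explicit arguments",
  handler: async (args) => ({ created: true, title: args.title }),
});
```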
What this means in practice
These claims are intentionally narrow. They describe the boundary the product is designed around today.
- Storage: Core planning data is stored on device by default.
- Boundary: No built-in model runtime. Assistants connect through external clients and operate through explicit tools.
- Open source: Core code is open source and auditable. Track changes in the public repository.
- Writes: Assistant-initiated changes are recorded as a changelog you can review; a sketch of what one entry could look like follows this list.
- Telemetry: No default behavioral analytics. If diagnostics are added, they will be opt-in and documented here.
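As a way to picture the Writes guarantee above, here is a minimal sketch of a reviewable changelog record. The `ChangelogEntry` shape and its field names are hypothetical, not the actual Lorvex schema.

```ts
// Hypothetical shape for the assistant-write changelog. Field names
// are illustrative, not the actual Lorvex schema.

interface ChangelogEntry {
  timestamp: string;   // ISO-8601 time of the write
  tool: string;        // which explicit tool performed it
  operation: "create" | "update" | "delete";
  target: string;      // id of the planning object touched
  before?: unknown;    // prior state, for review and undo
  after?: unknown;     // resulting state
}

const changelog: ChangelogEntry[] = [];

// Every assistant-initiated write appends a reviewable record.
function recordChange(entry: ChangelogEntry): void {
  changelog.push(entry);
}

recordChange({
  timestamp: new Date().toISOString(),
  tool: "create_task",
  operation: "create",
  target: "task-42",
  after: { title: "Draft weekly plan" },
});
```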
FAQ
Common boundary questions
- Does Lorvex include its own model runtime?
  No. AI behavior comes from external assistant clients connected through MCP.
- Is the AI always-on?
  No. In the current alpha, assistant operation happens only when you explicitly invoke it.
- Can data leave the device?
  Only through explicitly enabled export or sync paths.
- Can users inspect behavior claims?
  Yes. Claims are tied to visible architecture and operation boundaries.