# Tanjiren for AI Agents

Tanjiren is best understood as a task-first control plane for AI agents, not just a dashboard for humans.

## Why this matters

An AI agent needs:

- stable context
- composable actions
- small, precise tool contracts
- explicit limits
- good error semantics
- auditable operations

Tanjiren is most valuable when it provides these directly instead of forcing an agent to infer them from a UI.
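To make the list above concrete, here is a minimal sketch of what a "small, precise tool contract" with explicit limits could look like. All names here (`ToolContract`, `max_calls_per_minute`, the `list_tasks` tool) are illustrative assumptions, not Tanjiren's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a tool contract that makes scopes, rate limits,
# and read/write boundaries explicit instead of implicit in a UI.

@dataclass(frozen=True)
class ToolContract:
    name: str
    description: str
    input_schema: dict           # JSON Schema for the tool's arguments
    required_scopes: tuple       # explicit permission boundary
    max_calls_per_minute: int    # explicit rate limit
    mutates_state: bool          # read-only vs. bounded write

# An agent can inspect a contract like this before calling the tool.
list_tasks = ToolContract(
    name="list_tasks",
    description="List tasks visible to the current session.",
    input_schema={"type": "object",
                  "properties": {"limit": {"type": "integer"}}},
    required_scopes=("tasks:read",),
    max_calls_per_minute=60,
    mutates_state=False,
)
```

A contract the agent can read up front is what turns "infer the rules from the UI" into "check the rules, then act".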

## AI-first surface areas

### MCP — three complementary surfaces

The hosted MCP server is the main agent interface, organized into three surfaces:

- **Resources** — stable read-only context (what IS). Cacheable for the session lifetime.
- **Prompts** — workflow templates (how to THINK). Encode safe operational playbooks.
- **Tools** — bounded actions (what to DO). Scoped, audited, rate-limited.
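The split between the three surfaces can be sketched as a classification over identifiers. The URIs and names below are hypothetical examples, not the server's real registry.

```python
# Illustrative only: the three MCP surfaces modeled as plain dicts.
# A real MCP server registers these through an SDK; every identifier
# here is an assumption for the sketch.

resources = {  # what IS: stable, read-only, session-cacheable
    "tanjiren://org/current": {"readonly": True, "cache": "session"},
}
prompts = {    # how to THINK: workflow templates
    "diagnose_workstation": {"args": ["workstation_id"]},
}
tools = {      # what to DO: scoped, audited, rate-limited actions
    "whoami": {"scopes": (), "rate_limited": True, "audited": True},
}

def surface_of(name: str) -> str:
    """Classify an identifier by which surface owns it."""
    if name in resources:
        return "resource"
    if name in prompts:
        return "prompt"
    if name in tools:
        return "tool"
    raise KeyError(name)
```

The point of the split is that an agent can cache resources aggressively, treat prompts as plans rather than actions, and reserve its caution budget for tools.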

### Human supervision

Tanjiren should help a human understand:

- what an agent can do (scopes, plan limits, security policy)
- what it just did (audit trail for tools, resources, and prompts)
- what still needs confirmation (bounded write actions and approval gates)

Every tool call, resource read, and prompt render is audited.
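One way to audit all three surfaces uniformly is a single record shape keyed by surface kind. The field names below are an assumption for illustration, not Tanjiren's actual audit schema.

```python
import datetime
import json

# Hypothetical audit record covering tool calls, resource reads,
# and prompt renders with one shape.

def audit_entry(surface: str, name: str, actor: str, outcome: str) -> str:
    assert surface in ("tool", "resource", "prompt")
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "surface": surface,   # which of the three surfaces was touched
        "name": name,         # tool name, resource URI, or prompt name
        "actor": actor,       # agent identity from the session
        "outcome": outcome,   # e.g. "ok", "denied", "error"
    })

entry = audit_entry("tool", "whoami", "agent:demo", "ok")
```

A uniform record is what lets a human answer "what did the agent just do" without joining three differently shaped logs.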

### Public docs

The public site exposes:

- `llms.txt` and `llms-full.txt` — AI-readable discovery and comprehensive reference
- `server.json` — remote MCP descriptor for registry publishing and client discovery
- markdown docs for MCP overview, auth, tools, resources, prompts, and the server itself
- stable URLs for all surfaces
- `/docs` product page explaining how to connect AI clients
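For orientation, here is a sketch of the kind of metadata a remote MCP descriptor like `server.json` typically carries. The field names and URL below are illustrative assumptions, not the contents of Tanjiren's real file.

```python
import json

# Sketch of a minimal remote MCP descriptor. Everything here is a
# placeholder; consult the published server.json for the real values.

descriptor = {
    "name": "example/tanjiren",
    "description": "Task-first control plane for AI agents.",
    "remotes": [
        {"type": "streamable-http", "url": "https://mcp.example.com/mcp"},
    ],
}

serialized = json.dumps(descriptor, indent=2)
```

A descriptor like this is what lets MCP clients and registries discover the hosted server without scraping the docs site.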

### Product primitives

The product favors:

- workstation and worker discovery
- bounded task operations
- inspect-then-act workflows
- stable resource identifiers
- machine-readable errors with recovery hints
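The last primitive, machine-readable errors with recovery hints, might look like the sketch below. The error code, fields, and the `limits/current` target are assumptions for illustration.

```python
# Illustrative error shape: a stable code the agent can match on,
# plus a structured hint about what to do next.

def limit_error(limit: int, used: int) -> dict:
    return {
        "code": "plan_limit_exceeded",      # stable, machine-matchable
        "message": f"Task limit reached ({used}/{limit}).",
        "recovery": {
            "action": "read_resource",      # what the agent should do next
            "target": "limits/current",     # where to look before retrying
        },
    }

err = limit_error(limit=10, used=10)
```

An agent that receives this can branch on `code` and follow `recovery` mechanically, instead of parsing a human-oriented error string.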

## Agent workflow design

Good agent behavior in Tanjiren follows an ORIENT → UNDERSTAND → PLAN → ACT → VERIFY loop:

1. **Orient** — Call `whoami`, read session and org resources.
2. **Understand** — Read `limits/current` and `security-policy`. Use `start_here` if the task is vague.
3. **Plan** — Use a workflow prompt (`diagnose_workstation`, `prepare_safe_task`, `summarize_org_activity`, `investigate_task_execution`) to structure the approach.
4. **Act** — Execute only the tools justified by the workflow.
5. **Verify** — Read task, worker, workstation, or audit resources to confirm outcomes.
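The five steps above can be sketched as a loop skeleton. The transport functions are stubs standing in for a real MCP client, and the resource URIs and plan shape are hypothetical; only the tool, resource, and prompt names documented above are taken from the source.

```python
# Minimal agent loop: orient, understand, plan, act, verify.
# call_tool / read_resource / render_prompt are injected stand-ins
# for MCP tool calls, resource reads, and prompt renders.

def orient_act_verify(call_tool, read_resource, render_prompt):
    identity = call_tool("whoami")               # 1. Orient
    limits = read_resource("limits/current")     # 2. Understand
    policy = read_resource("security-policy")
    plan = render_prompt("prepare_safe_task")    # 3. Plan via a workflow prompt
    actions = [call_tool(step) for step in plan] # 4. Act: only planned tools
    outcome = read_resource("audit/recent")      # 5. Verify against resources
    return {"identity": identity, "limits": limits, "policy": policy,
            "actions": actions, "outcome": outcome}

# Stub transports that record every call, standing in for a real client:
log = []
result = orient_act_verify(
    call_tool=lambda name: log.append(("tool", name)) or name,
    read_resource=lambda uri: log.append(("resource", uri)) or {},
    render_prompt=lambda name: log.append(("prompt", name)) or ["create_task"],
)
```

Running the loop against the stubs shows the ordering property that matters: the agent reads its identity and limits before it renders a plan, and executes only tools the plan produced.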

## Non-goals

AI-first does not mean:

- exposing every internal endpoint as a tool
- giving agents unrestricted shell execution
- replacing the human UI

It means making the control plane understandable and safe for agent use.
