A policy-driven kernel for local AI automation. Every action is evaluated, auditable, and explicitly scoped — before execution.
Deny-over-allow logic for prompts and tools. Define host, path, and method constraints at the kernel level — evaluated before any action fires.
Interactive prompts or deterministic pass-through for CI. Human-in-the-loop or fully automated — you decide per-context, not per-run.
Append-only JSONL audit events for every action. Filter, export, and rotate logs. Every policy decision leaves a permanent, tamper-evident trail.
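A JSONL audit trail means one self-contained JSON object per line, appended and never rewritten. A hypothetical event might be serialized like this sketch (the field names are illustrative assumptions, not Sentinex's documented schema):

```typescript
// Illustrative audit event; field names are assumptions, not Sentinex's schema.
type AuditEvent = {
  ts: string;                  // ISO-8601 timestamp
  tool: string;                // e.g. "http.fetch"
  decision: "allow" | "deny";  // policy outcome
  reason: string;              // matched rule or default
};

// JSONL: serialize one event per line, so the file only ever grows by appending.
function toJsonlLine(event: AuditEvent): string {
  return JSON.stringify(event) + "\n";
}
```

Because each line is independent JSON, the log can be filtered and exported with ordinary line-oriented tools.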
Use the local mock provider for offline development or wire to any OpenAI-compatible API. Timeout and retry hardening built in.
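The provider boundary can be modeled as a single interface, with the mock as a deterministic offline implementation. This is a sketch under assumed names, not the project's real provider API:

```typescript
// Hypothetical provider interface; type and method names are illustrative.
type PlannedAction = { tool: string; args: Record<string, string> };
type Plan = { actions: PlannedAction[] };

interface Provider {
  plan(prompt: string): Plan;
}

// Mock provider: offline, deterministic, no network or API key required.
const mockProvider: Provider = {
  plan(_prompt: string): Plan {
    return { actions: [{ tool: "fs.read", args: { path: "/tmp/sentinex/demo.log" } }] };
  },
};
```

Swapping the mock for an OpenAI-compatible provider changes only where the plan comes from; the downstream policy gates evaluate the plan the same way either way.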
No action executes without passing through all four gates. No exceptions. No shortcuts.
Parses the prompt, generates an action plan, and holds execution until all downstream gates respond.
Validates against default-deny rules. Deny rules take full precedence. Scoped capabilities only. No ambient authority.
http.fetch and fs.read execute behind explicit constraints. Typed error boundaries. No side-effects beyond declared scope.
Logs the full run lifecycle — policy decisions, action requests, results. Append-only. Every event timestamped and structured.
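The four gates above can be sketched as a short-circuiting pipeline: each gate must pass before the next one runs, and any deny halts execution. The types and names here are illustrative assumptions, not Sentinex's actual API:

```typescript
// Hypothetical sketch of the four-gate pipeline; names are illustrative.
type Action = { tool: string; args: Record<string, unknown> };
type GateResult = { allowed: boolean; reason: string };
type Gate = (action: Action) => GateResult;

// Run the gates in order, short-circuiting on the first deny.
function runGates(action: Action, gates: Gate[]): GateResult {
  for (const gate of gates) {
    const result = gate(action);
    if (!result.allowed) return result; // first deny halts execution
  }
  return { allowed: true, reason: "all gates passed" };
}
```

Short-circuiting matters here: a denied action never reaches the tool layer, so there is no side-effect to roll back.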
Define granular rules in YAML and watch them being evaluated in real time against every action the agent proposes.
# Sentinex Policy Configuration
version: "1"
default:
  action: deny  # deny-first
rules:
  - tool: fs.read
    paths:
      - /var/log/**
      - /tmp/sentinex/**
    action: allow
  - tool: fs.write
    action: deny
  - tool: http.fetch
    hosts:
      - api.internal
    methods:
      - GET
    action: allow
audit:
  enabled: true
  path: ~/.sentinex/audit.jsonl
  rotate:
    size: "10mb"
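The deny-over-allow semantics of a policy like the one above can be sketched in a few lines. This is an illustrative model, not Sentinex's actual engine, and glob patterns are simplified to prefix checks for brevity:

```typescript
// Illustrative deny-over-allow evaluation; not the real policy engine.
// Globs are approximated by prefix matching (a real engine would use
// proper glob semantics).
type Rule = { tool: string; paths?: string[]; action: "allow" | "deny" };

function evaluate(tool: string, path: string, rules: Rule[]): "allow" | "deny" {
  let decision: "allow" | "deny" = "deny"; // default-deny
  for (const rule of rules) {
    if (rule.tool !== tool) continue;
    const inScope = !rule.paths ||
      rule.paths.some(p => path.startsWith(p.replace(/\*\*$/, "")));
    if (!inScope) continue;
    if (rule.action === "deny") return "deny"; // deny always wins
    decision = "allow";
  }
  return decision;
}
```

Note the two asymmetries: a matching deny returns immediately, while a matching allow only flips the provisional decision, and anything no rule matches falls back to the default deny.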
Sentinex is not a chat UI. It is a constrained execution kernel for teams that care about local automation boundaries and auditability.
Allow read-only filesystem and specific internal APIs while denying shell execution and arbitrary outbound requests.
Analyze logs and configs with an auditable trail for every tool action and policy decision.
Use non-interactive approvals with deterministic policies and `doctor`/`policy lint` checks before runtime.
Develop plans with the mock provider, then switch to OpenAI-compatible endpoints without changing runtime policy semantics.
Sentinex reduces accidental overreach and unsafe tool execution. It does not replace OS hardening, sandboxing, or endpoint security.
Clone, build, initialize, and fire your first prompt. The policy linter catches misconfigurations before they run.
# 1. Clone the repository
git clone https://github.com/MiBe1991/sentinex.git
cd sentinex
# 2. Install & build
npm install
npm run build
# 3. Initialize configuration
npx sentinex init
# 4. Lint your policy (catches errors before runtime)
npx sentinex policy lint --fail-on error
# 5. Run your first prompt
npx sentinex run "hello world"
# 6. Check system health
npx sentinex doctor --json
Or run the full quickstart as a single chained command:

git clone https://github.com/MiBe1991/sentinex.git && cd sentinex && npm install && npm run build && npx sentinex init && npx sentinex policy lint --fail-on error && npx sentinex run "hello world" && npx sentinex doctor --json
Is Sentinex a chatbot?
No. Sentinex is a local, policy-gated execution runtime. It can use an LLM provider, but the product focus is controlled action execution and auditability.
Does it work offline?
Yes. Use the mock provider (the default) for local development, dry runs, and policy/runtime testing. OpenAI-compatible providers are optional.
What checks run before an action executes?
Prompt/tool policy checks (deny > allow), action-plan validation, an optional approval flow, and tool-specific scope checks (hosts, roots, byte limits).
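A tool-specific scope check for `http.fetch` might look like the following sketch; the type, field names, and limits are assumptions for illustration, not the shipped implementation:

```typescript
// Illustrative http.fetch scope check; names and limits are assumptions.
type FetchScope = { hosts: string[]; methods: string[]; maxBytes: number };

// Allow only declared hosts, declared methods, and payloads under the byte cap.
function inFetchScope(url: URL, method: string, bytes: number, scope: FetchScope): boolean {
  return scope.hosts.includes(url.hostname)
    && scope.methods.includes(method.toUpperCase())
    && bytes <= scope.maxBytes;
}
```

Scope checks like this run after the policy decision, so even an allowed tool call is still bounded by its declared hosts, methods, and size limits.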
How can I contribute?
Run `npm test`, read AGENTS.md and CONTRIBUTING.md, and check ROADMAP.md for current milestones and open issues.
Sentinex ships with CI, policy linting, maintainer runbooks, security reporting, and a public roadmap. Treat it like infrastructure, not a demo script.