Security-First Agent Runtime

Local automation.
Without blind trust.

A policy-driven kernel for local AI automation. Every action is evaluated, auditable, and explicitly scoped — before execution.

  • Default Deny: no implicit permissions
  • Audit Trail: JSONL with rotation
  • Tool Boundaries: host/path-level controls
  • < 50ms policy evaluation
  • 100% local execution
  • 0 implicit permissions
  • audit log retention

Built for controlled execution.

Policy Engine

Deny-over-allow logic for prompts and tools. Define host, path, and method constraints at the kernel level — evaluated before any action fires.

default-deny
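Deny-over-allow with a default-deny fallback can be sketched in a few lines. This is an illustrative model, not the actual Sentinex API: the `Rule`, `ActionRequest`, and `evaluate` names are hypothetical, and path matching is simplified to prefix checks.

```typescript
// Hypothetical sketch of deny-over-allow evaluation; not the actual Sentinex API.
type Decision = "allow" | "deny";

interface Rule {
  tool: string;
  action: Decision;
  hosts?: string[]; // constraint for http-style tools
  paths?: string[]; // constraint for fs-style tools (prefix match here, for brevity)
}

interface ActionRequest {
  tool: string;
  host?: string;
  path?: string;
}

// A rule matches when the tool name matches and every declared constraint matches.
function matches(rule: Rule, req: ActionRequest): boolean {
  if (rule.tool !== req.tool) return false;
  if (rule.hosts && (!req.host || !rule.hosts.includes(req.host))) return false;
  if (rule.paths && (!req.path || !rule.paths.some(p => req.path!.startsWith(p)))) return false;
  return true;
}

// Deny rules take full precedence; anything unmatched falls back to default deny.
function evaluate(rules: Rule[], req: ActionRequest): Decision {
  const hits = rules.filter(r => matches(r, req));
  if (hits.some(r => r.action === "deny")) return "deny";
  if (hits.some(r => r.action === "allow")) return "allow";
  return "deny"; // default-deny: no implicit permissions
}
```

The key property: a single matching deny rule overrides any number of matching allow rules, and an empty rule set denies everything.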

Approval Flow

Interactive prompts or deterministic pass-through for CI. Human-in-the-loop or fully automated — you decide per-context, not per-run.

interactive · ci-safe
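A per-context approval gate, deterministic in CI and human-driven otherwise, might look like this. The shape below is a hypothetical sketch (`ApprovalContext`, `approve` are not Sentinex's real interfaces):

```typescript
// Hypothetical sketch of a per-context approval gate; not the actual Sentinex API.
type Approval = "approved" | "rejected";

interface ApprovalContext {
  interactive: boolean;                    // human-in-the-loop?
  autoDecision: Approval;                  // deterministic answer for CI runs
  prompt?: (question: string) => Approval; // injected operator prompt when interactive
}

// In CI the gate is deterministic pass-through; interactively it defers to the operator.
function approve(ctx: ApprovalContext, action: string): Approval {
  if (!ctx.interactive) return ctx.autoDecision; // ci-safe: no terminal required
  if (!ctx.prompt) throw new Error("interactive context requires a prompt handler");
  return ctx.prompt(`Allow action "${action}"?`);
}
```

Injecting the prompt as a function keeps the gate testable and keeps CI runs reproducible: the decision is fixed by configuration, not by whoever happens to be at the keyboard.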

Audit Layer

Append-only JSONL audit events for every action. Filter, export, and rotate logs. Every policy decision leaves a permanent, tamper-evident trail.

jsonl · append-only
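Append-only JSONL is simple by design: one JSON object per line, always appended, never rewritten. A minimal sketch (the event schema below is hypothetical, not Sentinex's actual one):

```typescript
import { appendFileSync } from "node:fs";

// Hypothetical audit-event shape; the real Sentinex schema may differ.
interface AuditEvent {
  ts: string; // ISO-8601 timestamp
  kind: "policy.decision" | "action.request" | "action.result";
  detail: Record<string, unknown>;
}

// One JSON object per line; a trailing newline terminates each record.
function encodeAuditEvent(event: AuditEvent): string {
  return JSON.stringify(event) + "\n";
}

// Append-only: records are added with the append flag, never rewritten in place.
function appendAuditEvent(path: string, event: AuditEvent): void {
  appendFileSync(path, encodeAuditEvent(event), { flag: "a" });
}
```

Because each line is independent JSON, the log can be filtered with standard tools (`grep`, `jq`) and rotated by size without breaking any record.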

Provider Abstraction

Use the local mock provider for offline development or wire to any OpenAI-compatible API. Timeout and retry hardening built in.

mock · openai-compatible
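Timeout and retry hardening for provider calls typically looks like the sketch below: a race against a timer, plus exponential backoff between attempts. Function names and defaults are illustrative, not Sentinex's actual implementation.

```typescript
// Hypothetical timeout + retry wrapper for provider calls; not Sentinex's actual code.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: NodeJS.Timeout;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer!); // don't leave the timer holding the event loop open
  }
}

// Retries a provider call, with exponential backoff between attempts.
async function callWithRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  timeoutMs = 5000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await withTimeout(fn(), timeoutMs);
    } catch (err) {
      lastError = err;
      await new Promise(r => setTimeout(r, 2 ** i * 100)); // backoff: 100, 200, 400ms
    }
  }
  throw lastError;
}
```

The same wrapper works unchanged against the mock provider or a real OpenAI-compatible endpoint, which is what makes the swap between them low-risk.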

Every request. Every time.

No action executes without passing through all four gates. No exceptions. No shortcuts.

01

Runtime

Parses the prompt, generates an action plan, and holds execution until all downstream gates respond.

02

Policy

Validates against default-deny rules. Deny rules take full precedence. Scoped capabilities only. No ambient authority.

03

Tools

http.fetch and fs.read execute behind explicit constraints. Typed error boundaries. No side-effects beyond declared scope.
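A scope check for an `fs.read`-style tool rejects out-of-root paths (including `..` traversal) and over-limit reads before touching the filesystem. This is a hypothetical sketch; `FsReadScope` and `checkFsReadScope` are not Sentinex's real names.

```typescript
import { resolve, sep } from "node:path";

// Hypothetical scope check for an fs.read-style tool; not the actual Sentinex API.
interface FsReadScope {
  roots: string[];  // directories the tool may read from
  maxBytes: number; // hard cap on bytes returned per read
}

// Resolving first defeats ../ traversal; the check runs before any filesystem access.
function checkFsReadScope(scope: FsReadScope, path: string, requestedBytes: number): void {
  const abs = resolve(path);
  const inRoot = scope.roots.some(root => {
    const r = resolve(root);
    return abs === r || abs.startsWith(r + sep);
  });
  if (!inRoot) throw new Error(`path out of scope: ${abs}`);
  if (requestedBytes > scope.maxBytes) {
    throw new Error(`read of ${requestedBytes} bytes exceeds limit ${scope.maxBytes}`);
  }
}
```

Comparing resolved absolute paths (not raw strings) is the important detail: `/var/log/../../etc/passwd` resolves outside the root and is rejected even though its prefix looks allowed.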

04

Audit

Logs the full run lifecycle — policy decisions, action requests, results. Append-only. Every event timestamped and structured.

Rules you write. Trust you earn.

Define granular rules in YAML. Watch them evaluated in real time against every action the agent proposes.

sentinex.policy.yaml
# Sentinex Policy Configuration
version: "1"

default:
  action: deny      # deny-first

rules:
  - tool: fs.read
    paths:
      - /var/log/**
      - /tmp/sentinex/**
    action: allow

  - tool: fs.write
    action: deny

  - tool: http.fetch
    hosts:
      - api.internal
    methods:
      - GET
    action: allow

audit:
  enabled: true
  path:   ~/.sentinex/audit.jsonl
  rotate:
    size: "10mb"
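The `paths` entries above use `**` globs, which cross path segments (unlike a single `*`). A minimal matcher for that pattern style, as an illustrative sketch rather than Sentinex's actual matcher:

```typescript
// Minimal ** glob matcher sketch; illustrative only, not Sentinex's actual matcher.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .split("**")
    .map(part =>
      part
        .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
        .replace(/\*/g, "[^/]*")              // single * stays within one path segment
    )
    .join(".*");                               // ** crosses path segments
  return new RegExp(`^${escaped}$`);
}

function pathAllowed(patterns: string[], path: string): boolean {
  return patterns.some(p => globToRegExp(p).test(path));
}
```

Under these rules, `/var/log/**` admits `/var/log/nginx/access.log` but not `/etc/passwd`, matching the deny-first posture of the config above.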

Where Sentinex fits today.

Sentinex is not a chat UI. It is a constrained execution kernel for teams that care about local automation boundaries and auditability.

DevOps Runbooks

Allow read-only filesystem and specific internal APIs while denying shell execution and arbitrary outbound requests.

ops · read-only

Security Triage

Analyze logs and configs with an auditable trail for every tool action and policy decision.

forensics · audit

CI Guardrails

Use non-interactive approvals with deterministic policies and `doctor`/`policy lint` checks before runtime.

ci · deterministic

Local LLM Prototyping

Develop plans with the mock provider, then switch to OpenAI-compatible endpoints without changing runtime policy semantics.

mock → openai

What Sentinex is designed to prevent.

Sentinex reduces accidental overreach and unsafe tool execution. It does not replace OS hardening, sandboxing, or endpoint security.

Mitigates

  • Implicit tool access without explicit policy allow rules
  • Prompt-triggered access to non-whitelisted hosts or paths
  • Unlogged tool execution and missing operator visibility
  • Provider output shape drift via action-plan validation

Does Not Replace

  • OS/user permissions and filesystem ACLs
  • Network segmentation, proxies, and firewalls
  • Secrets management and endpoint monitoring
  • Review of business logic or prompt intent
Recommended posture: run Sentinex under a least-privileged OS account and treat policy files as code-reviewed artifacts.

Zero to running in 60 seconds.

Clone, build, initialize, and fire your first prompt. The policy linter catches misconfigurations before they run.

terminal
# 1. Clone the repository
git clone https://github.com/MiBe1991/sentinex.git
cd sentinex

# 2. Install & build
npm install
npm run build

# 3. Initialize configuration
npx sentinex init

# 4. Lint your policy (catches errors before runtime)
npx sentinex policy lint --fail-on error

# 5. Run your first prompt
npx sentinex run "hello world"

# 6. Check system health
npx sentinex doctor --json
One-liner (bash, or PowerShell 7+ where `&&` is supported): git clone https://github.com/MiBe1991/sentinex.git && cd sentinex && npm install && npm run build && npx sentinex init && npx sentinex policy lint --fail-on error && npx sentinex run "hello world" && npx sentinex doctor --json

Common maintainer and user questions.

Is Sentinex a chatbot framework?

No. Sentinex is a local, policy-gated execution runtime. It can use an LLM provider, but the product focus is controlled action execution and auditability.

Can I run it without an API key?

Yes. Use the mock provider (default) for local development, dry-runs, and policy/runtime testing. OpenAI-compatible providers are optional.

What is enforced before a tool runs?

Prompt/tool policy checks (deny > allow), action-plan validation, optional approval flow, and tool-specific scope checks (hosts, roots, byte limits).

Where should I start as a contributor?

Run `npm test`, read AGENTS.md and CONTRIBUTING.md, and check ROADMAP.md for current milestones and open issues.

Open development, explicit boundaries.

Sentinex ships with CI, policy linting, maintainer runbooks, security reporting, and a public roadmap. Treat it like infrastructure, not a demo script.