Quick Start

Get EnforceCore protecting your AI agent in under 5 minutes.


Install

pip install enforcecore

Requires Python 3.11+. Dependencies: Pydantic v2, PyYAML, structlog, cryptography.

Note: EnforceCore is stable at v1.0.1. The 30-symbol Tier 1 API is frozen.


1. Create a Policy

Create a policy.yaml file that defines what your agent is allowed to do:

name: "my-agent-policy"
version: "1.0"

rules:
  allowed_tools:
    - "search_web"
    - "read_file"
    - "calculator"

  denied_tools:
    - "execute_shell"
    - "send_email"

  pii_redaction:
    enabled: true
    categories:
      - email
      - phone
      - ssn
      - credit_card
      - ip_address
      - passport

  content_rules:
    enabled: true
    categories:
      - shell_injection
      - path_traversal
      - sql_injection
      - code_execution

  rate_limits:
    per_tool: 10         # max calls per tool per minute
    global: 50           # max total calls per minute

  network:
    allowed_domains:
      - "api.example.com"
      - "*.trusted.io"
    denied_domains:
      - "*.evil.com"

  resource_limits:
    max_call_duration_seconds: 30
    max_memory_mb: 256
    max_cost_usd: 1.00

on_violation: "block"
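Conceptually, each rule reduces to a yes/no decision at call time. As a minimal sketch (not EnforceCore's internals), here is how the tool allow/deny rules above might be evaluated, using a plain dict standing in for the parsed YAML:

```python
# Illustrative sketch only -- a plain dict standing in for the parsed policy.yaml.
rules = {
    "allowed_tools": ["search_web", "read_file", "calculator"],
    "denied_tools": ["execute_shell", "send_email"],
}

def is_tool_allowed(tool_name: str, rules: dict) -> bool:
    """Deny rules win; otherwise the tool must appear on the allow list."""
    if tool_name in rules.get("denied_tools", []):
        return False
    allowed = rules.get("allowed_tools")
    # An explicit allow list means anything not listed is blocked by default.
    return allowed is None or tool_name in allowed

print(is_tool_allowed("search_web", rules))     # True
print(is_tool_allowed("execute_shell", rules))  # False
print(is_tool_allowed("unknown_tool", rules))   # False
```

Deny-over-allow precedence is the safe default here: a tool that appears on both lists stays blocked.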

2. Protect Your Tools

Use the @enforce decorator on any function your agent calls:

from enforcecore import enforce

# Async functions
@enforce(policy="policy.yaml")
async def search_web(query: str) -> str:
    return await api.search(query)

# Sync functions
@enforce(policy="policy.yaml")
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

That's it. Every call now passes through the enforcement pipeline:

  1. Pre-call — Policy evaluation, tool name validation, input size check
  2. Redact inputs — PII detected and replaced before the tool sees the data
  3. Execute — Tool runs inside resource guard (time/memory/cost limits)
  4. Redact outputs — PII detected and replaced before response is returned
  5. Audit — Merkle-chained entry recorded in tamper-proof trail
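Steps 2 and 4 boil down to pattern-based detection and replacement. A simplified sketch of what redaction does (EnforceCore's real detectors are more thorough; these regexes cover only the email and SSN categories and are illustrative, not production-grade):

```python
import re

# Illustrative patterns only -- robust PII detection needs far stricter rules.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a category placeholder."""
    for category, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{category.upper()}_REDACTED]", text)
    return text

print(redact("Contact alice@example.com, SSN 123-45-6789."))
# Contact [EMAIL_REDACTED], SSN [SSN_REDACTED].
```

Because the same pass runs on inputs and outputs, a tool never sees raw PII and never leaks it back to the caller.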

3. Inline Policies

For quick prototyping, skip the YAML file:

@enforce(
    allowed_tools=["search_web", "calculator"],
    pii_redaction=True,
    max_cost_usd=5.0,
)
async def my_tool(args: dict) -> str:
    ...

4. Lifecycle Hooks

Build custom observability by hooking into the enforcement pipeline:

from enforcecore import on_violation, on_post_call

@on_violation
def alert_security(ctx):
    """Called whenever a tool call is blocked."""
    print(f"⚠️ {ctx.tool_name} blocked: {ctx.violation_type}")

@on_post_call
def log_metrics(ctx):
    """Called after every successful tool call."""
    print(f"✅ {ctx.tool_name} completed in {ctx.overhead_ms:.1f}ms")
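Under the hood, hook decorators like these typically just register callbacks that the pipeline fires at the matching stage. A hedged sketch of that pattern (the registry and `Context` fields here are illustrative, not EnforceCore's internals):

```python
from dataclasses import dataclass
from typing import Callable

_violation_hooks: list[Callable] = []

def on_violation(fn: Callable) -> Callable:
    """Register fn to run whenever a tool call is blocked."""
    _violation_hooks.append(fn)
    return fn

@dataclass
class Context:
    tool_name: str
    violation_type: str

def _fire_violation(ctx: Context) -> None:
    # The pipeline calls this when a pre-call check fails.
    for hook in _violation_hooks:
        hook(ctx)

seen = []

@on_violation
def alert(ctx):
    seen.append((ctx.tool_name, ctx.violation_type))

_fire_violation(Context("execute_shell", "denied_tool"))
print(seen)  # [('execute_shell', 'denied_tool')]
```

Because the decorator returns the function unchanged, hooks stay ordinary callables you can also invoke and test directly.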

5. Programmatic Control

For full control, use the Enforcer class directly:

from enforcecore import Enforcer, Policy

# Load policy
enforcer = Enforcer(Policy.from_file("policy.yaml"))

# Enforce a call
result = await enforcer.enforce_async(search_fn, "query", tool_name="search_web")

# Access metadata
print(enforcer.policy_name)                 # "my-agent-policy"
print(enforcer.guard.cost_tracker.total_cost)  # Cumulative spend
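A cost tracker of this kind behaves like a cumulative counter checked against the policy budget. An illustrative sketch, assuming a budget drawn from `max_cost_usd` (the class and method names here are hypothetical, not EnforceCore's API):

```python
class BudgetExceededError(Exception):
    pass

class CostTracker:
    """Accumulates per-call spend and blocks once the budget would be exceeded."""

    def __init__(self, budget_usd: float) -> None:
        self.budget_usd = budget_usd
        self.total_cost = 0.0

    def charge(self, amount_usd: float) -> None:
        # Reject the call *before* spending, so the budget is a hard ceiling.
        if self.total_cost + amount_usd > self.budget_usd:
            raise BudgetExceededError(
                f"spend would reach {self.total_cost + amount_usd:.2f} USD, "
                f"budget is {self.budget_usd:.2f} USD"
            )
        self.total_cost += amount_usd

tracker = CostTracker(budget_usd=1.00)
tracker.charge(0.40)
tracker.charge(0.40)
print(f"{tracker.total_cost:.2f}")  # 0.80
```

Checking before charging matters: the guard refuses the call that would cross the limit rather than discovering the overrun afterwards.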

6. Verify Your Audit Trail

Every enforced call (allowed or blocked) is recorded in a Merkle-chained JSONL file:

from enforcecore import verify_trail, load_trail

# Verify integrity
result = verify_trail("audit.jsonl")
print(result.is_valid)        # True
print(result.total_entries)   # Number of recorded calls
print(result.chain_intact)    # No tampering

# Load for analysis
trail = load_trail("audit.jsonl")
for entry in trail:
    print(f"{entry.tool_name} → {entry.decision}")
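Conceptually, a Merkle-chained trail makes each entry's hash depend on its predecessor's, so editing any past entry breaks every hash after it. A minimal sketch of the idea (not EnforceCore's exact record format):

```python
import hashlib
import json

GENESIS = "0" * 64  # hash seed for the first entry

def entry_hash(entry: dict, prev_hash: str) -> str:
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_trail(entries: list[dict]) -> list[dict]:
    trail, prev = [], GENESIS
    for entry in entries:
        record = dict(entry, prev_hash=prev)
        prev = record["hash"] = entry_hash(entry, prev)
        trail.append(record)
    return trail

def verify(trail: list[dict]) -> bool:
    prev = GENESIS
    for record in trail:
        body = {k: v for k, v in record.items() if k not in ("hash", "prev_hash")}
        if record["prev_hash"] != prev or record["hash"] != entry_hash(body, prev):
            return False
        prev = record["hash"]
    return True

trail = build_trail([{"tool_name": "search_web", "decision": "allowed"},
                     {"tool_name": "execute_shell", "decision": "blocked"}])
print(verify(trail))  # True
trail[0]["decision"] = "blocked"  # tamper with history
print(verify(trail))  # False
```

A single flipped field anywhere in the file invalidates the chain from that point on, which is what `chain_intact` is asserting.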

7. Test Your Policies

Run the built-in adversarial evaluation suite to verify your policies actually block threats:

from enforcecore.core.policy import Policy
from enforcecore.eval import ScenarioRunner

policy = Policy.from_file("policy.yaml")
runner = ScenarioRunner(policy)
suite = runner.run_all()

print(f"Containment rate: {suite.containment_rate:.0%}")
# Expected: 100% — all 20 adversarial scenarios blocked

Configuration

All settings can be overridden via environment variables:

ENFORCECORE_DEFAULT_POLICY=policies/default.yaml
ENFORCECORE_AUDIT_PATH=./audit_logs/
ENFORCECORE_AUDIT_ENABLED=true
ENFORCECORE_REDACTION_ENABLED=true
ENFORCECORE_LOG_LEVEL=INFO
ENFORCECORE_COST_BUDGET_USD=100.0
ENFORCECORE_FAIL_OPEN=false   # NEVER set to true in production
ENFORCECORE_DEV_MODE=false
ENFORCECORE_AUDIT_IMMUTABLE=false       # OS-enforced append-only audit files
ENFORCECORE_AUDIT_WITNESS_FILE=         # Hash-only witness JSONL path

8. Harden Your Audit Trail (v1.0.0b4+)

For maximum tamper-evidence, enable append-only files and a witness backend:

# Via environment variables (zero code changes)
ENFORCECORE_AUDIT_IMMUTABLE=true
ENFORCECORE_AUDIT_WITNESS_FILE=/secure/witness.jsonl

Or programmatically:

from enforcecore import Auditor
from enforcecore.auditor.witness import FileWitness

auditor = Auditor(
    output_path="audit.jsonl",
    immutable=True,  # OS-level append-only (chattr +a / chflags uappend)
    witness=FileWitness("/secure/witness.jsonl"),  # Hash-only remote witness
)

Verify with witness cross-check:

from enforcecore.auditor.witness import verify_with_witness

result = verify_with_witness(
    trail_path="audit.jsonl",
    witness_path="/secure/witness.jsonl",
)
assert result.is_valid  # Detects chain-rebuild attacks
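The witness defends against an attacker who rewrites an entry and then rebuilds every subsequent hash, producing a self-consistent but forged chain. Because the witness holds only the hashes recorded at write time, comparing them against the trail exposes the rebuild. A sketch of that cross-check (independent of EnforceCore's actual file formats):

```python
import hashlib
import json

def sha(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

# Trail entries, plus a hash-only witness recorded as each entry was written.
entries = [{"tool": "search_web", "decision": "allowed"},
           {"tool": "send_email", "decision": "blocked"}]
witness = [sha(json.dumps(e, sort_keys=True)) for e in entries]

def cross_check(entries: list[dict], witness: list[str]) -> bool:
    """Every trail entry's hash must match the hash the witness saw at write time."""
    return len(entries) == len(witness) and all(
        sha(json.dumps(e, sort_keys=True)) == w for e, w in zip(entries, witness)
    )

print(cross_check(entries, witness))  # True
entries[1]["decision"] = "allowed"    # rewrite history (and, implicitly, rebuild the chain)
print(cross_check(entries, witness))  # False -- the witness hash no longer matches
```

This is why the witness file belongs on separate, more trusted storage: the scheme only holds if the attacker who can rewrite the trail cannot also rewrite the witness.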
