Use Cases & Real-World Examples

See how teams use AKIOS to build secure AI workflows in production. These examples show common patterns you can adapt for your needs.

Who Uses AKIOS?

Government & Public Sector

  • Process sensitive citizen data
  • Handle confidential documents
  • Maintain strict audit trails

Financial Services

  • KYC/AML processing
  • Risk analysis with AI
  • Protect customer PII

Healthcare

  • Analyze medical records
  • Research data processing
  • HIPAA-compliant workflows

Enterprise IT

  • Internal document processing
  • Automated compliance checks
  • Secure DevOps automation

Common Patterns

Pattern 1: Run Untrusted Code Safely

Problem: You need to run third-party AI agents or workflows but don't trust them completely.

Solution: AKIOS sandbox with default-deny policies

# config.yaml
sandbox_enabled: true
network_access_allowed: false    # Deny network by default
filesystem:
  allowed_paths:
    - "./data/input"              # Only what's needed
    - "./data/output"
audit_enabled: true                # Track everything

What you get:

  • Process isolation
  • No unexpected network calls
  • No filesystem access outside allowed paths
  • Complete audit trail

Pattern 2: Pre-Flight Validation in CI/CD

Problem: You want to catch expensive mistakes before deploying workflows.

Solution: Validate in CI pipeline

#!/bin/bash
# CI validation script (invoke from your pipeline, e.g. a GitHub Actions run step)

# Install AKIOS
pip install akios

# Validate workflow syntax
akios config validate

# Test with mock APIs (no cost)
export AKIOS_MOCK_LLM=1
akios run workflows/production/*.yml

# Check estimated costs
akios estimate workflows/production/expensive.yml

What you get:

  • Catch errors before production
  • No surprise API costs
  • Validated workflows only
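Wired into CI, the script above becomes an ordinary pipeline step. A minimal GitHub Actions sketch (job layout and action versions are illustrative, the `akios` commands are the same ones shown above):

```yaml
# .github/workflows/validate.yml (illustrative)
name: Validate AKIOS workflows
on: [pull_request]

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install akios
      - run: akios config validate
      - name: Dry-run with mock LLMs (no API cost)
        run: akios run workflows/production/*.yml
        env:
          AKIOS_MOCK_LLM: "1"
```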

Pattern 3: Handle Sensitive Data

Problem: Process user data with AI while protecting privacy.

Solution: PII redaction + audit logging

# config.yaml
pii_redaction_enabled: true
pii_redaction_outputs: true
redaction_strategy: "mask"
audit_enabled: true

Workflow:

steps:
  - name: "Load customer data"
    agent: filesystem
    action: read
    # Email, SSN, phone automatically redacted
    
  - name: "Analyze with AI"
    agent: llm
    action: complete
    # AI sees redacted version
    
  - name: "Save audit trail"
    agent: filesystem
    action: write
    # Tamper-evident log created

What you get:

  • Automatic PII removal (53 patterns)
  • AI never sees sensitive data
  • Proof of compliance via audit logs
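Mask-style redaction can be pictured with a small conceptual sketch. This is an illustration only, not AKIOS internals: the three regexes below stand in for AKIOS's 53 built-in patterns.

```python
import re

# Illustrative patterns only -- AKIOS ships its own 53 built-in ones.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask(text: str) -> str:
    """Replace each PII match with a [LABEL] token, mirroring the 'mask' strategy."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309, SSN 123-45-6789."
print(mask(record))
# -> Contact Jane at [EMAIL] or [PHONE], SSN [SSN].
```

The AI in the workflow above only ever receives the masked string, which is why the original values can never leak into prompts or completions.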

Pattern 4: Controlled Tool Execution

Problem: You want to let AI run commands while maintaining security.

Solution: Allowlist + sandboxing

# config.yaml
tool_executor:
  allowed_commands:
    - "python3"
    - "echo"
    - "cat"
  network_access_allowed: false
  filesystem:
    allowed_paths:
      - "./scripts"
      - "./data"

What you get:

  • Only specific commands run
  • No network access
  • Limited filesystem access
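Conceptually, the allowlist reduces each requested command to a default-deny membership check on its executable name. A sketch of the idea (not AKIOS's actual implementation):

```python
import shlex

# Mirrors the allowed_commands list above; anything else is denied by default.
ALLOWED_COMMANDS = {"python3", "echo", "cat"}

def is_allowed(command_line: str) -> bool:
    """Allow a command only when its executable is on the allowlist."""
    parts = shlex.split(command_line)
    return bool(parts) and parts[0] in ALLOWED_COMMANDS

print(is_allowed("echo hello"))                # -> True
print(is_allowed("curl http://evil.example"))  # -> False
```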

Real-World Examples

Example 1: Government Document Processing

Scenario: A state agency needs to process thousands of permit applications with AI assistance while protecting citizen privacy.

Workflow:

name: "Secure Application Processor"
description: "Process permit applications with PII protection"

steps:
  - name: "Load application"
    agent: filesystem
    action: read
    parameters:
      path: "data/input/applications/{{application_id}}.txt"
      
  - name: "Validate format"
    agent: llm
    action: complete
    parameters:
      model: "gpt-4o"
      prompt: "Check if this application is complete: {{previous_output}}"
      
  - name: "Generate summary"
    agent: llm
    action: complete
    parameters:
      model: "gpt-4o"
      prompt: "Summarize the key points: {{application}}"
      
  - name: "Save redacted summary"
    agent: filesystem
    action: write
    parameters:
      path: "data/output/summaries/{{application_id}}.txt"
      content: "{{summary}}"

Configuration:

# config.yaml - Maximum security
sandbox_enabled: true
pii_redaction_enabled: true
audit_enabled: true
budget_limit_per_run: 0.50
environment: "production"

Results:

  • ✅ Processed 10,000+ applications
  • ✅ Zero PII leaks (verified via audit)
  • ✅ Cost: $0.12 per application
  • ✅ Complete audit trail for compliance

Example 2: Financial KYC Enrichment

Scenario: A fintech company enriches customer profiles with external data and AI-driven risk analysis.

Workflow:

name: "KYC Risk Assessment"
description: "Enrich customer data and assess risk"

steps:
  - name: "Fetch external data"
    agent: http
    action: get
    parameters:
      url: "https://api.data-provider.com/person/{{customer_id}}"
      headers:
        Authorization: "Bearer {{API_TOKEN}}"
      timeout: 30
      retry_count: 3
      
  - name: "Load internal profile"
    agent: filesystem
    action: read
    parameters:
      path: "data/customers/{{customer_id}}.json"
      
  - name: "AI risk analysis"
    agent: llm
    action: complete
    parameters:
      model: "claude-3.5-sonnet"
      prompt: |
        Analyze this customer profile for risk factors:
        
        External data: {{external_data}}
        Internal profile: {{internal_profile}}
        
        Provide risk score (1-10) and reasoning.
      max_tokens: 500
      
  - name: "Save risk report"
    agent: filesystem
    action: write
    parameters:
      path: "data/reports/{{customer_id}}_risk.json"
      content: |
        {
          "customer_id": "{{customer_id}}",
          "timestamp": "{{timestamp}}",
          "risk_analysis": "{{risk_analysis}}",
          "redacted": true
        }

Configuration:

# config.yaml - Balanced security
network_access_allowed: true
budget_limit_per_run: 1.0
pii_redaction_enabled: true
http:
  rate_limit: 100  # requests per minute
  timeout: 30

Results:

  • ✅ Processed 5,000 profiles/day
  • ✅ Budget protected (auto-stops at $1)
  • ✅ PII automatically redacted in logs
  • ✅ Complete audit for regulators

Example 3: Healthcare Record Analysis

Scenario: A hospital analyzes patient records for research while maintaining HIPAA compliance.

Workflow:

name: "Medical Record Analysis"
description: "Analyze de-identified patient records"

steps:
  - name: "Load patient record"
    agent: filesystem
    action: read
    parameters:
      path: "data/records/{{record_id}}.txt"
      
  - name: "Extract symptoms"
    agent: llm
    action: complete
    parameters:
      model: "gpt-4"
      prompt: "Extract all symptoms mentioned: {{record}}"
      
  - name: "Identify patterns"
    agent: llm
    action: complete
    parameters:
      model: "gpt-4"
      prompt: "Identify any patterns or correlations: {{symptoms}}"
      
  - name: "Generate research summary"
    agent: filesystem
    action: write
    parameters:
      path: "data/research/analysis_{{record_id}}.txt"
      content: "{{patterns}}"

Configuration:

# config.yaml - HIPAA-compliant
sandbox_enabled: true
pii_redaction_enabled: true
pii_patterns:
  - "SSN"
  - "MRN"  # Medical Record Number
  - "EMAIL"
  - "PHONE"
  - "NAME"
audit_enabled: true
audit_export_format: "json"
network_access_allowed: false  # Air-gapped

Results:

  • ✅ HIPAA compliant (verified)
  • ✅ All PHI automatically redacted
  • ✅ Research data properly anonymized
  • ✅ Audit trail for compliance officers

Example 4: DevOps Log Analysis

Scenario: An SRE team uses AI to analyze error logs and suggest fixes.

Workflow:

name: "Log Analyzer"
description: "Analyze server logs with AI assistance"

steps:
  - name: "Collect recent logs"
    agent: tool_executor
    action: run
    parameters:
      command: ["tail", "-n", "1000", "/logs/app.log"]
      timeout: 10
      
  - name: "Extract errors"
    agent: tool_executor
    action: run
    parameters:
      command: ["grep", "-i", "error"]
      timeout: 5
      
  - name: "AI analysis"
    agent: llm
    action: complete
    parameters:
      model: "gpt-4o"
      prompt: |
        Analyze these error logs and suggest fixes:
        
        {{errors}}
        
        Provide:
        1. Root cause
        2. Suggested fix
        3. Prevention steps
      
  - name: "Save analysis"
    agent: filesystem
    action: write
    parameters:
      path: "data/analysis/{{timestamp}}.md"
      content: "{{analysis}}"

Configuration:

# config.yaml - Controlled execution
tool_executor:
  allowed_commands:
    - "tail"
    - "grep"
    - "awk"
    - "cat"
filesystem:
  allowed_paths:
    - "/logs"
    - "./data/analysis"
network_access_allowed: false

Results:

  • ✅ Automated log triage
  • ✅ Only safe commands allowed
  • ✅ No network access risk
  • ✅ API keys redacted from logs

Getting Started with These Examples

1. Choose your use case - Start with one that matches your needs

2. Adapt the workflow - Modify for your specific requirements

3. Set appropriate security - Use production config for sensitive data

4. Test in mock mode - Validate before spending money

export AKIOS_MOCK_LLM=1
akios run workflows/my-workflow.yml

5. Monitor and iterate - Check costs and audit logs

akios status --budget
akios audit verify

Industry-Specific Considerations

Government & Public Sector

Required features:

  • ✓ Full audit trail
  • ✓ PII redaction always on
  • ✓ Air-gapped deployment option
  • ✓ Tamper-evident logs

Recommended config:

environment: "production"
audit_enabled: true
pii_redaction_enabled: true
network_access_allowed: false

Financial Services

Required features:

  • ✓ Budget controls
  • ✓ Compliance audit export
  • ✓ API rate limiting
  • ✓ PII redaction

Recommended config:

budget_limit_per_run: 1.0
audit_enabled: true
pii_redaction_enabled: true
http:
  rate_limit: 100

Healthcare

Required features:

  • ✓ HIPAA compliance
  • ✓ PHI redaction
  • ✓ Complete audit trail
  • ✓ No network access

Recommended config:

audit_enabled: true
pii_redaction_enabled: true
network_access_allowed: false
sandbox_enabled: true
