
Sovereign AI

Sovereign AI lets you run AI models on infrastructure you control, so your data never passes through third-party services. Karma One offers two tiers of sovereign deployment: Advanced (Local) and Cloud, each serving different security and operational needs.

Why Sovereign AI?

When you use standard AI models (Claude, GPT, Gemini), your prompts and data travel through the model provider's servers. For most users this is fine -- these providers have strong privacy commitments and do not train on your data. But certain scenarios demand stronger guarantees:

  • Regulated industries -- Finance, healthcare, legal, and government organizations may have compliance rules that prohibit sending data to external services.
  • Sensitive data -- Trade secrets, personal medical records, classified documents, or proprietary research.
  • Air-gapped environments -- Networks with no internet access for security reasons.
  • Data residency -- Requirements to keep data within a specific geographic region or physical location.

Sovereign AI addresses all of these by keeping the AI processing within your own infrastructure boundary.

Two Tiers

Advanced Sovereign AI (Local)

Advanced Sovereign AI runs models directly on your own hardware through Karma Box. Data never leaves your local machine or network.

How it works:

  1. Karma Box is installed on your Mac (Apple Silicon recommended).
  2. AI models are downloaded and run locally via Ollama or a compatible runtime.
  3. When you request sovereign processing, Karma Box intercepts the request and routes it to the local model instead of a cloud API.
  4. Results are returned directly to the Karma One app.
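
The routing in step 3 can be sketched conceptually. This is an illustrative model only, not Karma Box's actual implementation: the endpoint URLs, field names, and the route_request function are hypothetical, and the only real value is Ollama's default local port (11434).

```python
# Conceptual sketch of step 3 -- not Karma Box's real routing code.
# LOCAL_ENDPOINT uses Ollama's documented default port (11434);
# CLOUD_ENDPOINT and all field names are illustrative placeholders.

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"    # local Ollama API
CLOUD_ENDPOINT = "https://cloud.example.com/v1/generate"  # hypothetical cloud API

def route_request(prompt: str, sovereign: bool) -> dict:
    """Send the request to the local model when sovereign processing is asked for."""
    endpoint = LOCAL_ENDPOINT if sovereign else CLOUD_ENDPOINT
    return {"endpoint": endpoint, "payload": {"model": "llama3", "prompt": prompt}}

print(route_request("Describe this image", sovereign=True)["endpoint"])
# -> http://localhost:11434/api/generate
```

The key property is that a sovereign-flagged request never constructs a cloud URL at all, which is what keeps the data on the local machine.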

Characteristics:

| Aspect | Detail | |--------|--------| | Data location | Your local machine only | | Network required | No (fully offline after model download) | | Performance | Depends on your hardware (M1 Pro or better recommended) | | Model updates | Manual download of new model versions | | Best for | Maximum privacy, air-gapped environments, individual use |

Supported capabilities:

  • Image generation -- Generate images from text prompts using local models.
  • Vision analysis -- Analyze images and screenshots using local vision models.
  • Text generation -- General conversation and text processing.

How to activate in conversation:

Tell the AI to use the local model by saying something like:

  • "Use the Advanced Sovereign AI model"
  • "Process this with the local model"
  • "Analyze this image using Karma Box's own model"

Cloud Sovereign AI

Cloud Sovereign AI runs models on cloud servers that you control or that are operated within a trusted boundary. Data does not pass through third-party public AI services.

How it works:

  1. Your organization deploys AI models on servers you manage (your own data center, private cloud, or a dedicated hosting environment).
  2. Karma One is configured to route requests to your sovereign endpoint.
  3. Processing happens entirely within your infrastructure.

Characteristics:

| Aspect | Detail |
|--------|--------|
| Data location | Your controlled cloud infrastructure |
| Network required | Yes (private network or VPN) |
| Performance | High (server-grade hardware) |
| Model updates | Managed by your IT team |
| Best for | Enterprise teams, shared sovereign infrastructure, compliance |

Supported capabilities:

  • Image generation -- Generate images using sovereign cloud models.
  • Vision analysis -- Analyze images using sovereign cloud vision models.
  • Text generation -- General conversation and text processing.

How to activate in conversation:

Tell the AI to use the sovereign model by saying something like:

  • "Use the Sovereign AI model"
  • "Process this with the cloud sovereign model"
  • "Analyze this image using the sovereign model"

Comparison

| Feature | Standard AI | Cloud Sovereign | Advanced (Local) |
|---------|-------------|-----------------|------------------|
| Data leaves your device | Yes | Yes (to your servers) | No |
| Third-party processing | Yes | No | No |
| Internet required | Yes | Yes (private network) | No |
| Performance | Highest | High | Hardware-dependent |
| Setup complexity | None | Moderate | Low (Karma Box) |
| Cost | Included in subscription | Your own infrastructure | Your own hardware |
| Web search available | Yes | Limited | No |
| Model variety | Full catalog | Deployed models only | Downloaded models only |

Setup: Advanced (Local) Sovereign AI

Prerequisites

  • macOS 13 or later with Apple Silicon (M1 or newer).
  • Karma Box installed and signed in.
  • At least 16 GB of RAM recommended (32 GB+ for larger models).
  • Sufficient disk space for model files (typically 4--20 GB per model).

Steps

  1. Install Ollama -- Download and install Ollama on your Mac. Ollama manages model downloads and provides a local inference server.

  2. Download models -- Open Terminal and pull the models you want:

    ollama pull llama3          # Text generation
    ollama pull llava           # Vision analysis
    
  3. Verify Ollama is running -- Ollama runs as a background service. Confirm it is available:

    curl http://localhost:11434/api/tags
    
  4. Connect Karma Box -- Karma Box automatically detects the local Ollama instance. No additional configuration is needed.

  5. Test in conversation -- Open a chat in Karma One and say: "Use the Advanced Sovereign AI model to describe this image" (attach an image). The request will be processed entirely on your Mac.
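
You can also verify local inference outside Karma One by calling Ollama's HTTP API directly. The sketch below builds a request for Ollama's /api/generate endpoint (a real, documented endpoint on the default port 11434); constructing the request needs no network, and the actual send is left commented out since it requires Ollama to be running.

```python
# Build a request for Ollama's /api/generate endpoint. Constructing the
# payload is offline; sending it requires a running Ollama instance.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Construct a POST request for Ollama's local generate API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("llama3", "Say hello in one word.")
print(req.full_url)  # -> http://localhost:11434/api/generate

# To actually run it (Ollama must be up):
#   with urllib.request.urlopen(req) as resp:
#       print(json.loads(resp.read())["response"])
```

If the commented-out send succeeds, your Mac is performing inference entirely locally, which is exactly what Karma Box relies on.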

Troubleshooting

| Issue | Solution |
|-------|----------|
| "Model not found" | Run ollama pull <model_name> to download the required model |
| Slow responses | Ensure no other heavy processes are using your GPU. Larger models need more RAM. |
| Karma Box not detecting Ollama | Restart Ollama and Karma Box. Verify Ollama is serving on localhost:11434. |

Setup: Cloud Sovereign AI

Cloud Sovereign AI setup is typically managed by your organization's IT team. The general process:

  1. Deploy models on your chosen infrastructure (private cloud, on-premise servers, etc.).
  2. Expose an API endpoint that is compatible with the expected request/response format.
  3. Configure the endpoint in Karma One's organization settings or through your administrator.
  4. Test connectivity from the Karma One app to confirm requests are routed correctly.
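
A connectivity probe for step 4 might look like the sketch below. Everything in it is an illustrative assumption: the internal endpoint URL, the deployed model name, and the OpenAI-style chat request shape are placeholders, since the actual request/response format Karma One expects is defined by your deployment and administrator.

```python
# Hypothetical connectivity probe for a sovereign endpoint. The URL, model
# name, and OpenAI-style payload shape are illustrative assumptions only.
import json
import urllib.request

SOVEREIGN_ENDPOINT = "https://ai.internal.example.com/v1/chat/completions"

def build_probe(endpoint: str) -> urllib.request.Request:
    """Construct a minimal POST request to check the sovereign endpoint."""
    payload = json.dumps({
        "model": "sovereign-llama3",  # whatever model your IT team deployed
        "messages": [{"role": "user", "content": "ping"}],
    })
    return urllib.request.Request(
        endpoint,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

probe = build_probe(SOVEREIGN_ENDPOINT)
print(probe.get_method())  # -> POST
```

Sending the probe from a machine on your private network (for example with urllib.request.urlopen) confirms that requests reach your infrastructure without traversing any third-party service.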

For detailed enterprise deployment guidance, contact the Karma One team.

Privacy Guarantees

When using Sovereign AI:

  • No third-party data sharing -- Your prompts, files, and responses are never sent to external AI providers.
  • No training on your data -- Local and sovereign cloud models do not send data back for training purposes.
  • Full auditability -- Since the infrastructure is yours, you have complete visibility into data flows.
  • Encryption in transit -- Communication between Karma One and sovereign endpoints uses TLS encryption.

Availability

| Plan | Cloud Sovereign | Advanced (Local) |
|------|-----------------|------------------|
| Free | Not available | Not available |
| Pro | Not available | Not available |
| Team | Optional add-on | Optional add-on |
| Enterprise | Included | Included |

Note: Some features that require external services (e.g., web search, social media publishing) are not available when using Sovereign AI exclusively. You can switch between standard and sovereign modes as needed.