Platform overview, installation guides, configuration reference, and security policies — all in one place.
SphereIQ deploys autonomous AI teammates into operational workflows across four regulated industries.
Unlike generic chatbots, AI teammates are trained on industry-specific processes, connected to core enterprise systems, and governed by audit-ready controls. Each teammate handles a defined function — claims intake, prior authorization, loan processing, quality inspection — and operates 24/7 with human oversight for high-stakes decisions.
1. Inbound work arrives via API, email, portal, or system event.
2. The teammate searches internal knowledge bases and connected systems for context.
3. An LLM processes the request using domain prompts, rules, and few-shot examples.
4. Tool calls create records, route tasks, generate documents, or escalate to humans.
5. Every decision, retrieval, and action is recorded in a complete audit trail.
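The five steps above can be sketched as a single loop. This is an illustrative sketch only — the function names, audit-event fields, and stubbed retrieval/LLM calls are assumptions, not the SphereIQ API:

```python
# Hypothetical sketch of the intake-to-audit loop described above.
# All names and return shapes are illustrative, not the SphereIQ API.

def retrieve_context(request):
    # Step 2: search knowledge bases / connected systems (stubbed here).
    return ["policy #123 terms", "prior claims history"]

def call_llm(request, context):
    # Step 3: domain-prompted LLM call (stubbed); returns action + confidence.
    return {"action": "route_to_adjuster", "confidence": 0.62}

def handle_request(request, human_review_threshold=0.7):
    # Step 1: work arrives; every step is appended to the audit trail (step 5).
    audit = [{"event": "received", "detail": request}]
    context = retrieve_context(request)
    audit.append({"event": "retrieval", "detail": context})
    decision = call_llm(request, context)
    audit.append({"event": "decision", "detail": decision})
    # Step 4: low-confidence decisions escalate to a human reviewer.
    if decision["confidence"] < human_review_threshold:
        audit.append({"event": "escalated", "detail": "below threshold"})
    return decision, audit

decision, audit = handle_request("claims intake: water damage")
# audit now holds four events: received, retrieval, decision, escalated
```

The key design point is that the audit list is built alongside the decision, not reconstructed after the fact.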
- **Insurance** — claims, underwriting, policy servicing. Integrations: Guidewire, Duck Creek.
- **Healthcare** — prior auth, clinical docs, patient intake. Integrations: HL7 FHIR, Epic.
- **Finance** — KYC/AML, risk analytics, loan processing. Integrations: core banking systems.
- **Manufacturing** — predictive maintenance, QC, supply chain. Integrations: OPC UA, SAP.
Prepare your environment and deploy — from provisioning to live production in days.
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 vCPUs | 8+ vCPUs |
| Memory | 16 GB | 32 GB |
| Storage | 100 GB SSD | 500 GB NVMe |
| Database | PostgreSQL 15+ | PostgreSQL 16 + pgvector |
| Runtime | Docker 24+ | Kubernetes 1.28+ |
Provision a PostgreSQL instance with the pgvector extension, pull container images from the private registry, and configure SSO via OpenID Connect.
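The provisioning steps might look like the following. The database name, registry URL, and image tag are assumptions — substitute the values from your deployment package:

```shell
# Enable pgvector on the provisioned database (extension name is "vector")
psql -d sphereiq -c "CREATE EXTENSION IF NOT EXISTS vector;"

# Authenticate against the private registry and pull the platform image
# (registry URL and image name are assumptions)
docker login registry.sphereiq.ai
docker pull registry.sphereiq.ai/sphereiq/platform:latest
```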
For development, staging, or small-scale production:
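A minimal Docker Compose sketch — service names, image references, and environment variables are illustrative, not the shipped compose file:

```yaml
# docker-compose.yml (illustrative; image names and variables are assumptions)
services:
  sphereiq:
    image: registry.sphereiq.ai/sphereiq/platform:latest
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://sphereiq:secret@db:5432/sphereiq
    depends_on:
      - db
  db:
    image: pgvector/pgvector:pg16   # PostgreSQL 16 with pgvector preinstalled
    environment:
      POSTGRES_USER: sphereiq
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: sphereiq
```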
For production deployments, including autoscaling, ingress, and monitoring:
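A representative Helm-based install — the chart repository, chart name, and values file are assumptions, not confirmed artifact names:

```shell
# Illustrative Helm deployment (chart repo and name are assumptions)
helm repo add sphereiq https://charts.sphereiq.ai
helm install sphereiq sphereiq/platform \
  --namespace sphereiq --create-namespace \
  -f values-production.yaml   # enables autoscaling, ingress, monitoring
```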
Run `curl https://your-instance.sphereiq.ai/health` — all components should report healthy.

All platform behavior is controlled through a single YAML file — tenants, data sources, prompts, and tools.
| Key | Type | Description |
|---|---|---|
| `llm_provider` | string | `openai`, `azure`, `anthropic`, or `local` |
| `llm_model` | string | Model identifier, e.g. `gpt-4o` |
| `embedding_model` | string | Embedding model for vector search |
| `audit_enabled` | bool | Full decision audit trails (default: `true`) |
| `max_tokens` | int | Max response tokens (default: `4096`) |
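Assuming these keys sit at the top level of the configuration file (the layout is an assumption; the key names come from the table above), a minimal configuration might look like:

```yaml
# Illustrative top-level settings; file layout and model names are assumptions
llm_provider: openai
llm_model: gpt-4o
embedding_model: text-embedding-3-large
audit_enabled: true
max_tokens: 4096
```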
Each tenant gets its own data sources, prompts, tools, and human-review threshold. Tenants share a deployment but cannot access each other's data. Setting `human_review_threshold: 0.7` escalates any decision below 70% confidence.
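A sketch of a tenant entry under these assumptions — apart from `human_review_threshold`, the key names and nesting are illustrative:

```yaml
# Illustrative tenant block; all keys except human_review_threshold are assumptions
tenants:
  - name: acme-insurance
    human_review_threshold: 0.7   # escalate decisions below 70% confidence
    data_sources:
      - claims_knowledge_base
    tools:
      - create_claim_record
```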
Set `industry` on a tenant to auto-load domain prompts, tools, synonyms, and eval sets. Available modules:

- `insurance` — claims, underwriting, policy, broker tools
- `healthcare` — prior auth, clinical docs, FHIR connectors
- `manufacturing` — maintenance, QC, OPC UA/MQTT ingestion
- `finance` — KYC/AML, risk, regulatory reporting

Built for regulated industries — SOC 2, HIPAA, PCI DSS controls in every layer.
All customer data is encrypted at rest (AES-256) and in transit (TLS 1.3). The platform undergoes annual penetration testing. Customer data is never used to train models.
| Certification | Status | Scope |
|---|---|---|
| SOC 2 Type II | Active | All services |
| HIPAA | BAA available | Healthcare |
| PCI DSS | Compliant | Financial services |
| ISO 27001 | In progress | All services |
| GDPR | Compliant | EU data subjects |
| Channel | Availability | Response time |
|---|---|---|
|  | 24/7 | < 4 hours |
| Slack Connect | Business hours | < 1 hour |
| Phone (enterprise) | 24/7 | Immediate |