PRIVATE.ME · Technical White Paper

Xafe: Ransomware-Proof Multi-Cloud Backup

Split files across cloud providers using XorIDA threshold secret sharing. No single provider can reconstruct your data, yet any K-of-N providers can restore the original. Streaming architecture handles files of any size with constant memory usage. Information-theoretic security against ransomware and cloud compromise.

v0.1.0 · 39 tests passing · 4 modules · 0 npm deps · 1MB chunks · Dual ESM/CJS
Section 01

Executive Summary

Xafe makes ransomware-proof backups achievable with two functions: splitFile() and reconstructFile(). Files are chunked, each chunk is split into K-of-N XorIDA shares with HMAC verification, and shares are distributed across independent storage providers.

The core security guarantee is information-theoretic: compromise of any K-1 providers reveals zero information about the original data. This is not a computational assumption that quantum computers might break — it is mathematically impossible to reconstruct without K shares, even with infinite computational power.

Streaming architecture processes files in configurable chunks (default 1MB). Each chunk is independently split, signed with HMAC-SHA256, and distributed. Memory usage stays constant regardless of file size. A cryptographic manifest with SHA-256 file hash ensures end-to-end integrity verification after reconstruction.

The default configuration is 2-of-3: three cloud providers, any two can restore. This survives single-provider compromise, ransomware, outage, or data loss. Enterprises can configure higher thresholds: 3-of-5 for critical infrastructure, 4-of-7 for defense applications.

Section 02

Developer Experience

Xafe provides structured error handling and actionable error messages to help developers build reliable backup systems.

Result Pattern

All operations return Result<T, E> — library code never throws exceptions. Errors are values that must be explicitly handled.

Result-based error handling
const result = await splitFile(fileData, 'backup.db', {
  totalShares: 3,
  threshold: 2
});

if (!result.ok) {
  // Error is a typed discriminated union
  switch (result.error.code) {
    case 'INVALID_CONFIG':
      console.error('Config validation failed:', result.error.message);
      break;
    case 'SPLIT_FAILED':
      console.error('XorIDA split failed:', result.error.message);
      break;
  }
  return;
}

const { manifest, shares } = result.value;

Error Codes

Six error codes cover all failure modes. Each error includes a machine-readable code and human-readable message.

| Code | When | Cause |
| --- | --- | --- |
| INVALID_CONFIG | splitFile() | totalShares < 2, threshold < 2, threshold > totalShares, or chunkSize < 1 |
| SPLIT_FAILED | splitFile() | XorIDA threw during chunk split |
| HMAC_FAILED | reconstructFile() | HMAC-SHA256 verification failed — data corrupted or tampered |
| RECONSTRUCT_FAILED | reconstructFile() | XorIDA reconstruction failed, PKCS#7 unpadding invalid, or file hash mismatch |
| INSUFFICIENT_SHARES | reconstructFile() | Fewer than threshold shares available for a chunk |
| CHUNK_MISMATCH | reconstructFile() | Number of chunk groups does not match manifest.totalChunks |
Section 03

The Problem

Traditional backup strategies fail when the attacker controls the backup infrastructure or when the cloud provider is compromised.

Single-Provider Risk

Storing backups with a single cloud provider creates a single point of failure. If the provider experiences an outage, ransomware infection via compromised credentials, or insider threat, all backups are lost simultaneously. The 2024 Google Cloud deletion incident erased entire customer tenants. The 2023 CircleCI breach exposed environment variables including backup encryption keys.

Encrypted Backups Are Not Enough

Encrypting backups with AES-256 before upload protects confidentiality, but the encryption key becomes a single point of failure. If ransomware exfiltrates the key from your key management system, all backups decrypt instantly. Key rotation does not help — past backups remain encrypted with compromised keys.

Multi-Cloud Is Not Enough

Uploading the same encrypted backup to three providers improves availability but not security. An attacker with the encryption key can decrypt backups from any provider. You gain redundancy, not defense in depth.

The Xafe Approach

Xafe eliminates single points of failure by splitting data at the information-theoretic layer. Each provider receives shares that are individually worthless. Any K-1 shares reveal zero information about the plaintext — not computationally hard to extract, but mathematically impossible. Compromise of any single provider yields no data. Ransomware must simultaneously compromise K providers to reconstruct anything.

| Strategy | Single provider compromise | Key compromise | Quantum threat |
| --- | --- | --- | --- |
| Single cloud + AES | Full data loss | Full decrypt | Vulnerable |
| Multi-cloud + same AES key | Survives outage | Full decrypt | Vulnerable |
| Xafe (2-of-3 split) | Zero disclosure | N/A — no keys | Quantum-proof |
Section 04

Real-World Use Cases

Six scenarios where Xafe replaces traditional backup strategies with information-theoretic protection.

🏥
Healthcare
PHI Backup

HIPAA-covered entities split patient databases across AWS, Azure, and on-prem. Any single provider compromise reveals zero PHI. Restore from any two after ransomware.

2-of-3 split + HMAC verification
💹
Financial
Transaction Archive

SEC 17a-4 compliant archival with 3-of-5 split across geographically distributed providers. Survives two-provider simultaneous failure while maintaining auditability.

3-of-5 threshold + SHA-256 integrity
🔒
Legal
Discovery Archive

Legal hold data split across independent cloud regions. No single subpoena or data request can reconstruct privileged communications without multi-party cooperation.

2-of-3 with geographic distribution
🏛
Government
Classified Backup

Defense applications split across JWICS, SIPRNET, and NIPRNET. Information-theoretic security exceeds AES-256 classification requirements. Quantum-proof by construction.

4-of-7 for critical systems
💾
Enterprise
Database Backup

PostgreSQL dumps chunked and split to S3, GCS, and Backblaze B2. Ransomware encrypting production cannot touch split shares. Restore from any two providers in under an hour.

Streaming 1MB chunks
📁
Media
Archive Storage

Multi-GB video files streamed through XorIDA split without loading into memory. Constant memory usage regardless of file size. Glacier-class storage for shares.

Streaming architecture + S3 Glacier
Section 05

Solution Architecture

Four layers: chunking, splitting, verification, and distribution. Each layer is independently testable and composable.

Splitting
XorIDA
K-of-N threshold sharing
GF(2) finite field arithmetic
PKCS#7 padding before split
Sub-millisecond typical
Verification
HMAC-SHA256
HMAC key generated per chunk
HMAC before reconstruction
Corrupted chunks rejected
SHA-256 file hash in manifest
Distribution
Multi-cloud
Column-wise share distribution
Provider independence
Geographic diversity
Pluggable storage adapters
Pipeline: File data (any size) → Chunk (1MB default, PKCS#7 pad) → Split (XorIDA, GF(2), K-of-N shares) → Distribute column-wise (Share 1 → Provider A, Share 2 → Provider B, Share 3 → Provider C)
Section 05a

Streaming Split Architecture

Constant memory usage regardless of file size. Each chunk is independently processed and can be garbage collected after distribution.

Files are split into fixed-size chunks (default 1MB, configurable). Each chunk is padded via PKCS#7, split into N shares via XorIDA, and distributed. The splitter does not hold the entire file in memory — only the current chunk.

Streaming multi-GB files
import fs from 'node:fs/promises';
import { splitFile } from '@private.me/backup';

// 10 GB file processed in 1 MB chunks
const largeFile = await fs.readFile('backup-10gb.db');
const result = await splitFile(largeFile, 'backup-10gb.db', {
  totalShares: 3,
  threshold: 2,
  chunkSize: 1_048_576 // 1 MB
});

// Split working set: ~3 MB peak (1 MB chunk x 3 shares), on top of the input buffer
// Total chunks: 10,240
// Processing: sequential, garbage collected per chunk
Chunk size tuning
Smaller chunks (256KB) increase share count and manifest size but enable finer-grained parallelization. Larger chunks (4MB) reduce metadata overhead but increase peak memory usage. Default 1MB balances both for typical database and media workloads.
Section 05b

HMAC Verification Layer

HMAC-SHA256 on padded chunks before reconstruction. Corrupted or tampered shares are rejected immediately.

During split, a random 256-bit HMAC key is generated for each chunk. The HMAC is computed over the PKCS#7-padded chunk data before XorIDA splitting. The HMAC key and signature are embedded in each share's metadata field in the format base64(key):base64(signature).

During reconstruction, shares are collected for each chunk. Before unpadding or returning data, the HMAC is recomputed and verified. If verification fails, the chunk is rejected with an HMAC_FAILED error. This prevents accepting corrupted data from bit rot, provider storage errors, or tampering.
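The sign-then-verify flow can be sketched with Node's built-in crypto primitives. The base64(key):base64(signature) metadata format is from this document; the signChunk/verifyChunk helper names are illustrative, not the Xafe internals:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto';

// Sign a padded chunk with a fresh random 256-bit HMAC key,
// returning the base64(key):base64(signature) metadata string.
function signChunk(paddedChunk: Buffer): string {
  const key = randomBytes(32);
  const sig = createHmac('sha256', key).update(paddedChunk).digest();
  return `${key.toString('base64')}:${sig.toString('base64')}`;
}

// Recompute the HMAC from the embedded key and compare in constant time.
function verifyChunk(paddedChunk: Buffer, metadata: string): boolean {
  const [keyB64, sigB64] = metadata.split(':');
  const expected = createHmac('sha256', Buffer.from(keyB64, 'base64'))
    .update(paddedChunk)
    .digest();
  const actual = Buffer.from(sigB64, 'base64');
  return expected.length === actual.length && timingSafeEqual(expected, actual);
}

const chunk = Buffer.from('padded chunk bytes');
const meta = signChunk(chunk);
console.log(verifyChunk(chunk, meta));                     // true
console.log(verifyChunk(Buffer.from('tampered!!'), meta)); // false
```

timingSafeEqual avoids leaking the comparison result through timing, which is the standard practice for MAC verification.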

HMAC before reconstruct
Security-critical ordering: HMAC verification happens BEFORE XorIDA reconstruction and BEFORE PKCS#7 unpadding. This prevents processing tampered data. Never trust reconstructed data until HMAC passes.
Section 05c

End-to-End File Integrity

SHA-256 file hash in the manifest verifies that the fully reassembled file matches the original byte-for-byte.

After all chunks are reconstructed and concatenated, the SHA-256 hash of the reassembled file is computed and compared against the manifest's fileHash field. If the hashes do not match, reconstruction fails with a RECONSTRUCT_FAILED error.

This provides two-layer verification: HMAC protects individual chunks, SHA-256 protects the entire file. An attacker who corrupts a single byte in any chunk will fail HMAC verification. An attacker who reorders chunks or omits chunks will fail SHA-256 verification.
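The manifest-level check can be sketched as follows; verifyFileHash is a hypothetical helper, not the Xafe API:

```typescript
import { createHash } from 'node:crypto';

// Recompute SHA-256 over the reassembled bytes and compare against the
// manifest's hex-encoded fileHash field.
function verifyFileHash(reassembled: Uint8Array, expectedHexHash: string): boolean {
  const actual = createHash('sha256').update(reassembled).digest('hex');
  return actual === expectedHexHash;
}

const file = new TextEncoder().encode('original file contents');
const manifestHash = createHash('sha256').update(file).digest('hex');
console.log(verifyFileHash(file, manifestHash)); // true — byte-for-byte match
```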

| Layer | Algorithm | Protects Against |
| --- | --- | --- |
| Chunk HMAC | HMAC-SHA256 | Bit rot, tampering, storage corruption within a chunk |
| File Hash | SHA-256 | Chunk reordering, omitted chunks, cross-chunk tampering |
Section 06

Complete Flow: Split and Reconstruct

Seven steps to split, distribute, collect, and verify.

Split Flow

Full split example
import fs from 'node:fs/promises';
import { splitFile } from '@private.me/backup';

const fileData = await fs.readFile('database.db');
const result = await splitFile(fileData, 'database.db', {
  totalShares: 3,
  threshold: 2
});

if (!result.ok) {
  console.error('Split failed:', result.error);
  return;
}

const { manifest, shares } = result.value;

// shares[chunkIndex][shareIndex]
// Distribute column-wise to providers:
const providerAShares = shares.map(chunk => chunk[0]);
const providerBShares = shares.map(chunk => chunk[1]);
const providerCShares = shares.map(chunk => chunk[2]);

await s3.upload('manifest.json', JSON.stringify(manifest));
await s3.upload('shares-a.json', JSON.stringify(providerAShares));
await gcs.upload('shares-b.json', JSON.stringify(providerBShares));
await azure.upload('shares-c.json', JSON.stringify(providerCShares));

Reconstruct Flow

Reconstruction from any 2 providers
import fs from 'node:fs/promises';
import { reconstructFile } from '@private.me/backup';

// Download manifest + shares from 2 of 3 providers
const manifest = JSON.parse(await s3.download('manifest.json'));
const sharesA = JSON.parse(await s3.download('shares-a.json'));
const sharesB = JSON.parse(await gcs.download('shares-b.json'));

// Organize by chunk: collected[chunkIndex] = [share0, share1]
const collected = sharesA.map((a, i) => [a, sharesB[i]]);

const result = await reconstructFile(manifest, collected);

if (!result.ok) {
  console.error('Reconstruction failed:', result.error);
  return;
}

const restoredFile = result.value;
await fs.writeFile('database-restored.db', restoredFile);
Section 07

Integration Guide

Three integration patterns: CLI wrapper, scheduled backup, and pluggable storage adapters.

Pattern 1: CLI Wrapper

Build a command-line tool that splits a file and uploads shares to S3, GCS, and Azure. Store the manifest locally or in a separate secure location.

CLI tool example
#!/usr/bin/env node
import fs from 'node:fs/promises';
import path from 'node:path';
import { splitFile } from '@private.me/backup';
import { S3, GCS, Azure } from './cloud-adapters.js';

const [filePath] = process.argv.slice(2);
const data = await fs.readFile(filePath);
const result = await splitFile(data, path.basename(filePath), {
  totalShares: 3, threshold: 2
});

if (!result.ok) {
  console.error('Split failed:', result.error);
  process.exit(1);
}

const { manifest, shares } = result.value;

await Promise.all([
  S3.upload(manifest.id, shares.map(c => c[0])),
  GCS.upload(manifest.id, shares.map(c => c[1])),
  Azure.upload(manifest.id, shares.map(c => c[2]))
]);

console.log('Backup complete. Manifest ID:', manifest.id);

Pattern 2: Scheduled Database Backup

Integrate with pg_dump or mysqldump to split database exports automatically on a cron schedule.

Postgres backup cron
#!/bin/bash
# Run daily at 2am: 0 2 * * * /opt/backup/pg-backup.sh

pg_dump production > /tmp/pg_dump.sql
node ./split-and-upload.js /tmp/pg_dump.sql
rm /tmp/pg_dump.sql

Pattern 3: Pluggable Storage Adapters

Build adapters for S3, GCS, Azure, Backblaze B2, Wasabi, or on-prem storage. Each adapter implements upload(id, shares) and download(id).
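A minimal sketch of that adapter contract, with an in-memory adapter for testing. The StorageAdapter and InMemoryAdapter names are illustrative; real adapters would wrap the S3, GCS, or Azure SDKs, and the share arrays would hold BackupShare objects:

```typescript
// Adapter contract from the pattern above: upload(id, shares) and download(id).
interface StorageAdapter {
  upload(id: string, shares: unknown[]): Promise<void>;
  download(id: string): Promise<unknown[]>;
}

// In-memory implementation, useful for unit tests and local dry runs.
class InMemoryAdapter implements StorageAdapter {
  private store = new Map<string, unknown[]>();

  async upload(id: string, shares: unknown[]): Promise<void> {
    this.store.set(id, shares);
  }

  async download(id: string): Promise<unknown[]> {
    const shares = this.store.get(id);
    if (!shares) throw new Error(`No shares stored under ${id}`);
    return shares;
  }
}

const adapter = new InMemoryAdapter();
await adapter.upload('manifest-123', ['share-a', 'share-b']);
console.log(await adapter.download('manifest-123')); // ['share-a', 'share-b']
```

Keeping the interface this small means any object store with put/get semantics can back it.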

Section 08

Security Guarantees

Information-theoretic security, HMAC integrity, end-to-end verification, and quantum resistance.

Information-Theoretic Security

XorIDA threshold secret sharing over GF(2) provides unconditional security. Any K-1 or fewer shares reveal zero information about the plaintext. This is not a computational assumption — it is mathematically provable. Shannon's perfect secrecy holds: an attacker with infinite computational power and all K-1 shares still cannot extract a single bit of information about the original data.

HMAC Integrity

Every chunk is HMAC-SHA256 signed with a random 256-bit key. The HMAC is computed over the PKCS#7-padded chunk data before XorIDA splitting. During reconstruction, HMAC verification happens before unpadding or returning data. Tampered or corrupted chunks are rejected immediately. HMAC keys are embedded in share metadata, so they reconstruct along with the data.

End-to-End File Integrity

The manifest contains a SHA-256 hash of the original file. After reconstructing all chunks and concatenating them, the hash of the reassembled file is compared against the manifest. Any byte-level corruption, chunk reordering, or omitted chunks will fail SHA-256 verification.

No Keys to Compromise

There are no encryption keys to rotate, store, or protect. The split itself is the security. Ransomware cannot exfiltrate a master key because no such key exists. Insider threats cannot decrypt backups because decryption requires collecting K shares from K independent providers — an action that leaves audit trails across multiple organizations.

Quantum Resistance

XorIDA security does not depend on computational hardness assumptions. Shor's algorithm, Grover's algorithm, and future quantum attacks cannot reduce the security of information-theoretic sharing. The security guarantee is based on the mathematical impossibility of reconstructing from K-1 shares, not on the difficulty of factoring integers or solving discrete logarithms.

| Property | Traditional AES backup | Xafe (2-of-3) |
| --- | --- | --- |
| Single provider compromise | Full data exposure (if key leaks) | Zero information disclosure |
| Key management | Requires KMS, rotation, escrow | No keys to manage |
| Quantum threat | Vulnerable if key derived from RSA/ECDH | Information-theoretically secure |
| Insider threat | Single admin with key access | Requires K-party collusion |
| Ransomware recovery | Depends on key availability | Restore from any K providers |
Section 09

Performance Benchmarks

Sub-millisecond chunk split, constant memory usage, parallel distribution.

<1ms
1MB chunk split (2-of-3)
~3MB
Peak memory (1MB chunk × 3 shares)
10K
Chunks for 10GB file
3x
Parallel provider uploads

Chunk Size Impact

| Chunk Size | 10GB file chunks | Peak memory | Split time (2-of-3) |
| --- | --- | --- | --- |
| 256 KB | 40,960 | ~768 KB | ~0.25ms per chunk |
| 1 MB (default) | 10,240 | ~3 MB | ~1ms per chunk |
| 4 MB | 2,560 | ~12 MB | ~4ms per chunk |

Threshold Configuration Impact

| Configuration | Shares per chunk | Storage overhead | Split time (1MB chunk) |
| --- | --- | --- | --- |
| 2-of-2 | 2 | 2x | ~0.7ms |
| 2-of-3 (default) | 3 | 3x | ~1ms |
| 3-of-5 | 5 | 5x | ~1.8ms |
| 4-of-7 | 7 | 7x | ~2.5ms |
Section 10

Honest Limitations

Every system has tradeoffs. Here are Xafe's.

Storage Overhead

A 2-of-3 configuration stores 3x the original file size across providers. A 3-of-5 configuration stores 5x. This is the cost of information-theoretic security and fault tolerance. Storage is cheap compared to data loss or ransomware recovery costs.

Manifest Security

The manifest contains metadata: filename, file size, total chunks, and SHA-256 file hash. The manifest is not encrypted by default. If the manifest contains sensitive filename or file size information, encrypt it at the application layer before storage. The SHA-256 hash does not leak plaintext content.

HMAC Key Exposure

HMAC keys are embedded in each share's metadata field in the format base64(key):base64(signature). An attacker with access to shares can extract HMAC keys, but this does not compromise data confidentiality — HMAC protects integrity, not confidentiality. The HMAC key reconstructs along with the chunk data, so it is only accessible to parties who can already reconstruct the chunk (i.e., those with K shares).

Chunk Ordering Responsibility

The caller is responsible for maintaining the mapping between chunks and storage providers. Xafe does not enforce or track which shares go to which provider — it is the application's responsibility to distribute shares column-wise (e.g., all share[*][0] to Provider A, all share[*][1] to Provider B).
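That caller-side bookkeeping can be sketched with two hypothetical helpers: one extracts a provider's column, the other regroups downloaded columns by chunk index before passing them to reconstructFile. Neither is part of the Xafe API:

```typescript
// Extract one provider's column from shares[chunkIndex][shareIndex].
function columnOf<T>(shares: T[][], providerIndex: number): T[] {
  return shares.map(chunk => chunk[providerIndex]);
}

// Regroup provider columns back into per-chunk share groups.
function regroupByChunk<T>(...providerColumns: T[][]): T[][] {
  const totalChunks = providerColumns[0].length;
  return Array.from({ length: totalChunks }, (_, i) =>
    providerColumns.map(col => col[i])
  );
}

// Two chunks, three shares each
const shares = [['a0', 'a1', 'a2'], ['b0', 'b1', 'b2']];
const colA = columnOf(shares, 0); // ['a0', 'b0'] → Provider A
const colB = columnOf(shares, 1); // ['a1', 'b1'] → Provider B
console.log(regroupByChunk(colA, colB)); // [['a0','a1'], ['b0','b1']]
```

The round trip preserves chunk order, which matters because SHA-256 file verification fails on reordered chunks.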

Network Dependency

Reconstruction requires downloading shares from K providers. If K-1 providers are offline or unreachable, restoration fails. This is a fundamental property of K-of-N threshold schemes. Choose K and N based on your availability requirements: 2-of-3 survives single provider outage, 3-of-5 survives two-provider outage.
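To compare K and N choices, a back-of-envelope estimate helps: assuming each provider is independently available with probability p, the scheme is available whenever at least K of N providers are up. A sketch under that independence assumption; kOfNAvailability is illustrative, not part of Xafe:

```typescript
// Binomial coefficient C(n, k).
function binom(n: number, k: number): number {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - i + 1)) / i;
  return r;
}

// P(at least k of n independent providers are up), each up with probability p.
function kOfNAvailability(k: number, n: number, p: number): number {
  let total = 0;
  for (let i = k; i <= n; i++) {
    total += binom(n, i) * p ** i * (1 - p) ** (n - i);
  }
  return total;
}

// With 99.9% per-provider availability:
console.log(kOfNAvailability(2, 3, 0.999).toFixed(6)); // 0.999997
console.log(kOfNAvailability(3, 5, 0.999));            // higher still
```

Under these assumptions 2-of-3 already improves on a single provider, and 3-of-5 tolerates two simultaneous outages as the section states.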

No Incremental Updates

Xafe does not support differential or incremental backups. Every split operation processes the entire file. For append-only workloads like database WAL files, consider chunking at the application layer and using Xafe to split individual WAL segments.

Section 11

Cloud Provider Matrix

Compatible with any S3-compatible storage, blob storage, or object storage API.

| Provider | API | Geographic diversity | Cost (per GB/month) |
| --- | --- | --- | --- |
| AWS S3 | S3 API | 26 regions | $0.023 |
| Google Cloud Storage | GCS API | 37 regions | $0.020 |
| Azure Blob Storage | Azure API | 60+ regions | $0.018 |
| Backblaze B2 | S3-compatible | US, EU | $0.005 |
| Wasabi | S3-compatible | 6 regions | $0.0059 |
| DigitalOcean Spaces | S3-compatible | 9 regions | $0.020 |
| Cloudflare R2 | S3-compatible | Global edge | $0.015 |

Recommended Configurations

Enterprise
AWS + GCS + Azure

Three major providers, independent failure domains, global regions. Cost: ~$0.06/GB/month for 2-of-3.

Cost-Optimized
Backblaze + Wasabi + Cloudflare R2

Budget providers with S3-compatible APIs. Cost: ~$0.014/GB/month for 2-of-3.

Government
AWS GovCloud + Azure Gov + On-Prem

FedRAMP-certified cloud + on-premises for air-gapped share. ITAR/CMMC compliant.

Hybrid
S3 + GCS + NAS

Two cloud providers + local network-attached storage for fast local recovery.

Section 12

Disaster Recovery Scenarios

Four failure modes and their recovery paths.

Scenario 1: Single Provider Outage

Impact: AWS S3 experiences multi-hour outage in us-east-1.
Recovery: Download shares from GCS + Azure (2-of-3). Reconstruct file. Zero data loss. Typical recovery time: under 1 hour for 10GB database.

Scenario 2: Ransomware Encrypts Production + One Backup Provider

Impact: Ransomware compromises AWS credentials, encrypts production database and AWS S3 backup shares.
Recovery: AWS shares are compromised but contain zero information (1-of-3). Download shares from GCS + Azure. Reconstruct from uncompromised providers. Production restored, ransomware thwarted.

Scenario 3: Provider Data Loss (Rare)

Impact: GCS experiences silent data corruption in one region, returns corrupted shares.
Recovery: HMAC verification fails on corrupted chunks. Xafe rejects GCS shares. Download shares from AWS + Azure. Reconstruct succeeds with integrity verified.

Scenario 4: Two Provider Simultaneous Failure (2-of-3 Threshold Breached)

Impact: AWS and GCS both unavailable due to DNS outage affecting both providers.
Recovery: Cannot reconstruct — threshold not met. Wait for either AWS or GCS to recover. This is a fundamental K-of-N property. For higher availability, use 3-of-5 or 4-of-7.

Advanced Topics

Error Handling & API Reference

Deep dive into error structures, manifest format, and full API surface.

Advanced 01

Error Handling

Result pattern with discriminated union errors. Exceptions are never thrown.

All Xafe operations return Result<T, BackupError>. The Result type is a discriminated union:

Result type definition
type Result<T, E> =
  | { ok: true;  value: T }
  | { ok: false; error: E };

type BackupError =
  | { code: 'INVALID_CONFIG';      message: string }
  | { code: 'SPLIT_FAILED';        message: string }
  | { code: 'HMAC_FAILED';         message: string }
  | { code: 'RECONSTRUCT_FAILED';  message: string }
  | { code: 'INSUFFICIENT_SHARES'; message: string }
  | { code: 'CHUNK_MISMATCH';      message: string };

Errors are values, not exceptions. Use result.ok to check success, then narrow to result.value or result.error.

Advanced 02

Manifest Format

Metadata structure for backup verification and reconstruction.

Manifest structure
interface BackupManifest {
  id: string;              // UUID v4
  filename: string;        // Original filename
  fileSize: number;        // Original file size in bytes
  totalChunks: number;     // Number of chunks
  config: {
    totalShares: number;  // N in K-of-N
    threshold: number;    // K in K-of-N
    chunkSize?: number;   // Bytes per chunk (default 1MB)
  };
  fileHash: string;        // Hex-encoded SHA-256 hash
  createdAt: string;       // ISO 8601 timestamp
}

The manifest is JSON-serializable and should be stored independently of shares. The manifest does not contain sensitive data beyond filename and file size. If filename is sensitive, encrypt the manifest at the application layer.
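Since the manifest travels as JSON, a structural check before trusting a downloaded copy is cheap. A sketch; isValidManifest is a hypothetical helper, not part of the Xafe API:

```typescript
// Verify a parsed object has the fields the BackupManifest interface requires.
function isValidManifest(m: any): boolean {
  return (
    typeof m === 'object' && m !== null &&
    typeof m.id === 'string' &&
    typeof m.filename === 'string' &&
    typeof m.fileSize === 'number' &&
    typeof m.totalChunks === 'number' &&
    typeof m.config === 'object' && m.config !== null &&
    typeof m.config.totalShares === 'number' &&
    typeof m.config.threshold === 'number' &&
    typeof m.fileHash === 'string' &&
    typeof m.createdAt === 'string'
  );
}

const parsed = JSON.parse(
  '{"id":"x","filename":"db","fileSize":1,"totalChunks":1,' +
  '"config":{"totalShares":3,"threshold":2},"fileHash":"ab",' +
  '"createdAt":"2025-01-01T00:00:00Z"}'
);
console.log(isValidManifest(parsed)); // true
console.log(isValidManifest({}));     // false
```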

Advanced 03

Full API Surface

Two primary functions, four types, six error codes.

splitFile(data: Uint8Array, filename: string, config: BackupSplitConfig): Promise<Result<BackupSplitResult, BackupError>>

Splits a file into chunks, then splits each chunk into K-of-N XorIDA shares. Generates a manifest with SHA-256 file hash. Returns shares organized as shares[chunkIndex][shareIndex].

reconstructFile(manifest: BackupManifest, shares: BackupShare[][]): Promise<Result<Uint8Array, BackupError>>

Reconstructs the original file from shares and manifest. HMAC-verifies each chunk before trusting the data. Validates SHA-256 file hash after full reconstruction.

Type Definitions

BackupSplitConfig

Configuration for split operation. Fields: totalShares (N), threshold (K), chunkSize (optional, default 1MB).

BackupShare

A single share from a backup split. Fields: index, total, threshold, chunkIndex, totalChunks, data (base64), hmac (base64 key:signature), originalSize.

BackupManifest

Metadata about a split backup. Fields: id (UUID), filename, fileSize, totalChunks, config, fileHash (SHA-256 hex), createdAt (ISO 8601).

BackupSplitResult

Result of a backup split operation. Fields: manifest (BackupManifest), shares (BackupShare[][], organized by chunk then share index).

Advanced 04

Error Code Taxonomy

Six error codes with causes and resolution paths.

| Code | Function | Cause | Resolution |
| --- | --- | --- | --- |
| INVALID_CONFIG | splitFile | totalShares < 2, threshold < 2, threshold > totalShares, chunkSize < 1 | Fix configuration: threshold must be 2 ≤ K ≤ N, N ≥ 2, chunkSize ≥ 1 |
| SPLIT_FAILED | splitFile | XorIDA threw during chunk split (internal error) | Report to developers with chunk size and threshold config |
| HMAC_FAILED | reconstructFile | HMAC-SHA256 verification failed — data corrupted or tampered | Try alternative provider shares. If all fail, data is corrupted. |
| RECONSTRUCT_FAILED | reconstructFile | XorIDA reconstruction failed, PKCS#7 unpadding invalid, or file hash mismatch | Verify shares match manifest config. Check file hash manually if needed. |
| INSUFFICIENT_SHARES | reconstructFile | Fewer than threshold shares available for a chunk | Download additional shares from another provider to meet threshold. |
| CHUNK_MISMATCH | reconstructFile | Number of chunk groups does not match manifest.totalChunks | Verify all chunks downloaded. Check share organization by chunk index. |
Advanced 05

Codebase Statistics

Package structure, test coverage, and dependency analysis.

39
Test cases
4
Source modules
0
npm dependencies
2
Peer dependencies

Module Breakdown

| Module | Lines | Purpose |
| --- | --- | --- |
| types.ts | 65 | Type definitions for config, shares, manifest, errors |
| splitter.ts | 150 | Chunk, pad, HMAC, XorIDA split, manifest generation |
| reconstructor.ts | 135 | Validate, reconstruct chunks, HMAC verify, hash check |
| errors.ts | 81 | Error code definitions and error detail structures |
| index.ts | 14 | Barrel export |

Dependencies

Zero npm runtime dependencies. Two peer dependencies:

  • @private.me/crypto — XorIDA split/reconstruct, HMAC, PKCS#7 padding, base64, UUID
  • @private.me/shared — Result<T, E> type and ok()/err() constructors

Deployment Options

📦

SDK Integration

Embed directly in your application. Runs in your codebase with full programmatic control.

  • npm install @private.me/backup
  • TypeScript/JavaScript SDK
  • Full source access
  • Enterprise support available
Get Started →
🏢

On-Premise Upon Request

Enterprise CLI for compliance, air-gap, or data residency requirements.

  • Complete data sovereignty
  • Air-gap capable deployment
  • Custom SLA + dedicated support
  • Professional services included
Request Quote →

Enterprise On-Premise Deployment

While Xafe is primarily delivered as SaaS or SDK, we build dedicated on-premise infrastructure for customers with:

  • Regulatory mandates — HIPAA, SOX, FedRAMP, CMMC requiring self-hosted processing
  • Air-gapped environments — SCIF, classified networks, offline operations
  • Data residency requirements — EU GDPR, China data laws, government mandates
  • Custom integration needs — Embed in proprietary platforms, specialized workflows

Includes: Enterprise CLI, Docker/Kubernetes orchestration, RBAC, audit logging, and dedicated support.

Contact sales for assessment and pricing →