xStore: Universal Split-Storage
Store data across pluggable backends with information-theoretic security. xStore provides a universal split-storage layer — one interface for multi-cloud, multi-jurisdiction, and air-gapped deployments. Every write is XorIDA-split. Every read is HMAC-verified. No single backend ever holds enough to reconstruct.
The Problem
Every ACI reinvents storage. Split data is scattered across ad-hoc locations. There is no standard interface, no backend portability, and no cross-ACI reuse. The result: fragile, one-off storage code duplicated across every product that handles XorIDA shares.
When an application splits data with XorIDA, it produces k-of-n shares that must be stored across independent backends. Today, each application writes its own storage logic — one stores shares on S3, another on local disk, another in a database column. There is no common abstraction, no manifest format, no integrity verification at the storage layer.
This creates three problems. First, backend lock-in — moving from AWS to Azure means rewriting storage code. Second, no portability — shares stored by one ACI cannot be consumed by another without bespoke integration. Third, no integrity guarantees — a corrupted share is discovered only at reconstruction time, when it is too late.
The PRIVATE.ME Solution
xStore is a universal split-storage layer. One StorageBackend interface. Multi-backend routing. Manifest-based metadata. HMAC integrity on every share. Write once, store anywhere.
xStore abstracts the storage of XorIDA shares behind a pluggable backend interface. You implement StorageBackend for your target — S3, Azure Blob, local filesystem, SQLite, Redis, or any custom store. xStore handles the rest: splitting data via XorIDA, routing each share to its designated backend, generating a manifest with integrity tags, and verifying HMAC-SHA256 on every read.
Backends are interchangeable. Move from S3 to GCP by swapping a backend implementation — zero changes to application code. Store Share 1 in the EU and Share 2 in the US to satisfy data residency requirements. Keep Share 3 on an air-gapped filesystem for disaster recovery. xStore does not care where the bytes go — it cares that they come back intact.
Architecture
Data flows through a five-stage pipeline: pad, authenticate, split, route, and store. Every stage is independently verifiable. The manifest ties it all together.
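The first three stages can be sketched in plain TypeScript. This is an illustrative 2-of-2 XOR split, not the real xStore internals: the HMAC key handling, PKCS7 block size, and function names are assumptions for illustration only.

```typescript
import { createHmac, randomBytes } from "node:crypto";

const BLOCK = 16; // assumed PKCS7 block size for this sketch

// Stage 1: pad — PKCS7 pads the payload to a block boundary.
function pkcs7Pad(data: Uint8Array): Uint8Array {
  const padLen = BLOCK - (data.length % BLOCK);
  const out = new Uint8Array(data.length + padLen);
  out.set(data);
  out.fill(padLen, data.length);
  return out;
}

// Stage 2: authenticate — HMAC-SHA256 tag over a byte string.
function hmacTag(key: Uint8Array, data: Uint8Array): string {
  return createHmac("sha256", key).update(data).digest("hex");
}

// Stage 3: split — share A is uniform noise, share B = padded XOR A,
// so either share alone is statistically independent of the payload.
function xorSplit(padded: Uint8Array): [Uint8Array, Uint8Array] {
  const a = new Uint8Array(randomBytes(padded.length));
  const b = new Uint8Array(padded.length);
  for (let i = 0; i < padded.length; i++) b[i] = padded[i] ^ a[i];
  return [a, b];
}

// Pipeline: pad → authenticate → split (routing and storage would follow).
const hmacKey = new Uint8Array(randomBytes(32));
const payload = new TextEncoder().encode("example");
const padded = pkcs7Pad(payload);
const payloadTag = hmacTag(hmacKey, padded);                        // whole-payload tag
const [shareA, shareB] = xorSplit(padded);
const shareTags = [shareA, shareB].map((s) => hmacTag(hmacKey, s)); // per-share tags
```

Reconstruction is the reverse: XOR the shares back together, verify the whole-payload HMAC, then strip the padding.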
Manifest Structure
Every put() operation produces a manifest — a JSON document that records the threshold parameters, per-share HMAC tags, backend identifiers, creation timestamp, and optional TTL. The manifest is the only artifact needed to reconstruct the original data. It contains no plaintext, no shares, and no keys. It is safe to store alongside application metadata.
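A manifest with the fields described above might look like the following. The exact field names in @private.me/xstore may differ; this shape is inferred from the API examples later in this document, and `isExpired` is a hypothetical helper showing how a TTL check would work.

```typescript
// Hypothetical manifest shape — field names inferred, not authoritative.
interface ShareRecord {
  backendId: string; // which backend holds this share
  key: string;       // storage key within that backend
  hmac: string;      // per-share HMAC-SHA256 tag, hex-encoded
}

interface Manifest {
  shares: ShareRecord[];
  threshold: { k: number; n: number };
  created: string;   // ISO-8601 creation timestamp
  ttl?: number;      // optional lifetime in seconds
}

// TTL check: a manifest expires once `created + ttl` is in the past.
function isExpired(m: Manifest, now: Date = new Date()): boolean {
  if (m.ttl === undefined) return false;
  return now.getTime() > Date.parse(m.created) + m.ttl * 1000;
}
```

Note that the manifest carries only routing and integrity metadata — knowing every field still reveals nothing about the payload.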
API
Three functions and one interface. That is the entire surface area. xStore is designed to disappear into your application code.
interface StorageBackend {
  /** Unique identifier for this backend (e.g. 'aws-s3-us-east-1') */
  readonly id: string;

  /** Store a share by key. Returns Result indicating success. */
  put(key: string, data: Uint8Array): Promise<Result<void, StorageError>>;

  /** Retrieve a share by key. Returns Result with data or error. */
  get(key: string): Promise<Result<Uint8Array, StorageError>>;

  /** Delete a share by key. Idempotent. */
  delete(key: string): Promise<Result<void, StorageError>>;
}
import { createStore } from '@private.me/xstore';

// Configure with backends and threshold
const store = createStore({
  backends: [s3Backend, azureBackend, fsBackend],
  threshold: { k: 2, n: 3 },
  ttl: 86400 * 30, // optional: 30-day TTL
});
// Store data — returns a manifest
const data = new TextEncoder().encode('sensitive payload');
const putResult = await store.put('doc-001', data);
if (!putResult.ok) throw putResult.error;

const manifest = putResult.value;
// manifest.shares    → [{backendId, key, hmac}, ...]
// manifest.threshold → {k: 2, n: 3}
// manifest.created   → ISO timestamp

// Retrieve data — verifies HMAC, reconstructs from k shares
const getResult = await store.get(manifest);
if (!getResult.ok) throw getResult.error;

const restored = getResult.value;
// restored === data (Uint8Array, byte-identical)
Developer Experience
xStore provides progress callbacks, detailed error codes, and multi-backend visibility. Track upload/download progress, handle failures gracefully, and debug storage orchestration with structured telemetry.
Progress Callbacks
Every put() and get() operation accepts optional progress callbacks that emit per-backend status updates. This enables real-time UI feedback for large payloads or multi-region uploads.
const result = await store.put('large-dataset', data, {
onProgress: (event) => {
// Per-backend progress updates
console.log(`Backend ${event.backendId}: ${event.shareIndex}/${event.totalShares}`);
console.log(`Status: ${event.status}`); // 'pending' | 'uploading' | 'complete' | 'failed'
console.log(`Progress: ${event.bytesTransferred}/${event.totalBytes} bytes`);
}
});
const result = await store.get(manifest, {
onProgress: (event) => {
// Shows which backends are being contacted for reconstruction
console.log(`Fetching share ${event.shareIndex} from ${event.backendId}`);
if (event.status === 'complete') {
console.log(`Downloaded: ${event.bytesTransferred} bytes in ${event.durationMs}ms`);
}
if (event.status === 'failed') {
console.warn(`Backend ${event.backendId} failed, retrying others...`);
}
}
});
Error Codes
xStore errors are typed, categorized, and actionable. Every error includes a machine-readable code, human-readable message, and recovery guidance. No vague "storage failed" errors.
| Category | Error Code | Recovery Guidance |
|---|---|---|
| Backend Management | BACKEND_UNAVAILABLE | Retry with exponential backoff. xStore will auto-failover to other backends if the threshold allows. |
| Backend Management | BACKEND_QUOTA_EXCEEDED | Backend storage quota reached. Add capacity or configure TTL-based expiry to reclaim space. |
| Backend Management | BACKEND_AUTH_FAILED | Backend credentials invalid or expired. Refresh credentials and retry. |
| Backend Management | INSUFFICIENT_BACKENDS | Fewer than n backends available for the threshold. Add more storage backends or reduce the threshold. |
| Backend Management | BACKEND_TIMEOUT | Backend did not respond within the timeout window. Check network connectivity or increase the timeout config. |
| Share Operations | SHARE_INTEGRITY_FAILED | HMAC verification failed on a retrieved share. The share was tampered with or corrupted. Request a fresh share from other backends. |
| Share Operations | SHARE_NOT_FOUND | Share missing from backend (possibly expired or deleted). Retrieve from other backends if the threshold allows. |
| Share Operations | SHARE_UPLOAD_FAILED | Failed to write the share to the backend. Retry or fail over to an alternate backend. |
| Share Operations | INSUFFICIENT_SHARES | Fewer than k valid shares retrieved. Cannot reconstruct. Check backend availability. |
| Share Operations | SHARE_VERSION_MISMATCH | Share format version incompatible with the current xStore implementation. Upgrade xStore or migrate data. |
| Data Operations | MANIFEST_INVALID | Manifest missing required fields or HMAC verification failed. The manifest was corrupted or tampered with. |
| Data Operations | MANIFEST_EXPIRED | TTL exceeded. Data may have been garbage-collected from backends. Re-upload if the source is available. |
| Data Operations | RECONSTRUCTION_FAILED | XorIDA reconstruction failed despite k valid shares. Report to maintainers (should never happen). |
| Data Operations | PADDING_INVALID | PKCS7 padding verification failed after reconstruction. Data corrupted or wrong HMAC key used. |
| Data Operations | PAYLOAD_HMAC_FAILED | Whole-payload HMAC verification failed after reconstruction. Data was tampered with or the wrong key was used. |
const result = await store.get(manifest);
if (!result.ok) {
const error = result.error;
switch (error.code) {
case 'SHARE_INTEGRITY_FAILED':
// Log tamper attempt, alert security team
logger.alert('Share tampered', { backendId: error.backendId });
break;
case 'BACKEND_UNAVAILABLE':
// Retry with exponential backoff
await retryWithBackoff(() => store.get(manifest));
break;
case 'INSUFFICIENT_BACKENDS':
// Suggest user action: add more storage backends
ui.showAlert('Add more storage backends to meet threshold');
break;
default:
// Generic fallback
console.error(`xStore error: ${error.message}`);
}
}
Split-Storage Orchestration Example
This example demonstrates how xStore orchestrates multi-backend storage operations with full visibility into the threshold-sharing process.
import { createStore } from '@private.me/xstore';
import { S3Backend, AzureBlobBackend, GCPStorageBackend } from './backends';
// Configure three cloud backends
const s3 = new S3Backend({ region: 'us-east-1', bucket: 'private-shares' });
const azure = new AzureBlobBackend({ account: 'eu-storage', container: 'shares' });
const gcp = new GCPStorageBackend({ project: 'xstore-prod', bucket: 'apac-shares' });
// Universal split-storage layer: 2-of-3 threshold across clouds
const store = createStore({
backends: [s3, azure, gcp],
threshold: { k: 2, n: 3 },
ttl: 86400 * 90, // 90-day retention
});
// Track upload progress across all three backends
const data = new Uint8Array(1024 * 1024 * 10); // 10MB dataset
const uploadProgress = new Map();
const putResult = await store.put('patient-genome-12345', data, {
onProgress: (event) => {
uploadProgress.set(event.backendId, event);
// Real-time visibility into multi-cloud orchestration
const completed = [...uploadProgress.values()]
.filter(e => e.status === 'complete').length;
console.log(`Upload: ${completed}/3 backends complete`);
console.log(` AWS: ${uploadProgress.get('s3')?.bytesTransferred || 0} bytes`);
console.log(` Azure: ${uploadProgress.get('azure')?.bytesTransferred || 0} bytes`);
console.log(` GCP: ${uploadProgress.get('gcp')?.bytesTransferred || 0} bytes`);
}
});
if (!putResult.ok) {
throw new Error(`Storage failed: ${putResult.error.message}`);
}
// Store manifest for retrieval (manifest contains backend routing + HMAC tags)
const manifest = putResult.value;
// Later: retrieve data from any 2 of the 3 backends
// Even if AWS S3 goes down, Azure + GCP can still reconstruct
const getResult = await store.get(manifest, {
onProgress: (event) => {
if (event.status === 'failed') {
console.warn(`Backend ${event.backendId} unavailable, using others...`);
}
}
});
if (!getResult.ok) {
throw new Error(`Retrieval failed: ${getResult.error.message}`);
}
// getResult.value === data (byte-identical, HMAC-verified)
Use Cases
Cloud Agnostic
Store shares across AWS S3, Azure Blob, and GCP Cloud Storage. No single provider sees your data. A compromised cloud account reveals only cryptographic noise.
Data Residency
EU data stays in EU backends. US data stays in US backends. Both protected by the same threshold. Satisfy GDPR, CCPA, and data residency requirements with one storage layer.
Air-Gapped
Offline filesystem backends for classified or regulated data. Shares never touch a network. Combine with online backends for hybrid resilience — k-of-n means you only need a subset to recover.
Data Asset Protection
Split proprietary datasets across independent storage backends to prevent database copying and exfiltration. No single backend breach yields usable data. Ship to customers as threshold-split shares — prevents unauthorized duplication and resale. IP protection for data vendors.
Cross-ACI Composition
xStore is the storage layer that other ACIs build on. Instead of each product reinventing how to persist XorIDA shares, they delegate to xStore and focus on their domain logic.
Enhanced Identity with Xid
xStore can optionally integrate with Xid to enable unlinkable data access patterns — verifiable within each storage context, but uncorrelatable across shards, time, or access requests.
Market Positioning
| Industry | Use Case | Compliance Driver |
|---|---|---|
| Healthcare | Patient data access with unlinkable audit trails | HIPAA, 42 CFR Part 2, HITECH |
| Finance | Trading data access with uncorrelatable queries | GLBA, SOX, MiFID II, SEC 17a-4 |
| Government | Classified storage with IAL3 access control | FISMA, CJIS, FedRAMP, Zero Trust |
| EU Compliance | GDPR data minimization, unlinkable access logs | eIDAS 2.0 ARF, DORA, GDPR Art. 5 |
Security Properties
xStore inherits XorIDA's information-theoretic guarantees and adds storage-layer integrity, TTL expiry, manifest authentication, and backend isolation.
Core Guarantees
| Property | Mechanism | Guarantee |
|---|---|---|
| Confidentiality | XorIDA k-of-n splitting | Any k-1 shares reveal zero information (information-theoretic) |
| Integrity (share) | HMAC-SHA256 per share | Tampered shares rejected before reconstruction |
| Integrity (payload) | HMAC-SHA256 over padded data | Whole-payload verification after reconstruction |
| Backend Isolation | One share per backend | Compromising one backend yields only noise |
| TTL Expiry | Manifest timestamp + TTL | Expired manifests rejected; backends can garbage-collect |
| Manifest Authenticity | HMAC over manifest fields | Tampered manifests detected before any share fetch |
Traditional Encryption vs. xStore Split-Storage
| Dimension | Traditional Encryption | xStore Split-Storage |
|---|---|---|
| Key Management | Requires secure key storage, rotation, and distribution | No keys — the split IS the security |
| Single Point of Trust | One key compromise exposes all data | Must compromise k-of-n independent backends |
| Quantum Resistance | AES key strength effectively halved by Grover's algorithm | Information-theoretic — unconditionally secure |
| Breach Impact | Full plaintext exposure | Noise — zero information per share |
| Backend Portability | Re-encrypt on migration | Swap backend, keep shares — no re-encryption |
| Compliance Audit | Must prove key was not compromised | Must prove k backends were not simultaneously compromised |
Benchmarks
Performance characteristics measured on Node.js 22, Apple M2. xStore adds minimal overhead to storage operations while eliminating single-point-of-compromise risk.
| Operation | Time | Notes |
|---|---|---|
| XorIDA split (1 KB) | ~58µs | 2-of-2 threshold split over GF(2) |
| HMAC-SHA256 tag per share | <0.1ms | Integrity verification before routing |
| Route to local filesystem | <1ms | Direct write, no network overhead |
| Route to S3 | ~50–200ms | Network-dependent, regional endpoint |
| Route to Azure Blob | ~50–200ms | Network-dependent, regional endpoint |
| Retrieve from backends | ~1–200ms | Parallel retrieval, slowest backend dominates |
| HMAC verify + reconstruct | <1ms | Verify before XOR reconstruction |
| Full store pipeline (local) | ~1ms | Split → HMAC → route → verify → reconstruct |
| Full store pipeline (cloud) | ~200–400ms | Dominated by network round-trip to cloud backends |
Storage Architecture Comparison
| Property | Encrypted Blob Storage | Client-side Encryption | xStore |
|---|---|---|---|
| Breach exposure | Full data if key stolen | Full data if key stolen | Individual share is meaningless |
| Key management | Required (KMS) | Required (client-side) | No keys — the split IS the security |
| Quantum resistance | AES key strength halved by Grover | AES key strength halved by Grover | Information-theoretic — unconditionally secure |
| Vendor lock-in | Single provider | Provider-agnostic encryption | Multi-backend by design |
| Storage overhead | ~0% (encrypted = same size) | ~0% | ~100% (2 shares = 2x data) |
Ship Proofs, Not Source
xStore generates cryptographic proofs of correct execution without exposing proprietary algorithms. Verify integrity using zero-knowledge proofs — no source code required.
- Tier 1 HMAC (~0.7KB)
- Tier 2 Commit-Reveal (~0.5KB)
- Tier 3 IT-MAC (~0.3KB)
- Tier 4 KKW ZK (~0.4KB)
Honest Limitations
Five known limitations documented transparently. xStore trades storage efficiency for security guarantees.
| Limitation | Impact | Mitigation |
|---|---|---|
| Network latency for remote backends | Cloud storage backends (S3, Azure Blob) add 50–200ms per operation. The slowest backend determines overall pipeline latency. | Parallel retrieval minimizes impact. Local caching of frequently-accessed shares reduces repeat access latency. Use local backends for latency-critical workloads. |
| No built-in replication | xStore routes shares to backends but does not replicate within a backend. If a backend loses data, the share is gone. | Cloud backends (S3, Azure Blob) provide their own replication. For local backends, use filesystem-level replication or RAID. 2-of-3 configurations provide share-level redundancy. |
| Adapter implementation required | Each new storage backend requires a custom adapter implementing the StorageBackend interface. No universal connector exists. | The adapter interface is minimal (put/get/delete). Reference adapters for local, S3, and Azure Blob serve as templates. Community adapters can be contributed for additional backends. |
| No streaming for large files | The entire file must fit in memory for XorIDA splitting. Files larger than available RAM cannot be processed. | Chunk large files before splitting. xStore can store each chunk independently with sequential chunk IDs. Reassemble chunks after reconstruction. |
| No cross-backend consistency | xStore does not guarantee atomic writes across backends. A failure mid-write can leave shares in an inconsistent state. | HMAC verification detects incomplete writes. Applications should implement write-ahead logging for critical data. Retrieval fails safely if any share is missing or corrupted. |
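The chunking mitigation above can be sketched as follows. The chunk size and the idea of storing each chunk under a sequential key are illustrative choices, not xStore defaults; each chunk would get its own manifest, and the application keeps the ordered manifest list.

```typescript
// Sketch: split a large payload into fixed-size chunks so each chunk
// fits in memory for XorIDA splitting, then reassemble after retrieval.
const CHUNK = 4 * 1024 * 1024; // 4 MiB per chunk — tune to available RAM

function toChunks(data: Uint8Array, size: number = CHUNK): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let off = 0; off < data.length; off += size) {
    // subarray creates a view, avoiding a second full copy in memory
    chunks.push(data.subarray(off, Math.min(off + size, data.length)));
  }
  return chunks;
}

function reassemble(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Uint8Array(total);
  let off = 0;
  for (const c of chunks) {
    out.set(c, off);
    off += c.length;
  }
  return out;
}
```

Each chunk would then be stored independently (e.g. keys like `doc-001/chunk-0`, `doc-001/chunk-1`, ...) and reassembled in order after reconstruction.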
Verifiable Split-Storage
Every xStore operation produces integrity artifacts that xProve can chain into a verifiable audit trail. Prove that data was stored correctly, retrieved intact, and never tampered with — without revealing the data itself.
Read the xProve white paper →
xStore Enterprise CLI
Self-hosted split-storage server. Deploy xStore on your own infrastructure with Docker-ready, air-gapped capable deployment.
Get Started
Install xStore, implement a backend, and start storing threshold-split data in minutes.
npm install @private.me/xstore
import { createStore } from '@private.me/xstore';
import { S3Backend } from './backends/s3';
import { AzureBackend } from './backends/azure';
import { FsBackend } from './backends/fs';

// Create store with 2-of-3 threshold
const store = createStore({
  backends: [
    new S3Backend({ bucket: 'shares-us-east' }),
    new AzureBackend({ container: 'shares-eu-west' }),
    new FsBackend({ path: '/mnt/airgap/shares' }),
  ],
  threshold: { k: 2, n: 3 },
});

// Store sensitive data
const data = new TextEncoder().encode('patient record #4291');
const { value: manifest } = await store.put('record-4291', data);

// Retrieve — HMAC-verified, reconstructed from any 2 of 3 backends
const { value: restored } = await store.get(manifest);
// restored is byte-identical to data
Ready to deploy xStore?
Talk to Sol, our AI platform engineer, or book a live demo with our team.