
xStore: Universal Split-Storage

Store data across pluggable backends with information-theoretic security. xStore provides a universal split-storage layer — one interface for multi-cloud, multi-jurisdiction, and air-gapped deployments. Every write is XorIDA-split. Every read is HMAC-verified. No single backend ever holds enough to reconstruct.

v0.1.0 @private.me/xstore PLATFORM ACI XorIDA-POWERED
Section 01

The Problem

Every ACI reinvents storage. Split data is scattered across ad-hoc locations. There is no standard interface, no backend portability, and no cross-ACI reuse. The result: fragile, one-off storage code duplicated across every product that handles XorIDA shares.

When an application splits data with XorIDA, it produces k-of-n shares that must be stored across independent backends. Today, each application writes its own storage logic — one stores shares on S3, another on local disk, another in a database column. There is no common abstraction, no manifest format, no integrity verification at the storage layer.

This creates three problems. First, backend lock-in — moving from AWS to Azure means rewriting storage code. Second, no portability — shares stored by one ACI cannot be consumed by another without bespoke integration. Third, no integrity guarantees — a corrupted share is discovered only at reconstruction time, when it is too late.

The Old Way

[Diagram: the old way — App A with custom S3 code, custom FS code, and custom DB code. Every app writes its own storage layer.]
Section 02

The PRIVATE.ME Solution

xStore is a universal split-storage layer. One StorageBackend interface. Multi-backend routing. Manifest-based metadata. HMAC integrity on every share. Write once, store anywhere.

xStore abstracts the storage of XorIDA shares behind a pluggable backend interface. You implement StorageBackend for your target — S3, Azure Blob, local filesystem, SQLite, Redis, or any custom store. xStore handles the rest: splitting data via XorIDA, routing each share to its designated backend, generating a manifest with integrity tags, and verifying HMAC-SHA256 on every read.

Backends are interchangeable. Move from S3 to GCP by swapping a backend implementation — zero changes to application code. Store Share 1 in the EU and Share 2 in the US to satisfy data residency requirements. Keep Share 3 on an air-gapped filesystem for disaster recovery. xStore does not care where the bytes go — it cares that they come back intact.

Design Principle
xStore is infrastructure, not policy. It does not decide where shares are stored — your application does. It does not decide how many shares exist — XorIDA thresholds do. It guarantees that shares stored through it are integrity-verified, manifest-tracked, and backend-portable.
Section 03

Architecture

Data flows through a five-stage pipeline: pad, authenticate, split, route, and store. Every stage is independently verifiable. The manifest ties it all together.

[Diagram: xStore pipeline — DATA → PKCS7 PAD → HMAC-SHA256 → XorIDA SPLIT → BACKEND 1 (e.g. AWS S3), BACKEND 2 (e.g. Azure Blob), … BACKEND N (e.g. local FS). The MANIFEST records threshold, share HMACs, backend refs, TTL, and creation time.]
~14µs 64B split time · k-of-n threshold · 0 bits per-share leakage · HMAC-SHA256 integrity

Manifest Structure

Every put() operation produces a manifest — a JSON document that records the threshold parameters, per-share HMAC tags, backend identifiers, creation timestamp, and optional TTL. The manifest is the only artifact needed to reconstruct the original data. It contains no plaintext, no shares, and no keys. It is safe to store alongside application metadata.
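The exact schema is defined by the library; the shape below is an illustrative sketch assembled from the fields described above and the `put()` example in Section 04 — it is not the authoritative `@private.me/xstore` type.

```typescript
// Illustrative manifest shape — a sketch, not the library's exported schema.
interface ShareRef {
  backendId: string;   // which backend holds this share
  key: string;         // backend-local key for the share
  hmac: string;        // HMAC-SHA256 tag for the share, hex-encoded
}

interface Manifest {
  threshold: { k: number; n: number };
  shares: ShareRef[];
  created: string;     // ISO-8601 timestamp
  ttl?: number;        // optional, in seconds
}

const example: Manifest = {
  threshold: { k: 2, n: 3 },
  shares: [
    { backendId: "aws-s3-us-east-1", key: "doc-001/share-0", hmac: "9f2c…" },
    { backendId: "azure-blob-eu",    key: "doc-001/share-1", hmac: "41d8…" },
    { backendId: "fs-airgap",        key: "doc-001/share-2", hmac: "c07a…" },
  ],
  created: "2025-01-15T12:00:00Z",
  ttl: 86400 * 30,
};
```

Note what is absent: no plaintext, no share bytes, no keys — only routing and verification metadata.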

Manifest Is Not a Secret
The manifest tells xStore where to find shares and how to verify them. It does not contain any share data. An attacker who obtains the manifest still needs to compromise k-of-n independent backends to reconstruct anything. The manifest is metadata — treat it like a database index, not like a key.
Section 04

API

One factory function, three store methods, and one backend interface. That is the entire surface area. xStore is designed to disappear into your application code.

StorageBackend Interface
interface StorageBackend {
  /** Unique identifier for this backend (e.g. 'aws-s3-us-east-1') */
  readonly id: string;

  /** Store a share by key. Returns Result indicating success. */
  put(key: string, data: Uint8Array): Promise<Result<void, StorageError>>;

  /** Retrieve a share by key. Returns Result with data or error. */
  get(key: string): Promise<Result<Uint8Array, StorageError>>;

  /** Delete a share by key. Idempotent. */
  delete(key: string): Promise<Result<void, StorageError>>;
}
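For illustration, here is a minimal in-memory backend satisfying the interface. The `Result` and `StorageError` shapes are sketched locally as plain discriminated unions; the actual types exported by `@private.me/xstore` may differ.

```typescript
// Sketch only: Result and StorageError here are assumptions, not the library's types.
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };
interface StorageError { code: string; message: string }

class MemoryBackend {
  readonly id: string;
  private readonly shares = new Map<string, Uint8Array>();

  constructor(id: string) {
    this.id = id;
  }

  async put(key: string, data: Uint8Array): Promise<Result<void, StorageError>> {
    this.shares.set(key, data.slice()); // defensive copy
    return { ok: true, value: undefined };
  }

  async get(key: string): Promise<Result<Uint8Array, StorageError>> {
    const share = this.shares.get(key);
    if (!share) {
      return { ok: false, error: { code: "SHARE_NOT_FOUND", message: `no share at ${key}` } };
    }
    return { ok: true, value: share.slice() };
  }

  async delete(key: string): Promise<Result<void, StorageError>> {
    this.shares.delete(key); // idempotent: deleting a missing key still succeeds
    return { ok: true, value: undefined };
  }
}
```

A backend like this is useful in tests; production adapters wrap S3, Azure Blob, or a filesystem behind the same three methods.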
Create a Store
import { createStore } from '@private.me/xstore';

// Configure with backends and threshold
const store = createStore({
  backends: [s3Backend, azureBackend, fsBackend],
  threshold: { k: 2, n: 3 },
  ttl: 86400 * 30,  // optional: 30-day TTL
});
Put and Get
// Store data — returns a manifest
const data = new TextEncoder().encode('sensitive payload');
const putResult = await store.put('doc-001', data);
if (!putResult.ok) throw putResult.error;
const manifest = putResult.value;
// manifest.shares → [{backendId, key, hmac}, ...]
// manifest.threshold → {k: 2, n: 3}
// manifest.created → ISO timestamp

// Retrieve data — verifies HMAC, reconstructs from k shares
const getResult = await store.get(manifest);
if (!getResult.ok) throw getResult.error;
const restored = getResult.value;
// restored is byte-identical to data (Uint8Array)
createStore(config: StoreConfig): XStore
Creates an xStore instance. Requires an array of StorageBackend implementations and a threshold configuration (k, n). Optional: TTL in seconds, custom HMAC key. Validates that backends.length >= n. Returns an XStore with put(), get(), and delete() methods.
store.put(key: string, data: Uint8Array): Promise<Result<Manifest, StoreError>>
Pads data (PKCS7), computes HMAC-SHA256 over padded payload, splits via XorIDA into n shares, routes each share to its designated backend, and returns a manifest containing per-share HMAC tags, backend references, threshold parameters, and creation timestamp.
store.get(manifest: Manifest): Promise<Result<Uint8Array, StoreError>>
Reads k shares from backends (referenced in manifest), verifies HMAC-SHA256 on each share before reconstruction, reconstructs original data via XorIDA, strips PKCS7 padding, and verifies the whole-payload HMAC. Returns the original byte-identical data or a typed error.
store.delete(manifest: Manifest): Promise<Result<void, StoreError>>
Deletes all shares referenced in the manifest from their respective backends. Idempotent — succeeds even if some shares are already gone. Returns a typed error only if a backend is unreachable.
Section 05

Developer Experience

xStore provides progress callbacks, detailed error codes, and multi-backend visibility. Track upload/download progress, handle failures gracefully, and debug storage orchestration with structured telemetry.

Progress Callbacks

Every put() and get() operation accepts optional progress callbacks that emit per-backend status updates. This enables real-time UI feedback for large payloads or multi-region uploads.

Track Upload Progress Across Backends
const result = await store.put('large-dataset', data, {
  onProgress: (event) => {
    // Per-backend progress updates
    console.log(`Backend ${event.backendId}: ${event.shareIndex}/${event.totalShares}`);
    console.log(`Status: ${event.status}`);  // 'pending' | 'uploading' | 'complete' | 'failed'
    console.log(`Progress: ${event.bytesTransferred}/${event.totalBytes} bytes`);
  }
});
Track Download Progress with Multi-Backend Visibility
const result = await store.get(manifest, {
  onProgress: (event) => {
    // Shows which backends are being contacted for reconstruction
    console.log(`Fetching share ${event.shareIndex} from ${event.backendId}`);

    if (event.status === 'complete') {
      console.log(`Downloaded: ${event.bytesTransferred} bytes in ${event.durationMs}ms`);
    }

    if (event.status === 'failed') {
      console.warn(`Backend ${event.backendId} failed, retrying others...`);
    }
  }
});
Split-Storage Orchestration Visibility
Progress callbacks expose the parallel multi-backend workflow that xStore orchestrates behind the scenes. You see exactly which shares are being written to which backends, in real-time. This is critical for debugging multi-cloud deployments, monitoring latency across regions, and providing users with meaningful progress indicators for large datasets.

Error Codes

xStore errors are typed, categorized, and actionable. Every error includes a machine-readable code, human-readable message, and recovery guidance. No vague "storage failed" errors.

| Category | Error Code | Recovery Guidance |
|---|---|---|
| Backend Management | BACKEND_UNAVAILABLE | Retry with exponential backoff. xStore will auto-failover to other backends if the threshold allows. |
| Backend Management | BACKEND_QUOTA_EXCEEDED | Backend storage quota reached. Add capacity or configure TTL-based expiry to reclaim space. |
| Backend Management | BACKEND_AUTH_FAILED | Backend credentials invalid or expired. Refresh credentials and retry. |
| Backend Management | INSUFFICIENT_BACKENDS | Fewer than n backends available for the threshold. Add more storage backends or reduce the threshold. |
| Backend Management | BACKEND_TIMEOUT | Backend did not respond within the timeout window. Check network connectivity or increase the timeout config. |
| Share Operations | SHARE_INTEGRITY_FAILED | HMAC verification failed on a retrieved share. The share was tampered with or corrupted. Request a fresh share from other backends. |
| Share Operations | SHARE_NOT_FOUND | Share missing from backend (possibly expired or deleted). Retrieve from other backends if the threshold allows. |
| Share Operations | SHARE_UPLOAD_FAILED | Failed to write a share to a backend. Retry or fail over to an alternate backend. |
| Share Operations | INSUFFICIENT_SHARES | Fewer than k valid shares retrieved. Cannot reconstruct. Check backend availability. |
| Share Operations | SHARE_VERSION_MISMATCH | Share format version incompatible with the current xStore implementation. Upgrade xStore or migrate data. |
| Data Operations | MANIFEST_INVALID | Manifest missing required fields or HMAC verification failed. The manifest was corrupted or tampered with. |
| Data Operations | MANIFEST_EXPIRED | TTL exceeded. Data may have been garbage-collected from backends. Re-upload if the source is available. |
| Data Operations | RECONSTRUCTION_FAILED | XorIDA reconstruction failed despite k valid shares. Report to the maintainers (should never happen). |
| Data Operations | PADDING_INVALID | PKCS7 padding verification failed after reconstruction. Data corrupted or wrong HMAC key used. |
| Data Operations | PAYLOAD_HMAC_FAILED | Whole-payload HMAC verification failed after reconstruction. Data was tampered with or the wrong key was used. |
Structured Error Handling
const result = await store.get(manifest);

if (!result.ok) {
  const error = result.error;

  switch (error.code) {
    case 'SHARE_INTEGRITY_FAILED':
      // Log tamper attempt, alert security team
      logger.alert('Share tampered', { backendId: error.backendId });
      break;

    case 'BACKEND_UNAVAILABLE':
      // Retry with exponential backoff
      await retryWithBackoff(() => store.get(manifest));
      break;

    case 'INSUFFICIENT_BACKENDS':
      // Suggest user action: add more storage backends
      ui.showAlert('Add more storage backends to meet threshold');
      break;

    default:
      // Generic fallback
      console.error(`xStore error: ${error.message}`);
  }
}

Split-Storage Orchestration Example

This example demonstrates how xStore orchestrates multi-backend storage operations with full visibility into the threshold-sharing process.

Multi-Cloud 2-of-3 Storage with Progress Tracking
import { createStore } from '@private.me/xstore';
import { S3Backend, AzureBlobBackend, GCPStorageBackend } from './backends';

// Configure three cloud backends
const s3 = new S3Backend({ region: 'us-east-1', bucket: 'private-shares' });
const azure = new AzureBlobBackend({ account: 'eu-storage', container: 'shares' });
const gcp = new GCPStorageBackend({ project: 'xstore-prod', bucket: 'apac-shares' });

// Universal split-storage layer: 2-of-3 threshold across clouds
const store = createStore({
  backends: [s3, azure, gcp],
  threshold: { k: 2, n: 3 },
  ttl: 86400 * 90,  // 90-day retention
});

// Track upload progress across all three backends
const data = new Uint8Array(1024 * 1024 * 10);  // 10MB dataset
const uploadProgress = new Map();

const putResult = await store.put('patient-genome-12345', data, {
  onProgress: (event) => {
    uploadProgress.set(event.backendId, event);

    // Real-time visibility into multi-cloud orchestration
    const completed = [...uploadProgress.values()]
      .filter(e => e.status === 'complete').length;

    console.log(`Upload: ${completed}/3 backends complete`);
    console.log(`  AWS: ${uploadProgress.get('s3')?.bytesTransferred || 0} bytes`);
    console.log(`  Azure: ${uploadProgress.get('azure')?.bytesTransferred || 0} bytes`);
    console.log(`  GCP: ${uploadProgress.get('gcp')?.bytesTransferred || 0} bytes`);
  }
});

if (!putResult.ok) {
  throw new Error(`Storage failed: ${putResult.error.message}`);
}

// Store manifest for retrieval (manifest contains backend routing + HMAC tags)
const manifest = putResult.value;

// Later: retrieve data from any 2 of the 3 backends
// Even if AWS S3 goes down, Azure + GCP can still reconstruct
const getResult = await store.get(manifest, {
  onProgress: (event) => {
    if (event.status === 'failed') {
      console.warn(`Backend ${event.backendId} unavailable, using others...`);
    }
  }
});

if (!getResult.ok) {
  throw new Error(`Retrieval failed: ${getResult.error.message}`);
}

// getResult.value is byte-identical to data (HMAC-verified)
Universal Split-Storage in Action
This example shows xStore's core value: one interface, multi-backend orchestration, information-theoretic security. The same code works with S3, Azure Blob, GCP Cloud Storage, filesystem, or custom backends. No single cloud provider ever sees enough to reconstruct. Progress callbacks provide real-time visibility into the parallel upload/download workflow across regions. Error codes enable graceful degradation when backends fail. This is the universal split-storage layer the entire PRIVATE.ME platform builds on.
Section 06

Use Cases

☁️
INFRASTRUCTURE
Multi-Cloud

Store shares across AWS S3, Azure Blob, and GCP Cloud Storage. No single provider sees your data. A compromised cloud account reveals only cryptographic noise.

Cloud Agnostic
🌍
COMPLIANCE
Multi-Jurisdiction

EU data stays in EU backends. US data stays in US backends. Both protected by the same threshold. Satisfy GDPR, CCPA, and data residency requirements with one storage layer.

Data Residency
🔒
SECURITY
Air-Gapped Backup

Offline filesystem backends for classified or regulated data. Shares never touch a network. Combine with online backends for hybrid resilience — k-of-n means you only need a subset to recover.

Air-Gapped
📦
DATA ASSETS (ANTI-PIRACY)
Protected Database Exfiltration Prevention

Split proprietary datasets across independent storage backends to prevent database copying and exfiltration. No single backend breach yields usable data. Ship to customers as threshold-split shares — prevents unauthorized duplication and resale. IP protection for data vendors.

Data Asset Protection
Section 07

Cross-ACI Composition

xStore is the storage layer that other ACIs build on. Instead of each product reinventing how to persist XorIDA shares, they delegate to xStore and focus on their domain logic.

XGHOST + XSTORE
xGhost splits algorithms into shares for ephemeral protection. xStore handles where those shares live — vendor share on S3, customer share in the npm package, backup share on an air-gapped volume. xGhost focuses on reconstruct/execute/purge. xStore focuses on durable, portable, integrity-verified persistence.
XREDACT + XSTORE
xRedact produces redacted inference results that must be stored for audit and compliance. xStore persists the redacted shares across jurisdiction-aware backends — ensuring that no single storage location holds a complete redacted document.
XPROVE + XSTORE
xProve generates cryptographic proofs (HMAC chains, commit-reveal, MPC-in-the-Head). Proof artifacts can be large and must be durable. xStore provides threshold-split storage for proof bundles — tamper-evident at both the proof layer and the storage layer.
VAULTDROP + XSTORE
VaultDrop provides encrypted backup and recovery workflows. Under the hood, VaultDrop uses xStore to persist backup shares across geographically distributed backends. The user sees “backup to cloud.” The system stores k-of-n shares across three continents.
Section 08

Enhanced Identity with Xid

xStore can optionally integrate with Xid to enable unlinkable data access patterns — verifiable within each storage context, but uncorrelatable across shards, time, or access requests.

Three Identity Modes

Basic (Default)
Static Accessor DIDs
One DID per accessor, persistent across all data operations and shards. Simple, but linkable — same identity can be tracked across storage backends, queries, and time.
Current xStore behavior
xStore+ (With Xid)
Ephemeral Per-Access DIDs
Each data operation gets a unique DID derived from an XorIDA-split master seed. DIDs are unlinkable across shards and rotate per epoch. Verifiable within shard context, but cross-shard correlation is impossible. ~50µs overhead per access.
Unlinkable access patterns
xStore Enterprise
K-of-N Converged High-Assurance Data Access
Require 3-of-5 signals (biometric + device TPM + location + time + YubiKey) to generate an access DID. IAL2/3 assurance levels for classified, patient, or financial data. Continuous refresh ensures only authorized personnel can access sensitive shards.
IAL2/3 compliance
Integration Pattern
xStore+ is not a new ACI — it's an integration of two existing ACIs (xStore + Xid). This demonstrates ACI composability — building blocks combine to create enhanced capabilities without requiring new primitives.

Market Positioning

| Industry | Use Case | Compliance Driver |
|---|---|---|
| Healthcare | Patient data access with unlinkable audit trails | HIPAA, 42 CFR Part 2, HITECH |
| Finance | Trading data access with uncorrelatable queries | GLBA, SOX, MiFID II, SEC 17a-4 |
| Government | Classified storage with IAL3 access control | FISMA, CJIS, FedRAMP, Zero Trust |
| EU Compliance | GDPR data minimization, unlinkable access logs | eIDAS 2.0 ARF, DORA, GDPR Art. 5 |
Section 09

Security Properties

xStore inherits XorIDA's information-theoretic guarantees and adds storage-layer integrity, TTL expiry, manifest authentication, and backend isolation.

Core Guarantees

| Property | Mechanism | Guarantee |
|---|---|---|
| Confidentiality | XorIDA k-of-n splitting | Any k−1 shares reveal zero information (information-theoretic) |
| Integrity (share) | HMAC-SHA256 per share | Tampered shares rejected before reconstruction |
| Integrity (payload) | HMAC-SHA256 over padded data | Whole-payload verification after reconstruction |
| Backend Isolation | One share per backend | Compromising one backend yields only noise |
| TTL Expiry | Manifest timestamp + TTL | Expired manifests rejected; backends can garbage-collect |
| Manifest Authenticity | HMAC over manifest fields | Tampered manifests detected before any share fetch |

Traditional Encryption vs. xStore Split-Storage

| Dimension | Traditional Encryption | xStore Split-Storage |
|---|---|---|
| Key Management | Requires secure key storage, rotation, and distribution | No keys — the split IS the security |
| Single Point of Trust | One key compromise exposes all data | Must compromise k-of-n independent backends |
| Quantum Resistance | AES vulnerable to Grover's algorithm (halved effective key strength) | Information-theoretic — unconditionally secure |
| Breach Impact | Full plaintext exposure | Noise — zero information per share |
| Backend Portability | Re-encrypt on migration | Swap backend, keep shares — no re-encryption |
| Compliance Audit | Must prove the key was not compromised | Must prove k backends were not simultaneously compromised |
HMAC Before Reconstruction
xStore verifies the HMAC-SHA256 tag on every share before passing it to XorIDA for reconstruction. A corrupted or tampered share is rejected at the storage layer, not at the application layer. This is a hard rule — no share enters the reconstruction pipeline without passing integrity verification.
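A sketch of this rule using Node's crypto primitives. `verifyShare` and the 2-of-2 XOR reconstruction below are illustrative, assuming a symmetric HMAC key; they are not the library's internal functions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumed helpers for illustration — not the xStore internals.
function hmacTag(key: Buffer, share: Uint8Array): Buffer {
  return createHmac("sha256", key).update(share).digest();
}

function verifyShare(key: Buffer, share: Uint8Array, expected: Buffer): boolean {
  const actual = hmacTag(key, share);
  return actual.length === expected.length && timingSafeEqual(actual, expected);
}

// Shares enter reconstruction only after passing verification.
function reconstruct2of2(key: Buffer, shares: { data: Uint8Array; tag: Buffer }[]): Uint8Array {
  for (const s of shares) {
    if (!verifyShare(key, s.data, s.tag)) {
      throw new Error("SHARE_INTEGRITY_FAILED"); // reject before any XOR happens
    }
  }
  const [a, b] = shares;
  return a.data.map((byte, i) => byte ^ b.data[i]);
}
```

The ordering is the point: a tampered share raises `SHARE_INTEGRITY_FAILED` before a single XOR runs, so corruption surfaces at the storage layer rather than as garbled application data.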
~14µs 64B split time · k-of-n configurable threshold · 0 bits per-share leakage · 2x HMAC (share + payload integrity)
Section 10

Benchmarks

Performance characteristics measured on Node.js 22, Apple M2. xStore adds minimal overhead to storage operations while eliminating single-point-of-compromise risk.

37 test cases · ~1ms split + route · ~1ms reconstruct · 0 bits per-backend leakage
| Operation | Time | Notes |
|---|---|---|
| XorIDA split (1 KB) | ~58µs | 2-of-2 threshold split over GF(2) |
| HMAC-SHA256 tag per share | <0.1ms | Integrity verification before routing |
| Route to local filesystem | <1ms | Direct write, no network overhead |
| Route to S3 | ~50–200ms | Network-dependent, regional endpoint |
| Route to Azure Blob | ~50–200ms | Network-dependent, regional endpoint |
| Retrieve from backends | ~1–200ms | Parallel retrieval; slowest backend dominates |
| HMAC verify + reconstruct | <1ms | Verify before XOR reconstruction |
| Full store pipeline (local) | ~1ms | Split → HMAC → route → verify → reconstruct |
| Full store pipeline (cloud) | ~200–400ms | Dominated by network round-trip to cloud backends |
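As a sanity check on the split numbers, a 2-of-2 XOR split can be timed in a few lines. This is a rough sketch, not the project's benchmark harness; absolute numbers depend on hardware and runtime.

```typescript
import { randomBytes } from "node:crypto";

// 2-of-2 XOR split: one share is a one-time pad, the other is data XOR pad.
function xorSplit2of2(data: Uint8Array): [Uint8Array, Uint8Array] {
  const pad = new Uint8Array(randomBytes(data.length));
  const masked = data.map((b, i) => b ^ pad[i]);
  return [pad, masked];
}

function xorJoin(a: Uint8Array, b: Uint8Array): Uint8Array {
  return a.map((byte, i) => byte ^ b[i]);
}

const payload = new Uint8Array(64).fill(0x42);
const iterations = 10_000;
const t0 = process.hrtime.bigint();
for (let i = 0; i < iterations; i++) xorSplit2of2(payload);
const t1 = process.hrtime.bigint();
console.log(`avg ${(Number(t1 - t0) / iterations / 1000).toFixed(2)}µs per 64B split`);
```

Each share alone is uniformly random, which is the "0 bits per-share leakage" row in the table above: the pad carries no information about the data, and the masked share is data hidden under a one-time pad.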

Storage Architecture Comparison

| Property | Encrypted Blob Storage | Client-side Encryption | xStore |
|---|---|---|---|
| Breach exposure | Full data if key stolen | Full data if key stolen | Individual share is meaningless |
| Key management | Required (KMS) | Required (client-side) | No keys — the split IS the security |
| Quantum resistance | AES vulnerable to Grover | AES vulnerable to Grover | Information-theoretic — unconditionally secure |
| Vendor lock-in | Single provider | Provider-agnostic encryption | Multi-backend by design |
| Storage overhead | ~0% (encrypted = same size) | ~0% | ~100% (2 shares = 2x data) |
Honest trade-off
xStore doubles storage costs (2-of-2 = two full-size shares). The security benefit — eliminating single-point-of-compromise and achieving information-theoretic protection — is the explicit trade-off for increased storage. For 2-of-3 configurations, storage is 3x.
VERIFIABLE WITHOUT CODE EXPOSURE

Ship Proofs, Not Source

xStore generates cryptographic proofs of correct execution without exposing proprietary algorithms. Verify integrity using zero-knowledge proofs — no source code required.

XPROVE CRYPTOGRAPHIC PROOF

Use Cases

🏛️
REGULATORY
FDA / SEC Submissions
Prove algorithm correctness for split storage without exposing trade secrets or IP.
Zero IP Exposure
🏦
FINANCIAL
Audit Without Access
External auditors verify split-storage routing without accessing source code or production systems.
FINRA / SOX Compliant
🛡️
DEFENSE
Classified Verification
Security clearance holders verify split storage correctness without clearance for source code.
CMMC / NIST Ready
🏢
ENTERPRISE
Procurement Due Diligence
Prove security + correctness during RFP evaluation without NDA or code escrow.
No NDA Required
Section 11

Honest Limitations

Five known limitations documented transparently. xStore trades storage efficiency for security guarantees.

| Limitation | Impact | Mitigation |
|---|---|---|
| Network latency for remote backends | Cloud storage backends (S3, Azure Blob) add 50–200ms per operation. The slowest backend determines overall pipeline latency. | Parallel retrieval minimizes impact. Local caching of frequently accessed shares reduces repeat-access latency. Use local backends for latency-critical workloads. |
| No built-in replication | xStore routes shares to backends but does not replicate within a backend. If a backend loses data, the share is gone. | Cloud backends (S3, Azure Blob) provide their own replication. For local backends, use filesystem-level replication or RAID. 2-of-3 configurations provide share-level redundancy. |
| Adapter implementation required | Each new storage backend requires a custom adapter implementing the StorageBackend interface. No universal connector exists. | The adapter interface is minimal (put/get/delete). Reference adapters for local, S3, and Azure Blob serve as templates. Community adapters can be contributed for additional backends. |
| No streaming for large files | The entire file must fit in memory for XorIDA splitting. Files larger than available RAM cannot be processed. | Chunk large files before splitting. xStore can store each chunk independently with sequential chunk IDs. Reassemble chunks after reconstruction. |
| No cross-backend consistency | xStore does not guarantee atomic writes across backends. A failure mid-write can leave shares in an inconsistent state. | HMAC verification detects incomplete writes. Applications should implement write-ahead logging for critical data. Retrieval fails safely if any share is missing or corrupted. |
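The chunking mitigation for large files can be sketched without any streaming support: split the payload into fixed-size chunks, store each under a sequential key, and concatenate after reconstruction. The chunk size and key scheme below are illustrative assumptions, not an xStore API.

```typescript
// Hypothetical chunk helpers — not part of the xStore API.
function chunk(data: Uint8Array, chunkSize: number): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let off = 0; off < data.length; off += chunkSize) {
    chunks.push(data.subarray(off, off + chunkSize)); // views, no copy
  }
  return chunks;
}

function reassemble(chunks: Uint8Array[]): Uint8Array {
  const total = chunks.reduce((sum, c) => sum + c.length, 0);
  const out = new Uint8Array(total);
  let off = 0;
  for (const c of chunks) {
    out.set(c, off);
    off += c.length;
  }
  return out;
}
```

Each chunk would then go through its own `store.put()` call with a sequential key (for example `doc-001/chunk-0`, `doc-001/chunk-1`, …), keeping per-operation memory bounded by the chunk size rather than the file size.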
VERIFIED BY XPROVE

Verifiable Split-Storage

Every xStore operation produces integrity artifacts that xProve can chain into a verifiable audit trail. Prove that data was stored correctly, retrieved intact, and never tampered with — without revealing the data itself.

XPROVE STORAGE AUDIT
xStore's per-share HMAC tags and manifest checksums feed directly into xProve's HMAC chain verification. For regulated industries (healthcare, finance, government), this provides cryptographic proof of data integrity at rest — not just encryption, but verifiable correct storage. Upgrade to zero-knowledge proofs when auditors need public verification without data disclosure.

Read the xProve white paper →
ENTERPRISE DEPLOYMENT

xStore Enterprise CLI

Self-hosted split-storage server. Deploy xStore on your own infrastructure with Docker-ready, air-gapped capable deployment.

xstore-cli — Port 5000
@private.me/xstore-cli wraps the xStore library in a standalone HTTP server with 12 REST endpoints. 65 tests, all passing. Store instance management, data put/get/delete/list/has with pluggable backends (memory, filesystem). XorIDA-split shares, HMAC verification, namespace isolation, TTL expiry. 3-role RBAC (admin/operator/auditor), JSONL audit, AES-256-GCM at rest. Zero external dependencies. Part of the Enterprise CLI Suite — 21 self-hosted servers, 1,924 tests.
Section 12

Get Started

Install xStore, implement a backend, and start storing threshold-split data in minutes.

Install
npm install @private.me/xstore
Quick Start
import { createStore } from '@private.me/xstore';
import { S3Backend } from './backends/s3';
import { AzureBackend } from './backends/azure';
import { FsBackend } from './backends/fs';

// Create store with 2-of-3 threshold
const store = createStore({
  backends: [
    new S3Backend({ bucket: 'shares-us-east' }),
    new AzureBackend({ container: 'shares-eu-west' }),
    new FsBackend({ path: '/mnt/airgap/shares' }),
  ],
  threshold: { k: 2, n: 3 },
});

// Store sensitive data
const data = new TextEncoder().encode('patient record #4291');
const { value: manifest } = await store.put('record-4291', data);

// Retrieve — HMAC-verified, reconstructed from any 2 of 3 backends
const { value: restored } = await store.get(manifest);
// restored is byte-identical to data
GET STARTED

Ready to deploy xStore?

Talk to Sol, our AI platform engineer, or book a live demo with our team.

Book a Demo