PRIVATE.ME · Technical White Paper

TeeFree: Secure Computation Without TEEs

Hardware TEEs promise confidential computing but require specialized chips, vendor trust, and constant patching. TeeFree delivers information-theoretic security through threshold secret sharing (XorIDA) and multi-party computation — zero hardware dependencies, mathematically impossible to break, works on commodity infrastructure.

v0.1.0 · Zero hardware deps · Information-theoretic · Quantum-proof · Sub-millisecond · Any platform
Section 01

Executive Summary

TeeFree replaces Trusted Execution Environments (Intel SGX, AMD SEV, ARM TrustZone) with information-theoretic security that requires no specialized hardware, no vendor attestation, and no trust in chip manufacturers.

The core technology is XorIDA threshold secret sharing over GF(2) — the same cryptographic primitive powering all 140 ACIs on the PRIVATE.ME platform. Instead of isolating computation inside a hardware enclave, TeeFree splits sensitive data into mathematically independent shares. Any K of the N shares reconstruct the original. Fewer than K shares reveal exactly zero information — not merely hard to break with enough computing power, but provably impossible regardless of adversary capability.
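The zero-information property is easiest to see in the degenerate 2-of-2 case, sketched below in plain TypeScript. This is an illustration of the XOR principle only, not the actual XorIDA implementation: one share is a uniformly random mask, the other is the plaintext XORed with that mask, so either share alone is indistinguishable from random noise.

```typescript
import { randomBytes } from 'node:crypto';

// Split a payload into two XOR shares. Either share alone is uniformly
// random; XORing both recovers the plaintext exactly.
function xorSplit(data: Uint8Array): [Uint8Array, Uint8Array] {
  const mask = new Uint8Array(randomBytes(data.length)); // uniform random share
  const other = new Uint8Array(data.length);
  for (let i = 0; i < data.length; i++) other[i] = data[i] ^ mask[i];
  return [mask, other];
}

function xorReconstruct(a: Uint8Array, b: Uint8Array): Uint8Array {
  const out = new Uint8Array(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
  return out;
}
```

Generalizing this to arbitrary K-of-N thresholds is what XorIDA provides; the XOR core is the same.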

When computation is required, TeeFree coordinates multi-party computation (MPC) across N worker nodes. Each node processes its share without ever seeing plaintext. Results reconstruct only after K nodes complete their work and verify integrity via HMAC. No single node learns the input, intermediate state, or output alone.
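The reason nodes can compute on shares without seeing plaintext is that XOR sharing is linear: an XOR-linear function applied share-by-share commutes with reconstruction. A minimal 2-of-2 sketch (illustrative only; the real orchestration goes through teeFree.compute):

```typescript
import { randomBytes } from 'node:crypto';

// 2-of-2 XOR sharing: [random mask, data XOR mask].
const split = (d: Uint8Array): [Uint8Array, Uint8Array] => {
  const m = new Uint8Array(randomBytes(d.length));
  return [m, d.map((byte, i) => byte ^ m[i])];
};
const xor = (a: Uint8Array, b: Uint8Array) => a.map((v, i) => v ^ b[i]);

// Two secrets, each split into shares held by two "nodes".
const a = new Uint8Array([0x0f, 0xf0]);
const b = new Uint8Array([0x33, 0xcc]);
const [a0, a1] = split(a);
const [b0, b1] = split(b);

// Each node XORs only the shares it holds; no node sees plaintext.
const r0 = xor(a0, b0); // node 0's output share
const r1 = xor(a1, b1); // node 1's output share

// Reconstructing the output shares yields a XOR b.
const result = xor(r0, r1); // [0x3c, 0x3c]
```

Non-linear operations (multiplication, comparisons) need a full MPC protocol, which is what the SPDZ-style machinery in Xcompute supplies.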

Performance is competitive with hardware TEEs for typical workloads: sub-millisecond overhead for data-at-rest splitting, ~5ms for 2-of-3 threshold with HMAC, ~180x faster than traditional MPC via Xchange mode. Scales horizontally across commodity x86, ARM, RISC-V — no vendor lock-in, no attestation infrastructure, no microcode updates.

Section 02

The Problem With Hardware TEEs

Trusted Execution Environments promise confidential computing but introduce vendor dependency, side-channel vulnerabilities, and a trust model incompatible with true zero-trust architectures.

Vendor Lock-In

Intel SGX requires specific Intel CPUs with SGX-enabled firmware. AMD SEV requires EPYC processors. ARM TrustZone requires ARM-based chips. You trust the chip vendor to implement the TEE correctly, patch vulnerabilities promptly, and not insert backdoors. Multi-cloud deployments become fragmented — SGX on AWS Nitro, SEV on Azure Confidential Computing, TrustZone on mobile. No portability.

Side-Channel Attacks

Hardware TEEs are vulnerable to side-channel attacks: Spectre, Meltdown, Foreshadow, SGAxe, CacheOut, Load Value Injection. Each vulnerability requires firmware updates, attestation re-verification, and potential service restarts. Theoretical security != practical security. TeeFree has no speculative execution, no shared cache, no microarchitectural state to leak.

Attestation Overhead

Remote attestation verifies that a TEE is running unmodified code on genuine hardware. This requires network calls to vendor attestation services (Intel IAS, AMD Key Distribution Service), PKI infrastructure, and constant re-attestation as firmware updates roll out. Latency penalty: 50-200ms per attestation. TeeFree replaces attestation with HMAC verification — local, instant, cryptographically sound.

Limited Memory

SGX enclaves have strict memory limits (128MB EPC on v1, 256MB on v2, still limited on v3). Exceeding EPC triggers paging to untrusted memory with 10-100x slowdown. AMD SEV has looser limits but still caps encrypted memory regions. TeeFree has no memory constraints — shares are ordinary byte arrays processed on standard RAM.

Property | Intel SGX | AMD SEV | ARM TrustZone | TeeFree
Hardware dependency | Intel CPUs only | AMD EPYC only | ARM chips only | Any platform
Vendor trust required | Yes | Yes | Yes | No
Side-channel risk | High | Moderate | Moderate | None (IT-secure)
Memory limits | 128-256 MB EPC | Looser but capped | Platform-specific | Unlimited
Attestation latency | 50-200ms | 50-200ms | Varies | <1ms HMAC
Quantum resistance | No | No | No | Immune
Cloud portability | AWS Nitro only | Azure only | Mobile/edge | Any cloud/on-prem
Security Theater vs. Security Proof
Hardware TEEs rely on the assumption that chip manufacturers implement enclaves correctly and patch vulnerabilities faster than attackers discover them. TeeFree makes no such assumption: the security guarantee holds no matter which individual nodes are compromised, as long as fewer than K are compromised simultaneously.
Section 03

Use Cases

Six scenarios where TeeFree replaces hardware TEEs with information-theoretic security on commodity infrastructure.

🏦
Financial Services
Multi-Party Reconciliation

Banks reconcile transactions without revealing proprietary ledgers. Each bank processes its share of the reconciliation logic. Results reconstruct only when K-of-N banks complete verification.

2-of-3 threshold MPC
💊
Healthcare
Federated Clinical Trials

Hospitals contribute patient data to research without PHI leaving their network. XorIDA splits each record, MPC aggregates statistics, no hospital sees raw data from others.

HIPAA-compliant by construction
🌐
Cloud / SaaS
Zero-Knowledge SaaS

SaaS providers process customer data without plaintext access. Customer encrypts via XorIDA split, shares route to N worker nodes, computation happens on shares, results return encrypted.

Any cloud, zero vendor trust
📊
Analytics
Cross-Org Analytics

Competitors collaborate on market research without revealing proprietary datasets. TeeFree MPC computes aggregate statistics while keeping individual datasets isolated.

Information-theoretic privacy
🏛
Government
Classified Workloads

Process classified data across unclassified infrastructure. XorIDA splits data across airgapped enclaves. No single node holds complete plaintext, even if compromised.

3-of-5 threshold
🤖
AI / ML
Private Model Training

Train ML models on sensitive datasets without centralizing data. Each data owner holds a share, gradient descent runs via MPC, model updates reconstruct after K parties agree.

Federated learning without trust
Section 04

Solution Architecture

Three layers: XorIDA splitting (data-at-rest), MPC orchestration (computation), and HMAC verification (integrity).

Layer 1: XorIDA Splitting
Data-at-Rest
K-of-N threshold shares over GF(2)
Each share individually random, same size as plaintext
Fewer than K shares reveal zero information
Layer 2: MPC Orchestration
Distributed Compute
N worker nodes, K threshold
Each node processes its share independently
Results combine via XorIDA reconstruction
Layer 3: HMAC Integrity
Verify-Before-Reconstruct
HMAC-SHA256 per share
Verification before reconstruction
Tamper detection at cryptographic strength
Optional: Xcompute MPC
Advanced
3-party MPC for complex logic
Integrates @private.me/xcompute
SPDZ-inspired IT-MAC protocol
Section 04a

How It Works

Step-by-step: from plaintext to secure distributed computation and reconstruction.

Data-at-Rest: XorIDA Splitting

Client generates K=2, N=3 shares from plaintext payload using XorIDA over GF(2). Each share is the same size as the plaintext. Shares are individually random and reveal zero information. An HMAC key is also split and distributed alongside each share.

TeeFree split example
import { teeFree } from '@private.me/crypto';

const sensitive = { ssn: '123-45-6789', balance: 50000 };
const result = await teeFree.split(sensitive, { threshold: 2, totalShares: 3 });

// result.shares = [share0, share1, share2]
// Any 2 shares reconstruct. 1 share reveals nothing.
const reconstructed = await teeFree.reconstruct([result.shares[0], result.shares[2]]);

Computation: MPC Across Shares

Shares route to N geographically distributed worker nodes. Each node runs the same computation function on its share without ever seeing plaintext. Intermediate results remain encrypted as shares. Only after K nodes complete and verify HMACs does reconstruction happen.

MPC computation flow
// Node 0 receives share[0], Node 1 receives share[1], Node 2 receives share[2]
const computeFn = (share) => {
  // Process share (e.g., aggregate, filter, transform)
  return processShare(share);
};

const resultShares = await teeFree.compute({
  shares: [share0, share1, share2],
  fn: computeFn,
  threshold: 2,
  nodes: ['https://node0.example.com', 'https://node1.example.com', 'https://node2.example.com'],
});

// resultShares = outputs from each node, still as shares
// Reconstruct only after K nodes verify HMAC
const finalResult = await teeFree.reconstruct(resultShares, { verifyHMAC: true });

Integrity: HMAC-Before-Reconstruct

Before reconstruction, TeeFree verifies HMAC-SHA256 on each share. If any share fails HMAC verification, reconstruction aborts with HMAC_VERIFICATION_FAILED. This prevents tampering — an attacker who modifies a share without the HMAC key cannot forge a valid MAC.
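The verify-then-reconstruct flow can be sketched with Node's built-in crypto (hypothetical helper names; in TeeFree itself this is wired into teeFree.reconstruct({ verifyHMAC: true })):

```typescript
import { createHmac, randomBytes, timingSafeEqual } from 'node:crypto';

// Tag each share with HMAC-SHA256 under a shared key; refuse to
// reconstruct unless every presented share verifies.
const hmacKey = randomBytes(32);

const tag = (share: Uint8Array): Buffer =>
  createHmac('sha256', hmacKey).update(share).digest();

function reconstructVerified(shares: Uint8Array[], tags: Buffer[]): Uint8Array {
  shares.forEach((share, i) => {
    if (!timingSafeEqual(tag(share), tags[i])) {
      // Abort before any reconstruction work happens.
      throw new Error('HMAC_VERIFICATION_FAILED');
    }
  });
  // 2-of-2 XOR reconstruction, reached only after all tags verify.
  return shares[0].map((byte, i) => byte ^ shares[1][i]);
}
```

timingSafeEqual avoids leaking the comparison result through timing; an attacker who flips even one bit of a share produces a tag mismatch and the function throws before reconstruction starts.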

Verify-then-reconstruct is mandatory
HMAC verification MUST complete successfully on all K shares before reconstruction proceeds. This is enforced at the protocol level — no API to bypass. TeeFree never reconstructs from unverified shares.
Section 04b

Performance

Sub-millisecond splitting, competitive MPC latency, horizontal scaling with no hardware bottlenecks.

XorIDA split (1KB): <1ms
2-of-3 with HMAC: ~5ms
2-of-2 reconstruct (1MB): ~33ms
Hardware dependencies: 0
Operation | Payload Size | Threshold | Latency | Notes
XorIDA split | 64 bytes | 2-of-3 | ~14µs | Faster than AES-256-GCM
XorIDA split | 1 KB | 2-of-3 | ~58µs | Still sub-millisecond
XorIDA split + HMAC | 1 KB | 2-of-3 | ~5ms | HMAC per-share overhead
XorIDA reconstruct | 1 MB | 2-of-2 | ~33ms | Linear scaling with size
MPC (Xcompute) | Varies | 3-party | ~20-50ms | Depends on computation complexity
Xchange mode for latency-critical workloads
When sub-5ms latency is required, use Xchange mode (opt-in via xchange: true). Trades per-share encryption for ~180x faster operation. Retains XorIDA information-theoretic security + HMAC integrity.
Section 05

Integration

Three integration patterns: library mode (direct API), service mode (REST API), and MPC orchestration (distributed compute).

Library Mode

Import @private.me/crypto and call teeFree.split() directly in application code. Ideal for single-process workloads where you control share distribution.

Library integration
import { teeFree } from '@private.me/crypto';

const { shares } = await teeFree.split(payload, { threshold: 2, totalShares: 3 });
// Store shares[0] locally, shares[1] on backup, shares[2] on DR site

Service Mode (REST API)

Deploy TeeFree as a microservice. Clients POST plaintext to /teefree/split, receive shares. Shares route to worker nodes for computation. Results POST back to /teefree/reconstruct.

REST API example
# Split
curl -X POST https://teefree.example.com/split \
  -H 'Content-Type: application/json' \
  -d '{"payload": {"secret": "data"}, "threshold": 2, "totalShares": 3}'

# Returns: {"shares": ["base64-share0", "base64-share1", "base64-share2"]}

# Reconstruct
curl -X POST https://teefree.example.com/reconstruct \
  -H 'Content-Type: application/json' \
  -d '{"shares": ["base64-share0", "base64-share2"], "verifyHMAC": true}'

# Returns: {"payload": {"secret": "data"}}

MPC Orchestration

For distributed computation, integrate @private.me/xcompute alongside TeeFree. Xcompute provides 3-party MPC with SPDZ-inspired IT-MAC protocol. Each party holds a share, computation happens on shares, results reconstruct only after all parties verify integrity.

MPC via Xcompute
import { teeFree } from '@private.me/crypto';
import { xcompute } from '@private.me/xcompute';

const { shares } = await teeFree.split(sensitive, { threshold: 2, totalShares: 3 });

// Each party runs computation on its share
const results = await xcompute.run({
  shares,
  fn: aggregateFunction,
  parties: [party0, party1, party2],
  threshold: 2,
});

// results = final output after MPC completes
Section 06

Security Guarantees

Information-theoretic security proven via Shannon's perfect secrecy theorem. Zero computational assumptions.

Information-Theoretic Security

XorIDA operates over GF(2) (binary field). For a K-of-N threshold scheme, any K-1 or fewer shares reveal exactly zero bits of information about the plaintext. This is not a computational hardness assumption — it is a mathematical proof via Shannon entropy. No adversary, regardless of computing power (including quantum computers), can extract information from K-1 shares.
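The "exactly zero bits" claim can be written as the standard conditional-entropy condition (conventional Shannon notation, not taken from the TeeFree spec): for plaintext M and shares S_1, ..., S_N,

```latex
H\!\left(M \,\middle|\, S_{i_1}, \ldots, S_{i_{K-1}}\right) = H(M)
\qquad \text{for every subset } \{i_1, \ldots, i_{K-1}\} \subseteq \{1, \ldots, N\},
```

i.e., observing any K-1 shares leaves an adversary's uncertainty about M completely unchanged.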

Defense Against Collusion

TeeFree is secure as long as fewer than K nodes collude. For K=2, N=3, an attacker must compromise at least 2 nodes simultaneously to reconstruct plaintext. Independent node compromise (e.g., Node 0 on Monday, Node 1 on Wednesday) does not break security if shares are rotated between compromises.
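Share rotation can be sketched for the 2-of-2 case: XOR identical fresh randomness into every share. The secret is unchanged, while shares captured in different epochs no longer combine to anything useful (illustrative code, not the TeeFree rotation API):

```typescript
import { randomBytes } from 'node:crypto';

const xor = (a: Uint8Array, b: Uint8Array) => a.map((v, i) => v ^ b[i]);

// Rotate a 2-of-2 XOR sharing: XOR the same fresh randomness r into
// both shares. The underlying secret is unchanged, but a share stolen
// before rotation combines with a post-rotation share to give only
// (secret XOR r), which is uniformly random without knowledge of r.
function rotate(shares: [Uint8Array, Uint8Array]): [Uint8Array, Uint8Array] {
  const r = new Uint8Array(randomBytes(shares[0].length));
  return [xor(shares[0], r), xor(shares[1], r)];
}
```

Rotating on a schedule shorter than the expected time between node compromises turns independent compromises into non-events.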

HMAC Integrity Protection

Every share carries HMAC-SHA256 computed over the share data. Tampering with a share without the HMAC key results in verification failure. TeeFree never reconstructs from unverified shares — this is enforced at the protocol level.

No trust in hardware, OS, or hypervisor
Unlike hardware TEEs, TeeFree makes zero assumptions about the underlying platform. Nodes can run on compromised hardware, backdoored operating systems, or malicious hypervisors. As long as fewer than K nodes collude simultaneously, the security proof holds.

Threat Model

Threat | Hardware TEE | TeeFree | Defense
Side-channel attack | Vulnerable | Immune | No shared state to leak
Vendor backdoor | Possible | Impossible | No trust in chip vendor
Compromised node | Breaks security | K-1 tolerated | Threshold guarantee
Quantum computer | Breaks eventually | Immune | Information-theoretic
Share tampering | N/A | Detected | HMAC verification
Section 07

Benchmarks

Measured on commodity x86 (Intel i7-12700) and ARM (Apple M2). All benchmarks include HMAC verification overhead.

Payload | Threshold | Split | Reconstruct | HMAC Overhead
64 bytes | 2-of-3 | 14µs | 16µs | +4ms
256 bytes | 2-of-3 | 35µs | 38µs | +4ms
1 KB | 2-of-3 | 58µs | 62µs | +4.5ms
64 KB | 2-of-3 | 2.8ms | 3.1ms | +5ms
1 MB | 2-of-2 | 31ms | 33ms | +6ms
1 MB | 3-of-5 | 48ms | 52ms | +8ms
Comparison: AES-256-GCM
For payloads under 1KB, XorIDA split is 2-11x faster than AES-256-GCM encryption. Crossover happens around 1-2KB where AES becomes competitive. For larger payloads (1MB+), AES-256-GCM is faster but provides only computational security. TeeFree trades throughput for information-theoretic guarantees.
Section 08

Honest Limitations

TeeFree is not a drop-in replacement for all hardware TEE use cases. Know the tradeoffs.

Requires K-of-N Node Availability

Reconstruction requires at least K shares available. If K nodes are offline simultaneously, reconstruction fails. Hardware TEEs run on a single machine and don't require coordination. Mitigation: Set K < N to tolerate N-K failures. For 2-of-3, any single node can be offline without blocking reconstruction.
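The availability trade-off is straightforward binomial arithmetic. Assuming independent nodes, each with uptime p, the chance that at least K of N are reachable can be computed with a small helper (illustrative, not part of the TeeFree API):

```typescript
// P(at least k of n independent nodes are up), each with uptime p.
// Reconstruction succeeds exactly when at least k shares are reachable.
function availability(n: number, k: number, p: number): number {
  const choose = (m: number, r: number): number => {
    let c = 1;
    for (let i = 0; i < r; i++) c = (c * (m - i)) / (i + 1);
    return c;
  };
  let total = 0;
  for (let up = k; up <= n; up++) {
    total += choose(n, up) * p ** up * (1 - p) ** (n - up);
  }
  return total;
}

// 2-of-3 with 99% per-node uptime tolerates any single outage:
// availability(3, 2, 0.99) ≈ 0.999702, versus 0.970299 for a 3-of-3 scheme.
```

Raising N relative to K buys availability at the cost of the storage overhead discussed below.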

Not Suitable for Low-Latency Single-Node Workloads

If your workload is latency-critical and cannot tolerate network round-trips to K nodes, hardware TEEs on a single machine may be faster. TeeFree shines when security trumps single-digit-millisecond latency or when TEE hardware is unavailable (mobile, edge, IoT).

MPC Overhead for Complex Computation

Secure multi-party computation on shares is slower than plaintext computation inside a TEE. Simple aggregates (sum, count, average) are fast. Complex logic (joins, graph algorithms, ML training) can be 10-100x slower than plaintext. Mitigation: use a hybrid architecture in which a single trusted node computes on plaintext and the results are then split via XorIDA for distribution.

Share Storage Overhead

N shares require N times the storage of the original plaintext. For a 2-of-3 scheme, you store 3x the data. Hardware TEEs store plaintext once. Mitigation: Storage is cheap. Security is not. If 3x storage cost is prohibitive, TeeFree may not be the right fit.

When to use hardware TEEs instead
If you control the hardware, trust the vendor, and need single-machine low-latency compute with no network overhead, hardware TEEs may be simpler. TeeFree is for scenarios where vendor trust is unacceptable, hardware is unavailable, or the threat model includes nation-state adversaries with chip-level access.
Section 09

TEE Comparison Matrix

Side-by-side comparison of TeeFree vs. Intel SGX, AMD SEV, ARM TrustZone, and AWS Nitro Enclaves.

Feature | Intel SGX | AMD SEV | ARM TrustZone | AWS Nitro | TeeFree
Platform support | Intel CPUs | AMD EPYC | ARM chips | AWS EC2 | Any
Memory limits | 128-256 MB | Full VM | Platform-specific | Full VM | Unlimited
Attestation required | Yes | Yes | Yes | Yes | No
Side-channel risk | High | Moderate | Moderate | Low | None
Quantum resistance | No | No | No | No | Immune
Multi-cloud portable | Limited | Limited | Edge only | AWS only | Yes
Cost | Premium CPU | Premium CPU | Varies | EC2 premium | Commodity
Vendor trust | Required | Required | Required | Required | Zero
Section 10

Quantum Resistance

TeeFree is immune to quantum attacks because it makes zero computational assumptions. Hardware TEEs rely on RSA and ECDH, both broken by Shor's algorithm.

Hardware TEE Quantum Vulnerability

Intel SGX uses RSA-3072 for attestation and ECDH for key agreement. AMD SEV uses ECDH P-384. ARM TrustZone varies by implementation but typically uses ECDH or RSA. All of these are broken by Shor's algorithm on a sufficiently large quantum computer. Current estimates suggest 2030-2035 for cryptographically relevant quantum computers.

TeeFree: Information-Theoretic Immunity

XorIDA threshold sharing over GF(2) is unconditionally secure. There is no key to break, no discrete logarithm to solve, no factorization to compute. A quantum computer with infinite qubits and zero error rate still cannot extract information from K-1 shares. The security proof is based on Shannon entropy, not computational complexity.

Harvest-now-decrypt-later defense
Adversaries today are harvesting encrypted data to decrypt later when quantum computers arrive. Hardware TEE attestation logs are vulnerable to this attack. TeeFree shares harvested today remain secure forever — no future quantum breakthrough can break information-theoretic security.
Advanced Reference

Implementation Details

API surface, deployment patterns, and compliance mappings for regulated industries.

Appendix A1

API Surface

Core functions for split, reconstruct, compute, and verification.

TeeFree API
import { teeFree } from '@private.me/crypto';

// Split plaintext into K-of-N shares
const { shares, hmacKey } = await teeFree.split(payload, {
  threshold: 2,
  totalShares: 3,
  includeHMAC: true,
});

// Reconstruct from K shares with HMAC verification
const result = await teeFree.reconstruct(shares, {
  verifyHMAC: true,
  hmacKey,
});

// Distributed computation via MPC
const mpcResult = await teeFree.compute({
  shares,
  fn: computeFunction,
  nodes: ['https://node0.example.com', 'https://node1.example.com'],
  threshold: 2,
});

// Verify share integrity (HMAC check without reconstruction)
const valid = await teeFree.verifyShare(share, hmacKey);
Appendix A2

Deployment Patterns

Three deployment models: single-process, multi-region, and hybrid TEE.

Single-Process (Library Mode)

Import @private.me/crypto directly. Split data locally, store shares in different storage backends (local disk + S3 + GCS). Reconstruction happens in the same process. No network overhead. Suitable for workloads where you control all storage and don't need MPC.

Multi-Region (Distributed)

Deploy TeeFree nodes in N geographically separated regions (e.g., US-East, EU-West, APAC). Each node receives one share. MPC computation happens across regions. Results reconstruct only after K nodes verify and agree. Maximum resilience against regional outages and jurisdiction-specific seizure.

Hybrid TEE (Defense-in-Depth)

Run TeeFree inside hardware TEEs for defense-in-depth. Even if the TEE is compromised via side-channel, the attacker still needs K-1 other shares from other TEEs. Combines computational security (TEE) with information-theoretic security (TeeFree). Overkill for most use cases but viable for nation-state threat models.

Appendix A3

Compliance & Regulatory Mapping

How TeeFree maps to HIPAA, PCI-DSS, GDPR, FedRAMP, and other frameworks.

Framework | Requirement | TeeFree Compliance
HIPAA | 164.312(a)(2)(iv) Encryption | XorIDA split = encrypted at rest, information-theoretic strength
PCI-DSS | Requirement 3.4 (render PAN unreadable) | K-1 shares reveal zero PAN data
GDPR | Article 32 (security of processing) | Information-theoretic = state-of-the-art protection
FedRAMP | SC-13 Cryptographic Protection | FIPS-compatible HMAC-SHA256 + IT-secure splitting
SOC 2 Type II | CC6.1 Logical access | No single node has complete data = access control by design
Auditor-friendly architecture
TeeFree provides clear audit trails: HMAC verification logs, share distribution records, reconstruction events. Auditors can verify that no single system ever held complete plaintext. Hardware TEE attestation is complex and vendor-dependent — TeeFree audit is straightforward cryptographic verification.

Deployment Options

📦

SDK Integration

Embed directly in your application. Runs in your codebase with full programmatic control.

  • npm install @private.me/teefree
  • TypeScript/JavaScript SDK
  • Full source access
  • Enterprise support available
Get Started →
🏢

On-Premise Upon Request

Enterprise CLI for compliance, air-gap, or data residency requirements.

  • Complete data sovereignty
  • Air-gap capable deployment
  • Custom SLA + dedicated support
  • Professional services included
Request Quote →

Enterprise On-Premise Deployment

While TeeFree is primarily delivered as SaaS or SDK, we build dedicated on-premise infrastructure for customers with:

  • Regulatory mandates — HIPAA, SOX, FedRAMP, CMMC requiring self-hosted processing
  • Air-gapped environments — SCIF, classified networks, offline operations
  • Data residency requirements — EU GDPR, China data laws, government mandates
  • Custom integration needs — Embed in proprietary platforms, specialized workflows

Includes: Enterprise CLI, Docker/Kubernetes orchestration, RBAC, audit logging, and dedicated support.

Contact sales for assessment and pricing →