How BitAtlas protects agentic data with client-side cryptography, European-only infrastructure, and a threat model designed for a world where AI agents outnumber humans.
Version 1.0 · March 2026 · BitAtlas Security Team
Autonomous AI agents are the fastest-growing class of cloud consumers. They generate, store, and retrieve sensitive data at a pace that dwarfs human users. Yet the storage layer they depend on — S3 buckets, managed databases, cloud drives — was designed with a fundamental assumption: the service provider is trusted.
In the agentic era, that assumption breaks down. An agent orchestrating financial workflows stores API keys, transaction logs, and user data. A medical research agent holds patient records. A legal agent manages contracts. If the storage provider can read this data, every breach, subpoena, or insider threat becomes an existential risk.
BitAtlas exists to solve this. We provide persistent, globally available storage where the server is cryptographically blind. Even with full database access, root SSH, and physical possession of the drives, an attacker learns nothing about the contents. This is zero-knowledge encryption — not as a marketing term, but as a mathematical guarantee.
| Purpose | Algorithm | Parameters |
|---|---|---|
| File encryption | AES-256-GCM | 256-bit key, 96-bit IV, 128-bit auth tag |
| Key derivation | PBKDF2-SHA256 | 100,000 iterations, user-specific salt |
| Password hashing | bcrypt | 10 rounds (auth only, never touches encryption) |
| File key wrapping | AES-256-GCM | Master key encrypts per-file keys |
| Transport | TLS 1.3 | ECDHE key exchange, AES-GCM cipher suites |
A critical design decision in BitAtlas is the complete separation of authentication and encryption key paths. Your login credentials and your encryption key are derived from the same password but through entirely different, non-reversible processes.
**Authentication path:** user password + salt → bcrypt (10 rounds) → password hash → stored in PostgreSQL for login verification → cannot be used to derive the encryption key

**Encryption path:** user password + user-specific salt → PBKDF2-SHA256 (100,000 iterations) → 256-bit master key → lives only in browser/agent memory → never transmitted, never stored server-side
This means that even if our database is fully exfiltrated, an attacker has bcrypt hashes (useless for decryption) and encrypted key material (useless without the master key). The master key exists only in volatile memory on the client.
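The split can be sketched with Node's built-in `crypto` module. This is an illustration, not the BitAtlas client: the salt, password, and `deriveMasterKey` name are ours, and the bcrypt authentication path (handled by a server-side library) is deliberately omitted to show that it plays no part in key derivation.

```typescript
import { pbkdf2Sync, randomBytes } from "node:crypto";

// Encryption path only: PBKDF2-SHA256, 100,000 iterations, 32-byte output.
// The salt would come from the user's account record; it is random here
// for illustration. The bcrypt hash used for login is produced by a
// separate library server-side and never appears in this derivation.
function deriveMasterKey(password: string, salt: Buffer): Buffer {
  return pbkdf2Sync(password, salt, 100_000, 32, "sha256");
}

const salt = randomBytes(16);
const masterKey = deriveMasterKey("correct horse battery staple", salt);
console.log(masterKey.length); // 32 bytes = 256 bits, held only in memory
```

Because the derivation is deterministic for a given password and salt, the client can recreate the master key on every session without the server ever storing it.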
Each file gets its own randomly generated 256-bit AES-GCM key. This per-file key is then encrypted ("wrapped") with the owner's master key, and the wrapped key is stored server-side alongside the encrypted blob. This design means that compromising one file key exposes nothing about any other file, and granting or revoking access requires re-wrapping only a 32-byte key rather than re-encrypting the blob.
1. User or agent selects a file.
2. Client generates a random 256-bit file key.
3. File is encrypted with AES-256-GCM using the file key → produces: encrypted blob + IV + authentication tag.
4. File key is wrapped with the owner's master key (AES-256-GCM) → produces: ownerEncryptedKey + ownerIV.
5. Upload to server:
   - Encrypted blob → S3-compatible object storage (EU-only)
   - ownerEncryptedKey, IV, authTag → PostgreSQL (EU-only)
6. Server never sees: plaintext file, file key, or master key.
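The client-side encryption steps of the upload flow can be sketched with Node's `crypto` module. The `encryptGcm` helper and the sample values are our illustration under the parameters in the table above, not the actual BitAtlas client code:

```typescript
import { createCipheriv, randomBytes } from "node:crypto";

// One AES-256-GCM encryption: returns ciphertext, the 96-bit IV, and
// the 128-bit authentication tag (matching the algorithm table above).
function encryptGcm(key: Buffer, plaintext: Buffer) {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { ciphertext, iv, authTag: cipher.getAuthTag() };
}

// Steps 2-4: random per-file key, encrypt the file, wrap the file key.
const masterKey = randomBytes(32); // stands in for the PBKDF2-derived key
const fileKey = randomBytes(32);   // step 2
const file = encryptGcm(fileKey, Buffer.from("agent transaction log")); // step 3
const wrapped = encryptGcm(masterKey, fileKey); // step 4: ownerEncryptedKey + ownerIV

// Step 5 would upload file.ciphertext to object storage, and
// wrapped.ciphertext, wrapped.iv, file.iv, file.authTag to PostgreSQL.
```

Note that GCM ciphertext has the same length as its plaintext, so the wrapped file key is exactly 32 bytes plus its IV and tag.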
1. Client requests file metadata from the API.
2. Server returns:
   - Presigned URL to the encrypted blob
   - ownerEncryptedKey, IV, authTag
3. Client decrypts:
   a. Unwrap ownerEncryptedKey using the master key → file key
   b. Download encrypted blob via the presigned URL
   c. Decrypt blob using file key + IV + authTag → plaintext
4. Server never sees the plaintext at any point.
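The client-side half of the download flow can be sketched the same way. To keep the example self-contained, it first simulates the values the server would hold (helper names and sample data are illustrative):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// AES-256-GCM helpers mirroring the upload flow (illustrative names).
const gcmEncrypt = (key: Buffer, iv: Buffer, data: Buffer) => {
  const c = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([c.update(data), c.final()]);
  return { ct, tag: c.getAuthTag() };
};

function gcmDecrypt(key: Buffer, iv: Buffer, tag: Buffer, ct: Buffer): Buffer {
  const d = createDecipheriv("aes-256-gcm", key, iv);
  d.setAuthTag(tag); // a tampered blob or wrong key makes final() throw
  return Buffer.concat([d.update(ct), d.final()]);
}

// Simulate what the server stores: encrypted blob + wrapped file key.
const masterKey = randomBytes(32);
const fileKey = randomBytes(32);
const fileIV = randomBytes(12);
const ownerIV = randomBytes(12);
const blob = gcmEncrypt(fileKey, fileIV, Buffer.from("contract.pdf bytes"));
const wrapped = gcmEncrypt(masterKey, ownerIV, fileKey);

// Step 3 of the download flow, entirely client-side:
const unwrappedKey = gcmDecrypt(masterKey, ownerIV, wrapped.tag, wrapped.ct); // 3a
const plaintext = gcmDecrypt(unwrappedKey, fileIV, blob.tag, blob.ct);        // 3c
console.log(plaintext.toString()); // "contract.pdf bytes"
```

The authentication tag does double duty here: it proves the blob was not modified at rest, so a storage-side attacker cannot even tamper undetectably, let alone read.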
The presigned URL mechanism ensures that even the download path is time-limited and authenticated, without requiring the server to proxy (and potentially inspect) the data.
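The idea behind presigned URLs can be illustrated with a minimal HMAC scheme. This is a generic sketch only: real S3-compatible presigning uses the AWS Signature Version 4 protocol, and the secret, paths, and helper names below are our assumptions.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "server-side-signing-secret"; // illustrative only

// Server signs (path, expiry); storage later checks the signature and
// the clock without ever consulting the user database.
function presign(path: string, expiresAt: number): string {
  const sig = createHmac("sha256", SECRET)
    .update(`${path}|${expiresAt}`)
    .digest("hex");
  return `${path}?expires=${expiresAt}&sig=${sig}`;
}

function verify(url: string, now: number): boolean {
  const [path, query] = url.split("?");
  const params = new URLSearchParams(query);
  const expiresAt = Number(params.get("expires"));
  const expected = createHmac("sha256", SECRET)
    .update(`${path}|${expiresAt}`)
    .digest("hex");
  const given = params.get("sig") ?? "";
  return (
    now < expiresAt &&
    given.length === expected.length &&
    timingSafeEqual(Buffer.from(given), Buffer.from(expected))
  );
}

const url = presign("/blobs/abc123", 1_900_000_000);
console.log(verify(url, 1_899_000_000)); // true: within the validity window
console.log(verify(url, 1_900_000_001)); // false: expired
```

Because the expiry is covered by the signature, a client cannot extend its own window by editing the `expires` parameter.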
The biggest challenge in zero-knowledge systems is usability. For human users, this means password prompts and key management. For AI agents, the challenge is different: how do you give an autonomous process access to encrypted data without creating a permanent security hole?
BitAtlas is natively compatible with the Model Context Protocol (MCP), the emerging standard for AI agent tool use. An MCP-compatible agent (Claude, Cursor, Windsurf, or any custom agent) can connect to a vault as a native tool and operate on it through scoped API keys, with no custom integration code.
Each API key can be scoped with fine-grained permissions:
```json
{
  "permissions": ["vault:read", "vault:write"],
  "maxFiles": 100,
  "expiresAt": "2026-04-25T00:00:00Z",
  "allowedCategories": ["legal", "financial"]
}
```

This means you can give an agent write access to one category of your vault, with an automatic expiry, without exposing your master key or granting access to unrelated files.
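A server-side check against such a scope might look like the following. The field names follow the JSON example above, but the `authorize` function and its logic are our hypothetical illustration, not BitAtlas's actual enforcement code:

```typescript
// Hypothetical shape of a scoped API key record (fields mirror the
// JSON example above).
interface ApiKeyScope {
  permissions: string[];
  maxFiles: number;
  expiresAt: string; // ISO 8601
  allowedCategories: string[];
}

// Every condition must hold: unexpired key, permitted action,
// permitted category, and file quota not yet exhausted.
function authorize(
  scope: ApiKeyScope,
  action: string, // e.g. "vault:write"
  category: string, // e.g. "legal"
  currentFileCount: number,
  now: Date = new Date()
): boolean {
  return (
    now < new Date(scope.expiresAt) &&
    scope.permissions.includes(action) &&
    scope.allowedCategories.includes(category) &&
    currentFileCount < scope.maxFiles
  );
}

const scope: ApiKeyScope = {
  permissions: ["vault:read", "vault:write"],
  maxFiles: 100,
  expiresAt: "2026-04-25T00:00:00Z",
  allowedCategories: ["legal", "financial"],
};

const mar = new Date("2026-03-01T00:00:00Z");
console.log(authorize(scope, "vault:write", "legal", 3, mar));   // true
console.log(authorize(scope, "vault:write", "medical", 3, mar)); // false
```

Note that this authorization check is independent of decryption: even a key that passes every check only yields wrapped ciphertext unless the agent also holds the master key.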
The master key must be provided to the agent's runtime for decryption. This is the fundamental trust boundary: you trust the agent's execution environment (its sandbox, its memory) the same way you trust your own browser. BitAtlas does not attempt to solve the "malicious agent" problem — that's the agent framework's responsibility. What we guarantee is that the storage layer itself is zero-knowledge, and that key material never leaves the client-side boundary.
**Key insight.** In the agentic world, the storage provider should be the least trusted component. BitAtlas is designed so that even if we are fully compromised — servers, database, backups, employees — your data remains encrypted and useless without the keys that only exist in your agent's memory.
- **Full database exfiltration.** Attacker gets encrypted blobs + encrypted file keys. Without master keys (never stored server-side), decryption is computationally infeasible.
- **Insider threat.** No BitAtlas employee has access to master keys. Server-side code never handles plaintext. Even with root access to all infrastructure, data remains encrypted.
- **Legal compulsion.** We can comply by handing over encrypted blobs. Without the user's password-derived master key, the data is meaningless. We cannot decrypt it even if compelled.
- **Network interception.** TLS 1.3 for all connections. Even if TLS is compromised, intercepted payloads are already encrypted with AES-256-GCM before transmission.
- **Compromised agent environment.** If an agent's environment is compromised, the attacker gains access to the master key in memory and can decrypt files the agent has access to. This is bounded by API key scoping and session expiry.
- **Weak passwords.** PBKDF2 with 100,000 iterations provides brute-force resistance, but a trivially weak password remains a vulnerability. We enforce minimum complexity at registration.
All BitAtlas infrastructure is hosted on European-owned providers. We do not use any US-owned cloud services (AWS, GCP, Azure) for data storage or processing. This keeps customer data outside the reach of the US CLOUD Act and entirely under European jurisdiction.
| Feature | BitAtlas | AWS S3 | Google Drive |
|---|---|---|---|
| Zero-knowledge encryption | ✅ Client-side | ❌ Server-side keys | ❌ Google holds keys |
| Agent-native API (MCP) | ✅ Built-in | ❌ Not supported | ❌ Not supported |
| EU-only infrastructure | ✅ Hetzner DE/FI | ⚠️ EU region option | ⚠️ EU region option |
| No CLOUD Act exposure | ✅ | ❌ US company | ❌ US company |
| Per-file key isolation | ✅ | ❌ Bucket-level | ❌ Account-level |
| Scoped API keys | ✅ Per-agent | ⚠️ IAM policies | ⚠️ OAuth scopes |
| Password = key (no recovery) | ✅ | N/A | ❌ Google can recover |
The agentic era demands a new category of storage infrastructure — one where the provider is cryptographically excluded from accessing the data it stores. BitAtlas delivers this through client-side AES-256-GCM encryption, separated authentication and encryption key paths, per-file key isolation, and European-only infrastructure.
For AI agents, we provide MCP-native integration with scoped API keys and clear trust boundaries. For humans, we provide the peace of mind that comes from knowing that even we cannot read your files.
Zero-knowledge is not a feature toggle. It is the architecture.