Security

How we keep your data safe in AI processing.

Tao Zhang

Co-founder & Engineer

Nathan Scott

Integrations Engineer

When you upload sensitive business data to Teli, security isn't optional—it's foundational. Here's how we protect your information throughout the AI processing pipeline.

Zero-Trust Architecture

We assume every request could be malicious and verify everything. Your data is encrypted at rest, in transit, and during processing.

Security by design means your data privacy is protected even from us. We've built systems where even our own engineers can't access user data without explicit permission.
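In spirit, that permission gate looks something like the sketch below. The grant store, audit log, and decryption helpers are illustrative stand-ins (in the same vein as the processing sketch later in this post), not our actual internals:

from datetime import datetime, timezone

def access_user_data(requester_id, owner_id, grant_store, audit_log):
    # Engineers get nothing without an explicit, unexpired grant from the data owner
    grant = grant_store.get((requester_id, owner_id))
    if grant is None or grant["expires"] < datetime.now(timezone.utc):
        raise PermissionError("explicit owner permission required")

    # Every access lands in an append-only audit trail before any data moves
    audit_log.append((requester_id, owner_id, grant["scope"]))
    return decrypt_aes256(load_ciphertext(owner_id), get_user_key(owner_id))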

Processing Isolation

Stage      | Encryption          | Isolation
---------- | ------------------- | ----------------------
Upload     | AES-256 in transit  | User-specific channels
Storage    | AES-256 at rest     | Account-isolated
Processing | Memory encryption   | Sandboxed containers

def process_dataset(user_data, user_id):
    # Encrypt immediately on upload with the user's own key
    encrypted_data = encrypt_aes256(user_data, get_user_key(user_id))

    # Analyze inside a per-user sandboxed container
    with secure_sandbox(user_id):
        results = ai_analyze(encrypted_data)

    # Securely wipe the ciphertext as soon as processing completes
    secure_delete(encrypted_data)
    return results

Security Metrics

Zero data breaches in 24 months. 100% encryption coverage. A 98/100 security audit score. A maximum response time of 15 minutes for security updates.

Transparent Practices

We publish detailed security documentation and undergo regular audits. Your data security isn't a black box—it's an open book.

Performance Optimizations

Achieving our sub-600ms response-time target for AI agents required several critical performance optimizations. These optimizations work together to minimize network overhead, reduce server processing time, and improve overall system throughput.

Intelligent Update Batching: Rather than sending individual data point updates, we group related updates by group and time window. Updates that arrive within a 100ms window are automatically batched together, reducing the number of network requests while maintaining the perception of real-time updates. This approach reduced network traffic by up to 85% during peak usage periods.
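A minimal sketch of that batching loop, assuming an asyncio service and an async send function that ships one batch per group (the class and names here are illustrative, not our production code):

import asyncio
from collections import defaultdict

class UpdateBatcher:
    def __init__(self, send, window_ms=100):
        self.send = send                      # async fn: (group_id, [updates])
        self.window = window_ms / 1000
        self.pending = defaultdict(list)      # group_id -> buffered updates
        self._flush = None

    def add(self, group_id, update):
        self.pending[group_id].append(update)
        if self._flush is None:               # first update opens a 100ms window
            self._flush = asyncio.create_task(self._flush_later())

    async def _flush_later(self):
        await asyncio.sleep(self.window)      # let the window fill up
        batches, self.pending = self.pending, defaultdict(list)
        self._flush = None
        for group_id, updates in batches.items():
            await self.send(group_id, updates)  # one request per group, not per point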

Differential Data Transmission: Instead of sending complete datasets with each update, our system calculates and transmits only the differences between the current state and the previous state. This differential approach reduces payload sizes by up to 95% for typical business data, where only small portions of large datasets change between updates. We use efficient binary diff algorithms optimized for numerical data common in business intelligence applications.
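The production path uses binary diffs over numerical data, but the core idea reduces to a dict-level sketch like this:

def diff_state(prev, curr):
    # Transmit only keys whose values changed, plus explicit deletions
    changed = {k: v for k, v in curr.items() if prev.get(k) != v}
    removed = [k for k in prev if k not in curr]
    return {"set": changed, "unset": removed}

def apply_diff(state, delta):
    # The client reconstructs the full dataset from these tiny payloads
    state.update(delta["set"])
    for key in delta["unset"]:
        state.pop(key, None)
    return state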

Connection Pooling and Multiplexing: We maintain persistent WebSocket connections and reuse database connections wherever possible. Our ConnectionPool service manages thousands of concurrent database connections efficiently, while our WebSocketManager handles connection lifecycle events, automatic reconnection logic, and graceful degradation when clients experience network issues.
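Stripped of health checks and metrics, the pooling idea looks roughly like the following simplified sketch (not our actual ConnectionPool):

import queue
from contextlib import contextmanager

class ConnectionPool:
    def __init__(self, connect, size=10):
        self._idle = queue.Queue(maxsize=size)
        for _ in range(size):
            self._idle.put(connect())         # open connections once, up front

    @contextmanager
    def connection(self):
        conn = self._idle.get()               # blocks until a connection is free
        try:
            yield conn
        finally:
            self._idle.put(conn)              # returned to the pool, never closed

Callers then write "with pool.connection() as conn:" and never open or close connections themselves.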

Multi-tier Caching Strategy: We implement a sophisticated caching hierarchy and intelligent preloading for predicted user actions. Cache invalidation is handled through event-driven patterns that ensure data consistency while minimizing cache misses.
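As an illustration of the hierarchy, here is a simplified two-tier cache with event-driven invalidation; the l2 object stands in for any shared cache exposing get/set/delete, such as Redis:

class TieredCache:
    def __init__(self, l2):
        self.l1 = {}                          # in-process, fastest tier
        self.l2 = l2                          # shared tier across instances

    def get(self, key, loader):
        if key in self.l1:
            return self.l1[key]
        value = self.l2.get(key)
        if value is None:
            value = loader(key)               # miss on both tiers: hit the database
            self.l2.set(key, value)
        self.l1[key] = value                  # promote into the local tier
        return value

    def invalidate(self, key):
        # Driven by data-change events, not TTL timers, to keep tiers consistent
        self.l1.pop(key, None)
        self.l2.delete(key)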

Connection Management

Managing thousands of concurrent connections while maintaining system stability required building robust fault tolerance mechanisms. Users might have multiple browser tabs open, mobile applications connected, shared dashboards viewed by team members, and various integration clients accessing the same data streams simultaneously.

Our ConnectionManager service implements sophisticated logic to track all these connections while ensuring updates reach every relevant endpoint without overwhelming the network or creating duplicate processing overhead. Each connection is tagged with metadata including user permissions, dashboard subscriptions, data source access rights, and client capabilities.
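Conceptually, the bookkeeping looks something like this simplified sketch (the field names are illustrative):

from dataclasses import dataclass, field

@dataclass
class ClientConnection:
    conn_id: str
    user_id: str
    permissions: set = field(default_factory=set)
    subscriptions: set = field(default_factory=set)   # dashboard ids

class ConnectionManager:
    def __init__(self):
        self.active = {}                               # conn_id -> ClientConnection

    def register(self, conn):
        self.active[conn.conn_id] = conn

    def fanout_targets(self, dashboard_id, required_perm):
        # Deliver once per connection that both subscribes and is authorized
        return [c for c in self.active.values()
                if dashboard_id in c.subscriptions
                and required_perm in c.permissions]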

When connections are lost due to network issues, our system implements exponential backoff retry logic with jitter to prevent thundering herd problems. Missed updates during disconnection periods are queued and delivered when connections are re-established, ensuring users never lose critical data changes even during temporary network disruptions.
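The retry schedule itself is simple; the jitter is what keeps thousands of disconnected clients from reconnecting in lockstep. A minimal sketch:

import random
import time

def reconnect(connect, base=0.5, cap=30.0, max_attempts=10):
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            # Exponential backoff, capped, with full jitter
            delay = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    raise ConnectionError("reconnect failed after max_attempts")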

Monitoring and Observability

Operating a real-time system at scale requires comprehensive monitoring and observability. We track dozens of metrics including update latency percentiles, connection counts by geographic region, data processing throughput, error rates by component, cache hit ratios, and user engagement patterns.
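For the latency percentiles specifically, even a simple rolling-window tracker like the sketch below illustrates the idea; a production pipeline would typically feed streaming histograms instead:

from collections import deque

class LatencyTracker:
    def __init__(self, window=10_000):
        self.samples = deque(maxlen=window)    # keep only the most recent N samples

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def percentile(self, p):
        ordered = sorted(self.samples)
        if not ordered:
            return None
        # Nearest-rank style percentile over the current window
        rank = min(len(ordered) - 1, int(len(ordered) * p / 100))
        return ordered[rank]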

We maintain detailed dashboards showing real-time system health, allowing us to identify and resolve performance issues before they impact user experience.

Results

After six months of optimization and production hardening, our real-time update system consistently delivers exceptional performance across all key metrics. Dashboard updates now arrive with an average latency of 145ms, with 99% of agent responses delivered within 600ms even during peak traffic periods.

The system successfully handles peak loads of 50,000 concurrent WebSocket connections with minimal performance degradation. Memory usage per connection has been optimized to just 2.3KB, allowing us to maintain cost efficiency while scaling to support enterprise customers with thousands of simultaneous users. Database query performance remains consistent even with millions of data points being processed per minute.
