The Eavesdropping Dilemma

An interactive exploration of data privacy in the age of AI assistants. Discover the trade-offs between convenience and control.

The Privacy Problem

This section explores the core of the privacy challenge: the vast amount of data AI assistants collect and the risks this creates. Understand what your assistant knows about you and how that information can be vulnerable.

AI assistants are fueled by data. This data comes from what you say directly, but also from a wide array of passive sources, creating a detailed, and often surprisingly intimate, profile of your life.

πŸ—£οΈ Voice & Transcripts

Recordings of your commands, plus snippets of surrounding conversation captured during "false wakes," which are often stored indefinitely.

πŸ“ Location Data

Precise GPS data provides context for requests but also tracks your movements for profiling.

πŸ‘€ Biometric Data

Your unique voiceprint for speaker identification, which can also be used for other authentication purposes.

πŸ’» Device & Usage Data

IP addresses, device serial numbers, and logs of your interaction patterns and feature usage.

🌐 Web & Inferred Data

Browsing habits and, critically, sensitive inferences about your health, politics, and income, even if you never state them.

βš™οΈ Sensor Data

Readings from accelerometers, gyroscopes, and light sensors can be used to infer activities like walking, driving, or sleeping.
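To make the sensor point concrete, here is a minimal sketch of how a short stream of accelerometer readings could be mapped to an activity guess. The thresholds and function name are hypothetical, chosen purely for illustration; real systems rely on trained models over many sensor channels rather than hand-tuned cut-offs.

```python
import math

# Hypothetical thresholds for illustration only; real pipelines use trained
# models over many sensors, not hand-tuned cut-offs like these.
STILL_MAX_G = 0.05   # near-zero variation: device resting, user likely idle or asleep
WALK_MAX_G = 0.60    # moderate periodic motion: plausibly walking

def activity_guess(samples_g):
    """Guess an activity from accelerometer magnitudes (in g, gravity removed)."""
    mean = sum(samples_g) / len(samples_g)
    variance = sum((s - mean) ** 2 for s in samples_g) / len(samples_g)
    spread = math.sqrt(variance)
    if spread < STILL_MAX_G:
        return "stationary (possibly sleeping)"
    if spread < WALK_MAX_G:
        return "walking"
    return "driving or vigorous movement"

# One second of fake samples at 10 Hz: gentle periodic motion -> "walking"
print(activity_guess([0.1, 0.4, 0.2, 0.5, 0.1, 0.4, 0.2, 0.5, 0.1, 0.4]))
```

Even this toy classifier shows how a stream of motion readings, none of them personal on their own, becomes a behavioral signal.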

The Players: A Company Comparison

The major voice assistants operate under different business models, which directly influence their approach to data collection. This chart compares the number of data points each assistant collects, while the cards below highlight specific company practices and controversies.

Amazon Alexa

Commerce-Driven

Google Assistant

Advertising-Centric

Apple Siri

Privacy-as-Feature

The Rules: Global Regulations

This section provides a high-level comparison of the two most influential data privacy laws: Europe's GDPR and California's CCPA/CPRA. See how they approach key issues like automated decision-making and the right to delete your data, and understand the compliance challenges AI companies face.

πŸ‡ͺπŸ‡Ί GDPR / EU AI Act

πŸ‡ΊπŸ‡Έ CCPA / CPRA (California)

A key challenge for both frameworks is the "un-baking the cake" problem: once an AI model has been trained on personal data, rights such as deletion cannot be applied to the model without retraining it from scratch.
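A minimal sketch of the problem, assuming scikit-learn is available (the dataset, model, and "user 42" are toy stand-ins, not any company's pipeline): once a record has shaped a model's weights, the straightforward way to honor a deletion request is to drop the record and retrain.

```python
# Sketch only: illustrates why "delete my data" is hard once a model is trained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # 1,000 users' feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # some behavior the assistant predicts

model = LogisticRegression().fit(X, y)   # user 42's data is now baked into the weights

# A deletion request arrives for user 42. The trained coefficients cannot simply
# be edited to forget that row; the straightforward remedy is a full retrain
# on everything except the deleted record.
keep = np.ones(len(X), dtype=bool)
keep[42] = False
model_after_deletion = LogisticRegression().fit(X[keep], y[keep])

# The weights shift only slightly, but verifying the old influence is gone,
# and doing so at the scale of a large model, is the "un-baking" problem.
print(model.coef_ - model_after_deletion.coef_)
```

Research on "machine unlearning" aims to approximate this retraining at lower cost; the sketch shows why the naive route is so expensive.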

The Solutions: A Path Forward

Addressing AI privacy requires a multi-faceted approach. This section explores cutting-edge Privacy-Enhancing Technologies (PETs) that build protection into system design, followed by actionable recommendations for key stakeholders.

Privacy-Enhancing Technologies (PETs)
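As one illustration of the PET family, the sketch below shows differential privacy in its simplest form: calibrated Laplace noise added to an aggregate count so that any single user's presence changes the released number by only a bounded amount. The function name, privacy budget epsilon, and per-user flags are assumptions for this toy example, not a description of any vendor's implementation.

```python
import numpy as np

def dp_count(values, epsilon=1.0, sensitivity=1.0):
    """Release a differentially private count.

    Adding or removing one user changes the true count by at most
    `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    bounds what the released number reveals about any individual.
    """
    true_count = sum(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# How many users asked their assistant a health-related question today?
asked_health_question = [1, 0, 1, 1, 0, 0, 1, 0]  # toy per-user flags
print(dp_count(asked_health_question, epsilon=0.5))
```

The key design choice is that the noise scale is tied to one user's maximum possible contribution, which is what makes the guarantee about individuals rather than about the dataset as a whole.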

Recommendations for a Trustworthy Future