Ever Thought Your iPhone Was Listening to You?

Photo by Jakub Żerdzicki on Unsplash

The real privacy problem runs deeper

Many people are convinced that their iPhones, Alexas, or Google Homes are secretly recording their every word. You mention going on holiday with your husband, and hours later an ad for cheap flights or new luggage pops up on your social feed. Or you’re chatting with a friend about worn-out running shoes, and soon after you’re shown an ad for Nike or Brooks. Coincidence? It feels too precise to be random.

In truth, that’s not how these devices work—they aren’t recording your every word. Voice assistants use a local wake-word detector—software that continuously listens only for a short sequence of sounds, like “Hey Siri” or “Alexa.” It keeps a few seconds of audio in a rolling buffer, overwriting it every fraction of a second. Nothing leaves your phone until the wake word is detected, at which point a short clip is sent to the cloud to process your request. Independent audits and peer-reviewed studies confirm this: your device is not streaming or storing full conversations 24/7.
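
For readers curious about the mechanics, here is a minimal sketch of that rolling-buffer loop in Python. It is illustrative only: the frame size, buffer length, and the detector and cloud objects are assumptions for the sake of the example, not any vendor’s actual implementation.

    import collections

    FRAME_MS = 20                          # assume 20 ms audio frames
    BUFFER_FRAMES = 2000 // FRAME_MS       # keep roughly 2 seconds of audio

    ring_buffer = collections.deque(maxlen=BUFFER_FRAMES)

    def on_audio_frame(frame, detector, cloud):
        """Runs on-device for every incoming audio frame."""
        ring_buffer.append(frame)          # the oldest frame is silently overwritten
        if detector.matches_wake_word(ring_buffer):   # hypothetical local model
            # Only now does any audio leave the device: the short buffered
            # snippet plus the request that follows it.
            cloud.send(list(ring_buffer))             # hypothetical uploader
            ring_buffer.clear()

The point of the structure is that, until the local detector fires, audio only ever overwrites itself in memory; nothing is stored or transmitted.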

The real trade-off: not listening, but learning

Even without constant recording, smart devices are relentless data collectors. Each interaction produces metadata—what you asked, when, from where, which device, which voice profile, and even background noise. Combined with your search history, shopping patterns, and location, this feeds a powerful web of profiling algorithms. So, the ads that feel like they’re based on a conversation aren’t, but they still result from a quiet infiltration into your daily behavior.
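
To make “metadata” concrete, a single voice request might generate a record along these lines. The field names and values here are hypothetical and simplified; real schemas vary by vendor.

    # Hypothetical record for one interaction; note that no raw audio is
    # needed for this to be useful to a profiling system.
    interaction = {
        "transcript": "set a timer for ten minutes",
        "timestamp": "2024-05-14T18:32:07Z",
        "device_id": "kitchen-speaker-01",
        "approx_location": "home",         # inferred from Wi-Fi or GPS
        "voice_profile": "adult_1",
        "background_audio": "tv_on",       # coarse acoustic label
    }

Joined with search history, purchases, and location trails, records like this become the raw material for the profiling described next.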

AI and big data enable “surveillance and profiling of users at a hitherto unseen scale.” What begins as innocuous personalization (“you may also like…”) can evolve into deep inferences about your religion, sexual orientation, health, or political leaning. These profiles then drive ad targeting, pricing, and sometimes discrimination. Even without being “listened in on,” our online activity is still turned into predictions about who we are and how to influence us.

The collapse of consent: why “ticking the box” isn’t enough

The issue isn’t just that our devices collect data, but that the systems governing how that data is handled haven’t caught up. Privacy laws and consent frameworks were written for a time when data exchange was deliberate and limited—when it was clear who was collecting what, and why. Today’s environment is continuous and networked, with information flowing across devices, platforms, and contexts.

The default defence of data collection is “you consented.” You accepted the terms when you set up your device, downloaded the app, or traded data for a discount. Yet philosophers and legal scholars increasingly argue that this kind of “notice-and-choice” consent is a fiction. Consent, meant to give individuals “control over potential negative effects,” has become largely theoretical. Users cannot realistically predict how their data will be recombined, inferred upon, or sold. It has become a symbolic click that transfers power without understanding.

Helen Nissenbaum’s theory of contextual integrity underscores this idea: privacy isn’t about secrecy, it’s about appropriate information flows. When data leaves its original context—say, a voice command in your kitchen—and reappears in an advertising or insurance context, the “consent” you gave no longer covers what’s being done.

Daniel Solove, a privacy scholar, argues that even fully informed consent cannot function in systems so complex that their consequences are unknowable. No matter how it’s obtained, consent isn’t meaningful without adequate understanding—and in a world of constant data mining, adequate understanding is practically impossible.

Research consistently shows that:

  • People do not read privacy policies and cannot possibly read them all.
  • The volume and complexity of data collection make it impossible to weigh costs vs. benefits.
  • Policies are lengthy, frequently updated, and data is shared across many entities.
  • Algorithms infer sensitive information from seemingly harmless data—things we would never consent to if we understood the inference.

No easy solutions: autonomy, paternalism, and the illusion of control

After two decades dissecting the limits of consent, Solove concludes that efforts to “fix” it don’t work. Making privacy notices more visible doesn’t make people read them; simplifying language often introduces ambiguity. Yet the main alternative—paternalism, where governments dictate data use—risks curbing autonomy.

In other words, consent doesn’t work, but not having consent also doesn’t work. Eliminating it would remove individual choice; relying on it gives only the illusion of control. Beneath that illusion lie the real problems of modern data collection: opacity, repurposing, and power imbalance—data collected for one reason can be used for another, without transparency or meaningful limits.

Solove proposes murky consent as a novel way forward. It acknowledges that perfect, fully informed consent is impossible, but treats consent as a partial and context-dependent justification backed by legal duties on organizations—such as respecting reasonable expectations, acting loyally toward users, and avoiding unreasonable risk. Companies cannot rely on perfunctory click-throughs; they must ensure their practices align with what a reasonable person would expect, or find another lawful basis for processing data.

As discussed in last month’s post, this reinforces the need for deployers of technology—not just designers—to take accountability. Murky consent is a kind of “consent-plus”: a model that combines personal choice with non-waivable duties for companies, designers, and regulators. It acts as ethical scaffolding—so even when users skip the fine print, the structure itself limits harm.

Structural duties could include (a rough sketch of how a few of these might look in code follows the list):

  • Purpose limitation: use data only for its original function.
  • Data minimisation: collect only what’s necessary and store it briefly.
  • Local first & short memory: keep computation on-device and delete data quickly.
  • Privacy by design/default: build privacy into hardware and software, with visible indicators when microphones or cameras are active.
  • No sensitive inference: prohibit analytics on sexuality, health, or politics unless explicitly approved.
  • True revocation: deletion must remove not just files but their influence on machine-learning models.
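
As an illustration only, under assumed and deliberately simplified rules rather than any specific law or product, the first two duties might be enforced in code like this:

    from datetime import datetime, timedelta, timezone

    ALLOWED_PURPOSES = {"voice_command"}   # purpose declared at collection
    RETENTION = timedelta(days=30)         # assumed "short memory" window

    def store(record, purpose):
        """Purpose limitation: refuse any use not declared up front."""
        if purpose not in ALLOWED_PURPOSES:
            raise ValueError("undeclared purpose refused")
        record["purpose"] = purpose
        record["stored_at"] = datetime.now(timezone.utc)
        return record

    def purge_expired(records):
        """Data minimisation: drop anything older than the retention window."""
        cutoff = datetime.now(timezone.utc) - RETENTION
        return [r for r in records if r["stored_at"] >= cutoff]

The details matter less than the shape: the limits live in the system itself, not in whether the user read a notice.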

These obligations align with moral reasons for limiting access to personal data, grounded in respect for autonomy and dignity.

Why this isn’t anti-innovation

There’s a temptation to frame privacy and innovation as opposites—guardrails versus progress. In reality, trust is the condition for adoption. People will only integrate voice assistants, health trackers, and AI systems into their lives if they feel those systems respect their boundaries.

The goal isn’t to stop listening—it’s to listen responsibly. Privacy isn’t about hiding; it’s about preserving the space to think, speak, and become ourselves without algorithmic interruption.

Alexandra Frye
The Digital Ethos Group

Alexandra Frye edits the Technology & Society blog, where she brings philosophy into conversations about tech and AI. With a background in advertising and a master’s in philosophy focused on tech ethics, she now works as a responsible AI consultant and advocate.
