Recently, US classrooms have dealt with several forms of authoritarian and dystopian policy, ranging from Texas A&M banning Plato to UNC Chapel Hill secretly filming faculty and students in the classroom. Now they may be facing digital authoritarianism. Since 2023, students, faculty, and staff have integrated various Generative Artificial Intelligence (GenAI) platforms into their day-to-day work. In particular, they have increasingly adopted AI note-taking apps like Otter.ai, as well as recording devices like smart glasses. As a result, individuals are increasingly recording faculty, staff, and students without consent or transparency, often targeting women and students and faculty of color.
Although many states, like West Virginia, are one-party consent states, the increased and unchecked use of AI recording devices at universities poses serious safety risks for campus communities across the United States. Academic freedom is essential for scholarly dialogue, and it carries a reasonable expectation of privacy in the classroom setting. Universities maintaining that right to privacy is even more crucial as Higher Ed in the United States faces sustained attacks. This entails addressing the growing problem of digital authoritarianism and the normalization of constant surveillance.
The University as the New Techno-Panopticon
I’m not arguing against ethical public shaming or questioning the importance of holding bad actors accountable for social norm transgressions. In her article “Deepfakes and the Epistemic Backstop,” Regina Rini notes that “Our awareness of the possibility of being recorded provides a quasi-independent check on reckless testifying, thereby strengthening the reasonability of relying upon the words of others.” Rini calls this an epistemic backstop. The ability to dispute narratives that aim to manipulate, subvert, or discredit true events through such epistemic backstops is important for combating those accounts.
However, recording on a phone or an approved accommodation device is relevantly different from an individual within the institution using an unregulated and free AI note-taking assistant like Otter.ai. For example, Glean is a widely used, FERPA-compliant note-taking platform with a clear data retention policy. Otter.ai, on the other hand, has invasive and unclear data use and collection policies.
Additionally, these AI assistants increasingly collect biometric data (e.g., voice, facial features), which contains socially sensitive information about the people around the student using the AI note-taker. One-party consent statutes in many states apply only to the recording of communications, not to the collection of biometric data. Likewise, it is not apparent that note-taking assistants like Otter.ai adhere to FERPA regulations. Regardless, Otter.ai remains one of the most popular AI note-taking platforms used in Higher Ed. Thus, there are serious privacy concerns with regard to safeguarding student data.
Smart Glasses and Campus Influencer Surveillance
University campuses already have several student influencers, with more aspiring influencers waiting to go viral. These influencer hopefuls replicate trending content in hopes of going viral and getting their first break. Increasingly, this includes “smart glasses” content, which is gaining popularity. In this content, a smart glasses wearer, typically male, approaches victims who do not know they are being recorded and trolls them to elicit responses that are either immediately uploaded to Facebook or Instagram or stored for the user to edit later. Meta glasses enable the former because they allow the user to instantaneously livestream content to Facebook or Instagram.
Adding to this dynamic, tech companies like Meta are increasingly integrating facial recognition into the glasses as well. This means anyone on campus could have a recording of them shared and uploaded online without their consent. More concerningly, the unsuspecting victim’s information could be linked to the wearer’s glasses with one click, allowing the wearer to later access even more personal data obtained through the glasses’ facial recognition capabilities. Not only does this raise serious ethical concerns around consent, but it also violates the right to privacy.
No one on campus should have to worry that going to class or attending a meeting, public lecture, or event could result in a secret recording of them being uploaded to social media so that the smart glasses wearer can go viral or get monetized.
If universities do not address this pressing surveillance issue, my worry is that it will create a hostile and unsafe learning environment, one that may also violate FERPA regulations and further undermine Higher Ed.
From Meta Glasses to Image-Based Sexual Abuse
This is not merely a speculative concern. Recently, X’s chatbot Grok came under fire after users of the platform produced and spread sexually abusive material depicting both women and children. Creating such deepfakes now takes only 3–10 seconds of audio and a single still image. Easy access to these deepfake platforms, coupled with the increased use of AI note-taking platforms and smart glasses, poses serious risks for anyone who visits a university campus. A single disgruntled individual could take a surreptitiously recorded audio clip from an AI note-taker or smart glasses, pair it with a photo of the victim easily found through Google, and upload that material to a deepfake generator.
Currently, most university policies lack adequate ethical guidelines to safeguard against such abuses. Instead, most universities’ AI policies continue to apply only to the limited use of GenAI platforms like ChatGPT, leaving crucial and detrimental ethical and legal gaps in guidance on AI usage and accountability. Universities, however, have the ability to take preemptive action and adopt ethical guidelines that take consent, transparency, and the right to privacy seriously.

Siobhain Lash
Siobhain Lash is a Teaching Assistant Professor through the Kendrick Center for an Ethical Economy in the John Chambers College of Business and Economics at West Virginia University.
Dr. Lash completed her PhD in Philosophy in two years at Tulane University under the direction of Chad Van Schoelandt, Oliver Sensen, and Caroline Arruda. Her work has appeared, among other places, in Constitutional Political Economy, Ethics, Policy & Environment, and Public Philosophy Journal. She works at the intersection of political economy, environmental justice, and urban ecology. Her research also focuses on ethics in business and information and AI.
