The Forefront of Research: The Canadian Society for Epistemology

Editor’s note: This post is the first in a new ongoing series for the APA Blog, The Forefront of Research. The purpose of this series is to draw attention to the work done at conferences by interviewing conference organizers, presenters, and keynotes about upcoming or recently finished conferences. Please contact us if you have ideas for this series.

The meeting of the Canadian Society for Epistemology (CSE) is a conference venue for new research in epistemology. The CSE was founded in 2003 to promote research in epistemology and to encourage the dissemination of that research in both French and English. Answering the questions for this interview is Jordan Walters, who is currently working on his M.A. in philosophy at Concordia University. Over the summer he is working with Dr. Ulf Hlobil as a research assistant; part of his work involves organizing the CSE’s 2019 conference. Although Jordan does not work directly in these areas, he was delighted to take part in the interview, with editing help from Dr. Hlobil.

Describe your conference. How often is it held, and what is its goal?

Usually, the CSE holds annual meetings. But we missed one last year. The topic of the CSE’s 2019 meeting is: Knowledge in the Digital Age. The goal of CSE meetings is to provide a venue for exciting new research in epistemology. We’re hoping to address some of the following questions at the CSE’s 2019 conference:

  • When and how can we gain knowledge by using digital technologies that involve opaque algorithms?
  • Is there a risk that we are building systemic epistemic injustice into our digital technology? Can AI commit epistemic injustices?
  • Does the digital age call for new belief-forming methods or new ways of assessing the epistemic quality of doxastic states, such as those formed on the basis of online searches or content from social media?
  • How should we use big data in our scientific theorizing?

We want to provide a rich interdisciplinary context for scholars to address these and similar questions.

As you said, the theme of the upcoming conference is “Knowledge in a Digital World: Epistemic Injustice, Bias, and other Challenges in the Age of Artificial Intelligence.” Why did you choose that theme, and what types of submissions are you hoping to receive?

Within the public sphere there is growing interest in digital technology and AI. And of late, philosophers have begun to take notice of some of the problems that AI will bring with it. While it might seem intuitive to think that these problems will, by and large, fall into the moral and political domain, we think that there are considerable epistemic issues at play as well. Thus, the topic of the 2019 CSE meeting seeks to tease out some of the epistemic problems that arise from AI and from digital technology more broadly. We hope to receive submissions on topics like epistemic injustice in the digital age, the epistemic role and risks of opaque algorithms, trust in online sources of information, and so on. We expect the collaborative spirit of this conference to bring together research on the epistemic risks associated with AI and digital technology. So, while it’s likely that our event will predominantly consist of philosophers, we hope that it will include work done from the perspective of other disciplines.

Your keynote speakers are Karen Frost-Arnold and J. Adam Carter. What about their work made you feel they would be good choices?

We are really excited that Karen Frost-Arnold and Adam Carter agreed to give the keynotes. Frost-Arnold’s work focuses on the intersection of epistemology and the ethics of trust. She has addressed questions such as: “What is the nature of trust?”, “What is the role of trust in knowledge, science, and the internet?”, and “How is trust betrayed and manipulated by individuals and institutions?” We think that Frost-Arnold’s research will orient the discussion of the conference theme, allowing participants to gain new perspectives on the role that trust plays in new AI technologies.

Adam Carter’s current research addresses some of the major themes of the CSE conference, such as: “Does the digital age call for new belief-forming methods or new ways of assessing the epistemic quality of search results?” and “When and how can we gain knowledge by using digital technologies that involve opaque algorithms?” He has written on topics like online self-radicalization and the moral consequences of the view that some of our cognitive states are realized in digital devices. Moreover, Carter’s important work on extended knowledge, education, and understanding-why is relevant when we try to think about knowledge and justification in a digital world.

The CFP and suggested topics use the term “epistemic injustice” in reference to new technologies multiple times. Describe examples of epistemic injustices you believe machines produce.

Take Facebook and Google. Both firms rely heavily on AI and machine-learning algorithms. Facebook’s algorithms organize your newsfeed according to what they take your preferences to be. And Google orders your search results according to what you have searched for in the past and what other people clicked on in similar situations.

We know that humans frequently commit epistemic injustices. We commit testimonial epistemic injustices when, e.g., we treat someone’s testimony as less trustworthy because she belongs to a marginalized or oppressed group. And we commit hermeneutical epistemic injustices when, e.g., we contribute to a lack of understanding of marginalized perspectives. That makes it plausible that some of people’s interactions with Google and the like manifest a tendency to commit epistemic injustices, as when someone doesn’t read a news source because the author belongs to a marginalized group. Now, the algorithms that select the information to which we are exposed potentially perpetuate such patterns. Simplifying: if, e.g., few people click on information provided by members of a certain group, information from that group may not appear high in search results or may not be selected for news feeds. In such cases, algorithms put the members of that group at a disadvantage as knowers. These and similar cases make it plausible that machines produce, or at least perpetuate, epistemic injustices. Such epistemic injustices may not be as egregious as, e.g., the well-known examples of chatbots turning racist. But the fact that epistemic injustices produced by machines may be difficult to detect makes them potentially even more harmful.
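To see the feedback loop in miniature, here is a deliberately toy Python sketch (purely illustrative; it does not describe any real platform’s ranking system, and the numbers are made up): a ranker that orders content solely by observed click-through rate will keep initially under-exposed content buried, even when users would in fact engage with it at the same rate.

```python
# Toy simulation (our illustration, not any real platform's algorithm):
# items are ranked purely by historical click-through rate (CTR). If one
# group's content starts out under-exposed, the feedback loop keeps it
# under-exposed, even though users would engage with both groups equally.
import random

random.seed(0)

# Each item: group label, clicks, impressions. Group B starts under-exposed.
items = [{"group": "A", "clicks": 5, "impressions": 50} for _ in range(5)]
items += [{"group": "B", "clicks": 1, "impressions": 50} for _ in range(5)]

def ctr(item):
    return item["clicks"] / item["impressions"]

for _ in range(1000):
    # Only the top five items by observed CTR get shown this round.
    for item in sorted(items, key=ctr, reverse=True)[:5]:
        item["impressions"] += 1
        if random.random() < 0.10:  # users like both groups' content equally
            item["clicks"] += 1

top = sorted(items, key=ctr, reverse=True)[:5]
print("Groups on page one:", [item["group"] for item in top])
# Typically prints all 'A': group B never gets the impressions it would
# need to demonstrate its true click-through rate.
```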

Interestingly, one probably cannot address the problem adequately by making information about group membership inaccessible to machines. After all, we know that machine-learning algorithms are very good at finding features (or combinations of features) that they can use as proxies for information that is not directly available to them.
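A minimal sketch of the proxy problem, again with fabricated data (the feature names and correlations here are hypothetical, chosen only for illustration): even when group membership is withheld, a correlated feature can let a model reconstruct it.

```python
# Toy illustration with made-up data (hypothetical features, not a real
# system): 'group' is hidden from the model, but a correlated feature such
# as 'neighborhood' acts as a proxy that recovers it with high accuracy.
import random

random.seed(1)

people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Residential patterns correlate with group membership: a visible proxy.
    weights = [9, 1] if group == "A" else [1, 9]
    neighborhood = random.choices(["north", "south"], weights=weights)[0]
    people.append((group, neighborhood))

# A "model" that never sees 'group', only the proxy feature:
def predict_group(neighborhood):
    return "A" if neighborhood == "north" else "B"

accuracy = sum(predict_group(n) == g for g, n in people) / len(people)
print(f"Group recovered from the proxy alone: {accuracy:.0%}")  # roughly 90%
```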

Epistemic injustice may become especially problematic when we defer to, or trust, the algorithms we rely on without being able to know how they arrived at their conclusions. Other basic ways of knowing don’t have this feature of opacity. Our perceptual beliefs are scrutable, to an extent. But trusting the end result of an algorithm, and indeed taking it to count as knowledge, might be problematic precisely because of the algorithm’s inscrutability.

No doubt, the consequences may be unimportant when you accidentally advertise something to a consumer who isn’t interested. But the consequences are much more significant when we start to rely on AI and algorithms as a means to knowledge in anything that carries real moral weight, such as medical diagnosis, law enforcement, or the judicial system. Given the opacity of the algorithm, it seems harder in such cases to trust it. The output, instead of being a subpar advertising campaign, might be a misdiagnosis. Or, even worse, it might be a medical treatment whose end goal is profit rather than healing the patient at the lowest possible cost. The worry, then, is that “the data might drive the medicine, rather than the other way around.” But given the obvious upside to implementing AI in medical diagnosis, the challenge is this: how do we remain responsible epistemic agents in this new era? What do we take as justification? And should we trust an opaque algorithm?

What is valuable about your theme is how much it overlaps with other areas of philosophy. Studies of politics, law, society, ethics, aesthetics, and, I would argue, even metaphysics and ontology are influenced by the digital world. Are you hoping the conference will illustrate these overlaps, and if so, how?

We would hope so! In fact, we think that there are two important connections to other philosophical fields here. As you mention, digital technology is everywhere and is, hence, relevant in many fields. But there is also the fact that epistemology and moral, social, and political philosophy are all centrally concerned with normativity, reasons, and justification. We hope that the topic of digital technology can serve as a point where insights about normativity, reasons, and justification from different subfields of philosophy crystallize and can be shared.

Now, being realistic about what can be accomplished in a single conference, we hope to start conversations and see what ripples out toward these other sub-disciplines. The obvious and immediate areas might be law and public policy. But given the diversity of work showing up in recent publications, such as The Routledge Handbook of Applied Epistemology, we think that the conference has the potential to engage with these overlaps.

What can attendees expect to see at the conference?

Like any other conference, we hope that attendees will see some great presentations and relevant current research. We have already generated some interest in the conference within Montreal, beyond the bounds of philosophy departments. So we hope to see some real interdisciplinary thinking at the conference.

What insights do you hope attendees will come away with?

Well, some of the problems and questions that I listed earlier might be on some attendees’ minds. We hope that some of the research being presented will contribute to addressing those questions and problems. But there is another possibility that I hope will come out of this. Montreal has a growing tech sector, and I think it would be interesting for people in the private sector (working for companies such as Facebook or Google) to take a walk down the street and see what concerns those in academia have about the technologies being developed. Getting some of the conversations that happen at this conference to flow into a boardroom meeting would be a great success, at least by my lights.

One more thing: while it’s likely that the bulk of attendees will be academics, I would hope that a few journalists come out to the conference. I think it’s important to showcase the work being done in this area. That means getting the research disseminated beyond academic journals and blogs. And while these are obvious starting points, I think it’s healthy to practice some form of “public philosophy” where possible. I recognize that it’s harder to do this in some sub-disciplines of philosophy. But with respect to epistemology and AI, I think it is slightly easier than in, say, philosophy of mathematics. Now, whether that amounts to philosophers or journalists translating their work for a different audience isn’t up to me. I just know that topics like this would interest the general reader of, say, The London Review of Books (LRB) or some other accessible periodical. Almost every time I pick up the LRB I see something philosophical, and it’s usually quite intriguing! I’m not going to speak for everyone, but it’s nice to see the discipline reaching outside the bread and butter of journals and books.
