Welcome to the first post of the Tech & Society blog series. Each month, this series will delve into the challenges, issues, and advancements in both established and emerging AI technologies. It will provide in-depth analysis of the latest developments from a multidisciplinary viewpoint, offering guiding insights from diverse fields and industries, all within a philosophical context.

It is sure to be interesting content, but why is this approach important?

The Need for Responsible AI

Paul Virilio, a philosopher, urbanist, and cultural theorist, famously said: “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution…” He argued that every new technology inherently brings with it the potential for failure and disaster. His “accident theory” held that such accidents are not anomalies or isolated incidents but an integral part of technological development, and therefore inevitable. This concept, that every invention necessarily involves risk, encourages designers to anticipate potential risks and failures as they work and to plan to mitigate them: not just by ensuring that the technology meets its primary objectives and aligns with broader business goals, but also by thoroughly investigating and understanding the deeper, indirect, and less obvious risks and harms to individuals and society.

What a fantastic idea! It seems like a no-brainer, yet this approach has rarely been put into practice during the past fifteen to twenty years of accelerating AI innovation.

AI offers incredible opportunities for enhancing human and environmental welfare. Exciting advancements are being made in many areas of life. However, we see the inherent risks and harms of technology every day. Rising suicide rates among young people, discrimination due to algorithm bias, and the unregulated spread of misinformation and disinformation online are only a few examples of the challenges we face as a society. Emerging technologies involved in areas like brain tracking, predictive policing, and autonomous weapons promise to bring even more worrying risks in the future.

Many of these negative consequences can be attributed to the rampant pace of AI development. Innovations in artificial intelligence have skyrocketed: unchecked, unregulated, and without consideration of (or perhaps even recognition of) the risks and harms inherent in them. The focus has been far more on innovation and profit than on aligning technology with human and social values.

This is changing. As AI technologies have become more integrated into our lives, and public awareness and critical examination of AI’s societal impact has grown, so has the recognition that we need to take back some control over how technology is shaping society. We see this in the development of the Responsible AI movement, which is now growing with some urgency. Look at groups like All Tech is Human, The Engineering Change Lab, and The Center for Humane Technology for examples of this.

The Need to Involve Multiple Disciplines

Historically, AI has generally been regarded as the exclusive domain of data scientists, much like accounting is seen as the responsibility of the finance team. This perception creates natural silos not only within organizations but also among the general public, with the idea that such specialized work should be left to the experts. However, with artificial intelligence capabilities growing more sophisticated every day, and influencing and shaping all pillars of society, what data science is doing, and how it works, is becoming everybody’s concern. We are all stakeholders in how society is being shaped, and we need to bring society’s collective knowledge and skills to bear on examining and understanding the complexities of the effects of technology.

Throwing more AI at the problems technology creates is not the answer. Some in the AI field think using “better AI” can resolve the issues: transparency tools, bias mitigation techniques, iterative improvement through feedback mechanisms, continuous monitoring, keeping a human in the loop, and integrating ethical frameworks into tech design. These tools can be helpful, and ethical frameworks are essential, but data science alone cannot fix the problems: such measures are necessary, not sufficient.
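To make the point concrete, here is a minimal sketch of the kind of statistical check one of those “better AI” tools might run: a simple demographic parity test on a set of model decisions. The column names, toy data, and the 0.1 threshold are illustrative assumptions, not a reference to any particular product or standard. A check like this can flag a disparity, but it cannot tell us what counts as fair or what should be done about it.

```python
# Minimal sketch of a demographic parity check on model decisions.
# "group", "approved", and the 0.1 threshold are hypothetical examples.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    gap = demographic_parity_gap(decisions, "group", "approved")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.1:  # illustrative threshold only
        print("Warning: outcome rates differ substantially across groups.")
```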

We need to be one step ahead of technological challenges instead of one step behind, which means proactively planning the development of technology. Data scientists and software engineers must involve multidisciplinary subject matter and discipline experts early in the tech design process. This includes careful consideration of both the direct and indirect potential consequences of any technology and taking steps in the design to mitigate them. To create AI systems that are not only effective but also ethically sound, the integration of deep, discipline-specific expertise is crucial. Subject experts, who understand the deeper issues within their fields, are needed to thoroughly examine potential outcomes and consequences. For example, it’s not just about data scientists collaborating with educators to build personalized education platforms; it’s also about involving cognitive scientists who understand child brain development and the impact on neural connections of reduced handwriting in favor of keyboard use. Similarly, it’s not just about health insurance personnel working with data scientists to create algorithms for recommending medical treatment; community health workers must also be involved to ensure that algorithm feature selection and weighting is carefully considered and does not inadvertently disadvantage minority groups by overweighting proxies for protected characteristics, such as zip codes standing in for race.
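As a concrete illustration of the proxy problem just mentioned, the sketch below checks how strongly a candidate feature such as zip code is associated with a protected attribute before it is allowed into a model. It is an illustration only: the column names, the toy data, and the 0.5 cut-off are assumptions made for the example, not a prescription.

```python
# Minimal sketch of a pre-modelling audit for proxy features, assuming a
# table of applicants with a candidate feature ("zip_code") and a protected
# attribute ("race"); names, data, and cut-off are illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Strength of association (0 to 1) between two categorical columns."""
    table = pd.crosstab(df[feature], df[protected])
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim)))

applicants = pd.DataFrame({
    "zip_code": ["10001", "10001", "90210", "90210", "60601", "60601"],
    "race":     ["X",     "X",     "Y",     "Y",     "X",     "Y"],
})
score = cramers_v(applicants, "zip_code", "race")
print(f"Association between zip_code and race: {score:.2f}")
if score > 0.5:  # illustrative cut-off; a real review would go much deeper
    print("zip_code looks like a strong proxy; reconsider including it.")
```

A single statistic like this is only a starting point; deciding whether and how such a feature should be used is exactly where community health workers and other domain experts come in.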

Philosophy as the Backbone of Responsible AI

The value of philosophy in this context goes without saying for members of the APA, but for external readers less familiar with it as an academic field: philosophy is one of the most rigorous disciplines, rooted in critical analysis, logical reasoning, and systematic questioning. As such, a philosophical approach to problem-solving is helpful because it rigorously questions assumptions, systematically explores possibilities, and considers diverse perspectives, ensuring comprehensive and well-rounded solutions. This approach, combined with the many philosophical doctrines, concepts, and logical frameworks available, is clearly invaluable in tackling tech challenges.

Philosophers in Industry

Murat Durmus, author and CEO & Founder of AISOMA AG (The AI & Data Analytics Experts), is an example of an industry professional with a strong leaning toward philosophical thought who uses philosophical principles to guide his work. In the same vein as this post, Murat believes we need more philosophers in the workplace; see his cartoon below. A software engineer by trade, he posts regularly about philosophy on LinkedIn and is well worth following as someone who champions the role of philosophy in AI development.

[Cartoon by Murat Durmus, used with permission from the author]

Another example of a business leader who uses philosophy in the AI field is Chris McClean, Global Lead in Digital Ethics for Avanade (an IT services and consulting company), where he is responsible for AI governance and responsible AI policy. Chris is also currently working towards a PhD in Philosophy & Applied Ethics and finds a philosophically trained mindset incredibly beneficial. He says that the more he studies philosophy, the more helpful he finds it in business settings: “There are obvious touchpoints these days with respect to technology ethics generally and AI ethics in particular. But the more you dig into these topics, the more you find tech is just an entry point into much larger arenas of business ethics and eventually political philosophy. For example, the pursuit of fairness in AI is important, but if your organisation doesn’t have a good understanding of what fairness looks like, you get stuck fairly quickly. Is the goal to give everyone a fair shot, to steer toward equitable outcomes, or to reverse historic injustices? That’s not something you can solve with algorithmic testing tools or good data science. That’s a complicated conversation at the executive level that would very much benefit from a philosophy mindset.”

Philosophical training hones critical thinking and analytical skills. As Chris points out, interrogating concepts like fairness and justice often raises more questions than it answers. A philosophical mindset, along with the knowledge gained through studying philosophy, is well-suited to navigate these complexities and see those questions through to a logical conclusion.

A Call to Reflection and Action

This series aims to deepen the conversation about the challenges of artificial intelligence by offering practical guidance for real-world applications. We need to translate the excellent research and thought in academic fields into actionable strategies on the ground. Nothing is slowing down in the world of artificial intelligence. OpenAI’s ChatGPT, for example, became the fastest-growing consumer application in history, with over 100 million users registering within just two months of its launch in late 2022. And despite recent media speculation that the hype around generative AI may not match reality, given its financial and environmental costs, there is no doubt that the technology has reached a significant milestone in capability, one that may trigger even more rapid growth.

Michael Wooldridge, a Professor of Computer Science at Oxford University and a veteran in the field since the 1980s, described the launch of ChatGPT as a “watershed moment” in technology, marking a “truly extraordinary time in AI.” Professor Wooldridge highlights the intense pressure on tech companies to bring generative AI products to market to secure competitive advantage. This scramble for market dominance is plain to see, and such a race inevitably undermines careful and considered planning. The upshot is that it is more imperative than ever to prioritize a responsible approach to AI development and ensure our ship doesn’t end up on the rocks.


5 COMMENTS

    • Thanks Marc. Great article. Good to see ethical guidelines and the move toward responsible implementation of AI across different fields and industries. It’s really important that businesses and organizations take it upon themselves to manage this, because industry regulation lags too far behind.

  1. I think two reasons why controlling AI will be difficult are: one, it will be difficult to maintain an evolving set of goals for AI containment when the money incentives are enormous; two, AI may reach capabilities in logical networking that challenge the human ability to understand the pathways and conclusions it can reach. First appearing in a short story he wrote in 1942, “Runaround,” Isaac Asimov’s Three Laws of Robotics were at the time a clever fictional attempt to confine robot behavior. Those rules seem naive now, and I would guess that changing verbal meaning into clear and comprehensive mathematical, programmable meaning is very difficult.

    • Thanks Frank. And yes, those three rules are somewhat simplistic with the technology we have today. Changing verbal meaning to a programmable meaning is difficult, and even if you translate the basic meaning successfully, artificial intelligence struggles with context and subtlety or nuance.

    • Indeed, a number of experts in AI note that they don’t know what’s going on inside the black box of AI. In addition, AI hallucinates (the word used by experts, speaking of linguistic issues), makes things up, and makes errors. AI programs are often built on large language models (LLMs), and whatever errors or prejudices exist in the LLM may be replicated in the AI. There is much work going on to correct or prevent errors in AI results, but it’s difficult. One proposal is adversarial: to set one AI program against another to ferret out errors. I don’t know how well this works.
