The Precarity of AI? One Philosophy Professor Discusses AI's Uncertainties
“I think we’re living in a time that’s rife with all kinds of precarity,” said Geoffrey Pfeifer, associate professor of global studies and philosophy at WPI. “Precarity of our knowledge systems, precarity of our political systems, precarity of our environment, precarity of jobs and work life.”
Precarity of artificial intelligence.
A philosopher by training, Pfeifer does not identify as an expert in artificial intelligence research. Yet, as is the case for most professionals today, the implications of AI are inescapable in his work, which focuses on social and political philosophy, global justice, and critical pedagogies. When he helped design and teach a critical AI micro-course for faculty earlier this summer, he began to grapple with the environmental and ethical problems posed by AI, particularly chatbots.
“They’re consuming so much energy,” he said. “You can imagine what this means for us when we think about the environment and climate change and fossil fuel usage. And people in the AI space are starting to say that we may see limits on AI’s ability to do what it does because we’re limited by its energy consumption.”
Pfeifer is not the only philosopher with concerns about where AI may lead. The annual Society for Philosophy in the Contemporary World (SPCW) conference, held July 17–20 at WPI, included multiple presentations that addressed the conference theme of “precarity” through AI. Pfeifer, coeditor of the SPCW’s Journal of Philosophy in the Contemporary World, hosted the conference. His two departments at WPI, Integrative & Global Studies and Humanities & Arts, cosponsored the event.
One conference presentation, by Richard Frohock, a PhD candidate in philosophy at the University of South Florida, examined AI chatbots and the precarity of our knowledge systems—in particular, what might happen when AI takes over the role of expert. In the same way that “food deserts” result from the absence of nearby grocery stores in certain communities, Frohock theorized that AI could lead to “expert deserts” where human experts in certain fields have been replaced by chatbots.
Pfeifer said his own questions about chatbots have bled into questions around extractivism, a term that he used to describe the practice of mining minerals for chip production primarily in the Global South. He said that the social and political implications of extractivism go beyond climate to things like land ownership, environmental degradation, and the impact on local residents.
These questions raised further ethical concerns around what Pfeifer called the hidden labor of training chatbots and generative AI models. For example, OpenAI and similar companies have employed workers in economically disadvantaged countries to manually filter out “all the horrific content you can find on the internet.” In recorded interviews, some of these workers report that they were traumatized by what they were exposed to, lost their families as a result of this trauma, and were inadequately compensated.
“These are global justice questions,” he said. “And this is all hidden stuff that we don’t see.”
The precarity generated by AI doesn’t stop there. Other ethical questions that are top of mind for Pfeifer include the lack of transparency in data collection and the perpetuation of biases in AI training data. But don’t mistake his caution for complete pessimism.
“Sometimes I catch myself sounding so negative. I think it’s less negativity and more trying to be critical and make sure that we’re paying attention to these things.”
“I don’t think it means that we have to reject AI or not train people to work with AI,” he added. “But these are issues that we’re going to need to confront as a society. And we should train our students to be aware of them so they can think about ways to minimize these harms as infrastructure gets built out.”
While many things in the contemporary world may be precarious, WPI’s capacity to train the leaders of the future is not one of them. The university believes that the key to shaping the future of AI is specialized education with a dual foundation in technical expertise and responsible practices. In late 2023, WPI announced the launch of its Master of Science in Artificial Intelligence, boasting a curriculum that was jointly crafted by industry leaders and faculty experts to address the same ethical considerations that Pfeifer and others have raised. This fall, the university will welcome its first cohort of graduate students in the program.
WPI faculty, too, are benefiting from AI training. The micro-course on critical AI literacy that Pfeifer contributed to was spearheaded by resident experts Gillian Smith, director of the Interactive Media & Game Design department, and Yunus Telliel, assistant professor of anthropology and rhetoric.