Social Science and Policy Studies Professor Richard Lopez Considers AI’s Impact on Human Development in WPI Podcast
Department(s):
Marketing Communications

In a new WPI podcast, Richard Lopez, assistant professor of neuroscience and psychology in the Department of Social Science and Policy Studies, explains that social interaction, learning things over time, and even confusion and failure are all crucial processes that need to be practiced and preserved as we enter a new age of artificial intelligence. Listen to the conversation; you may also read a transcript below.
Steve Foskett:
Artificial intelligence has quickly worked its way into business, industry, and our own daily lives. It can see patterns we can't. It can help us spit out emails, essays, and even full-length books with abandon. But in the rush to get answers about everything, are we bypassing a true understanding of anything? Hi, I'm Steve Foskett from the Marketing Communications Division at Worcester Polytechnic Institute. That’s WPI. This podcast brings you news and expertise from our classrooms and labs. Rich Lopez, assistant professor of psychology and neuroscience at WPI, is our guest on today's episode. And he isn't here to tell you to cancel your ChatGPT account or Amazon Alexa. In a recent essay he penned for the Templeton Foundation, Lopez, a social neuroscientist who focuses his research on emotions and mental health, made the case that we should approach AI as cautious adopters, while at the same time adding more enriching human interaction to our lives. Today we're going to talk about why he thinks key components of what makes us human are too important to be left to a large language model. Thank you for joining us today, Rich. It's great to have you on the podcast.
Rich Lopez:
Very glad to be here. Thank you, Steve.
Foskett:
What essential human needs are we in danger of bypassing when we use AI tools?
Lopez:
As human beings, when we process anything--when we're thinking about something, reflecting on something, or even interacting with another human being--we need time to process and to unpack what's happening. And that pace of how we engage with information, or even interact with someone, has to be honored, because it's how we evolved as a species and how our brains function: taking a beat, taking some time. And sometimes we need a lot of time. Sometimes we need hours or days to really properly reflect on something, or if we're learning new material, we can't accelerate that process, because it is just not consistent with how we have evolved cognitively. And even though ChatGPT and other tools like it are really stunning in their speed--how quickly they can present information, and how we can further interact with it and have a whole series of queries and interactions with AI tools--it's not consistent with how we're wired psychologically to really digest and chew on information.
Foskett:
One of the main selling points is the speed, but that's also one of the main drawbacks for us as humans.
Lopez:
Yes, I agree. And I think just knowing that and going into a chat session with ChatGPT or Gemini or whatever tool you're using, knowing that about us and knowing that that's how we're wired as humans, I think, can really help us even as we interact with something that can turn around information so quickly and faster than our brains could ever process it.
Foskett: Now there are two things that you talk about in the essay that I thought were really interesting and may resonate with a lot of people: the “liking gap,” and the “pleasure of finding things out.” Can you talk a little bit about where those two concepts come from and why they're relevant in the context of AI?
Lopez:
Well, the late famous physicist Richard Feynman would share in various interviews, and I'm thinking of one interview in particular, just the process of discovery and coming to know about something--not just knowing that something's true, but knowing how it works, how it functions in relation to other things. And he used the phrase "the pleasure of finding things out." And that's kind of similar to what I was speaking about earlier with respect to the process by which we come to grapple with things in the world, learn new information, and integrate that new information. And we want, ideally, to respect that process of how we find something out. So there's not just the gratification of getting to that point of knowledge, where now you know something you didn't really understand well before, but the emotional experience of moving through uncertainty, pushing your skill set in a new way, whatever it looks like. There is a real psychological kind of unfolding of gaining new information and insight, but also the experience of acquiring that information. And there's just no way, from a pure temporal perspective, that we can replicate or recover that if we're interacting with AI tools that are spitting information back to us within seconds. It's not physically possible to have that experience.
Foskett:
Is that what you meant when you were writing about answers without understanding?
Lopez:
Yes. So even in the classroom context, when I teach students here at WPI, many times-- and I understand this full well-- many times students will want to go to ChatGPT or another tool and begin to dig into a particular topic. And it could be a great starting place. But there's no substitute for sometimes months or years long study of a topic. And even the way graduate programs are set up, or PhD programs are set up--in order to really understand a topic in a nuanced, deep way, you need time. And there's no way around that.
Foskett:
You need time. And you need things to be awkward. You need failure. You need confusion. Talk a little bit about why it might be a hard sell to somebody to say, well, why would I want to go through all that?
Lopez:
Yes. And I think that ties in with another human need: the way that we grow as human beings is that we have to face challenges in order to become more resilient and to grow beyond what we were before. And I would argue that the negative things, the things that we're uncomfortable with--confusion, frustration, not understanding something--we absolutely need to experience, because how else can we push through to the other side? Anything worth doing or worth understanding, especially something complicated or hard--those emotions, those experiences, are part of the deal. And with AI, if we're just trying to get an information dump, the knowledge isn't going to be gained. And then we're not going to grow in ways that we arguably need to grow, or grow in ways that are most beneficial to us. Because if anyone looks back on their life, if you've lived any number of years--yeah, in the moment, if we're going through a struggle or we're trying to understand something, it's not pleasant. But that wasn't the end point. We want to move through that discomfort and get to the other side. And unless an AI tool is designed, or redesigned, in light of that process, we're going to miss out on all of that. And we're not going to grow in ways that would be really beneficial for us, holistically, as human beings.
Foskett:
So sometimes things just have to be messy.
Lopez:
Yes, these large language models are trained on human knowledge and growth and discovery that took a really long time to achieve and to build that. And we're not doing our predecessors justice with respect to previous scientists or other people who helped create and generate knowledge. We're not doing that justice if we encapsulate something down into a summary, because you lose the nuance. You have no access to the process that people experienced and engaged in to get to that point. And I'm not opposed to students or scholars or really anybody using these powerful AI tools. But we have to realize: what are we trying to get out of them? And is it just this transactional type of dynamic where I want this bit of information and you can deliver it back to me rather than having a relational dynamic, which I think is best achieved with other human beings? I mean, you could start with something with ChatGPT, but then go to a friend or a colleague or even a family member and say, hey, ChatGPT was telling me about this. What do you think? And then I think that is a potential area where we can coexist with AI well, where we can use it as needed or if we find it useful. But then go back to the human-human interactions as well.
Foskett:
And those human interactions, interestingly, you write about in the essay--something about the liking gap, where you talk about how sometimes we have a skewed perception of how much people enjoy our conversation or our company, and vice versa. Talk a little bit about that.
Lopez:
Well, the liking gap is this phenomenon, as you described, where we can be inaccurate in how we estimate other people's enjoyment of a conversation. But it sheds light on a broader phenomenon in human behavior, where we can be very myopic in our perspective: we might think a conversation is going really well, or not so well, but the other person's perception or experience can be totally different, because we're limited to our own point of view.
Foskett:
You point out how AI can bypass crucial processes of how we understand things. But are there ways that generative AI tools can enhance or supplement these processes?
Lopez:
It's a great question. I think the short answer is yes, as long as these AI tools aren't relied on exclusively. So if we're only relying on generative AI to come up with an idea or to stimulate a conversation or research a topic, and we're only using those tools, we're going to be limited, because we're at the mercy of however that particular tool and large language model has been trained. But if we put the results of a generative AI set of queries, for example, in conversation with expert knowledge on a topic or even the collective knowledge or expertise of a team, then we can bring the generative AI piece into that setting. And then it could stimulate another idea, because you never know what is going to trigger an idea or an insight. So I'm not opposed at all to people using these gen AI tools. But as long as they're integrating that into a broader kind of process of either understanding something or coming up with a solution, and that we use other tools as well.
Foskett:
So you'd have the generative AI tools giving you sort of the technical horsepower, but you'd still be able to get good practice at those human interaction tools, working in teams, bouncing ideas off one another, that kind of thing.
Lopez:
And in that way, too, if we're talking about team-based settings--and even here at WPI, which is very much about project-based learning, where students work in teams--there's no substitute for a team wrestling with a problem. And yes, they may bring some things into their conversations and interactions that--maybe they got an idea from ChatGPT. But then, because they're still working together, and it's that human-human interaction, maybe if they hadn't had that initial insight from ChatGPT, they themselves wouldn't have reached the next insight. So it could be generative in that way, in multiple stages. But the actual insight is generated by the humans. The pre-insight, if I could call it that, was provided by an AI agent.
Foskett:
At WPI, AI has become quite a big part of what we do here. We've been in data science for decades. We're teaching it. We're using it. We're researching it. It could potentially be solving problems out there in health care and education. It seems like what you're saying here is not that this is a bad idea, but that you're encouraging people to take a minute and understand what's going on beneath the surface.
Lopez:
Yes. We are moving so quickly through our day-to-day lives, through our schedules, through our routines. If we can get people to inject a pause before they fire up ChatGPT, before they go on Instagram, before they book the next thing on their calendar--because we feel this pressure to produce, to be busy, to be occupied in all these sorts of ways, but we aren't taking moments of reflection to say, well, why am I really doing this? What purpose does it serve? And I think anyone can relate to that. I see it certainly among students on campus, but I see it in myself. I see it in colleagues. I see it everywhere--the pace of modern life. I don't know if it's totally sustainable, given how human beings evolved over many thousands of years, and how our brains evolved in a very different context than the one in which we find ourselves now. So it's that pace, that franticness with which we are leading our lives. And generative AI is just another thing that feeds our modern pace.
Foskett: And it seems like-- you talked a little bit about the notion of time, where we don't have enough time. Things are so busy. And yet, we spend so much time with these AI chatbots.
Lopez:
Yes.
Foskett:
On Instagram, on TikTok. There's a little bit of a conflict there. You're trying to save time, but you're spending time. I think you talked about that a little bit, calling it displacement.
Lopez:
Right. And I think that's a pretty compelling argument: the issue with technology, whether it be social media, AI, what have you, is that, yeah, we do find time for it. And in some cases, we might spend too much time using these different tools. But at the end of the day, whatever we choose to do, we could be doing something else. And that's another part of the article where I spoke to this: it's important to remember the things that really nourish us and really promote our well-being. What those things look like are usually face-to-face interactions and human connection. And also being outside, not being indoors with technology for untold hours. And also, again, self-reflection--being able to be a reflective person consistently in how we approach our life, the values we have, the goals we have, and making sure we create space where we're not doing anything, whether it be work or using technology, and we just have time to be with our own thoughts, because so many things are competing for our attention and our time.
Foskett: What about AI do you feel holds promise? And what about AI are you most fearful of?
Lopez:
So AI can help us accelerate certain processes with respect to large-scale problems, whether it be climate change or developing new drug therapies, or even more seemingly trivial things like planning a children's birthday party--there are certain tasks that we can offload to AI. And as far as my fears, my greatest fear related to AI has to do with an overreliance: thinking that we can get all that we need from just using this technology, and not thinking more broadly about human connection and human interaction.
Foskett:
Those are muscles that need to be exercised on a regular basis.
Lopez:
Yes, with fellow human beings.
Foskett:
Rich Lopez is an assistant professor of psychology and neuroscience at WPI. Thank you so much for sharing your perspective with us today.
Lopez:
You're welcome. I really enjoyed this conversation.
Foskett:
You can check out the latest WPI news on Spotify, Apple Podcasts, and YouTube Podcasts. You can also ask Alexa to open WPI. Thanks for listening.