“This Technology Is Only as Creative as We Are”: Q&A With Healthcare AI Executive Andreea Bodnari ’10

Bodnari reveals what applications of AI she’s focusing on, what excites her about the healthcare AI industry’s future, and how she believes humans can best use this technology.

Headshot of Andreea Bodnari

In the economy of the not-too-distant future, where artificial intelligence (AI) systems augment the majority of our work, how will humans stay competitive? The answer, according to Andreea Bodnari ’10, is by reading. 

We sat down with Bodnari, founder and CEO of ALIGNMT AI, a start-up that provides healthcare companies with automated AI risk monitoring and compliance, to talk about her journey with AI and her vision for the future. A former leader of healthcare AI products at Google Cloud and UnitedHealth Group, she serves as a member of the Worcester Polytechnic Institute (WPI) Executive Advisory Board for Data Science and Artificial Intelligence.

What originally sparked your interest in artificial intelligence, and what role did WPI play in that?

My time at WPI was instrumental in shaping my path. As an undergraduate student, I had the incredible opportunity to begin conducting research and developing practical applications of artificial intelligence in healthcare. This involved collaborating with UMass Medical School on a research project. Our goal was to build clinical decision support tools. These were essentially probability-based systems that analyzed data from pancreatic cancer patients. By comparing a patient’s profile and medical history to similar cases, the system could predict potential outcomes of specific surgical procedures.

My experience at WPI and close collaboration with UMass Medical School proved to be an invaluable foundation for my development as a researcher and contributor to the field. The relevance and impact of our published research solidified my passion and prepared me for the rigors of a top PhD program, ultimately leading to my acceptance at the Massachusetts Institute of Technology.

At MIT, I ended up specializing in natural language processing. At the time, my foray into NLP was a bit serendipitous: I’d have liked to focus more on proteomics and clinical biomarkers, but the research funds available were for digital biomarkers, specifically clinical NLP. So, I went in that direction. It was fascinating because it was still a fresh field at the time—not a lot of applied research available to build upon, not a lot of techniques that could help us build practical applications. It was a very good focus area to go after.

What kind of AI systems are you using in your work today?

I’ve been more focused on building AI systems that enhance human capabilities. That kind of augmentation can come from different types of intelligence. On a basic level, it might involve monitoring data streams with rule-based AI implementations. For instance, if a patient’s heart rate exceeds 100 beats per minute, the AI system could trigger an alert for a nurse. 
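As a rough sketch of what that kind of rule-based augmentation can look like in code (the data model, threshold constant, and notification hook below are illustrative assumptions, not a description of any production system):

```python
from dataclasses import dataclass

# Hypothetical rule-based monitor. The 100 bpm threshold comes from the example
# above; everything else here is an illustrative assumption.
HEART_RATE_ALERT_BPM = 100

@dataclass
class VitalSign:
    patient_id: str
    heart_rate_bpm: int

def notify_nurse(patient_id: str, message: str) -> None:
    # Placeholder for a real paging or messaging integration.
    print(f"ALERT for patient {patient_id}: {message}")

def check_vitals(reading: VitalSign) -> None:
    # Simple rule: escalate when heart rate exceeds the configured threshold.
    if reading.heart_rate_bpm > HEART_RATE_ALERT_BPM:
        notify_nurse(
            reading.patient_id,
            f"heart rate {reading.heart_rate_bpm} bpm exceeds {HEART_RATE_ALERT_BPM} bpm",
        )

check_vitals(VitalSign(patient_id="demo-patient", heart_rate_bpm=112))
```

The appeal of a rule this simple is that it is fully transparent: a nurse can read the threshold and see exactly why the alert fired.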

And then there are more advanced cases of AI deployments we’ve worked on where these AI systems have to do some clinical reasoning. For example, with a prior-authorization request, a human might need to interpret hundreds of pages of clinical evidence through the prism of published clinical best practices. When you build AI augmentation techniques for that type of use case, you’re helping present and summarize the data, and also helping the human reason in areas where we’re known to have psychological biases because of how our brains work.

Our brains excel at specific tasks, but not all. Analyzing massive datasets to draw conclusions is not our forte. We have limitations in data storage and processing capabilities. This is where AI can excel, complementing human strengths and overcoming weaknesses.

What’s one of the big challenges facing the AI industry right now? 

The industry is asking for more transparency into how AI systems are operating because transparency is fundamental to acceptance. It’s fundamental to trust. Sometimes we discover that our AI computational techniques have been accurate and have been performing well for quite a while, but the technology was left on a shelf because people didn’t trust it. 

There are three key elements to achieving transparency in AI systems: understanding what the system did (the output), how it did it (the process), and why it did it (the rationale).

The “what” is obvious. You see the output. Most systems output something—a recommendation, an analysis, a number. The “how” is also commonly explained across industry applications today. Most people can say, “This is how I designed my system.” It’s about the data that went into the system and the processing steps that the system was doing to manipulate the data. 

The “why” is the hardest thing to prove or to explain programmatically. For the more sophisticated machine learning algorithms, it might even be impossible. And that’s something we have to come to a consensus about as an industry and as a society as we put more of this AI technology into live production settings.

We readily accept a doctor’s recommendation, even if it seems unusual, because we trust their expertise and believe there’s a good reason behind it. When a doctor tells you, for example, that you have to test for Lyme disease even though you’re showing up with symptoms that are not directly related to this condition, you trust the doctor and say, “There must be a reason.” But right now, if an AI system comes up with a suggestion that looks a little off the beaten path, we don’t trust these machine learning algorithms enough yet to say, “There must be a reason.” There is a reason behind the AI’s output, based on its data analysis, but it’s often opaque. We hold AI to a higher standard of explanation than we do human performance.

But if you control risks in AI implementation settings and you have a way to measure clinical outcomes, then even when you can’t explain the logic for how the AI reached a decision, a decision that leads to good outcomes is ultimately what you care about. And that is what I’m focused on right now. I started a company called ALIGNMT AI, where we automate compliance and risk mitigation requirements for AI applications in healthcare. Our platform establishes clear measurement standards and continuously monitors both the system’s and the healthcare professionals’ performance. This data will be crucial as we define what constitutes “good performance” in AI for healthcare.

Do you see any other major hurdles that we’ll have to overcome with AI, particularly in the healthcare space?

Even if the AI systems were working to the right level of quality, and even if society were to trust these intelligent systems, there’s still a practical question of, “How do we build the infrastructure to deploy this technology at scale?” 

For AI technology to work, it has to integrate with hospitals, clinics, physicians’ offices, and different types of data storage systems. The interoperability of these interfaces is a challenge today since information is not encoded in the same way across the board. We need further investment in data standards.

For example, humans exchange information through language—language is the data exchange standard. When a doctor collects clinical information from a patient, the doctor asks questions in order to map the information from the patient into their own data standard. As ambiguous and noisy as it is, that process works and it’s the common way the world runs in terms of exchanging information. 

We don’t have that same interoperable at-scale platform on which to deploy these AI systems. There’s still a railway to build for this AI to percolate and have a global footprint. And that will be a challenge. It’s something the industry has been working on for quite a while, but there’s still a lot of work ahead of us.
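To make the encoding problem mentioned above concrete, here is a minimal sketch of the normalization glue that often sits between systems today; the field names and record formats are invented for illustration and do not correspond to any real clinical data standard:

```python
# Two hypothetical source systems encoding the same blood pressure reading differently.
hospital_a_record = {"bp": "120/80", "unit": "mmHg"}
hospital_b_record = {"systolic_mm_hg": 120, "diastolic_mm_hg": 80}

def normalize_blood_pressure(record: dict) -> dict:
    """Map either made-up source format onto one shared representation."""
    if "bp" in record:
        systolic, diastolic = (int(x) for x in record["bp"].split("/"))
    else:
        systolic, diastolic = record["systolic_mm_hg"], record["diastolic_mm_hg"]
    return {"systolic_mm_hg": systolic, "diastolic_mm_hg": diastolic}

# Both records collapse to the same canonical form an AI system could consume.
assert normalize_blood_pressure(hospital_a_record) == normalize_blood_pressure(hospital_b_record)
```

Shared data standards aim to make this kind of per-system translation unnecessary.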

What were some of your key takeaways from your time at Google Cloud and the healthcare AI product you were managing?

My time at Google Cloud highlighted the importance of translating cutting-edge technology into business solutions and actively educating the market about the inherent value proposition of innovation.

At Google, we built amazing AI systems, but they weren’t catching on as quickly as we’d hoped. The issue? We weren’t presenting them as solutions to specific business problems. Think of it like this: A hammer is a great tool, but it’s only useful if you know how to use it to build something, i.e., solve a problem. Similarly, customers weren’t sure how our AI could be applied in their own situations to improve efficiency or achieve their goals.

Even though Google had cutting-edge research ready for market, we weren’t having conversations about the problems it could solve or the best places to use it. For example, we launched technology powered by large language models years ago, but we didn’t make a big deal about it. Part of this was to protect our intellectual property but, frankly, people also just weren’t aware of the potential.

Fast-forward to today, and everyone’s talking about ChatGPT, generative AI, and large language models. This is because OpenAI did a fantastic job of educating the public about this technology. By understanding its potential, companies can now consider how it can integrate with their work and make purchasing decisions they might not have made before. OpenAI’s approach to consumer education has been a real eye-opener.

What gets you excited about the future of AI?

I see a positive trend in the recent regulations for AI, which emphasize transparency and trust. In the US, the government passed a strong set of compliance frameworks for AI in healthcare at the end of 2023. Around the same time, the European Union passed a legislative act for AI that touches on both commercial and consumer applications. In addition, there are several global standard-development bodies and industry consortiums looking at defining risk mitigation safeguards for businesses and consumers. 

These comprehensive government regulations will accelerate the adoption of safe and reliable AI in healthcare. As these AI compliance frameworks are implemented, we’ll see increased consumer education, along with stronger infrastructure for monitoring and auditing AI deployments. This will ultimately lead to a surge in the adoption of trustworthy AI in healthcare.

Currently, consumers often interact with AI unknowingly. For instance, AI may assist radiologists in diagnosing your condition, but you might not be aware of it. The new compliance rules promote transparency. They require informing patients about AI use and providing access to information on how the system works and performs. Ideally, when you visit a hospital, you should be able to understand what technologies are used behind the scenes, allowing you to make informed decisions about your care.

Beyond education and transparency, what other issues will compliance frameworks address?

With the upcoming AI compliance rules, we’re also seeing more scrutiny on the cybersecurity and risk mitigation elements behind AI implementations. How are you ensuring the safety of the user across all circumstances, even when an adversarial attack happens? How can you build the safeguards for your AI system to have some form of notification layer or some form of “unplug” moment when those attacks happen?
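One way to read that “unplug” idea is as a circuit breaker wrapped around model inference. The sketch below is a hypothetical illustration, with the anomaly score, threshold, and notification hook all assumed rather than drawn from any specific product or framework:

```python
class AICircuitBreaker:
    """Hypothetical safeguard: stop serving AI outputs when an anomaly/attack detector trips."""

    def __init__(self, anomaly_threshold: float = 0.9):
        self.anomaly_threshold = anomaly_threshold
        self.tripped = False

    def allow_inference(self, anomaly_score: float) -> bool:
        # Trip once the score from some upstream detector crosses the threshold,
        # and stay tripped until a human explicitly resets the system.
        if anomaly_score >= self.anomaly_threshold:
            self.tripped = True
            self.notify_operators(anomaly_score)
        return not self.tripped

    def notify_operators(self, anomaly_score: float) -> None:
        # Placeholder for a real alerting integration (pager, incident ticket, etc.).
        print(f"AI system disabled: anomaly score {anomaly_score:.2f} crossed the threshold")


breaker = AICircuitBreaker(anomaly_threshold=0.9)
if breaker.allow_inference(anomaly_score=0.95):
    print("run model inference")
else:
    print("fall back to the human-only workflow")
```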

The current approach to risk mitigation in AI for healthcare often falls short. While terms like “risk mitigation” might be used for activities like annual staff training, true risk mitigation requires a holistic approach. It has to connect legal governance and technical implementation, to the level where you monitor your systems for health equity, assign owners for disaster recovery, maintain data observability across AI implementations, and so on. It all comes down to thorough execution.

WPI recently launched a Master of Science in Artificial Intelligence program with 13 areas of specialization. What do you think of AI as an emerging discipline?

As AI continues to evolve, it’s becoming increasingly interdisciplinary. Computer science, data science, and machine learning remain the foundation, but some new dimensions now need more of the limelight when we talk about AI: ethics, human-machine collaboration, and, of course, regulatory affairs.

Ethics is a critical area in AI development. It deals with complex questions, both philosophical (i.e., subjective) and technical (i.e., computational). A fascinating area of research called “alignment” is exploring how to integrate the ethical principles we value as a society into the very design of AI systems. This is a crucial aspect to consider when training the next generation of AI professionals.

What do you think the future of work will look like for people who are early adopters of AI versus people who are afraid of it or don’t trust it yet?

In general, you have an advantage in the workplace and in the marketplace if you adopt innovative technology because typically it makes you more efficient. You might become faster at your job, better at your job.

At the same time, generative AI, as it’s typically used today as a productivity booster, only gives you a low-quality first draft. As a user of generative AI, you still need to know how to bring that first draft closer to a masterpiece. But taking that journey from the first draft to the masterpiece is something people can do today only because they’ve gone from zero to one, and then from one to something close to a masterpiece, many times before. If people start to go into the workplace, or even into college, generating content always assisted by AI without ever having to create that first draft themselves, I don’t know whether those people will have the intellectual muscle or the techniques under their belt to take that AI-generated draft to the next level.

And that is the question: How are we going to evolve as humans when we’re interacting with this AI technology? And are we raising the bar or lowering the bar for humans?

For current students and professionals who don’t necessarily want to specialize in AI but who want to stay competitive and relevant in their respective fields, what advice would you give them?

What I would tell people is to read. But not to read the web—to read books. Because what sets us apart as humans is our ability to reason and to have a different type of intelligence that is not based on large amounts of data. That type of intelligence is shaped through abstract thinking and conceptual thinking, which is what you get from reading. Nothing reaches the level of training your brain gets when you read a book. The more advanced the concepts you read, the better.

So, that’s what I would tell people because that’s how we will continue to advance our thinking as individuals and identify new ways to shape this AI technology. It’s only as creative as we are as humans. If we don’t have advanced thought patterns to imagine the future, then AI will only work to the edges of our imagination.

 

This interview was edited and condensed.

 
