AI Unleashed - A discussion about AI safety in healthcare

Ali Devaney, Marketing
December 8, 2023

Centaur Labs CEO and cofounder, Erik Duhaime, sat down with Gianni Samwer from AI Unleashed, to discuss AI safety in healthcare and life sciences.

What you'll hear

🌐 Leveraging the collective intelligence of skilled groups for healthcare AI

❌ The challenges of using healthcare data in AI development - fragmentation, anonymization etc.

👩‍🎓 Passing US medical exams vs. being an excellent doctor

📈 Trends - Model monitoring, disclosures, multimodal development, data tokenizing

❓ AI…atom bomb, or internet?

📚 Recommended reading to keep up with the latest in AI

Transcript

Gianni Samwer [00:28] - Hello, and welcome back to AI Unleashed. Today I'm very excited to speak to an individual who holds a PhD in management from MIT and an M.Phil in human evolution from the University of Cambridge. After his studies, he co-founded Centaur Labs, where he's now CEO. Centaur Labs is changing AI development, specializing in data labeling for medical and scientific datasets. Using a gamified approach, they leverage a network of medical professionals to deliver data labels. His company is backed by influential investors like Matrix Partners and Y Combinator. I now welcome Erik Duhaime to the podcast.

Hello, Erik. Thank you for coming on the podcast. It's lovely to meet you. I wanted to start off with you just telling me a bit about yourself, how you got started in the AI field, and how you came to found Centaur Labs.

Erik Duhaime [01:20] - Yeah, thanks for having me. About five or six years ago, I started the company based on my PhD research. I was based at the Center for Collective Intelligence at MIT. I was interested first in humans, not necessarily artificial intelligence. I was interested in how you can use information technologies to enable humans to cooperate in new ways. I was interested in what is known as the “wisdom of crowds” phenomenon. You can ask a bunch of people how many jelly beans are in a jar or how much an ox weighs, or something like that, and if you average their opinions, it's normally better than the best person in the group.

But I was interested in how you do that for skilled tasks. If I want to know if I have brain cancer and I've got 100 random people, well, I'd probably rather just ask the doctor! But if I have multiple doctors, who do I want to trust? And then beyond that, I was interested in, well, if I envision AI is maybe just one member of the group, how do you best aggregate the information from algorithms and people together?

I started running some experiments on how to optimally combine the opinions from people with each other, as well as people with algorithms, for questions like classifying dermatology conditions and found that I could outperform board-certified dermatologists at classifying skin cancer. So I decided to launch a company where we leverage skilled crowdsourcing and artificial intelligence to analyze all sorts of medical data - like medical images and text data as well.
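
A minimal sketch of the “wisdom of crowds” effect described above - a simulation with invented numbers, not data from any real study:

```python
import random

# Illustrative simulation only: 100 noisy guesses at a jar of 1,000
# jelly beans. Averaging independent guesses cancels out much of the
# error, so the crowd estimate beats most individual guessers.
random.seed(42)
TRUE_COUNT = 1000
guesses = [TRUE_COUNT * random.lognormvariate(0, 0.4) for _ in range(100)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)
beaten = sum(abs(g - TRUE_COUNT) > crowd_error for g in guesses)

print(f"crowd estimate: {crowd_estimate:.0f} (off by {crowd_error:.0f})")
print(f"the average beats {beaten} of 100 individual guessers")
```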

Gianni Samwer [02:56] - Yeah, that's really interesting. I'm interested to find out what works best. Is it the best AI? You mentioned how the average of humans' opinions is better than the best person's. But when you bring AI into the mix, how does that change? Or is it the same?

Erik Duhaime [03:13] - Yeah, good question. First, the best way to aggregate human opinions is actually not just to take the five best people. I often liken it to putting together the optimal trivia team. On the optimal trivia team, you don't just want the five best people, you want someone who knows sports and you want someone who knows pop culture and someone who's good at history. The key is to identify the ways in which people's subskills combine. 

The same is true when you're aggregating multiple human opinions with artificial intelligence. So if artificial intelligence is really good at math, well, now I don't necessarily care if I have a human who's good at math on my team. Instead, I want to make sure that I have someone who's really good at history. So I'd say it's not so much about finding the best algorithm or the best person. It's about finding the best team of people and algorithms in order to get the right answer.
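
A toy sketch of the “optimal trivia team” idea above: score candidate teams by their combined coverage of subskills, rather than picking the highest individual scorers. All names and numbers here are invented for illustration:

```python
from itertools import combinations

# Invented per-category accuracies for four hypothetical team members.
candidates = {
    "generalist_ai": {"math": 0.95, "history": 0.60, "sports": 0.60},
    "historian":     {"math": 0.40, "history": 0.90, "sports": 0.30},
    "sports_fan":    {"math": 0.35, "history": 0.40, "sports": 0.92},
    "polymath":      {"math": 0.80, "history": 0.75, "sports": 0.70},
}
CATEGORIES = ["math", "history", "sports"]

def team_score(team):
    # Assume the team defers to its strongest member in each category,
    # so complementary specialists beat a team of similar generalists.
    return sum(max(candidates[m][c] for m in team)
               for c in CATEGORIES) / len(CATEGORIES)

best_team = max(combinations(candidates, 3), key=team_score)
print(best_team, f"coverage: {team_score(best_team):.2f}")
```

With these numbers, the winning team drops the well-rounded polymath - the highest individual scorer - in favor of the two specialists plus the generalist: the best team is not simply the three best individuals.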

Gianni Samwer [04:11] - Yeah. So it's important to have specialized algorithms that each focus on one field. And then, let's say, if there's no algorithm for history, then there's a human who's a historian or something like that.

Erik Duhaime [04:26] - Exactly. We're seeing a lot of that - for example, ChatGPT is really good at a lot of things. It's a very good generalist. But if you want to apply ChatGPT to the healthcare domain, there are certain tasks where you don't want to trust ChatGPT, at least not right now. There's research out there showing that smaller models that are specialized on healthcare data are better for a wide range of use cases. 

Then, of course, we haven't yet replaced all doctors with ChatGPT, and I don't think we're going to anytime soon, but ChatGPT can still provide a lot of value for doctors in different ways. Again, it's about finding the right combination of people and algorithms to get the job done.

Thinking about a job as a series of tasks or a combination of skills, you want to put together the right pieces. AI is good at some parts of the task. Some people are better at other parts of the task. It's about finding the right combination.

Gianni Samwer [05:20] - Yeah, and I mean, in healthcare, data is obviously crucial. Can you maybe elaborate on the challenges posed by bad data or unstructured data?

Erik Duhaime [05:35] - To say more about what we do, by the way - when I started the company, at first I wanted to be the diagnostician. I thought, “Okay, well, I can analyze a medical image better than a doctor and better than an algorithm, so maybe I should just be the diagnostician.” But we got enough feedback early on - thinking through who's going to sue you, how are you going to get reimbursed, et cetera - to move away from that. And then we stumbled into this model where what we do is annotate data for companies that are developing artificial intelligence in the medical and life sciences.

So in order to build an algorithm, you need a lot of data, and you typically need that to be well structured and annotated accurately. Now, the challenge in healthcare, as you alluded to: first off, there's a lot of data in healthcare. I'm not sure of the exact stat, but it's something like 30 or 40% of the world's data is healthcare data, and it's an increasing share every year. And we're generating more data than ever. Again, I'm not sure of the exact stat, but it's something like, in the last three years, we've generated more data than the prior 30 years. So we're generating an enormous amount of data, and a lot of it, especially in healthcare, is messy. It's messy for a lot of reasons.

First off, it's fragmented. At least in the United States, it is an absolute disaster to switch providers, and then they often can't access your data from your last doctor. So you often need to try to keep track of a lot of that yourself. And it's not all in one place. Different providers and different parts of the system are using entirely different standards, different coding mechanisms, etc. 

And then a lot of the data in healthcare is inherently unstructured. So it's text data - it's a note written by a doctor that has a bunch of rich information in it, but it's unstructured text data, so it is difficult for AI to handle. Same with image data - you've got an image, but to an algorithm, it's a series of pixels, and you might need to teach the algorithm how to identify certain shapes in it, whether that's a lung nodule or something else. So all this data in healthcare - this incredible amount of data - has enormous potential for AI, but the key bottleneck is that most of it is unstructured and poorly annotated. And as they say in AI, it's “garbage in, garbage out”. So the key to unlocking value and building state-of-the-art artificial intelligence in healthcare is combining data sources, structuring them, and annotating them.

Gianni Samwer [07:59] - Yeah, I totally agree with you, especially because in one of my summer courses I actually coded a model predicting whether a patient has diabetes or not - a binary classification. I also had to spend a lot of time cleaning the bad sections out of the data. Do you think that's the main thing holding AI back in healthcare? Or is it also the ethical considerations - people not wanting an AI to tell them they have cancer, or something like that?

Erik Duhaime [08:36] - Those are both factors, but the data annotation bottleneck is the first step. Sure, if the models are better than a doctor, it doesn't necessarily mean people are going to want to use them. But in the long run, you have all sorts of solutions where people are very happy to use an artificial intelligence algorithm - areas like drug discovery where I don't necessarily care if AI helped to identify the drug in the first place as long as it went through the traditional human trial. 

There are lots of opportunities where the psychological or regulatory concerns, while important, are secondary. But in all of these domains, the bottleneck of having skilled human data annotators is the first-order problem. Let's first build algorithms that are undeniably helpful, and then we'll see adoption come.

Gianni Samwer [09:35] - Yeah. Did you, by any chance, see that Google created a new AI which passes the medical exam in the United States and outperforms doctors?

Erik Duhaime [09:46] - It's incredible - it's great news, a great proof point. Though I also want to express caution. I often joke - my wife's a doctor, she's a surgeon, and a lot of becoming a doctor is passing the medical exam, but that's not a lot of what makes you a good surgeon at the end of the day. It goes back to the earlier conversation: being a doctor is a lot of different tasks, and an algorithm might be really good at answering multiple-choice questions, but that doesn't mean it is yet as good at communicating with the patient or cutting them open. It's great that my wife can now easily prompt a system and say, “Hey, what are the risk factors concerning A, B, and C?” and maybe get a better answer than if she asked her colleague. But just because the system can pass the medical exam doesn't mean that doctors are doomed yet, so to speak.

Gianni Samwer [10:51] - Yeah, I totally agree. In the realm of AI and healthcare, what are some notable trends or advancements that you believe will have an impact in the near future? Because one thing I can see in the future of AI and healthcare, from what you've told me, is AI and doctors working together - like you mentioned with your wife - where you put in data from the patient, the AI more or less predicts what the patient might have, and then the doctor goes and speaks to the patient and does the human aspect of the treatment. Do you think this will be the future?

Erik Duhaime [11:33] - Doctors are not going to be replaced - but I do think that the jobs are going to change. New technology like AI doesn't necessarily replace work, but it does change how work is organized.

A lot of what's exciting is, ironically, that AI has the potential to make healthcare more human. Right now, doctors spend a lot of time writing down notes and looking at their computer screen instead of looking patients in the eye and talking to them. There's a company called Abridge that's working on automating the creation of notes based on a patient-doctor conversation. That's one example where what that's going to do is free up doctors' time to actually focus on being empathetic, making sure they ask the patient the right questions, and being more caring and in the moment with the patient, rather than spending their whole time writing the note.

There's going to be a lot of change, and with a lot of change, there's going to be tensions. But in the long run, you’ll have the best of both worlds - patient care is going to be made better because you have AI helping and humans are better able to focus on where they add the most value.

Gianni Samwer [13:06] - I think that's a really important factor you mentioned - that AI will drastically increase the efficiency of doctors, so they no longer have to write everything down or spend their time away from the patient. Obviously, healthcare is very important, and I think one thing people are stressing about is AI safety - what happens if AI is wrong in healthcare, where human lives may be on the line. How do you see AI safety in healthcare playing out? Will there be rules and regulations? What will happen in that field?

Erik Duhaime [13:56] - Right now, the number of FDA approvals for AI products is increasing rapidly. The FDA has done a decent job, given how quickly things are changing, of trying to be forward-thinking and letting products out there.

I also think that model developers are doing a decent job, and researchers certainly care about the development process in the first place - making sure that you're training on diverse data sets, etc.

Now, if you think about the lifecycle of AI, the development process is the first step, then the FDA approval process is the next step. What is very immature right now is what you might think of as the next two steps:

One is model monitoring. Right now, once your product is approved by the FDA, you can just go sell it. But part of the thing with AI is the world changes and it needs to keep learning from new information. So if it turns out that your model has a certain bias, or doesn't work on a certain patient population, or there's a new disease - COVID comes along and you need to update your chest X-ray algorithm - the process for getting a new algorithm out there right now is you need to train a whole new model and do a whole new FDA submission instead of letting the AI learn over time. 

Where we're going to see focus in the coming years, now that these AI products are out there, is in a company's model monitoring regime. Making sure that you're continually looking at samples of the model's output to confirm that it continues to be accurate, that it's not biased in certain settings etc. 

There hasn't been a lot of that out there. That's actually one thing I'm quite interested in, because our company is very well-positioned to help with model monitoring services - there are one or two clients working with us in that regard. But right now we're primarily annotating the training data sets. We're part of that development step.

Once a model is out there, you need to keep an eye on it and you need to keep improving it. It's not this fixed thing - the world changes and the models need to change. There needs to be more focus on that.

We'll see that in the coming year - that's a prediction I'll put out there.
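
A minimal sketch of what such a monitoring check might look like - the names, thresholds, and simulated re-labeling below are all invented for illustration, not Centaur Labs' actual pipeline:

```python
import random

ACCURACY_THRESHOLD = 0.90   # hypothetical agreed service-level target
SAMPLE_SIZE = 200           # cases re-labeled per monitoring cycle

def relabel_against_experts(n):
    """Stand-in for sampling recent model outputs and having skilled human
    annotators re-label them; returns True where model and experts agree.
    Simulated here as a model that has drifted to ~88% agreement."""
    return [random.random() < 0.88 for _ in range(n)]

def monitoring_check():
    agreements = relabel_against_experts(SAMPLE_SIZE)
    accuracy = sum(agreements) / len(agreements)
    if accuracy < ACCURACY_THRESHOLD:
        print(f"ALERT: agreement drifted to {accuracy:.1%} - model needs review")
    return accuracy

monitoring_check()
```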

The last step would be… think of it as “labels” - do patients or doctors know what is behind a certain prediction? Do they even know if it's generated by an AI? There's going to be a lot of conversation about disclosures - how predictions are made, what data a prediction is based on.

Those would be my two predictions of what we'll see more focus on. It's been a long road and it is certainly not perfect. But overall, researchers are doing a good job of caring about things like bias and the FDA is doing as good a job as it can in terms of getting products out there and trying to make sure that they have a rigorous approval process. I just want to see more emphasis on the continual improvement and model monitoring and then disclosures.

Gianni Samwer [17:10] - I think that's a really important point you mentioned - that the model should be able to explain how it got to a prediction of whether the patient has a sickness or not. One example I can think of right now: a lot of people obviously think ChatGPT is very smart, which it is, but one main problem is that it doesn't explain how it got to an answer. I think that's a really important step, not only for us humans to learn from the model, but especially in healthcare. There has been a lot of talk about data restriction in the AI field. Do you think that is necessary or not?

Erik Duhaime [17:58] - It can get very messy and complicated there because of personal health information in the healthcare field. An area of active research is what it really means for data to be anonymized in the first place, for instance. It's not that easy. There are a lot of companies that say they can anonymize data, but nothing is perfect. There are debates about a brain scan - can you ever really de-identify it?

It makes sense to be cautious here, but at the same time, there's an incredible amount of value being lost because the data is so siloed. So if we can save patient lives, what is the risk-reward function - the ability to save additional lives and provide you with better treatment, versus the risk that personal health information might leak?

It is critically important that model developers and the health systems that have the data in the first place have patient privacy at the top of their mind. There's no doubt about that. But I do fear there are certain instances in which we can be too restrictive and make the data too siloed. There is a lot of exciting stuff with federated learning where data can remain in the hospital and it can remain anonymous and you can still train on that data. We've been seeing a lot of companies that take approaches like that. 
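
A bare-bones sketch of the federated idea mentioned above: each hospital trains on its own records locally, and only model weights - never patient data - leave the site. This toy version (a one-parameter model with invented numbers) only shows the structure; real federated learning systems add secure aggregation, proper models, and much more:

```python
# Toy federated averaging: hospitals share weight updates, not data.

def local_training_step(w, local_data, lr=0.01):
    """Stand-in for a hospital running gradient steps on its own records.
    Toy 1-D linear model: minimize (w*x - y)^2 over local (x, y) pairs."""
    for x, y in local_data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_round(global_w, hospital_datasets):
    # Each site trains locally; only the updated weight leaves the hospital.
    local_ws = [local_training_step(global_w, d) for d in hospital_datasets]
    return sum(local_ws) / len(local_ws)

hospitals = [
    [(1.0, 2.1), (2.0, 3.9)],  # site A's private records (toy values)
    [(1.5, 3.0), (3.0, 6.2)],  # site B's private records
]
w = 0.0
for _ in range(50):
    w = federated_round(w, hospitals)
print(f"learned weight: {w:.2f}")  # converges near the true slope of ~2
```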

As long as you make sure the people doing the annotation - if there might be personal health information - have HIPAA training, which is something our company can provide, you're mitigating the risk and you can still annotate the data and train a model on it without leaking any patient information. 

A related issue would be data ownership rights. There's an article about Memorial Sloan Kettering giving a company we love - Paige AI, a great company in the pathology space - access to a lot of data. And then patients and pathologists were upset, because they were the ones who contributed that data and helped to annotate it. So should it necessarily go to a private company? There are a lot of open questions about data rights there that aren't that different from the fact that Facebook is making money off your data too.

So it can be messy. I try not to get into the weeds of these debates too much other than to say it's complicated, and it's critically important that patients are informed and that their data is protected, but also that we recognize the value to the broader society in using de-identified data sets to train artificial intelligence algorithms that can help everybody.

Gianni Samwer [20:45] - Yeah, I totally agree with you, especially because I come from Germany, and Germany is famous for people being paranoid about their own data and privacy - people don't want their personal data to be used somewhere. I think what's really important, especially in the data field, is that either governments or companies have to weigh the risk against the potential. Obviously, there is the risk of a cyber attack and personal data leaking, but there's also the future potential: if you give your personal data to a company, is there a chance the model can save your life? I think one has to see what's more important.

Erik Duhaime [21:44] - Absolutely. Because we're in healthcare, it certainly creates some challenges. For a startup at our stage, being HIPAA compliant and SOC 2 Type 2 audited creates additional work, but it's been necessary for us to provide value given that we're working with sensitive data. Anything you can do to change the risk-reward function by protecting data as carefully as you can means more rewards you can reap as well - you can work on more life-saving treatments without needing to justify the trade-off as much.

Gianni Samwer [22:28] - Yeah, I agree. I was wondering, because you mentioned anonymized data - is there a drawback to it? Because to me it sounds like once your name is gone, it's just data belonging to one of the billions of people on Earth, right? Is there a drawback? Because to me, it sounds like a good solution.

Erik Duhaime [22:56] - There are increasingly strategies you can use to make sure you can still extract the full value out of the data.

One of the areas that's exciting in AI development in healthcare is multimodal data. To date, a lot of the AI solutions have been - just given this image, is there lung cancer? And the model doesn't have access to patient history or longitudinal data, which is obviously very valuable. So companies like Tempus are focused on multimodal longitudinal data for a wide range of use cases.

Companies like DataVant, for example, have a strategy to keep data linked but de-identified. They tokenize it. They have a token that associates data from different sources, but the data is de-identified, because it is really valuable to know that this patient who went to the ER in Colorado is the same patient whose history we have in Boston. So you can remove names and other identifiers, keep the data somewhat anonymized, and still maintain a lot of the value.
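
A rough sketch of that tokenization pattern: replace direct identifiers with a keyed hash, so the same patient links across datasets without the name ever being shared. DataVant's actual scheme is more sophisticated and proprietary; the field names and key handling here are hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"example-linking-key"  # in practice held by a trusted party

def linking_token(first_name, last_name, dob):
    """Keyed hash of stable identifiers; the identifiers themselves
    never appear in the de-identified records."""
    ident = f"{first_name.lower()}|{last_name.lower()}|{dob}".encode()
    return hmac.new(SECRET_KEY, ident, hashlib.sha256).hexdigest()[:16]

# The same (fictional) patient seen at two different sites:
er_visit_colorado = {"token": linking_token("Jane", "Doe", "1980-04-02"),
                     "event": "ER visit, chest pain"}
history_boston    = {"token": linking_token("Jane", "Doe", "1980-04-02"),
                     "event": "cardiology history"}

# The records link via the token, with no name or date of birth attached.
assert er_visit_colorado["token"] == history_boston["token"]
print("linked token:", er_visit_colorado["token"])
```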

We've seen some clients that say, “I could never give you more than one word from a medical note and have that be de-identified.” And others that think, “I took the name out. It's fine.” It's probably somewhere in between the two, because you can have a medical note and take the name out, but it might still say, “Her husband was a famous race car driver who died in an accident”, and you might be able to figure out who that is.

You need to be very careful - it's more than just taking the name out. But there are strategies to keep data linked and maintain the value, even if you don't have it fully identifiable. To your point about the risk level: if it's very hard to re-identify someone and the data is only being used by one third-party company to develop an algorithm, that risk is lower than if you're just tweeting out the patient note, obviously.

Gianni Samwer [25:05] - I think one should be able to measure how easy it is to identify a person, and then maybe set a bar on how much companies must anonymize data based on that. Moving on to AI generally - what excites you most about AI in the near future?

Erik Duhaime [25:40] - Well, in healthcare, I already said one of them, which is that ironically AI can help make healthcare more human. But broadly, I'm concerned about the pace of change. The pace of change of new technologies is increasing rapidly, and AI is a big part of that, and that can cause economic disruptions. Do we want everybody to need to train themselves on entirely new skills every two years? That can be very difficult for people. 

But broadly, some of the fear mongering with AI is a little misplaced right now.

I don't think this is the atom bomb. I think this is the Internet.

There are going to be lots of really, really horrible things - misinformation, and all these problems we need to deal with like we did with the Internet. But overall, it is absolutely amazing for overall human productivity and flourishing. People are going to live longer, healthier, happier lives thanks to artificial intelligence, and that's an amazing thing.

We need to be wary of unintended negative consequences. We need to think about the distributional effects - who are the winners and losers here. But broadly, this is a remarkable technology that will be as good for the world as the internet was. I'm broadly very excited about it all.

Gianni Samwer [27:10] - Yeah, I agree with you. I think AI has the potential to maximize the efficiency of human beings. Then people no longer have to - just as one rough example - work the checkout at a grocery store; AI can check out your groceries, and humans can focus on the spaces they're good at and the spaces AI and machines can't handle. I just wanted to end on this note: what resources - books or documentaries - do you turn to for more information about AI in healthcare, or AI generally?

Erik Duhaime [27:53] - First off, I'd be remiss if I didn't say you should sign up for our monthly newsletter at centaurlabs.com and also check out our annotation app, DiagnosUs. Especially if you're a medical student or a doctor, you can help contribute to AI. 

But more broadly, within healthcare, research is coming out all the time, at such a rapid pace that a book is already going to be two years behind. Follow key thought leaders, and read research coming out of places like Google and Microsoft. A lot of those product releases are some of the best ways to keep up to date with how fast the world is changing.

At a higher level, there are some books on how AI impacts the future of work that are worthwhile, by two professors who were at MIT when I was there. One is my PhD advisor, Thomas Malone; he has a book that's now about 20 years old called The Future of Work, which is a very good framework for thinking about how AI and people can work together and how information technologies change the structure of society and organizations.

And then Erik Brynjolfsson has a few books. One is called The Second Machine Age; it's about AI and might be 10 years old, but because it's a bit higher level in terms of how to think about how AI is going to impact society, those frameworks hold up very well.

That's why I don't think this is the atom bomb. I don't think this is a spooky technology. I think of it like other information technologies.

People would be less spooked if they could see some of these insights - that AI doesn't replace work, it changes how it's organized, that it increases total productivity, and that it's a very good thing, though not without consequences. We should think about how the world is going to change for the better - and be aware of how it's going to change - rather than just being scared that it's going to get worse.

Gianni Samwer [30:06] - Yeah, I think that's great advice. I think a lot of people who haven't really looked into AI just think about the dangers and not the potential - or they've watched too many science fiction movies about AI going rogue, I would say. But thank you, Erik. This was really interesting. Thank you for coming on the podcast.

Erik Duhaime [30:32] - Yeah, thank you for having me. It was a pleasure.
