
On Tech Ethics Podcast – Artificial Intelligence in Human Subjects Research

Season 1 – Episode 2 – Artificial Intelligence in Human Subjects Research

This episode covers the challenges that institutional review board (IRB) members face when reviewing research protocols that involve artificial intelligence (AI), strategies for addressing those challenges, current events and trends that may impact human subjects research involving AI, and additional resources that you can utilize if you are involved in this space.

 


Episode Transcript


 

Daniel Smith: Welcome to On Tech Ethics with CITI Program. Today’s guest is Tamiko Eto. Tamiko manages research compliance at the Division of Research at Kaiser Permanente. We are going to discuss the challenges that institutional review board members face when they’re reviewing research protocols that involve artificial intelligence. And of course, we will then get into some strategies for addressing those challenges, current events and trends you should be paying attention to, and additional resources such as an IRB review checklist and decision tree that you can utilize if you are involved in this space.

Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guest.

All right, so let’s get into it. Hi there, Tamiko. Thank you for joining me today.

Tamiko Eto: Hi, Daniel. Thank you for having me.

Daniel Smith: All right. So just to get started, tell us a bit about yourself and what you currently focus on at Kaiser.

Tamiko Eto: Well, I previously worked at Stanford University and then at SRI International, which is really what got me into AI, and I’m now at the Kaiser Permanente Division of Research. My responsibilities have ranged from IRB and HRPP management and administration to my current role in broader research compliance. But separately from that, as you know, I’ve worked extensively with the CITI Program, PRIM&R (Public Responsibility in Medicine and Research), and the Department of Health and Human Services’ SACHRP, the Secretary’s Advisory Committee on Human Research Protections. And what we’ve been focusing on is developing new guidelines to address how IRBs can conduct effective reviews of AI research under the current regulatory framework.

Daniel Smith: Thank you, Tamiko. So when we are talking about artificial intelligence in the context of human subjects research, how exactly would you define artificial intelligence?

Tamiko Eto: Yeah. Oh gosh, that’s such a good question. And I think in order to fully understand artificial intelligence in the context of human subjects research, it’s probably better to first define human subjects research. The term “human subjects research” is actually a combination of two separate federal definitions: human subjects plus research. And IRBs rely entirely on these definitions when determining whether or not to review something. So the first step is to determine if the project constitutes research per the federal definition, which is “a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge.” In layman’s terms, that basically asks, “Are you doing something using scientific methods that might be usable or applicable to others besides yourself?” If yes, we call it research.

And then we move to step two, which is to determine if that research involves human subjects. Human subjects are involved any time a researcher either (a) collects data from people through intervention or interaction, or (b) obtains, uses, or generates identifiable private information. A good AI example that meets both criteria would be grabbing a bunch of identifiable data from medical records to develop or validate a cancer-prediction model, with the hope of it leading to something more generalizable and effective.
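To make that two-step determination concrete, here is a minimal sketch in Python. It is purely illustrative rather than an official IRB tool; the Project fields and function names are hypothetical simplifications of the federal definitions quoted above:

```python
# Minimal illustrative sketch (not an official IRB tool) of the two-step
# "human subjects research" determination described above. Every field
# below is a hypothetical simplification of the federal definitions.

from dataclasses import dataclass

@dataclass
class Project:
    systematic_investigation: bool        # e.g., developing/testing/evaluating a model
    designed_to_generalize: bool          # intended to contribute to generalizable knowledge
    interacts_with_people: bool           # collects data via intervention or interaction
    uses_identifiable_private_info: bool  # obtains, uses, or generates identifiable private info

def is_research(p: Project) -> bool:
    # Step 1: "research" = a systematic investigation designed to develop
    # or contribute to generalizable knowledge.
    return p.systematic_investigation and p.designed_to_generalize

def involves_human_subjects(p: Project) -> bool:
    # Step 2: "human subjects" = data collected through intervention or
    # interaction, OR identifiable private information obtained/used/generated.
    return p.interacts_with_people or p.uses_identifiable_private_info

def is_human_subjects_research(p: Project) -> bool:
    return is_research(p) and involves_human_subjects(p)

# Tamiko's example: validating a cancer-prediction model on identifiable
# medical-record data, with generalizable intent, meets both steps.
cancer_model = Project(
    systematic_investigation=True,
    designed_to_generalize=True,
    interacts_with_people=False,
    uses_identifiable_private_info=True,
)
assert is_human_subjects_research(cancer_model)
```

Real determinations are, of course, made by people exercising regulatory judgment; the sketch only shows that the logic is a two-step conjunction: research AND human subjects.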

So the challenge for non-medical researchers, especially in AI, is that there isn’t a federal definition of private information for them to lean on, whereas medical researchers have FDA guidance and HIPAA requirements that very clearly spell out what it means. So non-medical researchers, especially in the AI world, define private information very subjectively, which I believe is one of the larger challenges in regulating AI research and big data research. And that’s why these definitions are so important, and why I wanted to start there.

And so, moving on to defining artificial intelligence. The standard definition of AI is actually very different from what the general public would probably expect. AI is commonly thought of by the general public as a thing or a product, for example, the facial recognition that opens their phone, or Netflix recommendations. And they see it this way because AI is used in products and tools. Because the AI in phones and Netflix generally does what it’s supposed to do, for example, opening your phone and recommending your next great binge watch, we assume that AI always works, that it’s trustworthy and actually intelligent, and, more importantly, that if it makes a mistake, it’s not a big deal. In most cases we can easily overlook it, or we might not even notice.

And I think that is misleading, and it disregards the much higher risks when it comes to medically related AI. This misunderstanding that AI is intelligent, which comes from the word itself, assumes that it even works well. That leads to the misnomer that AI research is just software development, like a development project running in the background alongside the actual research. And that misconception is going to mislead people into thinking that AI research does not involve human subjects when it actually does. I believe we can use the actual definition of artificial intelligence to work through that determination. So, long story short, let’s define it. There isn’t a globally accepted definition of AI, but Nils Nilsson’s 2010 definition is the most commonly accepted currently. He basically said, “AI is an activity devoted to making machines intelligent.”

And then he goes on to define intelligence as “the quality that enables an entity to function appropriately and with foresight in its environment.” But if I may, I have to say that Joanna Bryson’s more recent definition, from 2019, is the best I’ve ever seen. She’s a well-published author and professor of ethics and technology, and her definition goes something like, “AI combines science and engineering in order to build machines capable of intelligent behavior.” Her definition of intelligence is actually very similar to Nilsson’s 2010 one, but I love Joanna’s definition because I think it more accurately captures and highlights the interdisciplinary nature of AI. For example, it brings in work from the fields of philosophy, psychology, computer science, brain science, and linguistics. And so, like her, I strongly believe that we have to be really careful about the definitions we choose to use, because, as we saw in defining something as basic as human subjects research, the definition ultimately dictates if and how we can regulate something.

Daniel Smith: Thanks for sharing those definitions, Tamiko. I really think they lay the groundwork for how we can think about AI in human subjects research. And I also want to say that I appreciate how you called out the interdisciplinary nature of artificial intelligence, as I think that brings into focus some of the challenges it raises. So given how AI works, what are some of the main challenges that you think IRB members face when it comes to the review of research involving AI?

Tamiko Eto: Absolutely. So as I see it, there are two main challenges when it comes to the review of AI research involving human participants. The first is people thinking that the validation and testing of predictive models, for example, is not research, and therefore doesn’t require IRB review per the federal definition. The second is the false belief that the scope of the IRB is limited and shouldn’t consider how this technology might affect society. My position on both is yes, it is research, and yes, it is somewhat within the IRB’s scope, and it’s those federal definitions that really help us answer the question. And I’d like to explain that. On the first one, where people wonder if validation and testing constitutes, quote unquote, “human subjects research,” again, yes, because if you recall the definition, development, testing, and evaluation are literally in the federal definition of research.

And obviously, in order to get something as complicated as a predictive model to work, it takes quite a systematic approach. We don’t just randomly hack out models, throw them into real-world scenarios, and hope for the best. For one, that wouldn’t fly, and two, it’s probably just not going to work immediately. So it does take that testing. This is where I got inspired to create the AI HSR decision tree and IRB reviewer checklist. My hope was to demystify all of these issues through these resources and ease those challenges. As for the other part, whether or not assessing societal impacts is outside the IRB’s scope, I think we’re asking the wrong question, and we’re forgetting about other relevant regulations and requirements that bind us as IRBs to take these things into consideration.

So first, under the Declaration of Helsinki, the Nuremberg Code, and the Belmont Report, we have to consider scientific validity and study design. For example, the Declaration of Helsinki literally states, “Every medical research study involving human subjects must be preceded by careful assessment of predictable risks and burdens to the individuals and the communities involved in the research in comparison with foreseeable benefits to them and to other individuals or communities affected by the condition under investigation.”

So for an easy comparison, we wouldn’t ask if a drug for cancer treatment is going to hurt society, but we would ask if it will actually work on the people, the communities, and the conditions it’s intended to work on. And the IRB and the FDA also ask important questions like: Is the drug addictive? If it is, numerous protections are put in place to prevent abuse, both at the research level and at the post-market, societal level. Similarly, for AI medical devices and software, we don’t ask if a computer-aided stroke detection program has long-term impacts on society, but we still require that the software go through quite a bit of scrutiny when testing its safety, effectiveness, and performance before we use it to process a patient’s images. So in my opinion, people are misinterpreting the IRB’s limitations, and maybe that’s why these remain common misconceptions and challenges.

Daniel Smith: So I want to come back to this in a moment and further discuss some of the limitations of the current IRB and regulatory framework. But before we do so, can you share a bit more on the IRB oversight expectations and norms for human subjects research?

Tamiko Eto: The main job of the IRB is to protect research participants, and that includes protecting their data. We have to ensure that the risks don’t outweigh the benefits, and we want to make sure the ethical principles spelled out in the Belmont Report are followed. But what most people don’t realize is that IRB oversight is not always required, even when someone is actually conducting human subjects research. IRB oversight is basically legally required in only two situations: first, for FDA-regulated clinical investigations, and second, for any research bound by the Common Rule, which basically means research at any institution that receives federal funds for research. What that means is that commercial entities, for example, Facebook, can and do conduct research, and they don’t have to get an IRB to review or approve their work if they don’t want to.

So our research folks in academia have to jump through all these regulatory hoops to do the same things in an ethical and compliant manner, while commercial entities have little to no obligation and tend to put products out on the market that might not be as ethical or compliant. And given that AI is currently largely commercially based, we may end up seeing the results of research that’s not necessarily as ethical or safe. So there’s a new avenue that presents itself with AI research, which is how data from humans is being used and regulated, especially when those commercial and academic worlds collide.

Daniel Smith: Thanks for that, Tamiko. So commercial research aside, when it comes to fitting AI HSR into the current IRB and regulatory framework, there are obviously some limitations. Can you talk some more about that and how folks can currently address some of those limitations?

Tamiko Eto: Yeah. Well, first, I admit that our current regulatory framework is definitely insufficient, and we are in need of laws that can more adequately protect people from the harms we’re seeing now. But establishing law is obviously going to take years, if not decades. So it’s my belief that until that day comes, IRBs actually have lots of resources at their disposal. To start, the training modules and webinar that we’ve provided through the CITI Program are excellent resources for beginners who need to understand the relevant basics of how the technology works and what regulations and other issues should be considered in their IRB review. Then people can refer to, and even modify, the checklists and decision trees that I’ve developed, which are free and publicly available. What these do is set up the parameters for determining whether something falls under IRB purview or not, and then how to apply regulations like the FDA’s.

So in my opinion, these checklists are just a starting point in helping guide whether IRB review is needed. But the intention is that, as we invest more time in the review of these AI applications, we will become more familiar with the relevant ethical and regulatory considerations. And it’s my hope and belief that IRB review of AI HSR will naturally become more systematic through simple exposure and experience with these resources, and then we’ll be more prepared to review complex research proposals. So it’s just a start toward developing the proper framework. And as the technology inevitably evolves, we’ll modify those checklists and expand our considerations accordingly. And then there are free webinars I provided for PRIM&R that really dig into the checklist and explain a lot of those questions, like why we ask the questions that we do and how to use the checklist.

And then lastly, if an institution just doesn’t have the resources available, it can either outsource its AI regulatory and ethical review or onboard an expert or a consultant. So with these numerous resources, I don’t think we have a good excuse for not effectively reviewing AI HSR applications.

Daniel Smith: That’s a lot of really helpful resources, Tamiko. And I’ll be sure to drop those in our show notes, so our listeners can explore them further. But I also want to ask, what are some current trends that we should be paying attention to, which might lead to the further development of the existing resources out there?

Tamiko Eto: There are so many different spaces that AI is starting to infiltrate, and so obviously we need to set up frameworks for making safe and effective determinations, frameworks that protect the safety and privacy of participants in all of these areas, not just healthcare, where I’m currently focused. But the longstanding challenge, or trend, is that the historical data used to train these machines in all of these areas, legal, criminal, educational, loans, all of that, is historically and fundamentally biased.

So my personal concern is the continued exacerbation of social inequalities through this biased output, not to mention the false alarms from faulty models. And then there’s obviously the fact that data simply cannot be truly de-identified; the more data we get, the more identifiable it becomes. And so the ability of humans to choose how their data is used, with whom it is shared, and what endeavors it might contribute to, all of that is lost. For example, in the EU, human data is actually considered personal property and a human right. But here in the US, human data is considered a commodity that can be bought and sold without consent. So those are some trends that we’re seeing.
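To make the point about de-identification concrete, here is a toy sketch. All records and field names in it are fabricated for illustration; it performs a k-anonymity-style check, counting the smallest group of records that share the same combination of attributes. Each attribute added shrinks the crowd a record can hide in, until someone is unique:

```python
# Toy demonstration (fabricated example records, not real data) of why
# "de-identified" data becomes more identifiable as more attributes
# accumulate: a k-anonymity-style check on quasi-identifiers.

from collections import Counter

records = [
    {"zip": "94301", "birth_year": 1980, "sex": "F"},
    {"zip": "94301", "birth_year": 1980, "sex": "M"},
    {"zip": "94301", "birth_year": 1975, "sex": "F"},
    {"zip": "94301", "birth_year": 1975, "sex": "F"},
]

def smallest_group(quasi_identifiers):
    """Size of the smallest set of records sharing identical values for
    the given attributes (the 'k' in k-anonymity). k == 1 means at least
    one record is unique, and therefore potentially re-identifiable."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

print(smallest_group(["zip"]))                       # 4: everyone blends in
print(smallest_group(["zip", "birth_year"]))         # 2: groups are shrinking
print(smallest_group(["zip", "birth_year", "sex"]))  # 1: a unique, linkable record
```

No single attribute here is identifying on its own; it is the combination, joined against other datasets, that re-identifies people, which is exactly why more data means more identifiability.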

Daniel Smith: Another recent event is the AI Bill of Rights, which I believe was proposed by the Office of Science and Technology Policy. How do you see the evolution of that affecting human subjects research involving AI?

Tamiko Eto: Well, that’s a really good question. I think it was an attempt to build a US version of what the EU currently has under the AI Act. Unfortunately, you may have noticed that the AI Bill of Rights doesn’t actually define artificial intelligence. I don’t think it even mentions artificial intelligence in the body; they put AI in the title because, again, that’s the term that’s widely known and accepted by the general public. So it’s more a way of saying, “This is where we’re going in developing some kind of regulatory framework.” Obviously, for one, it’s a draft, so it has absolutely no regulatory teeth. But I do think it’s fair to say that we can expect some future legal framework built on the concepts it talks about. And I do think we should use it now as our North Star when we address these issues in our IRB review of protocols.

Daniel Smith: All great points and something we all will definitely need to keep an eye on. So my final question is, do you have any closing thoughts that you would like for our listeners to take away from this conversation?

Tamiko Eto: Well, yeah. Really take advantage of the current resources when conducting these reviews, so that you understand more broadly what human subjects research is in the context of AI. The key to getting there is, again, those PRIM&R webinars and the CITI Program modules and webinars. I think they’re really going to give people the confidence and ability to effectively review these applications. Then there’s the checklist and the latest white paper we put out in August; I think its step-by-step guidance is really going to help IRBs conduct these reviews. And from a legal perspective, as I just said, we don’t have much in terms of US regulations, and the Bill of Rights isn’t really going to take us far for now. But the FDA guidance currently out there is really good, specifically on CDSS, or clinical decision support software.

And then there’s SaMD, which is Software as a Medical Device. The FDA recently put out an action plan that includes GMLP, which is Good Machine Learning Practice. All of that material is excellent, and I think it’s going to guide where regulations go. So getting ahead of it is a really good idea.

And then again, the EU AI Act is a great resource for getting an idea of where regulations could potentially go, even for us in the US. If institutions have the resources, I would recommend getting some real AI expert oversight. For example, I also sit on the AI Ethics Advisory Board for Experiential AI at Northeastern, and I have to tell you, the members of that committee are some of the world’s best. And for those with no resources, there are always Google searches. I’m sure lots of institutions have published their approaches to AI HSR review, and that could be a great way for smaller institutions to get an idea of what may or may not work for them as they develop their own institutional policies.

Daniel Smith: And I think that’s a great place to leave it for today. Tamiko, thank you again for joining us. As I mentioned, I encourage our listeners to check out the show notes for links to many of the resources that Tamiko mentioned. Be sure to also follow, like, and subscribe to On Tech Ethics with CITI Program for more conversations on all things tech ethics. I also invite you to review CITI Program’s newest course and webinar offerings regularly. All of our content offerings are available to you anytime through organizational and individual subscriptions. Notably, you may be interested in our Leveraging IT Insight in IRB Review webinar, which discusses why technology-based expertise is critical to human subjects protections.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.



Meet the Guest


Tamiko Eto, MS, CIP – The Mayo Clinic

As Acting Director at SRI International, Office of Research Integrity, Tamiko was responsible for the administrative leadership and direction of SRI’s HRPP. Now, in the Division of Research, she continues working closely with researchers to address the ethical and regulatory challenges in AI human subjects research (AI HSR) and healthcare.

 


Meet the Host


Daniel Smith, Associate Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Associate Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.