mGrowTech

3 Questions: How to help students recognize potential bias in their AI datasets | MIT News

by Josh
June 5, 2025
in AI, Analytics and Automation



Every year, thousands of students take courses that teach them how to deploy artificial intelligence models that can help doctors diagnose disease and determine appropriate treatments. However, many of these courses omit a key element: training students to detect flaws in the training data used to develop the models.

Leo Anthony Celi, a senior research scientist at MIT’s Institute for Medical Engineering and Science, a physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, has documented these shortcomings in a new paper and hopes to persuade course developers to teach students to more thoroughly evaluate their data before incorporating it into their models. Many previous studies have found that models trained mostly on clinical data from white males don’t work well when applied to people from other groups. Here, Celi describes the impact of such bias and how educators might address it when teaching about AI models.

Q: How does bias get into these datasets, and how can these shortcomings be addressed?

A: Any problems in the data will be baked into any modeling of the data. In the past, we have described instruments and devices that don’t work well across individuals. As one example, we found that pulse oximeters overestimate oxygen levels for people of color, because there weren’t enough people of color enrolled in the clinical trials of the devices. We remind our students that medical devices and equipment are optimized on healthy young males. They were never optimized for an 80-year-old woman with heart failure, and yet we use them for those purposes. And the FDA does not require that a device work well across the diverse population we will actually be using it on. All it requires is proof that it works on healthy subjects.
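The pulse-oximeter point suggests a simple first check anyone can run on their own data: compare a device’s readings against a gold-standard measurement, stratified by group, and see whether the average error differs. The sketch below is a minimal illustration of that audit; the groups and all numbers are invented for the example.

```python
# Hypothetical audit of a device's error by subgroup: compare each device
# reading against a gold-standard measurement and check whether the mean
# error differs across groups. All values below are made up.

# (group, device_reading, gold_standard)
measurements = [
    ("A", 97, 96), ("A", 95, 95), ("A", 98, 97),
    ("B", 97, 93), ("B", 96, 92), ("B", 95, 92),
]

bias = {}
for group, device, gold in measurements:
    # positive error means the device overestimates relative to gold standard
    bias.setdefault(group, []).append(device - gold)

for group, errors in sorted(bias.items()):
    print(group, sum(errors) / len(errors))
```

If the mean error in one group is systematically higher, the device is not consistently accurate across individuals, and any model trained on its readings inherits that skew.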

Additionally, the electronic health record system is in no shape to be used as the building blocks of AI. Those records were not designed to be a learning system, and for that reason you have to be really careful about using electronic health records. The electronic health record system will eventually be replaced, but that’s not going to happen anytime soon, so we need to be smarter. We need to be more creative about using the data that we have now, no matter how bad they are, when building algorithms.

One promising avenue that we are exploring is the development of a transformer model of numeric electronic health record data, including but not limited to laboratory test results. Modeling the underlying relationship between the laboratory tests, the vital signs and the treatments can mitigate the effect of missing data as a result of social determinants of health and provider implicit biases.
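The transformer itself is beyond a short example, but the underlying idea, learning the relationships among measurements so that missing values can be predicted from observed ones, can be shown with a deliberately tiny stand-in: fit a linear relationship between two measurements on complete records, then use it to fill a value that was never measured. The variables and numbers here are hypothetical.

```python
# Toy illustration of the masked-value idea behind modeling relationships
# among labs and vital signs: learn how one measurement tracks another from
# complete records, then predict values that are missing (e.g., a lab that
# was never ordered). A linear model stands in for the transformer.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# (heart_rate, lactate) pairs; None marks a lab that was never measured.
records = [(80, 1.0), (90, 1.5), (100, 2.0), (110, 2.5), (120, None)]

complete = [(hr, lac) for hr, lac in records if lac is not None]
a, b = fit_line([hr for hr, _ in complete], [lac for _, lac in complete])

imputed = [(hr, lac if lac is not None else a * hr + b)
           for hr, lac in records]
print(imputed[-1])  # the missing lactate, predicted from heart rate
```

A real model would learn these relationships jointly across many labs, vitals, and treatments, which is what lets it mitigate missingness that is driven by social determinants of health rather than by chance.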

Q: Why is it important for courses in AI to cover the sources of potential bias? What did you find when you analyzed such courses’ content?

A: Our course at MIT started in 2016, and at some point we realized that we were encouraging people to race to build models that are overfitted to some statistical measure of model performance, when in fact the data that we’re using is rife with problems that people are not aware of. At that time, we were wondering: How common is this problem?

Our suspicion was that if you looked at the courses whose syllabi are available online, or at the online courses themselves, none of them even bothers to tell students that they should be paranoid about the data. And true enough, when we looked at the different online courses, it’s all about building the model. How do you build the model? How do you visualize the data? We found that of 11 courses we reviewed, only five included sections on bias in datasets, and only two contained any significant discussion of bias.

That said, we cannot discount the value of these courses. I’ve heard lots of stories of people who self-studied from these online courses, but at the same time, given how influential and impactful they are, we need to really double down on requiring them to teach the right skill sets, as more and more people are drawn to this AI multiverse. It’s important for people to really equip themselves with the agency to work with AI. We’re hoping that this paper will shine a spotlight on this huge gap in the way we teach AI to our students now.

Q: What kind of content should course developers be incorporating?

A: One, giving them a checklist of questions in the beginning. Where did this data come from? Who were the observers? Who were the doctors and nurses who collected the data? And then learn a little bit about the landscape of those institutions. If it’s an ICU database, they need to ask who makes it to the ICU, and who doesn’t make it to the ICU, because that already introduces a sampling selection bias. If all the minority patients don’t even get admitted to the ICU because they cannot reach the ICU in time, then the models are not going to work for them. Truly, to me, 50 percent of the course content should really be understanding the data, if not more, because the modeling itself is easy once you understand the data.
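One item on that checklist can be made concrete: compare the demographic mix of the dataset against the population it is supposed to represent, as a first-pass screen for selection bias. The sketch below does exactly that; the group names and proportions are invented for illustration.

```python
# Hypothetical first-pass check from the "where did this data come from?"
# checklist: compare the demographic mix of an ICU dataset against the
# catchment population it is supposed to represent. Numbers are invented.

icu_dataset = {"group_x": 820, "group_y": 180}    # patient counts in the database
catchment = {"group_x": 0.60, "group_y": 0.40}    # share of the local population

total = sum(icu_dataset.values())
for group in sorted(icu_dataset):
    observed = icu_dataset[group] / total
    expected = catchment[group]
    print(f"{group}: {observed:.2f} in dataset vs {expected:.2f} in population")
```

A large gap between the dataset share and the population share does not by itself prove bias, but it is exactly the kind of red flag (who makes it to the ICU, and who doesn’t) that should be investigated before any modeling begins.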

Since 2014, the MIT Critical Data consortium has been organizing datathons (data “hackathons”) around the world. At these gatherings, doctors, nurses, other health care workers, and data scientists get together to comb through databases and try to examine health and disease in the local context. Textbooks and journal papers present diseases based on observations and trials involving a narrow demographic typically from countries with resources for research. 

Our main objective now, what we want to teach them, is critical thinking skills. And the main ingredient for critical thinking is bringing together people with different backgrounds.

You cannot teach critical thinking in a room full of CEOs or in a room full of doctors. The environment is just not there. At datathons, we don’t even have to teach critical thinking. As soon as you bring the right mix of people, not just from different backgrounds but from different generations, you don’t even have to tell them how to think critically. It just happens. The environment is right for that kind of thinking. So we now tell our participants and our students: please do not start building any model unless you truly understand how the data came about, which patients made it into the database, what devices were used to measure them, and whether those devices are consistently accurate across individuals.

When we have events around the world, we encourage them to look for datasets that are local, so that they are relevant. There’s resistance, because they know that they will discover how bad their datasets are. We say that that’s fine. This is how you fix that. If you don’t know how bad they are, you’re going to continue collecting them in a very bad manner, and they’ll stay useless. You have to acknowledge that you’re not going to get it right the first time, and that’s perfectly fine. MIMIC (the Medical Information Mart for Intensive Care database, built at Beth Israel Deaconess Medical Center) took a decade before we had a decent schema, and we only have a decent schema because people were telling us how bad MIMIC was.

We may not have the answers to all of these questions, but we can evoke something in people that helps them realize that there are so many problems in the data. I’m always thrilled to look at the blog posts from people who attended a datathon, who say that their world has changed. Now they’re more excited about the field because they realize the immense potential, but also the immense risk of harm if they don’t do this correctly.

