
Recap — AI for Health Equity: Transforming Global Pandemic Preparedness, with Jude Kong


Published on November 25, 2024

On November 6, Dahdaleh faculty fellow Professor Jude Kong presented an overview of his work developing and implementing decolonized Artificial Intelligence (AI) frameworks, particularly in public health contexts across Africa and the Global South. He began by detailing the structure and reach of his network, which spans 21 countries, emphasizing the value of consistent communication and collaboration among members. This community-centered approach contributes to knowledge sharing and the adoption of effective strategies tailored to local needs.

Professor Kong introduced the three foundational pillars of his work: ensuring timely and reliable health data, strengthening healthcare systems, and promoting the inclusion and equity of vulnerable groups. He highlighted the proactive nature of AI-driven solutions, such as early warning systems that help prevent disease outbreaks, which are crucial for neglected communities that often receive attention only when crises escalate.

One of the core concepts Professor Kong explored was the decolonization of AI. He defined this as dismantling historical colonial structures that influence technology and emphasized the need for locally relevant and co-created solutions. This requires engaging communities throughout the AI development process, from data collection to model validation. He stressed the importance of understanding community needs, collaborating with local stakeholders and ensuring solutions are culturally and contextually appropriate to avoid stigmatization or ineffectiveness.

Addressing Bias and Decolonization in the Machine Learning (ML) Pipeline: A Stage-Wise Framework 

To systematically reduce bias and ensure that ML solutions are inclusive, equitable, fair, and aligned with principles of decolonization, each stage of the ML pipeline must incorporate specific responsible practices. Below is a structured framework for achieving these goals: 

A. Data Collection 

Bias can be reduced or eliminated by ensuring inclusiveness in the dataset through the following actions: 

  • Demographic Representation
    • Collect data from diverse demographic groups (e.g., age, gender, racial groups). 
  • Clinical Diversity
    • Include data from individuals with varying clinical conditions (e.g., chronic diseases, disabilities). 
  • Geographical Inclusivity
    • Gather data from diverse geographic locations, especially underserved areas like rural and deprived regions. 
  • Socioeconomic Variety
    • Ensure representation from varied socioeconomic groups (e.g., labor workers, caregivers). 
  • Diverse Team Participation
    • Collaborate with multidisciplinary and diverse teams to integrate varied perspectives. 

Impact: Inclusiveness, equity, fairness, justice, compliance, and care. 
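As a rough illustration of the representation checks above, the sketch below flags demographic groups whose share of a dataset falls below a minimum threshold. The function name, threshold, and toy data are illustrative assumptions, not tools from Professor Kong's talk:

```python
from collections import Counter

def audit_representation(records, attribute, min_share=0.10):
    """Return the dataset share of each value of a demographic attribute
    that falls below `min_share` (an illustrative cutoff)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()
            if n / total < min_share}

# Toy dataset: rural respondents make up only 10% of the sample.
records = ([{"region": "urban"} for _ in range(90)]
           + [{"region": "rural"} for _ in range(10)])
flags = audit_representation(records, "region", min_share=0.25)
# flags == {"rural": 0.1}: the rural group sits below the 25% floor.
```

A real audit would run such a check per attribute (age, gender, clinical condition, geography) before any training begins, so gaps are caught at collection time rather than discovered in model errors.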

B. Data Cleaning and Preparation 

To minimize bias, the following steps are critical: 

  • Handling Missing Values and Errors
    • Extrapolate or interpolate missing values. 
    • Identify and address errors and outliers. 
    • Balance the datasets to prevent over- or under-representation. 
  • Labelling
    • Use human labeling where feasible to improve accuracy and reliability. 
  • Data Security
    • Store datasets securely (e.g., encrypted cloud storage). 
    • Preserve metadata and protect sensitive information. 
    • Implement robust access controls and logging mechanisms. 
  • Documentation and Sharing
    • Document data gathering methods transparently. 
    • Share datasets responsibly with appropriate disclosures. 

Impact: Accuracy, privacy, security, transparency, accountability, and trust. 
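The interpolation and outlier steps above can be sketched with standard-library tools. The helpers below are illustrative; the interpolation assumes gaps occur between known values, and the outlier rule is a simple z-score cutoff rather than any specific method from the talk:

```python
import statistics

def interpolate_missing(series):
    """Fill None gaps by linear interpolation between the nearest
    known neighbours (assumes gaps are interior, not at the ends)."""
    filled = list(series)
    for i, v in enumerate(filled):
        if v is None:
            lo = max(j for j in range(i) if filled[j] is not None)
            hi = min(j for j in range(i + 1, len(filled))
                     if filled[j] is not None)
            frac = (i - lo) / (hi - lo)
            filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled

def flag_outliers(series, z=3.0):
    """Return indices whose z-score exceeds the cutoff
    (assumes a non-constant series)."""
    mean = statistics.fmean(series)
    sd = statistics.stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mean) / sd > z]
```

In practice these choices (interpolation vs. extrapolation, the z cutoff) should themselves be documented, since they shape the dataset that downstream models inherit.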

C. Model Design and Training 

Reducing bias during design and training involves: 

  • Testing and Learning Strategies
    • Define edge scenarios and design appropriate unit tests. 
    • Leverage transfer learning to adapt existing models to new scenarios. 
  • Inclusive Dataset Use
    • Use diverse datasets encompassing various demographic, socioeconomic, clinical, and geographic groups for both training and testing. 
  • Interpretability and Documentation
    • Design interpretable units with visualizations (e.g., charts, dashboards). 
    • Document each model component for greater transparency. 
  • Secure Storage
    • Store the model securely and enable controlled access with tracking mechanisms. 

Impact: Reliability, robustness, interpretability, explainability, inclusiveness, and equity. 
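One concrete way to enforce inclusive dataset use in training is a stratified split, which preserves each group's share on both sides of the train/test divide so that no subgroup is absent from either. This sketch is a generic illustration, not a specific tool from the seminar:

```python
import random

def stratified_split(records, group_key, test_frac=0.2, seed=0):
    """Split records into train/test while preserving each group's
    proportion, so every group appears in both sets."""
    rng = random.Random(seed)
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r)
    train, test = [], []
    for members in by_group.values():
        members = members[:]
        rng.shuffle(members)
        cut = max(1, round(len(members) * test_frac))
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test
```

A plain random split can leave a small group entirely out of the test set, which silently hides that group's error rate; stratifying guards against exactly that failure mode.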

D. Model Evaluation and Validation 

Key practices to reduce bias in evaluation and validation include: 

  • Hyperparameter Optimization
    • Fine-tune hyperparameters to balance performance and fairness. 
  • Scenario-Based Testing
    • Evaluate the model against diverse scenarios and demographic, geographic, socioeconomic, and clinical characteristics. 
  • Unit Evaluation
    • Assess the performance of individual model components across different settings. 

Impact: Accuracy, fairness, robustness, inclusiveness, trust, and accountability. 
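Scenario-based testing can be made measurable by reporting accuracy per subgroup and the gap between the best- and worst-served groups. The function below is a minimal sketch of that idea, with hypothetical names, not a metric prescribed in the talk:

```python
def group_accuracy(examples, predict):
    """Compute accuracy per group plus the best-to-worst gap — a simple
    fairness check across demographic or geographic subgroups.

    `examples` is an iterable of (group, input, label) triples and
    `predict` is the model under evaluation."""
    scores = {}
    for group, x, label in examples:
        hit, total = scores.get(group, (0, 0))
        scores[group] = (hit + (predict(x) == label), total + 1)
    acc = {g: hit / total for g, (hit, total) in scores.items()}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap
```

Reporting the gap alongside aggregate accuracy prevents a strong overall number from masking poor performance for one neglected community.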

E. Automation, Maintenance, and Registration 

Ensuring long-term fairness and reducing bias requires ongoing efforts: 

  • Pipeline Automation
    • Automate processes for consistency and compatibility across all stages. 
  • Model Updates
    • Continuously retrain, validate, and evaluate the model using updated datasets. 
    • Incorporate user feedback to iteratively improve performance. 
  • Transparency and Sharing
    • Document methodologies and register the model in public repositories. 
    • Share insights to promote reproducibility and trust. 

Impact: Maintenance, transparency, reproducibility, and accountability. 
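Registering a model for transparency often takes the form of a "model card": a small, checksummed record of the model's provenance and metrics. The sketch below shows the shape of such a record; the field names and helper are illustrative assumptions, not a registry from the seminar:

```python
import hashlib
import json

def register_model(path, name, version, training_data, metrics):
    """Write a minimal model card documenting a model's identity, data
    provenance, and evaluation metrics, with a content checksum so
    later readers can detect tampering or drift."""
    card = {
        "name": name,
        "version": version,
        "training_data": training_data,
        "metrics": metrics,
    }
    payload = json.dumps(card, sort_keys=True).encode()
    card["checksum"] = hashlib.sha256(payload).hexdigest()
    with open(path, "w") as f:
        json.dump(card, f, indent=2)
    return card
```

Pairing each retrained model with an updated card keeps the documentation and sharing steps automatic rather than an afterthought.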

Professor Kong illustrated the practical application of these principles with case studies, such as tools for early detection of acute paralysis in Ethiopia, a fake news detection system adopted in Brazil, and mosquito surveillance technology in Ghana. He concluded by discussing sustainability, noting the important role of governments in adopting and funding these technologies. Through this seminar, Professor Kong underscored the significance of a decolonized approach to AI, in which community involvement and localized data are central to creating sustainable and impactful solutions in public health.

Watch the seminar presentation below: https://www.youtube.com/watch?v=RFSj062ZorI

Connect with Jude Kong.

Themes

Global Health & Humanitarianism

Status

Active


People

Jude Kong, Faculty Fellow, Faculty of Science - Active

