
Connected Minds researcher explores AI’s future at top conference

Thousands of artificial intelligence (AI) researchers from around the world have gathered in Vancouver this week for one of the largest international academic conferences on AI and machine learning.

Laleh Seyyed-Kalantari

Among the attendees of the 38th annual Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence is York University’s Laleh Seyyed-Kalantari, an assistant professor in the Department of Electrical Engineering & Computer Science and a member of Connected Minds: Neural and Machine Systems for a Healthy, Just Society – a $318.4-million, York-led program focused on socially responsible technologies, funded in part by the Canada First Research Excellence Fund.

Seyyed-Kalantari will bring her leading research expertise in responsible AI to the conference, while also helping to run a Connected Minds- and VISTA-sponsored workshop on responsible language models (ReLM 2024), alongside researchers from the internationally recognized Vector Institute, a Connected Minds partner.

In the Q-and-A below, she talks about the workshop and the state of AI research.

Q: Why a workshop on responsible language models?   

A: The use of generative AI models, like ChatGPT, is becoming increasingly common in our everyday lives. In fact, recent studies show that generative AI (GPT-4) can pass the U.S. medical licensing examination or the bar exam required to become a lawyer. This has encouraged the idea that generative AI models can replace humans, but in reality we are still far from that point.

For my research and that of my Connected Minds colleagues, the question is not whether generative AI models can be used for good – they can – but whether these models generate reliable and responsible outputs. That is the more important and pressing question to ask both inside and outside of this workshop. Despite our rapidly evolving technological world, the answer is still no. Our workshop aims to surface the right kinds of questions that both academia and industry should consider now and in the future.

Q: What makes a language model responsible?

A: Responsible language models can be evaluated with the following factors in mind: fairness, robustness, accountability, security, transparency and privacy. AI models need to be tested and evaluated for whether they are fair to all of their human users. For example, AI models may be trained on data that does not adequately include ethnic minority populations, and programmers run the risk of amplifying existing racial biases. Robustness involves assessing the generated material and its accuracy: does the model generate the right or consistent solution, and is it robust to adversarial attacks? Accountability involves decisions about regulation and legislation: who is responsible for ensuring the model is fair? Security concerns how to protect a model from malicious attacks. Transparency and privacy refer to the use and permissibility of people's private data, including medical information. These six factors set up a framework for a broad discussion of the various issues related to responsible AI and machine learning in the context of language models.
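To make the fairness factor above more concrete, here is a minimal sketch, not drawn from the article, of one common way such an evaluation can be run: comparing a model's error rate across demographic groups on a labelled test set. The group labels, the toy classifier and the predict() callable are all hypothetical placeholders for illustration only.

```python
# Minimal, illustrative fairness probe (assumptions, not the researcher's method):
# compare a model's error rate across demographic groups and flag large gaps.

from collections import defaultdict

def error_rate_by_group(examples, predict):
    """Return the fraction of wrong predictions for each demographic group.

    examples: iterable of (text, true_label, group) tuples
    predict:  any callable mapping text -> predicted label
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for text, true_label, group in examples:
        totals[group] += 1
        if predict(text) != true_label:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

if __name__ == "__main__":
    # Toy stand-in for a language-model classifier and a labelled test set.
    toy_model = lambda text: "positive" if "good" in text else "negative"
    test_set = [
        ("good outcome", "positive", "group_a"),
        ("poor outcome", "negative", "group_a"),
        ("good recovery", "positive", "group_b"),
        ("stable condition", "positive", "group_b"),  # toy_model misclassifies this one
    ]
    print(error_rate_by_group(test_set, toy_model))
    # A large gap between groups (here 0.0 vs 0.5) would flag a potential fairness issue.
```

In practice the same pattern extends to the other factors, for example by swapping the error metric for an adversarial-perturbation test when probing robustness.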

Q: What are you most looking forward to at the conference and in running this workshop?

A: The trip to Vancouver offers an opportunity for a significant exchange of ideas and collaborative brainstorming among a diverse group of communities, bringing academia and industry together. It’s a rare chance to gather with influential figures in the field of generative AI, all in one space. It allows us to discuss the issues, to learn from one another, and to shape future research questions and collaboration surrounding large language models. I’m grateful to Connected Minds and VISTA [Vision: Science to Applications] for helping to advance my work and for making this event possible.
