
Artificial Intelligence primer: The good, the bad and the ugly

Q&A with Yves Lespérance, associate editor of two leading AI publications, offers a comprehensive overview and answers key questions, such as: What is AI? Where are we in its evolution? What are the key questions we should be asking ourselves?

Artificial Intelligence (AI) has evolved rapidly over the last five years. Many feel they need an introductory primer on this broad field, which has jumped from science fiction to science fact at warp speed. Who better than the Lassonde School of Engineering’s Yves Lespérance to provide an insightful overview? He is associate editor of Artificial Intelligence, the premier international journal for current research in AI, and associate editor of the Journal of Artificial Intelligence Research, which covers all areas of AI.

AI is about developing models and techniques for building systems that act intelligently

Lespérance is a computer science professor and an expert in AI, knowledge representation and reasoning; logic-based tools for building intelligent autonomous agents; and the design and implementation of a family of agent programming languages now being used in robot control, process modeling and intelligent software agents.

In this Q&A with Brainstorm, Lespérance interprets this rapidly changing field and answers today’s most pressing questions around AI.

Yves Lespérance

Q: Please define AI and describe its scope.

A: AI is about developing models and techniques for building systems that act intelligently. In my view, the four most important areas are: knowledge representation, reasoning and planning; machine learning; computer vision and robotics; and natural language understanding.

Q: How could AI help society?

A: This is difficult to predict. With population aging, one very big application could be assistant robots for the elderly. Additionally, self-driving vehicles [robocars] could offer a major social benefit.

Applications that understand natural language and can communicate with people are starting to become more mainstream, but there’s still a lot of room for progress. In time, even people who are not technologically educated will be able to use these systems much more easily.

Security and privacy management tools and search engines will also progress. These are some of the most important applications, with social benefits, that are coming.

With population aging, one very big application could be assistant robots for the elderly. Softbank Robotics’ Pepper robot is one example. Reproduced with permission

Q: How could AI harm society? Any unintended negative consequences?

A: Automation, together with AI, will replace a lot of jobs – service jobs, in particular. It’s hard to predict how many. My sense is that this will be gradual, and I think society can adjust, but it will be a major disruption.

There is also AI in weapons systems and battlefield robots, which is a quite dangerous development. Given that governments are putting so much money into this, it’s almost inevitable that some of it will be deployed.

Safety is an issue. If you think about self-driving vehicles, we need to make sure that they’re at least as safe as human drivers. I don’t think any system can be 100 per cent safe. How do we manage the risk? How do we manage this legally? That’s a big issue.

Another thing to consider: if we, as a society, rely on technology more, then our values may shift. Could some future applications, such as eldercare robots, be dehumanizing?

“We shouldn’t be merely selling this technology; we need to warn people about how it can be used in a healthy or socially beneficial way. […] Incorporating ethical managers in AI systems will become important.” – Yves Lespérance

One overarching concern: when systems make decisions based on knowledge that was mined or acquired through machine learning, we must ensure that these decisions are not skewed by biases present in the data. These are all important issues that need to be considered.

Battlefield robots: “Given that governments are putting so much money into this, it’s almost inevitable.”

Q: What are the ethical roles of researchers and decision makers?

A: Researchers must investigate ethical and safety issues, and explain these to the wider public and society at large. We shouldn’t be merely selling this technology; we need to warn people about how it can be used in a healthy or socially beneficial way.

We should work on techniques to ensure that AI systems can explain themselves – why they make certain decisions – so that these systems are understandable to their users as well as to key decision-makers, for example in the case of robot warriors.

Incorporating ethical managers in AI systems will become important. (In fact, AI safety and ethics are getting a lot of attention these days, for instance, from the Future of Life Institute.)

Q: AI is already a big part of everyday life, with Siri and Alexa. Where will AI be in five to ten years? Where are we on the continuum?

A: It’s clear that there has been quite rapid progress in machine learning and problem-solving techniques. In the next five or ten years, we will see improved intelligent applications.

Some of these, such as self-driving vehicles, will take longer to develop than people expect because there are many safety issues to consider. We may still need to have someone in the vehicle to make sure everything is okay. Public transit may take on a more important role.

In truth, some aspects of intelligence are very hard to automate. In terms of systems that are as intelligent as people, that can make decisions in novel contexts … I think this will take somewhat longer. (Hector Levesque has written an interesting book on this: Common Sense, the Turing Test, and the Quest for Real AI.)

“York is one of the best places in the world for computer vision. York also has a lot to offer in machine learning, data mining and reasoning. Being very strong in the humanities, York could also bring to the table the ethics component.” – Yves Lespérance

Q: What is York’s unique contribution to the AI discussion?

A: York is one of the best places in the world for computer vision. We are No. 1 in Canada and No. 3 in the world in the research area of biological and computational vision. York’s influence here will continue to grow.

York also has a lot to offer in machine learning, data mining and reasoning. We can make major contributions here.

Being very strong in the humanities, York could also bring to the table the ethics component. Additionally, York contributes to AI via its combination of arts and technology.

We want to participate in all AI developments in Toronto, Canada and the world.

For more information about Lespérance, visit his faculty page.  To learn more about the Artificial Intelligence Journal, visit the website. To learn more about the Journal of Artificial Intelligence Research, visit the website. To learn more about the Future of Life Institute, visit the website. To read more about Hector Levesque’s book, visit the publisher’s website.

To learn more about Research & Innovation at York, follow us at @YUResearch, watch the York Research Impact Story and see the snapshot infographic.

By Megan Mueller, manager, research communications, Office of the Vice-President Research & Innovation, York University, muellerm@yorku.ca