Much of what we hear about artificial intelligence today focuses on its potential, with new capabilities from machine learning, neural networks, and data science driving applications in virtually every industry. But as with all technologies, AI also has a dark side: vulnerabilities that can be exploited by "bad actors."
Ben Zhao, Neubauer Professor of Computer Science at UChicago, brings the myriad adversarial uses of AI out of the shadows to expose potential flaws and improve protections. On this week’s episode of the University of Chicago podcast Big Brains, Zhao chats with host Paul Rand about some of his recent projects, from training a neural network to write fake restaurant reviews to finding the “backdoors” that could allow hackers to trick facial recognition security systems or automated vehicles.
In the interview, Zhao argues that it is up to computer scientists to carefully scrutinize the new AI techniques and applications appearing with startling frequency in the modern world.
“I think this is one of those things where, whether it's atomic fission or the newest gene splicing technique, once science has gone to a certain level you can only hope to make it as balanced as possible,” Zhao says. “Because, in the wrong hands it will get used in the wrong way. And so as long as the science is moving and technology is moving, you have to try to nudge it towards the light. And so in this sense that's what we're trying to do. These techniques are coming. And there's no stopping that. So the only question is, will they be used for good or will they be used for evil? And can we stop it from being used and weaponized in the wrong way?”