“There’s probably an over-abundance of hype over how much it’s going to change everything in the immediate short-term,” Zhao told Fortt. “There’s still a lot of work to be done to eliminate bias, to really make these models robust, to help us understand when they work and when they don’t work, their limitations. Like any technology, once it really hits the mainstream, there are still bugs to be fixed, there are still holes to be patched. I don’t think we’re quite there yet, and people are really excited, but they’re jumping the gun just a little bit. And that’s what we’re doing in academia: really trying to make sure that, as people are excited about the technology, we’re making sure that it’s safe, making sure that all the corner cases and unexpected behaviors get ironed out.”
Earlier this month, Zhao also spoke to CNBC reporter Joe Andrews about some of those potential AI risks in a feature on “deepfake” technology. The ability of neural networks to produce realistic but falsified video and audio has alarmed experts in computer science and beyond, who fear a further erosion of public trust in journalism and government communications.
Zhao has spent a great deal of time speaking with prosecutors and judges (the legal profession is another sector where the implications are huge), as well as reporters and other professors, to get a sense of every nuance of the issue. However, despite his clear understanding of the danger deepfakes pose, he is still unsure how news outlets will respond to the threat.
“Certainly, I think what can happen is ... there will be even less trust in sort of mainstream media, the main news outlets, legitimate journalists [that] sort of react and report real-time stories because there is a sense that anything that they have seen ... could be in fact made up,” Zhao said.
Then it becomes a question of how the press deals with disputes over reality.
“And if it’s someone’s word, an actual eyewitness’ word versus a video, which do you believe, and how do you as an organization go about verifying the authenticity or the illegitimacy of a particular audio or video?” Zhao asked.
Learn more about Zhao’s research on AI security and privacy at the SAND Lab website.