Creating and Detecting Deep Learning-generated Fake Reviews

September 06, 2017

PHOTO FROM FORBES.COM / CREDIT: GERALT/PIXABAY

Results of a recent study from the SAND Lab, led by Professors Ben Zhao and Heather Zheng, are making waves in the popular press. In a paper to appear at the ACM Conference on Computer and Communications Security (CCS 2017), students from the SAND Lab successfully used deep learning models to mimic online product reviews written by human users. Fake online reviews today are often generated by crowdturfing campaigns, in which real users write personalized content for pay (a “dark” version of crowdsourcing services like Amazon’s Mechanical Turk). Such campaigns produce realistic reviews, but they are costly and easily identified by the “bursty” timing of their output. The SAND Lab work describes a new attack in which malicious parties use software to generate large volumes of realistic online reviews at essentially no cost, and control their timing to evade even today’s advanced detection tools. The study also showed that real users not only fail to distinguish these fake reviews from those written by humans, but actually rate the fake reviews as “useful,” underscoring the potential impact of these attacks. The paper, led by UChicago PhD student Yuanshun Yao and Postdoc Bimal Viswanath, also identifies new mechanisms to detect these fake reviews by looking for statistical properties of natural writing that are lost in the review modeling/generation process.
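To make that detection idea concrete, here is a minimal sketch of one way such a defense could work: compare the character-level frequency distribution of a suspect review against a reference corpus of known-human text, and flag reviews whose distribution diverges too far. This is an illustrative toy, not the paper's actual detector; the function names, the use of KL divergence, and the character-level granularity are all assumptions for demonstration.

```python
from collections import Counter
import math

def char_distribution(text):
    """Normalized character-frequency distribution of a text sample."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL divergence D(p || q) over the union of both alphabets.
    Missing characters get a tiny floor probability eps."""
    chars = set(p) | set(q)
    return sum(
        p.get(c, eps) * math.log(p.get(c, eps) / q.get(c, eps))
        for c in chars
    )

def divergence_from_reference(sample, reference_corpus):
    """Score a review against known-human reference text.
    Higher scores mean the sample's character statistics drift further
    from natural writing -- a hypothetical flagging signal, not the
    paper's published method."""
    return kl_divergence(char_distribution(sample),
                         char_distribution(reference_corpus))
```

A real system would need a much larger reference corpus and a tuned threshold, but the core intuition matches the paper's framing: generative models leave measurable statistical fingerprints that human writing does not.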

The results of this project have resonated with the popular press around the world. After an initial interview and article with Business Insider UK, news of the work has spread to numerous newspapers, technology news sites, financial news services, and blogs, from the US to the UK, India, China, and Australia. Here are a few of these articles: