Jim: For the last year or so, “fake news” has been, well, in the news. Whether you use that term to describe intentionally false stories designed to go viral on social media, or as a catch-all term for the mainstream media, there’s a magnifying glass on the accuracy of the news like never before.
But there’s another type of misinformation that could be just as damaging, if not more so, to society, at least according to researchers at the University of Chicago.
Katherine: UChicago researchers have published a white paper on how artificial intelligence can be used to generate sophisticated fake online reviews for sites like Yelp that are virtually indistinguishable from real ones. The researchers used a deep learning technique called a “recurrent neural network,” which produces real-sounding reviews after training on thousands of actual online reviews.
The reviews produced by the UChicago researchers proved difficult for both detection software and human readers to spot; the researchers found them not just believable but actually “useful.” Here’s an example of an AI-generated review, according to Business Insider:
“I love this place. I went with my brother and we had the vegetarian pasta and it was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a great breakfast and a small spot with a great deal.”
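For the curious, the technique the researchers name, a character-level recurrent neural network trained on a corpus of real reviews, can be sketched roughly as follows. This is a toy illustration, not the researchers' actual system: the tiny corpus, the network size, and the sampling loop are all stand-in assumptions, and the training step (backpropagation through time) is omitted for brevity, so this untrained network emits gibberish rather than plausible reviews.

```python
import numpy as np

np.random.seed(0)

# Stand-in corpus; a real system would train on thousands of scraped reviews.
corpus = "the food was great. the service was great. the beer was good. "
chars = sorted(set(corpus))
vocab = len(chars)
c2i = {c: i for i, c in enumerate(chars)}
i2c = {i: c for c, i in c2i.items()}

# Randomly initialized RNN weights (untrained).
hidden = 32
Wxh = np.random.randn(hidden, vocab) * 0.01  # input -> hidden
Whh = np.random.randn(hidden, hidden) * 0.01  # hidden -> hidden (recurrence)
Why = np.random.randn(vocab, hidden) * 0.01   # hidden -> output
bh = np.zeros(hidden)
by = np.zeros(vocab)

def step(h, x_idx):
    """One RNN step: consume one character, return new state and
    a probability distribution over the next character."""
    x = np.zeros(vocab)
    x[x_idx] = 1.0
    h = np.tanh(Wxh @ x + Whh @ h + bh)
    y = Why @ h + by
    p = np.exp(y - y.max())
    return h, p / p.sum()

def sample(seed_char, n):
    """Generate n characters by repeatedly sampling from the model."""
    h = np.zeros(hidden)
    idx = c2i[seed_char]
    out = [seed_char]
    for _ in range(n):
        h, p = step(h, idx)
        idx = int(np.random.choice(vocab, p=p))
        out.append(i2c[idx])
    return "".join(out)

print(sample("t", 60))
```

After training (typically via backpropagation through time over many passes of the corpus), the same sampling loop is what yields fluent, review-like text.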
Jim: The researchers said they aren’t aware of any actual instances of AI being used to generate fake reviews, but noted it wouldn’t be hard to do (you just need some simple computer hardware and a database of actual reviews).
The issue is that users of sites like Yelp and Amazon rely on truthful reviews. And while posting fake reviews online is nothing new, AI could unleash waves of positive or negative fake reviews at a scale we’ve never seen.
“In general, the threat is bigger [than fake news],” Ben Y. Zhao, a professor of computer science at the University of Chicago, told Business Insider.
“It is going to progress to greater attacks, where entire articles written on a blog may be completely autonomously generated along some theme by a robot, and then you really have to think about where does information come from, how can you verify … that I think is going to be a much bigger challenge for all of us in the years ahead.”