Image: Yenpitsu Nemoto/Getty Images, via Wired.

Humans can’t expect AI to just fight fake news for them

By Tom Simonite

Here’s some news that’s not fake: Not everything you read on the internet is true. Trouble is, it can be hard to tell truths from untruths, and there’s evidence untruths travel faster. Many hands have been wrung in recent months over what to do about made-up news stories created to convert social media shares into page views, ad dollars, and perhaps even political traction. The modest first results from an effort to crowdsource machine learning technology to help stem the flood of falsity are a reminder that machines may help us grapple with fake news—but only if humans take the lead.

Late last year, Facebook’s director of AI research Yann LeCun told journalists that machine learning technology that could squash fake news “either exists or can be developed.” The company has since said it tweaked the News Feed to suppress fake news, although it’s unclear to what effect. Not long after LeCun’s comment, a group of academics, tech industry insiders, and journalists launched their own project, called the Fake News Challenge, to try to get fake news-detecting algorithms built out in the open.

The first results from that effort were released this morning. The algorithms the winning teams created might help rein in online misinformation, but as tools to speed up humans working on the problem, not autonomous fake news killbots.

The first task posed by the Fake News Challenge asked teams to make software that can identify whether two or more articles are on the same topic, and if they are, whether they agree, disagree, or just discuss it. The top three teams were from Cisco cybersecurity division Talos Intelligence; TU Darmstadt, in Germany; and University College London. Each notched up more than 80 percent of a perfect score on a metric that awarded most points for the more challenging job of identifying whether two stories agreed. All three used deep learning, the technique used by Google, Facebook, and others to parse and translate text.
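To make the weighting concrete, here is a toy scorer in the spirit of that metric — written for illustration, not the contest’s official evaluation code. It assumes each article pair is labeled one of agree, disagree, discuss, or unrelated, with partial credit for telling related from unrelated pairs and most of the credit reserved for the harder stance call:

```python
def fnc_score(gold, predicted):
    """Toy weighted scorer: a quarter of the credit for correctly
    separating related from unrelated pairs, the remaining three
    quarters for getting the stance (agree/disagree/discuss) right."""
    RELATED = {"agree", "disagree", "discuss"}
    score = 0.0
    for g, p in zip(gold, predicted):
        # Easy sub-task: related vs. unrelated
        if (g in RELATED) == (p in RELATED):
            score += 0.25
        # Hard sub-task: exact stance on related pairs
        if g in RELATED and g == p:
            score += 0.75
    return score
```

Under this weighting, a system that only separated related from unrelated pairs would top out at a fraction of the maximum, which is why 80 percent of a perfect score is a meaningful result.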

That might not sound very relevant to the problem of debunking lies spreading online. But the contest’s organizers say that given the limitations of how well software can understand language, the best thing machine learning could do right now is help people tracking fake news work faster. Algorithms that could cluster together articles taking a particular line on something could speed up the work of screening—and rebutting—misinformation.

“A lot of the work of fact-checkers and journalists tracking fake news is manual, and I hope we can change that,” says Delip Rao, an organizer of the Fake News Challenge, and founder of Joostware, which builds machine learning systems. “If you catch a fake news item in the first few hours you have a chance to prevent it from spreading, but after 24 hours it becomes difficult to contain.”

Fake News Challenge plans to announce more contests in coming months. One option for the next one is asking people to make code that can screen images with overlaid text. That format has been adopted by some people who set up fake news sites to harvest ad dollars after new controls were introduced by Google and Facebook, says Rao.

You can expect Fake News Challenge contestants and others to gradually ask more of their news-analyzing algorithms, but don’t hold your breath for fully autonomous fact checkers. Existing technology isn’t close to having the ability to understand language and make decisions that would be needed. Giving machines the power to effectively censor certain kinds of information would also come with a lot of baggage. “I think there’s a chance to algorithmically identify things that are more likely than not to be ‘fake news,’ but they will always work best in combination with a person with a sharp eye,” says Jay Rosen, a professor of journalism at New York University.

He also cautions anyone pondering the hard-to-define problem of fake news to think more broadly about it. “Almost all the attention goes to the supply of fake news. How to reduce it, identify it, choke it off, label it,” says Rosen. “There is almost no interest in the demand for fake news.”

Algorithms will be helpful, but real progress on understanding or controlling the fake news phenomenon is ultimately about humans, not machines.

Published by Wired
