
Why Facebook Will Never Fully Solve Its Problems with AI

Facebook CEO Mark Zuckerberg testifies in a Senate Judiciary and Commerce committees joint hearing regarding the company’s use and protection of user data.

Aaron P. Bernstein / Reuters

Mark Zuckerberg offered AI as a panacea for Facebook’s massive content problem during Tuesday’s testimony before the Senate Judiciary and Commerce committees — but this is ultimately a false promise.

Leaning on the promise of artificial intelligence to detect and remove the kind of problem content that is drawing scrutiny to the social network leaves room for Facebook to never fully or directly take responsibility for what’s happening on its platform, and to keep dodging that responsibility at scale.

About one hour into his marathon testimony, Facebook’s CEO unexpectedly gave up the “neutral platform” defense that Facebook, and so many other technology companies, have deployed to distance themselves from being held accountable for the problems on their platforms.

“Proposing AI as the solution leaves a long period where the issue is not being addressed, during which Facebook’s answer is, ‘We are working on it.’”

“In the past, we’ve been told that platforms like Facebook, Twitter, Instagram, the like are neutral platforms. … They bore no responsibility for the content,” Sen. John Cornyn told Zuckerberg. “Do you agree now that Facebook and the other social media platforms are not neutral, but bear some responsibility for the content?”

“I agree that we’re responsible for the content,” Zuckerberg answered. It was an astonishing concession. But it didn’t last.

Seconds later, he launched into a talking point about how AI could address undesirable content, effectively abdicating Facebook’s responsibility for the problem. He would return to this defense 10 more times before his testimony ended.

"In the future, we're going to have tools that are going to be able to identify more types of bad content” like hate speech, fake news, obscenity, revenge porn, and other controversial content on Facebook, Zuckerberg said. The company is hiring more content moderators, with the aim of having 20,000 workers by the end of this year, and “building AI tools is going to be the scaleable way to identify and root out most of this harmful content.”

Call it AI solutionism. It’s an attractive idea. But it will never fully work.

“Proposing AI as the solution leaves a very long time period where the issue is not being addressed, during which Facebook’s answer to what is being done is, ‘We are working on it,’” Georgia Tech AI researcher Mark Riedl told BuzzFeed News.

Fake news running rampant? The algorithm hasn’t been trained on enough contextual data. Violence-inciting messages in Myanmar? The AI isn’t good enough, or maybe there aren’t enough Burmese-speaking content moderators — but don’t worry, the tools are being worked on. AI automation also gives the company deniability: If it makes a mistake, there’s no holding the software accountable.

“There is a tendency to want to see AI as a neutral moral authority,” Riedl told BuzzFeed News. “However, we also know that human biases can creep into data sets and algorithms. Algorithms can be wrong, and there needs to be recourse.” Human biases can get coded into the AI and then applied uniformly to users of different backgrounds, in different countries with different cultures, and across wildly different contexts.
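
Riedl’s point about bias is concrete enough to sketch. The toy audit below is purely illustrative: the data, the dialect groups, and the keyword “model” are all invented, standing in for a trained classifier and a real labeled sample. It shows how a system trained mostly on one dialect can flag benign posts in another at a far higher rate:

# Illustrative bias audit for a hypothetical moderation classifier.
# All data and the "model" below are invented for the example; a real
# audit would use a trained model and a large, representative sample.

from collections import defaultdict

# (post_text, dialect_group, human_label_is_violation)
labeled_sample = [
    ("that match was straight fire", "dialect_a", False),
    ("you played terribly, loser",   "dialect_a", True),
    ("dis ting dey sweet well well", "dialect_b", False),
    ("go away, nobody wants you",    "dialect_b", True),
]

def model_flags(text: str) -> bool:
    """Stand-in for a classifier that over-fires on a dialect
    that was rare in its training data."""
    blocklist = {"loser", "dey", "ting"}  # naive keyword model
    return any(word in blocklist for word in text.split())

# Count false positives (flagged but not a violation) per group.
false_positives = defaultdict(int)
benign_posts = defaultdict(int)
for text, group, is_violation in labeled_sample:
    if not is_violation:
        benign_posts[group] += 1
        if model_flags(text):
            false_positives[group] += 1

for group, total in benign_posts.items():
    print(f"{group}: false-positive rate {false_positives[group] / total:.0%}")

Running this prints a 0% false-positive rate for dialect_a and 100% for dialect_b: benign posts in the underrepresented dialect get flagged simply because the model never learned what benign looks like there. That disparity, produced by nothing more than skewed training data, is the kind of error Riedl argues needs recourse.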

Facebook did not immediately respond to a request for comment from BuzzFeed News.

To be fair, even Zuckerberg was up-front about some of the limitations of AI, saying that while AI may be able to root out hate speech in five to 10 years, “today we are not there yet”:

“Some problems lend themselves more easily to AI solutions than others. Hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced. You need to understand what is a slur, and whether something is hateful. Not just in English — majority of people on Facebook use it in languages that are different across the world. Contrast that, for example, with an area like finding terrorist propaganda, which we've been very successful at deploying AI tools on already.


Today, as we sit here, 99% of the ISIS and al-Qaeda content that we take down on Facebook, our AI systems flag before any human sees it. So that's success in terms of rolling out AI tools that can proactively police and enforce safety across the community.”

But several AI researchers told BuzzFeed News that this ignored key facets of the problem. First, as Cornell AI professor Bart Selman said, you could argue that artificial intelligence, and algorithms in general, seriously contributed to Facebook’s predicament in the first place.

“AI algorithms operate by finding clever ways to optimize for a preprogrammed objective,” Selman said. “Facebook instructs its news feed algorithms to optimize for ‘user engagement.’”

When Facebook users engaged with posts that reaffirmed their biases, Facebook showed them more of the same. News feeds grew increasingly polarized. Then bad actors realized they could game the system, and fake news and extremist content became a problem.
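
Selman’s mechanism can be shown in miniature. The simulation below is a toy, with an invented user model and a simple epsilon-greedy ranker standing in for a real news feed system; none of it reflects Facebook’s actual code, but it shows how optimizing for engagement alone drifts toward the most polarizing content:

# Toy feedback loop: a ranker that optimizes only for engagement.
# The user model, scores, and update rule are invented for illustration.

import random

random.seed(0)

POSTS = [i / 9 for i in range(10)]   # "polarization" scores 0.0 .. 1.0
value = {p: 0.0 for p in POSTS}      # ranker's engagement estimates
shows = {p: 0 for p in POSTS}

def user_engages(polarization: float) -> bool:
    """Assumption: users engage more with posts that confirm their biases."""
    return random.random() < 0.2 + 0.6 * polarization

early, late = [], []
for step in range(10_000):
    # Epsilon-greedy: mostly show whatever the model predicts engages best.
    if random.random() < 0.1:
        post = random.choice(POSTS)
    else:
        post = max(POSTS, key=lambda p: value[p])
    shows[post] += 1
    # Incremental average of observed engagement -- the sole objective.
    value[post] += (user_engages(post) - value[post]) / shows[post]
    (early if step < 1000 else late).append(post)

print(f"avg polarization shown, first 1,000 steps: {sum(early) / len(early):.2f}")
print(f"avg polarization shown, last 1,000 steps:  {sum(late) / len(late):.2f}")

The objective never mentions polarization; the ranker simply follows engagement wherever it leads, which is exactly the dynamic Selman describes.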

Of course, Zuckerberg doesn’t want to talk about how AI got us into this mess.

As for Facebook’s systems catching what it considers “bad” content, Jana Eggers, the CEO of AI startup Nara Logics, said she “doubts” Facebook is rooting out as much of the terrorist content as Zuckerberg said it did. “There is plenty of that propaganda that is also being spread that they don't find,” she told BuzzFeed News. “I worry that he has a false sense of pride on how much propaganda they are actually getting, and that false sense of pride will lead to its own set of problems.”
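
Part of what makes Eggers’ worry plausible is that the 99% figure is a proactive rate, not a measure of coverage. A minimal sketch, with every number invented for illustration, shows the difference:

# Zuckerberg's 99% is a "proactive rate": of the content that WAS taken
# down, how much did AI flag before a human saw it? It is not recall,
# which asks how much of ALL violating content was caught at all.
# Every number below is invented for illustration.

total_violating_posts = 10_000        # assume: all propaganda actually posted
removed_posts = 5_000                 # assume: posts found and taken down
removed_flagged_by_ai_first = 4_950   # assume: AI flagged these pre-review

proactive_rate = removed_flagged_by_ai_first / removed_posts
recall = removed_posts / total_violating_posts

print(f"proactive rate (the cited figure): {proactive_rate:.0%}")    # 99%
print(f"recall (what Eggers is questioning): {recall:.0%}")          # 50%

By construction, posts that evade detection never enter the proactive rate’s denominator, so the metric can sit at 99% no matter how much propaganda goes unfound.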

What’s more, the researchers warned that Zuckerberg’s timeline of AI understanding the human context in hate speech within five to 10 years could be unrealistic. “AI systems would have to develop fairly sophisticated forms of ethical reasoning and journalistic integrity to deal with such language,” said Cornell University’s Selman. “We are at least 20 to 30 years away from that for AI systems, and that may be an optimistic estimate.” But even Zuckerberg’s optimistic 10-year timeline would be “too long of a wait,” he said.

Tarleton Gillespie, who studies how algorithms and platforms shape public discourse at Microsoft Research, told BuzzFeed News that his skepticism went beyond whether it would take “a while” for technology companies to develop AI adequate to address hate speech and controversial content on platforms. “AI likely can't ever do what platforms want it to do,” he said.

"AI likely can't ever do what platforms want it to do."

At its size, Facebook is never going to fully address its vast content problem. Yes, having some AI systems to help those 20,000 content moderators is better than none. “But AI for content monitoring would need to be carefully designed and monitored with the right human interest–aligned objectives in mind,” Selman said.
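
One plausible reading of “carefully designed,” though neither Selman nor Facebook has specified a mechanism, is confidence-based triage: the model acts alone only when it is very sure, and routes everything ambiguous to human reviewers. The thresholds and keyword scorer below are invented stand-ins for a trained system:

# Confidence-based triage: auto-remove only high-confidence violations,
# auto-keep only high-confidence benign posts, queue the rest for humans.
# The thresholds and the scoring model are invented for illustration.

from typing import Literal

REMOVE_THRESHOLD = 0.95  # assumed: near-certain policy violation
KEEP_THRESHOLD = 0.05    # assumed: near-certain benign

def violation_score(text: str) -> float:
    """Stand-in for a trained classifier's probability of violation."""
    hits = sum(word in text.lower() for word in ("attack", "kill", "destroy"))
    return min(1.0, 0.35 * hits)

def triage(text: str) -> Literal["auto_remove", "auto_keep", "human_review"]:
    score = violation_score(text)
    if score >= REMOVE_THRESHOLD:
        return "auto_remove"
    if score <= KEEP_THRESHOLD:
        return "auto_keep"
    return "human_review"  # ambiguity goes to the 20,000 moderators

for post in ("lovely weather today",
             "we should attack this problem head on",
             "attack kill destroy attack kill"):
    print(f"{triage(post):>12}: {post!r}")

Notice the middle case: figurative language lands in the human queue precisely because the model cannot read context, which is why the need for those moderators never goes away.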

Which implies a perpetual problem. Culture, the complexity of language, and the tricks of those who willfully violate platform standards to game AI systems are all, as the people developing those systems themselves acknowledge, in flux. That makes the training data itself fluid by definition, Microsoft Research’s Gillespie pointed out. Platforms will always need people to detect and assess new forms of hate and harassment; they will never eliminate the need for humans to deal with this problem.

What AI automation really does, Gillespie argued, is “detach human judgment from the encounter with the specific user, interaction, or content and shift it to the analysis of predictive patterns and categories for what counts as a violation, what counts as harm, and what counts as an exception.” If Facebook truly wants to make a good-faith effort to grapple with its content problem, it shouldn’t outsource this judgment to general AI.

For as long as Facebook is as huge as it is, AI will never be a complete solution. One real — though unlikely — solution? Downsize. “Venture capitalists and the market may not have supported such an approach,” Selman said, “[but] if Facebook had opted for a more manageable size, the core problems would likely have been avoided.”

“It's indeed the relentless pursuit of rapid growth that drove the need for near-complete AI automation, which caused the problems with these platforms.”



Author: Davey Alba
Published at: Wed, 11 Apr 2018 10:59:18 -0400

Credits: https://www.buzzfeed.com
