Monday, February 18, 2019

3 things we learned from Facebook's AI chief about the future of artificial intelligence


In recent years, many of the world's biggest tech companies — from Google to Facebook to Microsoft — have been fixated on artificial intelligence and how it can be incorporated into nearly all of their products. Google, for example, rebranded its Google Research division as Google AI ahead of its developer conference this year, during which AI was featured front and center. Mark Zuckerberg also explained how Facebook is using AI in an attempt to crack down on hate speech on its platform during its F8 conference in May.
The AI market is also booming as companies continue to invest in cognitive software capabilities. The International Data Corporation indicates that global spending on AI systems is expected to hit $77.6 billion in 2022, more than tripling the $24 billion forecast for 2018.
But the industry still has a long way to go, and much of its progress could depend on whether academics and industry players succeed in finding a way to give computer algorithms human-like learning capabilities. Systems powered by artificial intelligence — whether the algorithms Facebook uses to detect inappropriate content or the virtual assistants from Google and Amazon that power the smart speakers in your home — still can't infer context the way humans can. Such an advancement could be critical for Facebook as it ramps up its efforts to detect online bullying and identify content related to terrorism on its platforms.
"There are cases that are very obvious, and AI can be used to filter those out or at least flag for moderators to decide," Yann LeCun, chief AI scientist for Facebook AI Research, said in a recent interview with Business Insider. "But there are a large number of cases where something is hate speech but there's no easy way to detect this unless you have broader context ... For that, the current AI tech is just not there yet."
