How Facebook Plans to Remove Harmful Content

The social media giant announced a new AI technology that can identify harmful content in order to make Facebook safer. Facebook's new AI model uses "few-shot" learning to cut the time needed to detect new kinds of harmful content from months down to weeks.

It is worth mentioning that few-shot learning is closely related to zero-shot learning: both aim to teach a model to handle a task it has not seen before by generalizing from instructions or a small handful of examples, rather than from a large labeled dataset.
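
To make the idea concrete, below is a minimal, hypothetical sketch of few-shot text classification using class prototypes. It assumes the open-source sentence-transformers package and its all-MiniLM-L6-v2 model; the labels and example texts are invented for illustration, and this is not Facebook's actual system.

```python
# Minimal sketch of few-shot text classification via class prototypes.
# Assumptions: the sentence-transformers package and the "all-MiniLM-L6-v2"
# model are available; labels and texts are invented for illustration and
# are NOT Facebook's system or data.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# A handful ("few shots") of labeled examples per class.
support_set = {
    "policy_violation": [
        "This post encourages people to harass the user above.",
        "Join us in spamming this account until it gets banned.",
    ],
    "benign": [
        "Congratulations on the new job, well deserved!",
        "Does anyone have a good recipe for banana bread?",
    ],
}

# One prototype (mean embedding) per class, built from the few examples.
prototypes = {
    label: model.encode(examples).mean(axis=0)
    for label, examples in support_set.items()
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(text: str) -> str:
    """Assign the class whose prototype is closest in cosine similarity."""
    emb = model.encode([text])[0]
    return max(prototypes, key=lambda label: cosine(emb, prototypes[label]))

print(classify("Everyone should pile onto this person's comments."))
```

The design choice here is deliberately simple: with only a few labeled examples available, averaging their embeddings into a prototype and classifying by nearest prototype avoids training a full model, which is the general intuition behind few-shot approaches.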

Facebook's new AI model can take action against new kinds of harmful content far sooner than previous systems. Traditionally, training an individual AI system to spot a new type of content is not an easy task: in most cases it takes several months to collect and label the thousands of examples needed.

The new technology announced by Facebook is effective in one hundred languages and works on both images and text. Although it supplements current methods rather than replacing them, it is not a small addition: the impact of Facebook's new AI model is one of scale and speed.

 

Facebook and AI

The social media giant revealed that the new system is already deployed and live on Facebook. The company has tested it in production, using it to spot harmful Covid-19 vaccination misinformation and to identify other policy violations.

Facebook claims that the new AI system has already helped reduce the amount of hate speech published on the platform. It is notably accurate at labeling written text as hate speech, outperforming other few-shot learning techniques by up to 55%, and by 12% on average.
