Facebook announced it’s expanding fact-checking efforts to review the accuracy of photos and videos.
Until now Facebook’s fact-checking efforts have been focused on reviewing articles to reduce the spread of false news.
Going forward, all 27 of Facebook’s fact-checking partners in 17 countries will also review photos and videos to reduce the spread of misinformation in those formats.
How Facebook’s Fact-Checking Works
Facebook’s fact-checking system utilizes a combination of machine learning and feedback from Facebook users to identify potentially false content.
Once this content is identified, it is sent to third-party fact-checkers for review. Fact-checkers can also surface content on their own.
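Facebook hasn’t said how the machine learning signal and user feedback are weighed against each other. Conceptually, though, the triage step amounts to something like the sketch below, where a classifier score and a count of user reports feed a simple rule that decides whether an item goes into the fact-checkers’ queue. The names, thresholds, and logic here are hypothetical illustrations, not Facebook’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    model_score: float  # hypothetical classifier output, 0.0-1.0
    user_reports: int   # number of "false news" reports from users


def should_queue_for_review(post: Post,
                            score_threshold: float = 0.8,
                            report_threshold: int = 10) -> bool:
    """Hypothetical triage rule: send content to third-party fact-checkers
    when either the machine learning score or the volume of user feedback
    is high enough."""
    return (post.model_score >= score_threshold
            or post.user_reports >= report_threshold)


# Example: a post with a middling score but heavy user reporting still gets queued.
post = Post(post_id="123", model_score=0.55, user_reports=42)
print(should_queue_for_review(post))  # True
```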
Facebook explains how this system applies specifically to photos and videos:
“Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken. Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies.”
Facebook will begin using optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles.
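As a rough illustration of that OCR-and-compare step, the sketch below pulls text out of an image with the open-source Tesseract engine (via pytesseract) and fuzzy-matches it against fact-check headlines. The choice of OCR library, the matching method, and the threshold are my own stand-ins; Facebook hasn’t disclosed those details.

```python
from difflib import SequenceMatcher

from PIL import Image
import pytesseract  # open-source OCR; stands in for whatever engine Facebook uses


def extract_text(image_path: str) -> str:
    """Pull any visible text out of a photo with OCR."""
    return pytesseract.image_to_string(Image.open(image_path)).strip()


def matches_fact_checked_claim(image_path: str,
                               fact_check_headlines: list[str],
                               threshold: float = 0.7) -> bool:
    """Return True if the OCR'd text closely resembles a headline
    that fact-checkers have already rated."""
    text = extract_text(image_path).lower()
    return any(
        SequenceMatcher(None, text, headline.lower()).ratio() >= threshold
        for headline in fact_check_headlines
    )


# Hypothetical usage: compare a meme image against known fact-checked headlines.
headlines = ["NASA confirms the Earth will go dark for six days"]
print(matches_fact_checked_claim("meme.jpg", headlines))
```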
The company is also working on new ways to detect if a photo or video has been manipulated.
When false or misleading content is identified, it is usually assigned one of the following ratings:
- Manipulated or Fabricated
- Out of Context
- Text or Audio Claim
Ratings from fact-checkers on photos and videos will be used to further improve the accuracy of Facebook’s machine learning model.
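The announcement doesn’t explain how those ratings feed back into the model, but one plausible reading is that each verdict becomes a labeled training example for the next training run, as in the hypothetical sketch below. The rating names come from the list above; everything else, including the extra “True” rating for cleared content, is an assumption for illustration.

```python
# Hypothetical mapping from fact-checker ratings to binary training labels.
RATING_TO_LABEL = {
    "Manipulated or Fabricated": 1,
    "Out of Context": 1,
    "Text or Audio Claim": 1,
    "True": 0,  # assumed rating for content fact-checkers cleared; not listed in the article
}


def build_training_examples(reviewed_items):
    """Turn fact-checker verdicts into (features, label) pairs that could be
    appended to the training data for a misinformation classifier."""
    examples = []
    for item in reviewed_items:
        label = RATING_TO_LABEL.get(item["rating"])
        if label is not None:
            examples.append((item["features"], label))
    return examples


reviewed = [{"features": {"ocr_text": "fake headline"}, "rating": "Out of Context"}]
print(build_training_examples(reviewed))  # [({'ocr_text': 'fake headline'}, 1)]
```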