Facebook AI software could stop users posting risqué shots online
Thu 12 Nov 2015
Having announced new developments in its artificial intelligence (AI) image recognition technology earlier this month, Facebook has confirmed that the software will also alert users if they’re about to upload a photo that they may rather keep away from the public eye.
When a user uploads a photo that may include nude content or images of children, for example, the technology notifies them, and they can then decide whether or not to complete the upload. Speaking at a Facebook event in London last night, Engineering VP Jay Parikh explained: “If I were to upload a photo of my kids playing at the park and I accidentally had it shared with the public, this system could say hey wait a minute, this photo is of your kids […] Normally you post this to just your family members. Are you sure you want to do this?”
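The workflow Parikh describes could be sketched roughly as follows. This is a hypothetical illustration only: the label names, the `should_warn` function and the audience values are invented for the example, and Facebook has not published how its system actually works.

```python
# Hypothetical sketch of a pre-upload warning check.
# The label set and audience values are assumptions, not Facebook's API.
SENSITIVE_LABELS = {"nudity", "child"}

def should_warn(detected_labels, audience):
    """Return a warning string if a photo tagged with a sensitive label
    is about to be shared publicly, else None. Note the upload is never
    blocked -- the user makes the final decision, as Parikh describes."""
    flagged = SENSITIVE_LABELS & set(detected_labels)
    if flagged and audience == "public":
        return ("This photo appears to contain: "
                + ", ".join(sorted(flagged))
                + ". Are you sure you want to share it publicly?")
    return None

# A photo of children shared publicly triggers the prompt;
# the same photo shared with family only would not.
print(should_warn(["child", "park"], "public"))
print(should_warn(["child", "park"], "family"))
```

The key design point, per Parikh's description, is that the system only interrupts with a question; the decision to post remains with the user.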
In the fireside chat, host and Irish comedian Dara Ó Briain asked if his “perennial fear of accidentally posting a photograph of [his] penis” would be averted, to which Parikh replied, “Yep.”
Parikh said that the software is still at the demo stage and has not yet been integrated with Facebook products. He added that the social media giant’s research would continue in the field. “We’re investing in AI and really specifically a field called deep learning to help people understand the world around them,” said Parikh.
The engineer discussed the potential of AI to help Facebook process all of the user data it gathers in a sophisticated way, to ensure that the products it builds remain “useful and valuable.”
Earlier this year, Facebook launched a dedicated AI Research Lab in Paris, in addition to its existing artificial intelligence teams in Silicon Valley and New York. The French branch is tasked with long-term projects on image recognition, natural language processing, speech recognition and the physical and logical infrastructure these systems require.