In a world full of cameras, how can you ‘opt out’ of appearing in photos?
Wed 5 Oct 2016
Researchers in Hong Kong have proposed and developed a system which allows users a high level of control over whether their likeness actually appears in any photo of themselves – but by the same token would allow property owners or civic authorities to stop photography being possible in specified zones.
The scheme, named Cardea, allows individuals to register their likeness with facial recognition systems and then set preferences for places and/or contexts in which they are prepared to be photographed – or where they explicitly do not want to appear in photos. It also allows the user to temporarily override their permissions in the moment by making a hand gesture (palm for ‘no’, victory sign for ‘yes’).
The user can set preferences for nine scenarios: shopping, travelling, park & street, eating & drinking, working & studying, scantily clad, medical care, religion and entertainment. Any unticked categories will default to ‘consent’ when a photo is taken.
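The preference logic described above – per-scenario choices defaulting to consent, with gestures taking precedence – can be sketched as follows. All names here are illustrative assumptions; the paper does not publish Cardea's data model in this form.

```python
from enum import Enum

class Choice(Enum):
    CONSENT = "consent"
    DENY = "deny"

# The nine scenario categories described in the article.
SCENARIOS = [
    "shopping", "travelling", "park_street", "eating_drinking",
    "working_studying", "scantily_clad", "medical_care",
    "religion", "entertainment",
]

def effective_choice(prefs, scenario, gesture=None):
    """A gesture overrides stored preferences; unticked scenarios default to consent."""
    if gesture == "palm":       # raised palm: do not photograph me
        return Choice.DENY
    if gesture == "victory":    # victory sign: consent for this photo
        return Choice.CONSENT
    return prefs.get(scenario, Choice.CONSENT)

user_prefs = {"scantily_clad": Choice.DENY, "medical_care": Choice.DENY}
print(effective_choice(user_prefs, "shopping"))                 # unticked: defaults to consent
print(effective_choice(user_prefs, "medical_care"))             # stored preference: deny
print(effective_choice(user_prefs, "medical_care", "victory"))  # gesture wins: consent
```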
The user can also specify definite locations where they do not want to be photographed, such as a campus or place of work, reducing the system’s dependence on neural networks to identify context and place.
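A location blocklist of this kind reduces to a simple geometric test. The sketch below uses a bounding box for brevity; the coordinates and field names are illustrative assumptions, and a real deployment would use proper geofence polygons.

```python
# Illustrative blocked zone expressed as a latitude/longitude bounding box.
CAMPUS = {"lat_min": 22.332, "lat_max": 22.340,
          "lng_min": 114.262, "lng_max": 114.270}

def in_blocked_zone(lat, lng, zone):
    """True if the photo's GPS coordinates fall inside the user's no-photo zone."""
    return (zone["lat_min"] <= lat <= zone["lat_max"]
            and zone["lng_min"] <= lng <= zone["lng_max"])

print(in_blocked_zone(22.336, 114.266, CAMPUS))  # inside the box: True
print(in_blocked_zone(22.300, 114.200, CAMPUS))  # well outside: False
```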
A Faraday cage for photographers
But Cardea is a local proposal for a much wider system – a potential vendor-neutral Privacy Protection Framework which would cover areas, cities, or perhaps even nations; and as such would allow those who might want to prohibit photography on their premises – such as restaurants, concert venues or cinemas – to effectively achieve this. Likewise, Cardea would provide governments with the technical means of enforcing controversial bans and restrictions on photography in public places.
It’s not reasonable to expect low-power devices to accommodate the CPU and energy demands of facial recognition and re-processing (i.e. blurring a person’s face or even removing them from the picture where possible), so Cardea sends the image to a cloud server where the recognition processes are performed. If a face is successfully linked to a user who has set preferences under the system, their wishes will be observed, and the photo sent back to the device either in its original state (no changes needed) or a processed state (prohibitions observed/omissions made).
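The server-side decision step just described – match each detected face to a registered user, then apply that user's preference – might look something like this in outline. The function and variable names are assumptions for illustration, not Cardea's actual API.

```python
def faces_to_blur(detected_faces, registry):
    """Server-side sketch: return the IDs of faces that must be processed out.

    detected_faces: list of (face_id, matched_user_or_None) pairs
                    produced by the recognition stage.
    registry:       user -> "consent" / "deny" for this photo's scenario.
    An empty result means the photo is returned to the device unchanged.
    """
    to_blur = []
    for face_id, user in detected_faces:
        # Unrecognised faces (user is None) have no preferences to enforce.
        if user is not None and registry.get(user) == "deny":
            to_blur.append(face_id)
    return to_blur

faces = [("f1", "alice"), ("f2", None), ("f3", "bob")]
registry = {"alice": "deny", "bob": "consent"}
print(faces_to_blur(faces, registry))  # only alice's face needs blurring: ['f1']
```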
Retro-active permissions and ‘partner erasure’
A visual-protection scheme that reached this level of uptake (and nothing less than ISO certification and mandatory enforcement would achieve it, in Europe) would presumably extend to the use of photos on social networks and sites such as Flickr, which would be bound to respect users’ choices.
Images themselves could be hashed in the cloud so as to be identifiable and subject to the preferences of those concerned, even if the photo’s metadata should be stripped away – a similar process to the hashing algorithms which let copyright holders recognise illegally uploaded segments of their work on YouTube and other video sharing sites.
Effectively this means that a compromising or intimate photo which a user allowed to be taken at an earlier point could later become blurred out if the user changes their preferences – for ‘scantily clad’, for a certain location or situation, or for being photographed together with another person the system can recognise. Cardea allows the user to ban their appearance in any photo in which a specified other person also appears.
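That last, partner-based rule is a simple set intersection once the people in a photo have been recognised. A minimal sketch, with field names assumed for illustration:

```python
def must_blur(user, prefs, people_in_photo):
    """Blur `user` if anyone else in the photo is on their blocked-partner list.
    `prefs["blocked_partners"]` is a hypothetical field, not Cardea's actual schema."""
    blocked = prefs.get("blocked_partners", set())
    others = set(people_in_photo) - {user}
    return bool(blocked & others)

prefs = {"blocked_partners": {"carol"}}
print(must_blur("alice", prefs, ["alice", "carol", "dave"]))  # True: carol is present
print(must_blur("alice", prefs, ["alice", "dave"]))           # False: no blocked partner
```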
Retroactive preference changes could also apply at a corporate or civic level, with entire swathes of photo collections rendered useless if, for instance, a tourist attraction decides to ban photography of its property in a context where neural networks can recognise the location in photos.
If photos themselves become as volatile as websites in terms of availability, printed photos might well come back into fashion.
The researchers, from the Hong Kong University of Science and Technology, devised two hand gestures which reverse a user’s preferences for any photo scenario:
The raised palm accords with the universal photographic meme equivalent to ‘no comment’ or ‘do not take this picture’. The famous victory sign is an interesting choice to indicate user consent. However, hand-gesture recognition is currently a harder task than facial recognition (since fewer resources have gone into the field), so it seems one must imitate either Winston Churchill or Mr. Spock in order to provide an obvious enough shape – and the Vulcan salute is a bit harder to do.
The research team conducted its experiments with Android smartphones, also provided a desktop user interface for the system, and achieved 86% accuracy among the test subjects in recognising faces and subsequently respecting user preferences. The team notes that there is room for improvement in recognising hand gestures, and also that many photos taken in the ‘entertainment’ context are snapped in very low light, which can interfere with recognition algorithms even after the photo has been transferred from the device to the cloud server.
The future of photography control
Issues around the legality of taking someone’s photographic likeness have never been in greater contention, with the massive rise in the number of sensors snapping the world via smartphones, surveillance cameras and IoT devices frequently grating against governments and civic authorities wanting to control the use of photography in areas they might consider sensitive. The topic gained further friction in 2016 as the advent of Pokémon Go sent millions of people into public and private areas (as well as some very dangerous ones) in pursuit of Pokémon.
Other schemes have been proposed which allow individuals to express preferences as to whether they consent to appear in a photograph. One involved wearing a QR code, another distinctive headwear. However, nearly all of the schemes in question either assume that the user is aware of being photographed, or require an improbable level of effort and sartorial commitment.
Lifelogging systems such as Google Glass, Microsoft HoloLens and Narrative Clip are recognisable enough to regulate, and some organisations have gone to the trouble of doing so – particularly in the case of Google Glass.
But since photos retain an immediacy of impact which is a different proposition to video, and since a video-enabled version of Cardea would be unworkable due to the amount of data involved, it seems likely that further proposals to regulate photography at a technological level will be presented over the next five years. Though possible remedies against infringements such as revenge porn might prove the popular headline-driver, it seems likely that the impetus would be driven by government and business.