For this week’s post, I thought I would continue my reflections on images, power, and Internet platforms. Of particular interest to me was our discussion on how to blur an image. During this conversation, I immediately thought of how blurring is often used as a mechanism for censorship or privacy. For example, network television shows often blur sexually explicit images, or blur individuals to anonymize their identities. Blurring is but one way to censor or anonymize an image; other image processing techniques include pixelization (which reduces the resolution of a region) or covering part of an image with a black bar. Colloquially, these image processing techniques are termed “fogging”.
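As a small illustration of the two techniques mentioned above, here is a minimal sketch in Python, assuming the third-party Pillow imaging library is installed; the function names and parameters are my own for illustration:

```python
# Sketch of two "fogging" techniques: Gaussian blur and pixelization.
# Assumes Pillow is installed (pip install Pillow).
from PIL import Image, ImageFilter

def blur_region(img, box, radius=12):
    """Gaussian-blur the rectangular region `box` = (left, upper, right, lower)."""
    region = img.crop(box)
    img.paste(region.filter(ImageFilter.GaussianBlur(radius)), box)
    return img

def pixelate_region(img, box, block=16):
    """Pixelate a region by downsampling, then upsampling with nearest-neighbor."""
    region = img.crop(box)
    w, h = region.size
    small = region.resize((max(1, w // block), max(1, h // block)), Image.NEAREST)
    img.paste(small.resize((w, h), Image.NEAREST), box)
    return img
```

Notably, both operations only degrade the pixels inside the chosen box rather than deleting them, which is relevant to the reversibility concern discussed below.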
This image is actually from researchers More, Souza, Wehrmann, and Barros (2018), who developed an AI to censor images of nude women by covering explicit regions with swimsuits.
Part of why I am interested in fogging on Internet media platforms is the idea of a user’s right to privacy. As one example, Google Maps allows users to request that images of their homes or persons be “fogged” in Street View. This feature started in 2008, in response to privacy concerns about people being photographed and identifiable on Street View without their knowledge or permission. However, blurring or pixelating as a privacy measure can also be defeated. Wired published an article summarizing how several AI systems can recognize faces even when they are blurred or pixelated: https://www.wired.com/2016/09/machine-l ... hers-show/. Thus, a user’s request to be anonymous does not necessarily mean that the request will be fulfilled.
From an aesthetic point of view, I am also intrigued by the power of suggestion that comes from “fogging”. Fogging a specific area of an image, whether through blurring, pixelating, or covering with a black bar, signals that information is being purposefully hidden. It draws our attention to that missing bit of information: why is it being withheld? What am I not allowed to see, and why? This drastically differs from an alteration in which information is completely removed. With such an image, the same questions are not as obviously present. We don’t immediately ask what is being withheld.
Below, I present three versions of the same photo of my friend from a vacation. The first photo is the original; in the second, I blurred her face; and in the third, I removed her completely. Applying the idea of a person’s right to privacy, I wonder, if my friend wanted to be anonymized, which of these photos she would want posted? Probably the last one. However, the last photo is also the least truthful to the original. So, if I, as the poster of the image, wanted to be factual, then it would be inaccurate for me to post the photo from which she has been fully removed. Indeed, another Wired article points to the idea that blurring and censoring images could be anti-journalistic: https://www.wired.com/story/opinion-blu ... nti-human/. Thus, whose rights should be respected when it comes to anonymity and privacy?
References:
- M. D. More, D. M. Souza, J. Wehrmann, and R. C. Barros, "Seamless Nudity Censorship: An Image-to-Image Translation Approach Based on Adversarial Training," 2018 International Joint Conference on Neural Networks (IJCNN), 2018, pp. 1-8, doi: 10.1109/IJCNN.2018.8489407.