‘Let’s talk about ads, together’

Published: 4 March 2019

Dr Justine Gangneux discusses advertising on Facebook and what we can learn from accessing our data.

In the past year, Facebook and data-mining have hit the headlines, most notably after the Cambridge Analytica revelations. Not long after, users started to delete their Facebook accounts, frustrated by what they had long suspected, namely that connecting people together cannot be the whole story in an age of ‘surveillance capitalism’. Surveillance capitalism – Zuboff (2015) explains – thrives on a new logic of accumulation which extracts added value from data by predicting human behaviour. And data – small and Big – are everywhere. Almost all aspects of our everyday lives have been ‘datafied’ – that is, turned into quantified data and interpreted using behavioural and predictive analysis tools. These algorithmic predictions have pervaded our daily lives in more or less harmful ways, ranging from the screening checks conducted by prospective employers, governmental institutions or banks, to the improvement of urban quality of life in so-called ‘smart cities’ like Toronto, the customisation of our viewing experiences and recommendations on Netflix, and the optimisation of our selves via apps monitoring sleep patterns or heart rates (otherwise known as the Quantified Self). All of which is starting to resemble an episode of Black Mirror.

Curious to see what added value Facebook could extract from my data, I decided to check my ad preferences and, in Facebook’s words, to ‘learn what influences the ads I see’ on the platform. I was quite surprised to discover a seemingly comprehensive breakdown of the advertisers I had interacted with and of the different categories of ‘interests’ Facebook had attached to me. Facebook designed an interactive interface dividing my interests into several categories such as ‘Business and Industry’, ‘News and Entertainment’, ‘Hobbies and Activities’, ‘Fitness and Wellness’ or ‘Lifestyle and Culture’. The experience of looking at and interacting with aspects of my Facebook data double was simultaneously cringeworthy and fascinating. Some of the information I expected (e.g. categories such as ‘Social Sciences’ or ‘Sociology’), some I did not expect but did not mind either (e.g. ‘Tea’), some I expected but minded (my political affiliation), some I did not understand (e.g. ‘Albatross’?!?), some I found funny (e.g. ‘Giraffe’, which was clearly inferred from the name of the band I play with) and some was wrong (e.g. ‘thriller movies’). And it was perhaps this latter category of information ‘about me’ that disturbed me most. At this stage I sort of appreciated the intricacies and ironies of surveillance capitalism, as Facebook offered (of course!) the possibility to edit these categories. I was aware that doing so would further fuel my profiling by dissociating my data from specific entry points (in the same way that unliking feeds Facebook’s predictive algorithms) but at least I would not be targeted based on my liking of thriller movies!

The problems with accessing my data this way are numerous. First, the categories attached to my profile were quite broad and not very sophisticated, far from the fine-grained data-mining that I expect companies such as Facebook to carry out. This is because the interface only provides a small, incomplete and non-explanatory snapshot of the data double Facebook has created on me. A large amount of the data they collect (e.g. metadata) and of the data they create via predictive analysis (such as inferences drawn across categories and in relation to Big Data sets) is missing. The interface also conveniently omits to explain – and remains very vague about – how the data were obtained and monetised, and what predictions were gleaned from them. The only thing the interface achieves, and in particular the ‘How Facebook ads work’ animation embedded in it, is an impression of transparency regarding Facebook’s data-mining. And we know that Internet companies like Facebook are big advocates of transparency. That is because transparency is used as a device to keep governmental regulation at bay while commercial surveillance continues (Crain, 2016), as well as to displace onto individuals the responsibility to make informed choices (Ananny and Crawford, 2017). Furthermore, transparency – often perceived as desirable because it is said to bring accountability – rests on the false assumption that seeing equates to knowing (Ananny and Crawford, 2017). In the case of Facebook ads, seeing the partial and disaggregated categories attached to my profile did not mean that I better understood how they were put together or how the system works as a whole.

Not only is the ‘ad preferences’ interface a hollow shell which distracts from the real issues of sorting and exclusion raised by profiling and predictive analysis, it also plays into the normalisation of data-mining. Looking at and interacting with my ‘Facebook data double’ was – as already mentioned – cringeworthy and fascinating, but also playful. Play – or gamification – has been shown to shape popular representations of technologies and to participate in legitimating surveillance. For example, Ellerbrok (2011) demonstrated how the playful incorporation of Facial Recognition Technologies (FRT) on Facebook via the ‘tagging’ function not only contributed to the normalisation of these practices but also reshaped how FRT were perceived (dissociating them from airport security controls). In the case of Facebook ads, the playful design of the interface normalises personal data collection and data-mining by encouraging users to interact and engage with it.

And finally, the interface is yet another way to extract added value from your interactions with the very categories attached to your profile.

The animation embedded in ‘About Facebook Ads’, with its inviting first sentence ‘Let’s talk about ads, together’, says a lot about social media corporations’ attempts to frame the public debate and our understanding of personal data-mining. The animation presents in a comprehensive and interactive way what Facebook supposedly does with users’ data. Although on reflection the interface does not actually say much about that, what it does say is that data-mining is, after all, not such a big deal as you can ‘control’ or ‘personalise’ it (both of which feed into behavioural data). This is obviously only one side of the story – the one Facebook wants to portray and make us believe. It is not dissimilar to how we portray our lives online in a good light through Instagram ‘stories’, which are not the whole story either. By the way, these ‘ephemeral’ stories are collected, stored and mined by Facebook long after they have disappeared from your Instagram profile.

I am wary of letting Facebook frame the terms of our discussion. Let’s not ‘talk about Facebook ads’; let’s talk about surveillance capitalism and challenge it together.

References
Ananny, M. and Crawford, K. (2017). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3): 973–989.

Crain, M. (2016). The limits of transparency: Data brokers and commodification. New Media & Society: 1–17.

Ellerbrok, A. (2011). Playful biometrics: Controversial technology through the lens of play. The Sociological Quarterly, 52(4): 528–547.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1): 75–89.

* * * 

Justine recently completed her doctoral research in Sociology at the University of Glasgow. Her thesis examined young people’s engagement with social media platforms and the complex ways in which their digital practices and understandings of the platforms were informed by techno-social structures and broader power relations. She has recently published on these issues in the Journal of Youth Studies.