One consequence of the global digital revolution is that growing numbers of citizens are now placing significant amounts of personal information, and trust, into the ownership of private, online multinationals. Although seemingly innocuous, this submission of privacy has effectively ‘morphed the public into a database of users’ (Lovink, 2016: 21), wherein citizens willingly update their profiles and input new data about their worries, desires and actions every day. Political value can be extracted from this data, providing insight into the fears and motivations of an electorate. This value is now recognised internationally and has resulted in the emergence of a global market for political consultancy and strategy organisations whose focus is the compilation and analysis of privately-owned user data, commonly referred to as Big Data (BD) analysis.
This paper explores the utilisation of BD analysis within contemporary political strategy through the case study of the influential political consultancy firm Cambridge Analytica (CA), discussing its relationship with Facebook and its contribution to populist political campaigns within the United Kingdom (UK) and United States (US) (Bartlett, 2018; Goodwin, 2018; Moore, 2018; O’Neil, 2016; Wylie, 2019). In doing so, it looks to highlight the inadequacies in online regulation regarding the misappropriation of private user data and the spreading of misinformation. Additionally, this paper will investigate the stifling effect such analyses can have on the ‘emancipatory potential’ of digital technology and explore widening inequalities in Asia through a review of Jason P. Abbott’s (2001: 99) ethnographic analysis of the online political economy in China. However, before exploring how individual instances of BD’s political application display regulatory inadequacies and serve to perpetuate inequality, it is necessary to further define BD, discuss the current regulatory climate surrounding the prominent social networks, and assess the impact such analysis is set to have on future state sovereignty.
BD is the analytic field concerning the complex, predominantly online, datasets that are commonplace in the digital age (Moore, 2018; Moore & Tambini, 2018; O’Neil, 2016). These datasets accumulate by default within the functionality of social media platforms, with private entities allowing for increasing levels of ‘out of sight’ data aggregation. In this paper, BD refers to the use of data scientists and consultancy firms by political parties and organisations to calculate ‘how best to manage their various communication channels’ (O’Neil, 2016: 74) in real time during the run-up to a democratic vote. Again, this process seems rather inoffensive, as the digital age requires updated means of political campaigning; however, the contention surrounding BD analysis in political campaigning stems mainly from the manner in which data is acquired. Such analysis often requires private entities to misappropriate user data from social networks on behalf of state interlocutors, both undermining the democratic process and breaching numerous data privacy regulations.
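To make the notion of ‘out of sight’ aggregation slightly more concrete, the minimal Python sketch below rolls raw platform interactions up into per-user topic profiles of the broad kind that BD analysis operates on. The data, column names and scale are entirely invented for illustration; this is not a description of any platform’s actual pipeline.

```python
# Purely illustrative sketch: rolling raw interaction events up into
# per-user topic profiles. All data and field names are hypothetical.
import pandas as pd

# Hypothetical event log: one row per user interaction on a platform.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "topic":   ["immigration", "economy", "immigration", "healthcare", "immigration"],
    "action":  ["like", "share", "like", "comment", "like"],
})

# Aggregate events into per-user counts of interactions per topic:
# a crude stand-in for the profiles a campaign might query in real time.
profiles = (
    events.groupby(["user_id", "topic"])
          .size()
          .unstack(fill_value=0)
)
print(profiles)
```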
In a Bourdieusian sense, these new tools of political analysis contribute to the construction of a new digital form of governance, wherein state actors are granted ‘soft power’ (Nye, 1990) over citizens in the form of access to private and personal data, internet browsing habits, and even location tracking. What makes this digital power ‘soft’ is that it relies on influencing the public through co-opting and persuasion, in the form of targeted advertisements, as opposed to coercing through bribery or threats of violence. Such co-opting requires access to a specific technical capital, in this case BD analysis, the possession of which, whether direct or indirect, enables state actors to reproduce forms of governance that deliberately concentrate power into select social groups (Bourdieu, 1986). This power is only maintained through citizens’ continued willing and passive submission of data and privacy, a submission that contemporary democracies rely upon in order to exploit the aforementioned inadequacies of current online regulation. This reliance highlights the precarious nature of privately-owned datasets and the threat that such co-opting poses to digital democratisation.
Of the three most prominent online multinationals (that is: Facebook [subsidiaries: Instagram, Messenger and WhatsApp]; Alphabet Inc. [parent company of Google and YouTube]; and Twitter), this paper will focus predominantly on Facebook, whose role in the digital public sphere has been heavily debated in recent years (Bartlett, 2018; Moore, 2018; O’Neil, 2016). Currently boasting an active userbase of over two billion (Facebook, 2019), its value as a tool for improved communication and global publishing is simply impossible to ignore, and its exponential growth has seen the site become a primary news source for many citizens (Moore & Tambini, 2018). However, as a result of the social network possessing the largest informal database of the global population, and with its mobile app sitting at the top of the most-downloaded list of the 2010s (CNET, 2019), it is only natural that the platform finds itself at the forefront of contentious online political strategy as ‘the leading space in which election campaigns (and referenda) are fought’ (Moore, 2018: 111).
Beyond the controversy surrounding Facebook’s psychosocial effects, such as its potential to foster addiction and poor mental health (Moore & Tambini, 2018; Morozov, 2001), the site has been accused of facilitating the spread of fake news and the misappropriation of user data (UK HoC, 2019; Wylie, 2019). In the UK, this manifested in the form of the 2017–2019 Digital, Culture, Media and Sport Select Committee’s final report on ‘Disinformation and ‘fake news’’ (2019). The committee’s inquiry was launched in direct response to the public and media outcry that followed the UK’s 2016 European Union (EU) membership referendum, discussed in the following section, in which claims of foul play were made against the campaigning organisation Vote Leave as a result of its relationship with CA and the Canadian political technology company AggregateIQ (AIQ), both of whom were found to have misappropriated user data through Facebook in order to deploy targeted adverts for the Leave campaign (UK HoC, 2019).
The resulting report was scathing of Facebook’s current lack of regulatory guidelines for data protection. It stated that CEO Mark Zuckerberg had ‘shown contempt’ towards both the UK Parliament and the ‘International Grand Committee’ through his failure to appear before the committee or ‘respond personally to any of their invitations’ (UK HoC, 2019: 89), implying that Zuckerberg considers Facebook’s existing handling of data privacy and misinformation on its platform to be adequate. The report also recommends a ‘compulsory Code of Ethics’ for social media platforms, overseen by an ‘independent regulator’ (UK HoC, 2019: 89). The committee advocate for a significantly more robust system of legal liability for the social networking giants, in order to prevent them from shirking regulatory responsibility and ‘hiding behind the claim of being merely a ‘platform’’ (UK HoC, 2019: 89); such shirking, in an era of fake news, misinformation and rising national populism (Goodwin, 2018), would only serve to further facilitate the abuse of their networks.
The UK’s referendum result, which left much of the world’s democracies ‘flabbergasted’ (Moore, 2018: 110), illustrates the direct threat that BD analysis poses to marginalised communities. The new algorithmic campaigning methods that BD enables have proven to be the most efficient means of reaching, engaging and speaking directly to voters (Moore, 2018; O’Neil, 2016), and the resulting communicative shift within political campaigning prioritises the online world while side-lining traditional, boots-on-the-ground methods. Moore (2018) posits that the days of fretting over access to Royal Mail constituency directories are long gone, and that political campaigns are now able to reach nearly every voter in a specific constituency ‘without even having to pay for postage’ (ibid.: 127). This evolution in campaigning methods is a direct result of the rich user profiles that BD analysis creates.
Before 2014, such profiling was used primarily by marketing agencies and advertisers, as it allows for the delivery of tailored advertisements to each individual consumer, increasing the likelihood of a potential purchase and contributing to a more efficient mode of marketing. However, in 2014, in a move made in the interests of online UK advertisers, Facebook merged its UK user database with that of the Arkansas-based marketing company Acxiom (Moore, 2018). This new data ‘contained lots of different ways to split users geographically’ (Moore, 2018: 126) and sparked a new interest in BD analysis among political campaigners (Bartlett, 2018; O’Neil, 2016; Wylie, 2019). This was most notable in the case of AIQ and CA, who, despite ongoing investigations by the UK, US and Canadian governments (Gizmodo, 2018; Ljunggren, 2019; UK HoC, 2019), have already managed to blindside the traditional political class, facilitating a transformative populist shift which has rocked the UK and US political landscapes. This paper argues that in these cases (the 2016 UK referendum and US presidential election) BD has been weaponised by political campaigners to promote national populist rhetoric (Goodwin, 2018), and has directly contributed to the further marginalisation of minority groups and those of lower socioeconomic status (SES) in both nations.
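To give a schematic sense of what a dataset that ‘contained lots of different ways to split users geographically’ enables, the following toy sketch joins two invented user tables on a shared identifier and groups the result by postcode district. It is a shape-level illustration under wholly invented data, not a description of the actual Facebook and Acxiom merge or of either company’s records.

```python
# Toy illustration only: joining two invented user tables and splitting the
# result geographically. Not a description of any real merge or dataset.
import pandas as pd

platform_users = pd.DataFrame({
    "user_id":  [1, 2, 3, 4],
    "postcode": ["SW1A", "M1", "SW1A", "LS1"],
})

marketing_records = pd.DataFrame({
    "user_id":     [1, 2, 3, 4],
    "income_band": ["C", "B", "A", "C"],
    "homeowner":   [False, True, True, False],
})

# Merge the two sources on a shared identifier, then group users by postcode
# district so that messaging could, in principle, be tailored area by area.
merged = platform_users.merge(marketing_records, on="user_id")
for area, group in merged.groupby("postcode"):
    print(area, group["user_id"].tolist())
```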
Although both campaigns ran on uniquely divisive narratives, their shared campaign rhetoric was fundamentally characterised by a similar nativist nostalgia. This similarity also rang true for the practical side of the respective campaigns, with both placing large emphasis on BD analysis for the distribution of their national populist messaging (Moore, 2018; UK HoC, 2019; Wylie, 2019). The calculated delivery of such political messaging involves the systematic construction of a database of ‘persuadables’, consisting of the unmotivated young, the unconfident and ‘the downright apolitical’ (Moore, 2018: 127). Theoretically, this focus on untapped, or floating, voters unlocks new sections of the electorate who for one reason or another avoid mainstream news sources. This should be a positive effect; however, when offset with the aforementioned nativist nostalgia, encapsulated by slogans such as ‘Take Back Control’ (Vote Leave, 2016) and ‘Make America Great Again’ (Trump, 2016), the electorate are exposed to messaging whose core hinges upon a longing to regain control over the political and socio-cultural infrastructure of the respective nations, control which is framed as being directly under threat from the globalised structures of the EU and NATO. ‘Powellite’ (Virdee & McGeever, 2018: 1804) in nature, this messaging promotes an insularity that has paradoxically followed the British for centuries: paradoxical in that it continues to dwell on a ‘deep nostalgia for empire’ (Virdee & McGeever, 2018: 1803) whilst remaining sceptical of the aforementioned supranational structures of the contemporary political world.
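As a toy illustration of how a list of ‘persuadables’ might be filtered out of such user profiles, the sketch below selects users with a weak partisan signal and little mainstream news exposure. The columns, scores and thresholds are entirely invented; this is a schematic stand-in for the proprietary targeting models discussed above, not a reconstruction of them.

```python
# Schematic, invented example of filtering 'persuadable' users from a
# profile table. Columns, scores and thresholds are hypothetical.
import pandas as pd

profiles = pd.DataFrame({
    "user_id":         [1, 2, 3, 4, 5],
    "party_affinity":  [0.90, 0.10, 0.25, 0.05, 0.60],  # strength of existing partisan signal
    "news_engagement": [0.80, 0.10, 0.05, 0.30, 0.70],  # exposure to mainstream news sources
})

# 'Persuadables': weak partisan signal and low mainstream news exposure,
# loosely mirroring the 'unmotivated' and 'downright apolitical' groups above.
persuadables = profiles[
    (profiles["party_affinity"] < 0.3) & (profiles["news_engagement"] < 0.35)
]
print(persuadables["user_id"].tolist())  # [2, 3, 4]
```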
This form of campaigning boils complex ideology down into consumable, and most importantly marketable, chunks of information, often oversimplifying to the point of meaninglessness, as in the case of ‘Get Brexit Done’ (Conservative Party, 2019); it then distributes these chunks at a moment of the campaign’s choosing through BD-targeted delivery. This is not to say that the contemporary electorate is bereft of personal or political agency, but rather that, when subjected to persistent online stimuli, voters naturally begin to normalise and internalise rhetoric, which in this instance was built on the reclamation of national and community identity and supported by staggered phases of targeted messaging. Such rhetoric plays on latent nationalist sentiment within the electorate, seeking to vocalise feelings of lost cultural identity within target demographics and to remedy them with national protectionism, at the cost of perpetuating anti-immigrant sentiment. For example, in the last ten days before the 2016 referendum the Vote Leave campaign homed in on the ‘powerful but questionable claim’ (Moore, 2018: 127) that, if the UK remained in the EU, floods of Turkish migrants would enter the country. Not only was this messaging factually inaccurate, but it actively sought to mobilise anti-immigrant sentiment by formulating a distinction between the ‘desirable’ and ‘undesirable migrant’ (Bulat, 2017: 1). This formulation, although indirect, is a clear example of the new digital co-opting process, wherein BD analysis facilitates the promotion not only of falsified claims regarding the EU (Ref) but also of claims that damage migrant communities within the UK and perpetuate difference for new citizens outside of the country.
Similar themes of paradoxical nativism can be found in President Donald Trump’s 2016 election campaign, whose utilisation of CA’s BD analysis is well documented (Bartlett, 2018; Moore, 2018; Wylie, 2019). The President-to-be’s inflammatory campaign rallies promoted the kind of national isolationism explored by Goodwin (2018), and this was echoed in Trumpian campaign materials (Warner et al., 2018). The persistent mobilisation of anti-immigrant sentiment in his rallies, campaign propaganda and even tweets ensured that ‘immigration would become the topic most associated with his candidacy’ (Benkler et al., 2018: 105). Initially, this mobilisation focused on the potential migration of peoples from fellow North American nation Mexico, with Trump launching his campaign on June 16th, 2015 with the statements:
‘The U.S. has become a dumping ground for everybody else’s problems…’ and ‘When Mexico sends its people, they’re not sending their best.’
Trump’s rhetoric gradually shifted over the course of the campaign from a focus on ‘Building the Wall’ to one whose central concern was Muslim migration into the country and the threat of ‘Radical Islamic Extremism’ (Benkler et al., 2018: 105). This shift encapsulates the core aim of the campaign, which was to perpetuate the othering of migrant communities and play on the idea that ‘the ordinary man or woman on the street is being sold-out by out-of-touch elites’ (Geddes, 2013: 247). It is important to note the relative unpopularity of both candidates amongst the electorate (Benkler et al., 2018: 105), which ensured the result would be one of tight margins, fought ‘over a small number of key marginal districts’ (Bartlett, 2018: 100). This made for the perfect terrain for the application of CA’s refined messaging methods and provides another, indirect, example of BD analysis contributing to the success of national populist campaigns, which in this instance effectively sought to other migrant communities and promote racial and cultural discrimination (Warner et al., 2018). These two examples provide an insight into the disruptive potential of BD analysis within democracies that adopt a comparatively laissez-faire approach to internet usage and regulation among their citizens.
In order to move beyond assessments of BD analysis within laissez-faire, liberal democracies and strengthen the argument that BD contributes to widening inequalities globally, this paper will now turn to the work of J. P. Abbott (2001), whose comparative ethnographic analysis of the online political economy in China and Malaysia explores the degree to which active conservative regulation of the digital realm impacts on political mobilisation and perpetuates longstanding race, gender and class inequalities in the region. In addition, this work investigates the stifling effect of globalisation, in the form of increasing commercialisation and privatisation, on the online world’s potential to alleviate inequality through increased democratisation.
Abbott first posits that the online world is akin to the ‘19th century American West’ in that it is ‘vast, unmapped, legally ambiguous’ (Abbott, 2001: 99) and without any formal warden. This analysis of the digital public sphere implies that the politically disenfranchised would be able to use the net in China as ‘an alternative medium’ through which their opposition to state intervention and regulation could be voiced (Abbott, 2001: 99). This runs in direct contrast to the common view held regarding Chinese state intervention within the online world, suggesting that, despite heavy surveillance and filtering of content, activists are still able to work around their government’s wariness of online dissidence. Abbott states that the Chinese state has continuously acted to clamp down on the use of online communications by political activists, whether by raiding ‘so-called ‘illegal’ internet cafes’ or by introducing ambiguous regulation regarding the publishing of content which contains ‘state secrets’ (that is, anything that has not been formally verified by the government) (Abbott, 2001: 103).
Here we see this new form of digital governance play out directly, with state actors utilising BD analysis and online content filtering to limit the potentially damaging effect that open access to the internet may have on perceptions of a nation. It is also important to note that this analysis was published in 2001, and that since then the Chinese government has developed, and is now implementing, a national Social Credit System. This system relies on the use of both BD analysis and facial recognition within cityscapes in order to ‘rate the trustworthiness of citizens’ and is intended to promote ‘transparency and ‘sincerity’’ among the population (Bartlett, 2018: 199). This disconcerting development highlights the urgency of the battle for positive digital transformation and clearly displays the enormous political value of this emerging technological capital within contemporary democracies. Abbott’s work further supports the argument that, despite the emergence of communicative technologies that seek to remove limitations on an individual’s ability to communicate, exchange and produce capital, there are both direct and indirect forms of state resistance which look to challenge new forms of collectivity.
To summarise, this paper has demonstrated the potential impact of BD analysis on widening global inequalities. It has achieved this firstly through highlighting the developing regulatory pressures facing contemporary social networks, with particular focus on the accountability challenges Facebook poses internationally (Moore, 2018; Moore & Tambini, 2018; UK HoC, 2019; Wylie, 2019). In addition, it has illustrated the new form of digital state governance that is shaping contemporary democracies globally. This was then supported by two key examples of BD analysis facilitating the widening of inequality through the perpetuation of anti-immigrant rhetoric, which has served to give voice to distinctly nativist ideology in both the UK and US. Finally, it has sought to comment on the ‘emancipatory potential’ (Abbott, 2001) of the digital evolution through a review of ethnography on the Chinese political economy, which displayed both state suppression of free political debate online and the genuine conceivability of BD analysis being used to police citizens in real time (Bartlett, 2018).
In conclusion, it is apparent that the digital public sphere holds immense potential for increased democratisation and voter participation; however, in its current form, BD analysis is being utilised, both directly and indirectly, as a force for reproducing state political power, breaching individual privacy and further marginalising minority and lower-SES groups.