Cape Town - Claims of a Facebook data "breach" or "leak" are inaccurate and highly exaggerated.
But that doesn’t make what happened any less insidious: understanding the difference offers a telling look at how data culture is shaping the modern world.
Facebook shares took a 7% tumble on Monday when the New York Times and the Guardian, including reporting from The Intercept, broke the news that data from an estimated 30 to 50 million Facebook users was siphoned and used to market candidates in the 2016 United States election.
The data specifically served conservative candidates, namely Ted Cruz and Donald Trump, with Trump’s camp appearing to be the main beneficiary.
The contention is that Facebook user data was being used unethically. But it should be clear that this was not the result of a breach or leak.
Facebook’s systems weren’t compromised in ways that would be associated with cybercriminal activity. Instead, a third party found a dubious way to glean data from consenting Facebook users as well as from their non-consenting contacts or "friends" on the service.
Seeking political data
At the centre of the storm is a company called Cambridge Analytica, formed in 2014 after its founders made overtures to billionaire Republican funder Robert Mercer and then-Trump ally, Steve Bannon.
Mercer wanted to use data to help influence the US elections, which wouldn’t have been different from any other attempt to market to voters. Only in this case, he wanted the heft and potency of very specific digital user data.
Facebook is one of the companies with that level of granular insight. A cursory use of its ad system demonstrates how much a campaign can be fine-tuned based on demographics and location.
When Cambridge Analytica, having successfully executed a number of regional election campaigns, had to step up to the national level, it hit a barrier: it needed much more and much better data. It looked to Facebook for this.
Yet it did not involve the social media giant directly. Instead it created a partnership with a Cambridge academic, who in turn developed a Facebook app that could harvest the sought-after data.
The app was offered to part-time workers on Amazon’s Mechanical Turk site, a space for digital odd jobs, paying each participant a small sum to install and use the app.
But while the app did make users aware of some of its data gathering, it didn’t reveal the full extent, including that it could also take data from the friends and associates on that person’s social network profile.
The result was the alleged collection of as many as 50 million usable profiles, very few taken with the permission of their users.
So, while Facebook was neither breached nor the source of a data leak, Cambridge Analytica appears to have used loopholes in Facebook’s systems to get the data it needed.
Facebook argues that the relevant permissions for the app were meant to be purely academic, arranged through the Cambridge-based researcher. The researcher, Aleksandr Kogan, is citing non-disclosure agreements to maintain his silence, while Cambridge Analytica has downplayed the effectiveness of the app.
The problem runs deeper: glimpses of these activities were flagged as far back as 2015, when Facebook banned the app in question, and the Guardian reported on it in 2016. Even though Facebook took action back then, getting Cambridge Analytica to delete the data, copies of the data still exist.
Cambridge Analytica has hit back, saying it fully complies with Facebook’s terms of service, and pinned the blame on Global Science Research, a research firm it said it contracted to obtain voter data. It said it had deleted the offending data when it became aware of GSR’s activities. Yet Facebook maintains that Cambridge Analytica retained the data after claiming to have deleted it.
But the proportions of the event are already being distorted. The saga has added fuel to the firestorm around social media’s role in the 2016 US elections. This includes claims of Russian interference in the elections, though there appears to be no link between the two. Nonetheless, it is creating a major sideshow that has become the main narrative.
Are data regulations next?
That risks missing the more serious point: a company was able to get at user data without breaching any system. It simply asked some participants, then exploited the relationships between Facebook accounts to get around privacy settings.
This has been happening since 2014 and is likely not an isolated example. It can instead be assumed that such activities are widespread and reach further than political campaigns. The political angle simply makes the current story much more appetising.
It is very likely that this will drive the call for more regulation of user data, which could seriously inhibit the revenue streams of services such as Facebook. But such rules are unlikely to be effective, given the complex and opaque layers around our own online behaviour and the data we share.
*James Francis is a freelance writer who has been covering technology since the Y2K scare.