A former employee of the creepy, alt-right-affiliated firm Cambridge Analytica has revealed embarrassing details about the work the firm did with the help of extremely detailed data on 50 million Facebook users.

At issue is the scandalous revelation that extremely detailed data on individuals was obtained even though they themselves never gave their consent. Facebook's default settings at the time allowed an app developer to get info not only on the app's users but also on those users' friends, unless said friends had changed their privacy settings.
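
To make the mechanism concrete, here's a rough sketch of what the pre-2015 Graph API flow looked like. The token is obviously a placeholder and I'm going from memory on the exact endpoints (FB retired the v1.0 friend-data permissions in 2015), but the shape is right: one consenting user's access token was enough to walk their friend list and, via the friends_* permissions that single user granted, pull those friends' data too.

```python
# Sketch of the old Graph API v1.0 friends-data flow. Token is fake and
# endpoint details are approximate; these permissions no longer exist.
import requests

GRAPH = "https://graph.facebook.com/v1.0"
USER_TOKEN = "EAAB..."  # placeholder: token granted by ONE consenting app user

# 1. The consenting user's token returned their full friend list.
friends = requests.get(
    f"{GRAPH}/me/friends",
    params={"access_token": USER_TOKEN},
).json().get("data", [])

# 2. friends_* permissions granted by that single user let the app read
#    each friend's data (likes, etc.) -- without those friends ever
#    seeing a consent prompt of their own.
for friend in friends:
    likes = requests.get(
        f"{GRAPH}/{friend['id']}/likes",
        params={"access_token": USER_TOKEN},
    ).json().get("data", [])
    print(friend["name"], [like.get("name") for like in likes])
```

One user's consent fanned out to hundreds of friends' profiles, which is how an app with a few hundred thousand users could end up with data on 50 million people.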

In response to this story (the outline of which was known or suspected a year and a half ago), Facebook announced that it had suspended CA:

https://newsroom.fb.com/news/h/suspe...dge-analytica/

Carole Cadwalladr has been pursuing this story for a while, and the Guardian has some more details:

https://www.theguardian.com/technolo...data-algorithm

https://www.theguardian.com/news/201...ce-us-election

NYTimes and others have also covered the story in the US: https://www.nytimes.com/2018/03/17/u...-campaign.html

FB claims no laws were broken, that it did everything in its power to stop this, and that it was misled by Kogan (the researcher). Kogan had permission to collect and use this data for academic work but passed it on to third parties. When this was discovered--somehow--FB requested that he and the third parties in question certify that the data had been destroyed. However, we know that, even in 2016, months before the US election, FB was aware that not all of the data had been destroyed. No meaningful action appears to have been taken, and, as we know, FB has been anything but forthcoming in responding to questions from investigators and journalists. Users--even those directly impacted--have received very little information, and only after significant pressure on FB.

Whatever FB claims, it appears the company--or at least Kogan--may be significantly exposed to legal liability in the UK, where the law expressly prohibits gathering this sort of data for one purpose and then using it--without consent--for another purpose entirely.

I think this is all very interesting, and I believe it may lead to a change in the way these massive platforms and their activities are regarded--and regulated. Microtargeting is enormously valuable to advertisers and accounts for a great deal of FB's allure to businesses. While it's cool in some respects to be able to show diaper ads to an FB user expecting a baby in a week, the same features let businesses advertise selectively to white people rather than black people, facilitating racial discrimination in areas such as housing. In the political context, microtargeting can further agendas inimical to democracy and social cohesion, and it lets hostile foreign actors subvert a country's democratic process.
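
To illustrate why the benign and the discriminatory cases are hard to separate, here's a hypothetical targeting spec in the general style of ad-platform APIs. The field names are purely illustrative--this is not FB's actual Marketing API--but the structure is the point:

```python
# Hypothetical ad-targeting specs; all field names are illustrative,
# NOT FB's real Marketing API.

# The "diaper ad" case: narrow inclusion targeting.
diaper_ad = {
    "audience": {
        "include": {
            "life_events": ["expecting_parent"],
            "due_window_days": 7,  # the "baby in a week" example above
        },
    },
    "creative": "diaper_promo_v2",
}

# The housing-discrimination case: the SAME machinery, used for exclusion.
housing_ad = {
    "audience": {
        "include": {"interests": ["home_buying"]},
        "exclude": {"demographic_affinity": ["african_american"]},  # discriminatory
    },
    "creative": "apartment_listing_17",
}
```

Nothing in the data model distinguishes a harmless filter from a discriminatory one; inclusion and exclusion ride on exactly the same machinery, which is why regulating this is so awkward.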

I've seen some proposals for how best to respond to these events. At a minimum, I think the legality of the data harvesting and selling--and of FB's subsequent actions--should be investigated.

Do platform owners have any responsibility for criminal activity on their platforms, in the way a landlord may be responsible for not doing enough to stop criminal activity on his property? Some say making platform owners liable will kill app markets completely, which I believe is unlikely. Nevertheless, it would certainly increase their burden, perhaps substantially.

Some have suggested that the simplest and most palatable response is to make microtargeting for political purposes illegal (with exceptions roughly equivalent to handing out flyers at the local level) and to regulate political advertising on social media more carefully. E.g., a campaign would have to be officially registered as such when buying ads on FB, and it wouldn't be able to tailor and target political ads at the individual level; instead, it would be required to show the same material to a large and mixed population. Failure to comply would be a criminal offense.

I think this idea has many problems, but it's more workable, and a more reasonable compromise, than many others. Significant flaws: the restrictions are easy to circumvent by identifying and exploiting polarizing subjects strongly correlated with political affiliation and behavior; the definitions involved are fuzzy; and it's arguably unfair (other advertisers aren't similarly restricted) and damaging to business (FB loses revenue, obviously).

Were such a change implemented, I believe FB & co. should also be required to report suspicious or illegal activity to the authorities, in addition to the usual measures they take to sanction misuse (e.g., suspension), much as banks may be required to report suspicious transactions.

Whatever the response may be, I think it's fair to say a response is required.