Trusting developers not to sell any data while putting zero safeguards in place to prevent it, and no truly punitive repercussions for those who did, despite being repeatedly warned by the public, the media, and even high-level employees, tells me Facebook can't plead ignorance here. They not only knew this was happening, they probably intended for it to happen. They knew it was illegal but put all the incentives for companies not to follow the rules. That's the only hole in his statement.
As for the rest of it, it's progress. It seems like a lot of good changes, but Facebook execs probably calculated that this is the least they could do to stave off regulation or antitrust action from Congress. Any less and regulations would still be placed on them, so it's a brilliant strategy to do this and frame it as "Facebook is concerned about all the damage and is doing this voluntarily for you," instead of the truth, which is that they knew about this forever and are only acting because of the threat of regulation.
Overall, an optimal outcome for all parties currently, except for society as a whole down the line.
What, exactly, are you claiming about this that was “illegal”? When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.
The Facebook developer platform became so limited in 2014 that most developers (including me) left. There was no point in developing apps for the social graph that had no ability to use the social graph. But even prior to that, the sharing of this information with apps, even those authorized by a friend, wasn’t “illegal”. You agreed to it when you signed up for Facebook and voluntarily handed them your information. Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here.
It is relevant and related to their track record regarding how they treat user privacy, what data they have previously released to third parties, and what their current legal standing is with regards to the same issue, especially in light of Cambridge Analytica.
A bit of a nitpick, but Hacker News post history and comments are publicly available. Most, if not all, privacy law treats public data differently.
The majority of Facebook data, by contrast, is visible only to a selected group of individuals, and handling it cannot contravene a country's privacy laws, despite whatever agreement you signed. Law trumps user agreements.
As an extreme example: I can sign that I give you the right to kill me, but if you do that you have still committed premeditated homicide. So it does not matter what Facebook made you sign; it is likely they committed (and still commit) a crime in most European countries. For the U.S. the waters appear extremely blurry from what I understand, so I cannot offer an opinion. If there is a lawyer or privacy expert who can pitch in on this discussion, that would be nice.
> I can sign that I give you the right to kill me, but if you do that you have still committed premeditated homicide
This sounds as if nobody could agree to voluntarily take part in a potentially fatal experiment, like testing a potent medicine or being a test pilot. I believe legal provisions exist for these cases.
That depends on the law in each individual legal region (most often a country). For example, I believe euthanasia is legal in some European countries.
Even the opposite can be true. Life-saving medication may be illegal in one country while another provides it without restriction.
Kind of sums up my reply. Generally, if you know, or there is enough suspicion, that the cure is going to harm or kill someone, and euthanasia is out of the picture, then you are liable. That's why the whole process around medical trials exists. One nice window here is that you can, in general, experiment on yourself (people have done and are doing that). It depends on the framework, but good point.
Anyway, my point here is that you cannot override the law (or you shouldn't be able to).
If it mattered to me, was necessary, and I was in the right legally, then I would need to.
Each country has its own laws covering this situation.
Giving someone personal information still restricts them legally, irrespective of what they think I've consented to or their own definition of consent. The legal system has its own opinion.
One day, the whole farce that is "you agreed to the terms and conditions" deserves to just die.
Almost no one reads them, so they should not be enforceable.
I mean, as developers we know when a session is established and when pages are visited, and we can easily see how long someone has been on a page.
No one can read the typical T&C in 10 seconds, let alone 1 minute, especially without even opening the page! So the options should be something like:
[x] I don't care, just whatever dude.
[ ] No. Get me out of here. Because I don't know how to close the browser window myself.
> Almost no one reads them, so they should not be enforceable.
It varies depending on where in the world you are, but it's actually pretty unclear whether they are enforceable. Or at least, a specific set of T&Cs with a specific user/customer may be found to be unenforceable for a wide range of reasons.
Of course, the company that puts the T&Cs in front of you isn't going to tell you that.
> Even the idea that developers were supposed to delete the information they had before was just a civil agreement between the company and themselves - it wasn't illegal. Facebook can certainly sue them over it, but there are no violations of the law occurring here
Perhaps not in the US, but I'd like to point out that it's not the same everywhere: I think the EU is moving in another direction. There's a whole debate around whether a company has a responsibility to do due diligence around protecting the personal information of its users. Just because you gave them your data doesn't always mean they can now do whatever they want with it (e.g. give third parties access to detailed information in large amounts). Even if you sign an agreement, in many jurisdictions there are certain rights a company can't make you sign away.
Doesn't matter what you signed and how it relates to civil law, you cannot sign away your statutory rights.
What matters is how criminal law views this particular data collection in the context of Facebook's working relationship with this particular client, within all of the various jurisdictions that Facebook operates.
> When you sign up for Facebook, you agree that anything you post might be shared with others on the platform.
Except you CAN'T sign away your rights in many jurisdictions - consider something like HIPAA. So if Facebook sold people's mentions of health problems to third parties... is that a violation?
I mean, this is the exact scenario people grilled Microsoft over with Windows 10's spyware. But somehow, when Facebook does it, it isn't an issue? What's the difference? I'm honestly asking.
Yeah, this is one of the worst parts about this whole situation. Everyone is screaming ILLEGAL!! and also somehow acting like Facebook is the only one doing this?
Almost EVERY free site is trying their damn best to collect as much info as possible and link everything together to sell it so they can - you know - make money.
I guess those people are instead willing to pay for virtually every site they use on the internet, right? Right? *crickets*
So FB opened up their platform to 3rd-party devs in 2007, and this CA incident happened in 2013. FB changed their policy to disallow broad data access for these devs in 2014. So my question is: why did they only commit to auditing the pre-2014 apps now, after the NYT and Guardian/Observer broke the news? And what policy is in place now to make sure that these 3rd-party devs won't sell whatever info they can still collect under the post-2014 policy? I am not satisfied with what Mark said just now. We need more answers.
I kind of doubt that this action will stop all of the devs who check their API keys into public repos every day, which are then used by other parties, but at least it will allow Facebook to cover their ass a bit more.
> when Obama committed even worse Facebook privacy violations back in 2008/2012
Obama's campaign said they were scraping for academic purposes, downloaded a bunch of data and then lied about deleting it? There are multiple levels of B.S. when it comes to Cambridge Analytica and Facebook. Only one finds comparison in the 2008 or 2012 races (on either side).
You're framing this improperly. Prior to 2014, anybody could create an app, and if people authorized it to, it could access some friend data. Your comment reads as if they went and made some special arrangement with Facebook and lied to them saying it was for "academic purposes". That just isn't the case. Every single Facebook user with the ability to write code could do this easily and automatically for the 7 years leading up to the change in 2014.
What Facebook did with the Obama campaign was far worse. No authorization was required - they gave them free, full access to the entire social graph. Didn't like Obama and didn't authorize an app to access your and your friends' info? Too bad, your information was still given to them and used to help get him elected. That wasn't a problem for anyone in the press though, because of course, Obama was a Democrat.
edit: I misspoke. There was an app required, but the Obama campaign accessed 190 MILLION profiles, versus 50 million involved in CA. According to [1]:
"The campaign boasted that more than a million people downloaded the app, which, given an average friend-list size of 190, means that as many as 190 million had at least some of their Facebook data vacuumed up by the Obama campaign — without their knowledge or consent...A former [Obama] campaign director, Carol Davidsen, tweeted that Facebook "was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing.""
> According to a Time magazine account just after Obama won re-election, "the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button."
You are doing the old classic false equivalence. The deliberate use of an Obama-sanctioned Facebook app that asks you to press a button to share a get-out-the-vote message (probably for Obama) with your friends is 100% different from Cambridge University researchers creating psych profiles on people, ostensibly as a fun way to learn about your personality, and then selling that harvested data to a campaigning firm, Cambridge Analytica, to micro-target political ads at people without their knowledge. It's about expectations: one group expects their data to be used in a political fashion because they opted into it, and the other is not expecting it to be used to target themselves and their friends later.
The Obama app accessed the data of roughly 189 million friend profiles that didn't authorize the app (it only had about 1 million installs). About 95 million of those people would have consciously objected to helping the Obama campaign had they known about it. But all of their data was collected and used by the Obama campaign to target political ads and formulate campaign strategy anyway, even though none of those 189 million people expected or specifically authorized their data "to be used in political fashion".
> That’s because the more than 1 million Obama backers who signed up for the app gave the campaign permission to look at their Facebook friend lists.
permission for one.
> The campaign called this effort targeted sharing. And in those final weeks of the campaign, the team blitzed the supporters who had signed up for the app with requests to share specific online content with specific friends simply by clicking a button. More than 600,000 supporters followed through with more than 5 million contacts, asking their friends to register to vote, give money, vote or look at a video designed to change their mind.
And intent and deliberate action up front by the user of the Facebook app, for two. Not some shady psych profile that gets sold later and used behind the scenes to target them without any link to the campaign. That's absolutely a false equivalence.
Also, that 189 million number is very likely way too high, bordering on bullshit. Say I have 190 friends with significant overlap: say, 50 friends who all know each other and are likely also friends with one another. It simply doesn't follow that every single friend contributes 190 unique friends to the graph, even though the average across all users might be 190 friends. Add a few more of those clusters for different groups (business, whatever) and you get a much lower number. Couple that with the fact that you have no idea what the average number of friends is among the people who opt into such an app - if you are just using the overall average across the entire set of Facebook users - and you possibly have a lower number still.
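To make that concrete, here's a toy simulation (entirely made-up numbers, with friends drawn mostly from small clustered communities) showing how overlap pulls the unique-profile count well below installs x average friends:

    import random

    # Toy model (assumed numbers, not real Facebook data): estimate unique
    # profiles reachable from a set of app installers when friend lists
    # overlap heavily, versus the naive installers * avg_friends figure.

    random.seed(0)
    N_USERS = 100_000   # toy population
    N_INSTALLERS = 1_000
    AVG_FRIENDS = 190

    def friends_of(u):
        # Most friends come from the user's 1,000-person "community",
        # which creates heavy overlap between nearby users' friend lists.
        community = (u // 1_000) * 1_000
        local = random.sample(range(community, community + 1_000), 180)
        remote = random.sample(range(N_USERS), 10)
        return set(local) | set(remote)

    installers = random.sample(range(N_USERS), N_INSTALLERS)
    reached = set()
    for u in installers:
        reached |= friends_of(u)

    print(f"naive estimate:  {N_INSTALLERS * AVG_FRIENDS:,}")  # 190,000
    print(f"unique profiles: {len(reached):,}")  # far fewer, due to overlap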
Which were the same permissions that the users of the Kogan app gave. Remember that the issue here revolves around the fact that friends of the users of either app never gave permission for their data to be used. The only difference is that Obama did this to about 4X as many people.
> Also, that 189 million number is very likely way too high, bordering on bullshit
According to their own campaign manager, they obtained the entire US social graph, and Facebook knew about it and allowed it to happen. The 190 million number may actually be low.
> Which were the same permissions that the users of the Kogan app gave. Remember that the issue here revolves around the fact that friends of the users of either app never gave permission for their data to be used. The only difference is that Obama did this to about 4X as many people.
You're still wrong, but at this point, with all the evidence showing just how different the situations are, I suspect you want to be, for whatever reason.
It's an agency issue. If I use an attributed channel it's like meeting you and saying "hey remember your friend the CS guy? Our company is looking for someone like that. Could you pass the word along?" vs pretending to be chatting with you when all I'm looking for is relevant recruiting leads which I will then use unbeknownst to you.
For hiring this isn't as touchy a subject, but surely there's a qualitative difference between these two interactions. One is based on an above-board interaction; the other on subterfuge and hidden motives.
The end result for the friends, which comprise 99.5% of the victims in both the Obama and Kogan cases, is precisely the same. Nothing was represented to them, their data was just used because someone they were friends with installed the app.
You're missing the point. The OFA app determined who should share what with who by running all the extracted profile data (including that taken from opted-in users' oblivious, perhaps anti-Obama friends) through their psychographic models. It's identical, except, for "the good guys".
> The OFA app determined who should share what with who by running all the extracted profile data (including that taken from opted-in users' oblivious, perhaps anti-Obama friends) through their psychographic models.
Yes, this was a political app, downloaded by people motivated to get Obama elected president. They shared that info willingly with the Obama campaign. The campaign used that information to suggest "hey man, we need help in Texas and you know someone who might be able to help us. Can you do us a solid and send them some of this info?" Nothing I have seen suggests that they did anything untoward with information voluntarily and directly shared with the campaign. They used it to suggest people to share the message with, and they took every action with the direct permission of the people using the political app.
> It's identical, except, for "the good guys".
Not even a little bit. This Kogan fellow used the data he harvested from those psych profiles and also got the friends' information, which the people taking those "tests" doubtfully wanted him to have. If I take a silly test that is a Facebook app, I would not expect it to datamine my friends so the information could later be sold off to a political campaigning firm and used for a campaign they may not have wanted to help. If Trump had used an app that did the same as Obama's, I would have had no issue with it. He didn't, and I do.
Again, you're missing the point. The people who SIGNED UP for the Cambridge Analytica app, or the Obama For America app agreed to share their own information with the app. In doing so, they ALSO agreed to share data on their entire friends list with the app -- Facebook had no restrictions in place on this at the time.
Those people who were opted-in by proxy, i.e. the friends you sold out, may not have wanted the Obama campaign or Cambridge Analytica to get that info, but they never had a choice!
Both apps (well, OFA for sure, CA allegedly) took data legitimately provided to them, used it to feed predictive models, and then actioned marketing around exploiting those learnings.
>We released this tool for the Obama Campaign in August 2012. Over the next 3 months, we had over a million supporters authorize our app, giving us potential access to over 200 million unique people in their social network. For each campaign activity (persuasion, volunteering, voter registration, get-out-the-vote), we ran predictive models on this friend network to determine who to target. We then constructed an “influence graph” that matched existing supporters with potential targets. For each supporter, we provided a customized list of key friends with whom to share different types of content. [0]
Literally the ONLY difference, other than the political leanings, is that Kogan's app's data was then "acquired" by CA, in breach of Facebook's TOS. Which, again, is something that probably happened ALL THE TIME in the pre-2014 wild west of Facebook app mining.
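For concreteness, a rough sketch of the "influence graph" flow described in the quote above - the scoring function and data shapes are invented for illustration, not taken from either campaign's actual code:

    from typing import Callable, Dict, List

    # Supporters who installed the app, mapped to their extracted friend lists.
    supporters: Dict[str, List[str]] = {
        "alice": ["bob", "carol"],
        "dan": ["carol", "erin"],
    }

    def persuasion_score(friend: str) -> float:
        """Stand-in for a predictive model scoring how persuadable a friend is."""
        return {"bob": 0.9, "carol": 0.4, "erin": 0.7}.get(friend, 0.0)

    def share_lists(supporters: Dict[str, List[str]],
                    score: Callable[[str], float],
                    k: int = 1) -> Dict[str, List[str]]:
        """For each supporter, pick the top-k friends to ask them to contact."""
        return {
            s: sorted(friends, key=score, reverse=True)[:k]
            for s, friends in supporters.items()
        }

    print(share_lists(supporters, persuasion_score))
    # {'alice': ['bob'], 'dan': ['erin']}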
The point you are missing is that there was no Cambridge Analytica app, and nobody opted into an app that didn't exist. They opted into a completely different app that datamined them for seemingly unrelated purposes; the data was then, much later, sold off to CA, which used it to target people. One was on the up and up, and one was CA. Again, if Trump had done what Obama did, on the level, it would have been fair game. They were shady, used shady tactics, and shadily acquired data. Not a single person who opted into the Kogan app thought it was a political app, but people who used the Obama app knew what they were getting into.
The point you're missing is that most affected users didn't download anything, and were simply friends of someone who did (the OFA or Kogan app).
The distinction you make - OFA being on the up and up and CA not - is valid, but it doesn't mean OFA is suddenly on a completely different level. There was still a massive amount of data sucked up without consent.
"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."
See above quote from their own campaign manager. Here's a link to her own tweet on this issue if you don't believe me or the publication I linked to above:
I'm not taking sides here, and I could be very wrong – my assumption here is that Cambridge didn't have a "Trump Campaign App" they had some other app, then sold that data to a third party. Obama's campaign had a "campaign" app, and used the data collected from that app.
We can all argue about whether facebook should even allow apps the kind of data they do, but the crux of what CA did was to re-sell data they collected from Facebook, right? This is where the two aren't the same, as far as I'm aware, but like I said, I'm not super in the know.
> the crux of what CA did was to re-sell data they collected from Facebook, right?
No. An independent developer had an app many years ago, recorded data from that app, and then sold that data to CA years after the fact. That developer did in fact violate Facebook's developer platform policies by selling the data, but CA had nothing to do with the app or what was represented to the people using it when they installed it.
Are you playing mental gymnastics here? CA bought the data. "CA had nothing to do with the app"? They did: they bought the data. I get that they aren't responsible for the person violating Facebook's rules. I think we already established Facebook is the boogeyman here. But we were comparing the Obama campaign app to an app unrelated to a campaign. In this thread there was the implication that "Obama did the same thing," and I think we can hopefully agree that Obama's campaign didn't sell their data in violation of the FB TOS as far as we know, while the owner of the app who sold to CA did. Sorry I got the name mixed up. Yes, CA didn't sell. But neither did the Obama campaign. So there is no equivalence.
Of course there's an equivalence. The exact same thing happened - Obama's campaign received the data of more than 100 million people - about 99.5% of whom didn't explicitly authorize them to receive it. They then used that data in violation of Facebook's Developer TOS. Specifically, the Developer TOS say that you aren't supposed to use the data you receive from the API for any purpose other than the functionality of your app. For example, taking that information and analyzing it to produce campaign strategies and/or using it for targeted political advertising is and was against the Facebook TOS.
How is that not exactly the same thing, just done on a much larger scale by Obama? I get it, this is mostly a lefty crowd here on HN. But I just can't stand hypocrisy on either side. The fact that hundreds of commenters in here want to defend one guy and castigate another for doing the exact same thing based solely on the political affiliation of each is disgusting to me.
I can't really disagree with anything you've said here, but the only thing I can say is that your comments might not be received well because they are very firm – "the exact same thing" is a pretty assertive statement that I think several here disagree with, on merit or not, and being a bit more flexible in communicating this (which, by the way, thanks!) might help others receive it better, not that it's your job to do that or anything, just sharing my pov.
> No authorization was required - they gave them free, full access to the entire social graph
Source? All I can find is a scheme where they prompted people to sign in to a campaign site and grabbed the friend data that way. Icky as hell, but it's not obvious why that's "even worse" than CA.
It does seem like the Obama campaigns got preferential access to Facebook data. Data for even more people than CA got. And perhaps the Clinton campaign did as well, or at least got access to the older data.
What distinguishes CA and the Trump campaign is how well the data was used. CA staff apparently just did a better job at using the data to manipulate people: their canvassing app, using bots on Facebook and Twitter, plus entrepreneurs from Eastern Europe or wherever. And maybe they went further, discouraging potential Clinton and Sanders supporters from voting.
But anyway, the key point isn't whether CA or Obama/Clinton got more data from Facebook, or whether Facebook willingly helped Obama/Clinton. The key point is how well voters can now be manipulated. It's another level in the problem that money buys political power. And it's arguably how AI could do the same.
"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."
Here's Obama's 2012 campaign manager discussing this. They did it differently four years later, and Facebook had four more years of data and tens of millions more users at that point.
It's actually not such bad info [1] (from Carol Davidsen, Obama's former campaign manager). The guy you quote certainly has the opinion that it was OK to access and use the data of 190 MILLION people for political purposes, only 1 million of whom explicitly authorized him to do so, but I'm guessing that the majority of those 190 million people would vehemently disagree with him.
> CA staff apparently just did a better job at using the data to manipulate people.
I've seen no hard data showing that any of this did Trump any good. Apparently not a single study has been done as to whether people either failed to vote or changed their vote based upon fake news or the use of this data.
I haven't either. My opinion is based on my own anecdotal observations, and news coverage, both of which may be biased.
I do follow the fringes of online anarchism and anomie, and I was surprised to see so much support for Trump when the polls were showing him so far down. And much of that support was so odd that I took it to be ironic. But whatever.
I'd like to see such a study, for sure. But I'm not optimistic. Arguably, many who voted for Trump would be no more forthcoming to researchers than they were to those doing the election polling. There's just too much polarization, I suspect.
For starters, they accessed 4X the number of profiles that CA did. I have edited my comment, however, because it appears that they did (just like CA) have an app that they used. Except they used it, as one Obama campaign manager put it, to "suck out the whole social graph" of voters in the US, and Facebook "was surprised, but didn't stop [them] once they realized they were doing it".
By the way, this is something that would have been stopped on any other app long before it ever accessed 190 million profiles. So Facebook essentially gave them the entire US social graph - something they wouldn't have allowed any other app with only 1 million users to have.
> Obama's campaign said they were scraping for academic purposes, downloaded a bunch of data and then lied about deleting it?
Fair point. But I wonder if anyone asked them to delete it, and then went about verifying it later? I am guessing nobody bothered too hard, because, as the person in charge of the strategy said, "Facebook were on our side."
They didn't tell Facebook what they were doing, and when they took way more than Facebook was expecting, Facebook was okay with it, whether because of politics or because it was too afraid to oppose it. Then all political opposition was powerless to compete with that efficiency of targeting, and had to pull off a heist to even the playing field before the politics of social media company executives became the only relevant politics.
It's not like academics are morally superior to anyone else anyway, so I don't think Facebook should be handing that stuff out to anyone.
"Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing." - Carol Davidsen, Former Obama Campaign Director
> Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing
The extraction of Facebook's social graph is the element I agree "finds comparison in the 2008 or 2012 races" [1]. But Obama's campaign didn't misrepresent its identity or intentions to Facebook. Kogan and Cambridge Analytica did. Obama's campaign didn't lie about deleting its data when asked to (which, to my understanding, it wasn't). Kogan and Cambridge Analytica did. Moreover, Obama's campaign reported its backers to the FEC; Kogan and Cambridge Analytica did not.
CA didn't represent anything to Facebook. They never created an app. They bought data, years later, from Kogan, who had created an app. Kogan violated Facebook policies, but there again, he didn't "misrepresent his identity [or] intentions" to Facebook. You didn't (and still don't) have to represent any intentions to Facebook to create an app. You simply create the app and agree to the platform terms and conditions. You're framing this as if CA and Kogan created some kind of special relationship with Facebook under the guise of academic research, and that just isn't the case. The only app in this situation that was granted special permissions, and where Facebook knew the true intentions, was the Obama app - and it's arguably worse that Facebook knew what their intentions were. That's because they knew that about half of the users involved - 95 million people out of 190 million - would never have knowingly allowed their data to be used to help Obama, but they allowed the app special access anyway.
As for Obama's campaign not being asked to delete data, all apps were required to delete data that they came into possession of under the pre-2014 policy. Those were the terms they agreed to when they deployed the app - especially one that was allowed special access to the entire US social graph (where no other app would have been). So if Kogan violated the policy, so has the Obama campaign.
The "false intentions" here are about the user agreeing to use the app.
Obama: "Give us a list of your Facebook friends, and we'll help you contact them about voting for Obama." While the Obama campaign did collect data about friends, this data was voluntarily given to them by users. If the Obama campaign had your Facebook data, it was because one of your friends knowingly and voluntarily gave it to the Obama campaign.
Kogan/CA: "Take a free personality quiz!" The people taking the quiz likely had no idea that they were supporting a political campaign or helping Trump win. You could have your data given to CA even if nobody in your friend graph supported it.
Also, that 190 million figure is probably inaccurate, because there is probably significant overlap in people's friends lists.
It was given to them by the 1 million users that authorized the app. Not the other 189 million (or more) users.
> Also, that 190 million figure is probably inaccurate, because there is probably significant overlap in people's friends lists.
According to their own campaign manager, they "sucked out the whole of the social graph" with Facebook's blessing. So they may actually have had more than 190 million, since there are more than 190 million US users on Facebook.
"I ran the Obama 2008 data-driven microtargeting team. How dare you! We didn’t steal private Facebook profile data from voters under false pretenses. OFA voluntarily solicited opinions of hundreds of thousands of voters. We didn’t commit theft to do our groundbreaking work."
Sure... coming right up! How about a statement from Obama's former campaign director saying that they "sucked out the whole of the social graph" with Facebook's blessing?
That doesn't prove what you are saying. "sucked out the whole of the social graph" is a non-technical statement that doesn't have an explicit meaning, nor is it specified what user data they were scraping.
Look later in the thread. The estimate is that Obama took data from about 190 million profiles, about 189 million of which gave his campaign no explicit access to their data, and about half of whom would have explicitly objected to Obama’s use of their data had they known about it.
You might think “well that’s egregious, but the rules at the time allowed it”. Unfortunately, even that isn’t true. Obama’s app did not play by the rules. The Facebook developer platform rules at the time stated, essentially, that you could not use the data you gained access to through your apps for any purpose outside of the operation of your apps. In other words, you weren’t allowed to create an app that claims it just sends a message to your friends about how wonderful Obama is, and then take the friend data you gain access to through that app and use it for the targeting of political advertising and/or campaign strategy, which is by all accounts precisely what they did. So they broke the same rules that Kogan did, just on a much larger scale, and with Facebook’s tacit approval.
Pulling user data (and friend data) from the API was "fair game" pre-2014. It was an explicit permission you could ask for.
Gleaning insights from that data and then using it for targeting on FB is, and was, the point.
Creating an app that collected the data under false pretenses, transferring that data to another third party, and then claiming you never had it makes this a different situation.
Pulling the data that your app needed for its stated functionality, exclusively for use within your app, was “fair game”. Pulling anything beyond that, and/or using it outside your app, was absolutely not allowed, ever, on the platform - even at the very beginning in 2007. Allowing data to be analyzed for targeting was never allowed, nor was it the point of the platform. You’re simply wrong.
How do I know this? I developed apps on the platform from 2007-2014 and was always reading the rules and any changes they issued because I didn’t want to violate them and have my app banned. They were exceedingly clear on this issue. Sadly, Obama’s app was allowed to violate these rules and received no ban.
I'd have to dig through the TOS again to confirm my memory on some of this stuff. You definitely could create a custom audience and target your fb app users based on UID.
You used to be able to target ads based on user ID. That option was closed off a few years ago - I was actually awarded a $2,000 Facebook Bug Bounty for spotting a vulnerability that allowed that option even after they shut it off.
But regardless, it was still a developer TOS violation to export user IDs of people that hadn’t personally authorized your app and use those IDs in any custom audience even back when Obama did it. In other words, you weren’t supposed to grab user IDs of friends of your users and use those in custom audiences. In fact at one point, in an attempt to enforce this policy, Facebook stopped returning friend user IDs, and instead gave proxy user IDs that were meaningful only within the API, but couldn’t be used for custom audience targeting. Then they got rid of the target by ID option altogether.
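A rough sketch of what such app-scoped proxy IDs can look like - my own reconstruction for illustration, not Facebook's actual scheme:

    import hashlib
    import hmac

    # Hypothetical per-app secret issued by the platform.
    APP_SECRET = b"per-app-secret-issued-by-platform"

    def proxy_user_id(real_user_id: int, app_id: int) -> str:
        """Derive a stable, app-scoped stand-in for a real user ID.

        The same user gets a different ID in every app, so the IDs can't be
        joined across apps or fed into ad tools that expect real user IDs.
        """
        msg = f"{app_id}:{real_user_id}".encode()
        return hmac.new(APP_SECRET, msg, hashlib.sha256).hexdigest()[:16]

    print(proxy_user_id(42, app_id=1001))  # differs from...
    print(proxy_user_id(42, app_id=2002))  # ...the same user in another app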
I hope you're kidding. Not sure what the rule on repeating myself is but since this is currently the top comment in this chain, I'll post it here anyway:
"Facebook was surprised we were able to suck out the whole social graph, but they didn't stop us once they realized that was what we were doing." - Carol Davidsen, Former Obama Campaign Director
They used the exact same technique to access data from 190 million profiles - and about 189 million of those were without explicit authorization. Exactly the same technique, just with 4X the reach that CA had.
I'm not sure why you keep repeating that line; it's just a statement of fact. There were hundreds (if not thousands) of FB apps from that time period that pulled comparable social graphs before FB switched to the v2 Graph API in 2015. What CA did beyond this is that (1) they collected their political data under the guise of a personality test, without any indication to users of what their data would be used for, and (2) the data was used for political microtargeting (in violation of FB's TOS) instead of just sending voluntary messages to friends.
If you know of any evidence that Obama's campaign broke either the law or FB's ToS, please let me know.
> There were hundreds (if not thousands) of FB apps from that time period that pulled comparable social graphs before FB switched to the v2 Graph API
That is simply not true. Facebook rate-limited apps for exactly this reason. Additionally, although it was technically possible, the TOS did not allow mass-scraping of friend data for any purpose, much less political purposes. Facebook monitored and routinely banned apps long before they ever accessed the data of even a few million friend profiles, let alone 190 million of them. That's why, according to Obama's own campaign manager, Facebook was "surprised" but then decided not to ban them - which they would have done to any other app.
> the data was used for political microtargeting (in violation of FB's TOS)
Which is precisely what Obama did.
> If you know of any evidence that Obama's campaign broke either the law or FB's ToS, please let me know.
I think the point neuronexmachina was trying to make about microtargeting is that Obama simply asked their users to send messages to their friends, whereas CA directly advertised to people.
The Obama campaign used that data in every way that CA did. The fact that the app only represented that it was sending messages to friends is just as misleading as what Kogan (not CA, who never had an app) did. They used data that 99.5% of the people involved didn't authorize them to have, for targeting, campaign strategy, etc.
Again, collecting data about friends while being upfront about the purpose is very different from collecting friend data under completely false pretenses, selling that data (Kogan), and then later lying to Facebook that it was deleted when it was not (CA).
At the very start of the Facebook platform, the API would anonymize the user's email address: the app would get an app-specific hash, e.g. abcdefg-app123456@facebook.net. It was a working email address, with Facebook handling the forwarding.
This proved futile as the very first thing that apps then did was to ask users for their real email address.
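A minimal sketch of how such per-app forwarding aliases can work; the alias format follows the example above, and everything else is assumed for illustration:

    import secrets

    # Platform-side forwarding table; apps only ever see the alias.
    forwarding_table = {}

    def issue_alias(real_email: str, app_id: int) -> str:
        """Hand the app a working address that hides the user's real one."""
        alias = f"{secrets.token_hex(4)}-app{app_id}@facebook.net"
        forwarding_table[alias] = real_email
        return alias

    def deliver(alias: str, message: str) -> None:
        """Platform-side forwarding: the app never learns the real address."""
        real = forwarding_table.get(alias)
        if real is None:
            raise KeyError("unknown or revoked alias")
        print(f"forwarding to {real}: {message}")

    alias = issue_alias("user@example.com", app_id=123456)
    deliver(alias, "Welcome to the app!")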
To really fix this, Facebook will have to stop allowing 3rd party developers direct access to user data.
Basically, FB should introduce an App Engine-like platform where the backend of any 3rd-party application that uses FB data has to run on FB-owned servers. Developers of these applications would then ship their code to FB (similar to Heroku) and run it in a sandboxed environment from which they are not allowed to take data out at all.
That way FB can audit how the data is being used at any time and kick out people who are out of compliance with their terms. If a user deletes their FB account, all their data could then be deleted from any 3rd-party applications automatically. This is basically similar to the way the government handles classified data.
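A toy sketch of the two enforcement hooks this would enable - audit-logged data reads and deny-by-default network egress. All names here are hypothetical, not a real Facebook API:

    import datetime
    import socket

    AUDIT_LOG = []
    EGRESS_ALLOWLIST = {"graph.facebook.internal"}  # hypothetical internal host

    def read_user_data(app_id: int, user_id: int, field: str) -> str:
        """Every read of user data is recorded for later audit."""
        AUDIT_LOG.append((datetime.datetime.utcnow(), app_id, user_id, field))
        return f"<{field} of user {user_id}>"  # stand-in for the real lookup

    def guarded_connect(host: str, port: int) -> socket.socket:
        """Outbound connections are blocked unless explicitly allowlisted."""
        if host not in EGRESS_ALLOWLIST:
            raise PermissionError(f"egress to {host} blocked by sandbox policy")
        return socket.create_connection((host, port))

    data = read_user_data(app_id=42, user_id=7, field="friend_list")
    # guarded_connect("exfil.example.com", 443)  # would raise PermissionError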
If it were some computation, aggregation, or analysis, this would work, but a lot of (I'd guess most) applications don't fall into this category. How are you going to present the data in a UI to users if no data is supposed to leave the server?
The UI could be a web page, mobile app, etc. just like FB.com. But to see any user data you would actually have to be authorized as that specific user, i.e. a developer could test their app by logging in with their own FB account and using their own data but would have no capability to look at the raw DB entries of other users.
Or perhaps there could be a limited capability for developers to log in to their app as another user of it, but those accesses would be logged and periodically audited (just like the as-another-user logins of regular FB engineers are).
Hm, the catch here is something like this could happen:
+--------------+        +----------+        +--------------+
|              |        |          |        |              |
| other server |        |  Server  |        | other server |
|              |        |          |        |              |
+-------+------+        +----+-----+        +-------+------+
        ^                    |                      ^
        |                    |                      |
        |               http request                |
        |             containing user               |
        |               data for UI                 |
http request made            |              rogue http request
with a legit purpose         |              sending the user data
say fetching assets          v              to some other server
        |            +-------+-------+              |
        |            |               |              |
        +------------+  mobile app   +--------------+
                     |               |
                     +---------------+
And there is really no way of knowing which requests are for legit app purposes and which would be leaking data. Now, you could require that the mobile app make no requests to any service but Facebook, which raises two questions:
1. Can Facebook do this? Apple could; they approve apps and get the binary/manifest before the app is released. But how could Facebook enforce this?
2. Would developers be OK with this? Relying 100% on Facebook?
It is an option, I'm not saying it's infeasible. An interesting idea for sure, thanks for sharing!
I guess you could let the mobile app access third-party servers but require all traffic to go through a proxy where Facebook can examine it. (Either by letting Facebook handle all the encryption or giving Facebook a copy of the key.)
All of this tends to trade one problem for another, though: the user no longer has to worry that the third-party app has access to their Facebook data. But now the user has to worry that Facebook has access to their third-party app data.
1) Agreed - for mobile apps to maintain some semblance of data sandboxing, this would require FB to work with Apple. I'm imagining some kind of new iOS "Secure API" where the return values from certain API calls are marked "tainted", and Apple uses static binary analysis to reject apps that write tainted data to non-whitelisted socket calls.
2) Developers simply wouldn't have a choice in the matter - FB is so big that they can force the market in a certain direction if they want to.
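The proposal above relies on static binary analysis, but a tiny dynamic version conveys the rule being enforced. Everything here is an invented illustration, including the whitelist:

    # Values derived from the hypothetical "Secure API" are marked tainted,
    # and tainted values may only flow to whitelisted destinations.

    WHITELIST = {"api.facebook.com"}  # hypothetical allowed sink

    class Tainted(str):
        """A string that remembers it came from protected user data."""

    def secure_api_call(field: str) -> Tainted:
        return Tainted(f"<{field}>")  # anything returned here is tainted

    def send(host: str, payload: str) -> None:
        if isinstance(payload, Tainted) and host not in WHITELIST:
            raise PermissionError(f"tainted data may not be sent to {host}")
        print(f"sent to {host}")

    friends = secure_api_call("friend_list")
    send("api.facebook.com", friends)       # allowed
    # send("third-party.example", friends)  # would raise PermissionError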
> But to see any user data you would actually have to be authorized as that specific user, i.e. a developer could test their app by logging in with their own FB account and using their own data but would have no capability to look at the raw DB entries of other users.
This is the status quo as of the 2014 platform policy changes.
The entire debacle is around the data retrieved (and saved) prior to that, which is what Cambridge Analytica did.
> Or perhaps there could be a limited capability for developers to log in to their app as another user of it
FB Developer app has support for test users, who are marked as such.
Most apps that do "Login with Facebook" also support "Login with Google" and sometimes "Login with LinkedIn". The backend ownership issue becomes a bit more complicated.
This is even before we get into trust issues most developers would have with Facebook having access to their user data.
The portion of the app that deals with FB data would be hosted at FB, the portion that deals with Google would be hosted at Google, etc. And this is only a problem for applications that pull user data from FB itself; if an app was merely using FB for authorization but wasn't actually using any FB profile data, photos, friend lists, etc., then it could all be hosted on a 3rd-party server.
So most developers would say "But my FB backend is just a simple proxy / transformation layer that parses the data and sends that to the front end".
Most would be correct - some flows like inviting friends to an app, or showing friends who also happen to use this app (for games, etc.) are so common that they're integrated into the Facebook SDK.
For performance reasons (both on Facebook's end and for the sake of the user's experience, in case their phone moved into a zone with inferior coverage), one fat network request was preferred to a multitude of thin ones - you couldn't really predict when the user would fancy inviting their friends, saving their score, looking at their achievements, or checking out friends who are in the same town as them - so a 24-hour cache policy was instituted.
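A bare-bones sketch of that policy; only the 24-hour TTL comes from the comment above, the data shapes and names are assumed:

    import time

    CACHE_TTL = 24 * 60 * 60  # the 24-hour cache policy, in seconds
    _cache = {}  # user_id -> (fetched_at, payload)

    def fetch_friend_bundle(user_id: int) -> dict:
        """Stand-in for one fat network request returning everything at once."""
        return {"friends": [], "scores": [], "achievements": []}

    def get_friend_bundle(user_id: int) -> dict:
        entry = _cache.get(user_id)
        if entry and time.time() - entry[0] < CACHE_TTL:
            return entry[1]                      # still fresh: no network call
        payload = fetch_friend_bundle(user_id)   # refresh at most once per day
        _cache[user_id] = (time.time(), payload)
        return payload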
They could aggressively pursue enforcement and punitive measures. It's not about preventing everything that could possibly go awry, but making it well-known in the developer community that Facebook/Platform has no problem shutting you down if you egregiously break the rules.
> we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.
To generalize the issue, if you were in charge of APIs at some company A and some company B (not necessarily located in your jurisdiction and not necessarily subject to the same legislative framework as you) told you they used to have the data, but deleted it since then, what additional measures would you recommend company A pursue?
In that situation, I guess there's not much power Company A has with regard to Company B, right? Unless Company B is worried about a future relationship and is thus willing to submit to some kind of audit.
Zuck's announcement seems to have a decent outline - start by investigating apps with access to large amounts of data, audit apps with strange behavior. But after a year, after the CA (and other) controversies are forgotten, what I was thinking is that FB should be doing regular, random audits/investigations, and publicizing the punishments.
I don't mean identifiable shaming, e.g. "Last week, we banned Jane Smith and her Flappy Farm app for misusing the data of 3,000+ users." But maybe weekly/monthly tallies of apps that were shut down or sanctioned, with a breakdown of the reasons why, users affected, etc. Every once in a while, an app maker might post a "We Fucked Up" article on HN, which helps even more in reminding people of the TOS.
From what I've seen, tech companies tend to be understaffed on enforcement and compliance relative to their user reach. Was the "certification" anything more than a checkbox in an emailed Word doc? I expect that's one area where they could improve their diligence. I would argue that if you have a high-volume advertiser who has already broken the TOS and continues to use the platform, that type of hands-off certification is deliberately negligent. Anyway, we'll find out more details from their FB account manager in the coming months.
But there are many legitimate use cases for sharing email addresses. Just scratching the surface:
* someone used Facebook Connect to login to NYTimes Web site, curious about their Food & Recipes newsletter, wants to sign up
* someone logging into e-commerce shop through Facebook Connect, making a purchase and then deciding that yes, they would like to track and manage their order on the retailer's Web site, and they will even sign up for an account with retailer to do that
Which is fine. But most apps of this type that I remember (back when I used Facebook apps) would ask you for it on landing. Nothing ever has to be black-and-white.
I think the only sane response would be to shut down the developer program. I doubt it contributes much to FB's bottom line, and it's clearly something FB doesn't care much about, given the breadth of this scandal.
The 2011 FTC hearings were about this exact same topic. You can't trust 3rd-party app developers. Back then it was social game developers selling user profile data to Rapleaf and other data brokers.
> Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services.
He was part of Cambridge Analytica at the time, so they suspended his account along with the rest of them, I suppose.
> What about the punishment to Christopher Wylie, by closing/suspending his accounts in facebook, whatsapp, etc ?
The Christopher Wylie who, by his own account, was a knowing, active, and key participant in the things they punished CA for, and who in fact claims to have come up with the concept for it?
It's to Facebook's benefit for advertisers to gather all that data, because the only way they can actually use it to make money is by advertising to the users on Facebook. I find it incredibly hard to believe that this thought never crossed anybody's mind.
Have you ever looked at FB's ad platform? You don't download everyone's data and target the campaign yourself. You target "18-25 males in these zip codes who like the Yankees." I don't see how you go from that platform (hosted and controlled by Facebook) to something else.
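For readers who haven't used it, the self-serve model looks roughly like this - field names are invented for illustration, not the real Marketing API. The point is that the advertiser describes an audience and only ever gets back counts and ad placements, never the underlying profiles:

    campaign = {
        "audience": {
            "age_min": 18,
            "age_max": 25,
            "genders": ["male"],
            "zip_codes": ["10001", "10002"],
            "interests": ["New York Yankees"],
        },
        "creative": {"headline": "...", "image_url": "..."},
        "daily_budget_usd": 500,
    }

    def estimated_reach(audience: dict) -> int:
        """Facebook-side lookup; the advertiser only ever sees a count."""
        return 120_000  # placeholder figure

    print(estimated_reach(campaign["audience"]))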
At least at one point, you could. See the example of the person who pranked his roommate through very specifically targeted facebook ads. [1]
It's even mentioned in that article how to get around the fix they put in so you couldn't target a group of less than 20 people.
I'm not sure if this particular method still works, but let's take a step back and think before we make claims about what is and is not possible in a complex system with lots of features and knobs to twiddle. Whether this was an emergent consequence of other features of the system or a specifically designed behavior, at least at one point Facebook allowed very fine-grained targeting.
That still doesn't really sound like the same thing. Highly specific targeting based on PII you already have is a very different prospect than harvesting PII for tens of millions of strangers.
I concede you might be able to target a single person, but how exactly is that stealing data? It would sure take me a really long time to get all that juicy-juicy data (one person at a time, with ads).
There were black-hat scrapers in 2013 or so that enabled affiliate marketers to enter a FB page or public group and export all the commenters' and likers' names, handles, numeric IDs, locations, emails, ages, and a few other things to a CSV.
I assume their scraper was more in-depth. If you had friend-type permissions back then, you could perhaps see their posts and shares. You'd need post and share content to run text analysis, figure out their hot buttons based on language and frequency, sort them into groups, and then target them.
Affiliate marketers could also upload lists of unique user IDs (like the FB group members scraped above) for specific ad campaigns - y'know, disguised hookup sites, skin-cream credit-card rebill offers, etc.
Back then, many of the privacy options were deeply hidden, obscure, changed names a couple times a year or were unavailable.
I wasn't entirely clear. I was really just responding to the point that you couldn't target people yourself. It wasn't meant as a refutation to your point, just a clarification of one aspect of your point.
That said, since you can target specifically, if the system did allow some way to exfiltrate user data, it would make financial sense for larger analytics companies to do so to provide value-added services where they could target much more specifically than Facebook intended (or at least intended to make obvious?).
The apps being able to siphon tens of millions of users' data makes possible exactly what you say you can't do: you use an app to get everyone's data, then you analyse it and work out the groups you want to target and with what. Then you realize that the best way to deliver those ads - given that your data is Facebook profiles and all your targeting categories are from Facebook - is with Facebook ads.
> They knew it was illegal but put all the incentives for companies not to follow the rules.
Off topic, but I can't help it. You get what you measure, which is why economies that only measure profit optimize for nothing but profit. When a nation-state says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit, we should not be surprised when companies like Facebook do awful things. GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it) https://en.wikipedia.org/wiki/Generally_Accepted_Accounting_...
> When a nation-state says "It's illegal to do X" but has mandatory accounting practices that do not measure X but only measure profit
We don't need accountants to measure legality. That's what we have law enforcement and courts for. Investors care about profits; behaving illegally should hurt profits. Deputising a multi-billion dollar company's thousands of shareholders as its moral police is an absurd proposal.
> Law enforcement and courts aren't funded in a way that makes that effective
In the 1930s, the Congress realized that financial crimes were (a) prevalent, (b) serious and (c) difficult to investigate and prosecute. So it created the SEC [1]. Its specialists, with the budget, focus and mandate to pursue securities-related violations, have been effective (relative to pre-1930s finance).
Regulators make rules. They also enforce them. We have no top cop for technology. The costs of that gap are becoming apparent.
Average US citizens have no comprehension of 1930s America, FDR, etc. but the same people will instinctually glorify the 1950s as a time when traditional values led to economic prosperity.
> GAAP has all but ensured that this happens. You want companies to have values? Then measure values! (alongside profit, not in spite of it)
How do you record company/moral values in the Books of Accounts? How does one audit those morals? What category do you assign them: Asset, Liability, or Owner's Equity? And moreover, what happens when morals change and some are deemed obsolete? Also, if you start to measure company values, then it becomes obvious that other things that have been kept out of the purview of the Books would want equal footing too: legal contracts with clients, employee agreements, court cases, and so on.
There is a very good reason why accounting standards (like GAAP or IFRS) decided to only record transactions in the Books and not concern themselves with legalities or moralities. It's impossible to assign an amount to a company/moral value, as what is valuable to me as a shareholder may not be considered valuable to, say, the local tax authority or a would-be investor, and vice versa.
Yes, we all know that. To even start thinking about how to quantify values - and if you follow macro like I do, you know a modern economist can probably quantify anything - first we must inject values into the conversation. Right now, the world doesn't have any values, except profit, competition, winning, domination, and control. It's not even efficient - it's a prisoner's dilemma stuck at backstab/backstab. How do you solve a prisoner's dilemma? ... Shared values! ("thou shalt not kill")
Yes, you are right that from an economic standpoint you will be able to quantify anything. However, GAAP/IFRS was specifically created to handle only a subset of that whole gamut, viz. recordation in the Books of Accounts and generation of Balance Sheets, Income Statements, and Cash Flow Statements. There are commissions in place to handle fraud and deceptive practices for things that cannot be codified into the Books of Accounts. For instance, the anti-monopoly acts enacted in various countries have safeguards that prevent a single entity from monopolising the market. However, if you try to codify this into the Books, you are going to have a tough time when the law changes. You can't go back in time and retrospectively change your ledger transactions (which are static) to reflect changes in law (which is dynamic) or values/morals (which are also dynamic).
Please note that I am only replying in the context of GAAP (which you mentioned). Beyond that, I agree with all the ideas you presented and only disagree on the one aspect of adding them to the GAAP/IFRS standards, as the purpose of their creation was precisely to steer away from unknowns and only record the knowns.
The parent comment said "Then measure values!", but didn't specify how. That was left as an exercise for the reader to imagine.
Mashing value metrics into accounting practices seems problematic at best.
On the whole I think it's a tricky subject because we want to be careful not to stifle innovation, and sometimes problems only become evident after quite a few pavers have been laid on the road of good intentions.
Aren't intangibles exactly that, though perhaps a little broader than specifying particular moral behaviours?
Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.
> Aren't intangibles exactly that, though perhaps a little broader than specifying particular moral behaviours?
Intangibles can be bought and sold, as they aren't attached to anything emotional. Moral behaviours/values, if embedded in the books, have to enter through a transaction. How do you transact moral behaviour/values? That is the question I have.
> Goodwill, Brand (which contains value statements), IP, etc, they're given financial value. Perhaps these don't influence shareholder/public actions as much as they could, but the measures are there at least.
Accounting principles state that for anything to be recorded in the Books, a prior transaction must exist. Brand and Goodwill, by accounting principles, are only recorded after they are transacted for the first time. What that means is: say you start an enterprise. The enterprise over its lifetime acquires a Brand value. However, that Brand value cannot be recorded until the enterprise is sold to another entity. Only in that scenario can the buying entity record it in its Books as an Asset.
EDIT: IP, Copyright or Patents, on the other hand, can be recorded as Intangibles because you "bought" them from an issuing entity (the Government or any other body issuing you the certifications in exchange for a monetary value). Hence a transaction, which has valued the asset, exists prior to the recordation in the Books.
EDIT: To explain better: the reason you cannot record a Brand value into the Books until it's sold/acquired is primarily that there is no way to gauge the value of Brand/Goodwill. I may consider my enterprise's Brand value to be a million dollars, but you might consider it to have no value. Unless a transaction occurs, a value cannot be arrived at, as it has no objectively determinable value. Hence the recordation in the Books happens only after a transaction takes place.
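To make that concrete with made-up numbers (standard purchase accounting, as I understand it): goodwill only appears at acquisition, as the excess of the price paid over the fair value of the identifiable net assets acquired.

    \text{Goodwill} = \text{Price paid} - \text{Fair value of identifiable net assets}
                    = \$10\text{M} - \$7\text{M} = \$3\text{M}

And it lands on the buyer's Books, not the seller's, because only the buyer has a transaction to record.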
This is a good post. 1/ I am not sure it matters that social good is hard to value. Even a sloppy metric, even just a boolean – e.g. environment-neutral vs. negative – serves the purpose, because by simply existing, people can point at it, tweet about it, start to ask questions, apply peer pressure. A couple of good catchphrases are enough to get elected. 2/ I think we probably will figure out how to value things like this in the future. A friend of mine suggested that applying options pricing theory might be interesting: air pollution doesn't have a value today, but it could have a very high value someday in a future worst-case scenario. But that's all above my head, and I see that you are much better versed in accounting than I am.
(I see now our replies crossed so will leave this and stop posting :)
Thanks for the explanations. On the transaction front, what about share trading activity based on Value statements or actions that impact Brand/Goodwill etc? Though the sale of shares is a future transaction, a company could reasonably conclude something like:
We will make X and Y the value statements of our company/brand. If our actions reduce the value of X and Y statements, what will that cost us? This at least would provide a 'rough' starting value, to be reviewed/measured against market response. Do companies already do this? I'd think it important for service/advertising based companies (but I am guessing, I have only couch-potato knowledge).
Here is one attempt at using environmental, social and governance (ESG) integration factors to create an index of companies that are measured beyond just the basic plain-vanilla profitability/market-cap metrics:
Slightly offtopic too, but I would love to hear suggestions for good books on economics and/or philosophy that would discuss non-monetary profit and values.
An alternative approach I’ve long been fond of is to lengthen time horizons. In the long run, bad behavior is much more likely to come back and bite you. If you can get incentive structures on a longer term time horizon, you’re much less likely to engage in risky behavior that might have short term payoffs but sinks you in the long run.
One of the key points of Zuckerberg's post is that for an app to be able to request permissions from users, the app creator will need to sign a contract and be subject to an audit.
This appears to solve the issue of overly wide permissions, but it does not. In reality, it's an attempt to transfer Facebook's risk to shady app developers, while the overall lifecycle of an app won't change.
In essence, this is a do-nothing from the standpoint of app developers who requested additional permissions. Any developer told they need to undergo an audit for scraping large swaths of the social graph can simply say no and get their account banned. That will likely have no effect, as the account will almost certainly have been suspended already in such a case.
WRT FB's knowledge of this happening: your assuming bad intent is no worse than FB's assuming good intent. Otherwise, one of the more reasonable FB comments on HN.
Why can't they hold the data inside Facebook and have developers connect over a VPN to a VM or Remote Desktop session (which is always recorded), then use analytical tools installed on it to work on the data, with no internet access, in a DMZ? That way they could record everything the developer does with the data, and the worst he could do is take screenshots. No data would ever leave their hands.
Yeah, it's like "oh gosh, they violated our terms of service by pulling 50 million users' info! We must send them a sternly worded email with a checkbox to confirm that they won't do it again!". It's inconceivable that there haven't been larger exfiltrations of user data - how would they even know?
Facebook did not intend for this to happen. That is such nonsense.
They intended for cool apps to go viral across their social graph so Facebook could be a “social utility” and the operating system of human relationships and other airy fantasies they spouted in 2012, 2013, 2014 when they built the app platform.
Their hopes of a beautiful future of joy and freedom were dashed when they discovered humans are capable of garbage behaviour.
Ironically they believed the walled garden of Facebook would clean up the cesspool of blog comments. Oops.
>The trusting developers not to sell any data but putting zero safeguards in place to prevent this and extremely punitive repercussions despite being repeatedly told by the public, media, and even high level employees tells me Facebook can't plead ignorance to this and they not only knew this was happening, but they probably intended for it to happen.
Why would they intend for it to happen? They didn't make any money off of this exfiltration; CA paid people via Mechanical Turk to install the application so that they could mine their data. Facebook didn't get a dime. In fact, they have a monetary interest in preventing this, because their data is worth something and these guys got it through free usage of the API. So the insinuation that Facebook wanted this to happen, or looked away because it benefited them, makes zero sense.
What kind of safeguards are you imagining? How do you have 3rd parties interface with Facebook without letting those applications reason about the information within a Facebook account? Tinder is valued at over a billion dollars and it's not possible to use it without a Facebook account-- should Facebook shut that down and ban the entire concept of 3rd party Facebook interaction?
I do not understand the anger. They did nothing wrong. I can't believe that people are legitimately arguing that users shouldn't have a right to expose their information to apps.
I think it is ridiculous to think that Facebook is sharing data to 3rd party sources out of charity. If you provide more value via data to integrating applications than any of your competitors, you will continue to have the largest share of the market. So yes, there is profit incentive.
Many people find various apps on Facebook to be useful. That makes Facebook a better platform which makes Facebook more valuable. Of course there's the profit that comes with making their product better. Heavens forbid!
They sold the data by proxy by knowingly letting the 3rd parties syphon the data. Why else would they be at CA when the cops showed up? Are you being intentionally obtuse?
They let third parties access data but where are you getting the idea that they “sold the data by proxy”? There is no evidence they wanted the data to be stolen, or that they willingly allowed it.
They were at CA because it's a multibillion dollar company and that means that they have people who keep tabs on stories that result in millions of dollars of lost stock value. It's not because they were in cahoots with CA, they were covering their bases.
Are you honestly suggesting that CA wrote Facebook a check?
If they're serious about this, and I'm leaning toward they aren't really serious about it, this will affect a lot of startups, specifically in advertising, but probably a lot of mobile apps as well:
“First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps.“
Might be a too-little-too-late attempt at self-regulation, since they've apparently known about this Cambridge Analytica situation since before the election, and since then there have been Facebook employees embedded with CA to help them target ads.
MSNBC is already playing this as: 1) starts with a denial 2) admits wrongdoing 3) claims behavior will change 4) changes are not carried out and we're in the same place a year down the road.
They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.
I worked on an app eight years ago for a woman that went to Harvard with zuck. I only mention that because she did. Every. Day. Eventually they had me do a Facebook integration and when I saw how easy it was to escalate the permissions I was horrified. I was an inexperienced developer at the time and wasn’t calling the shots. We saved everything. The lead devs laughed about the policy that straight up said it was our responsibility to delete the data after use. That would be inconvenient! The system was designed for this. There is no way for them to do accounting on this issue. This blog is a farce. The company and project doesn’t exist anymore. I’d bet money that more than a few people have that tar ball.
> They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead.
Worth noting that Zuck wants something from Xi and Medvedev, but has everything he needs in terms of support from the US Gov.
Facebook's real existential threat is just that. Zuck thinks that having everything he wants now will carry into the future. He should call his bud Billy Gates and see how not playing five moves ahead with the government worked out.
Facebook has all but ensured it becomes the whipping boy of the FAANG companies as the government looks to get tough on tech. I'm genuinely not sure what Zuck can do, so long as Apple, Google and Amazon don't make mistakes in the same way.
The next 10 years are going to be a lot like the old adage about being chased out of a campsite by a bear: it's unimportant to be the fastest (best) of the group, you just can't be the slowest.
Until that time, everyone can complain, but the concept of "revealed preferences" is relevant. Do people actually care? If so, they'll change their behavior, FB will likely notice, and changes will happen.
People have been complaining about FB and privacy since practically day 0. Throughout that entire period, FB has only become more popular.
"They’re also pointing out that Zuck is fine meeting with Xi Jinping, meeting with Medvedev but refuses to appear before the US Congress and sends the company counsel instead."
I get the feeling, but you'll agree that a legally binding procedure that poses an actual existential threat to your hundred-billion-dollar company that employs 25k people requires a different approach than a seduction meeting with a despot to try to loosen regulations.
> but refuses to appear before the US Congress and sends the company counsel instead.
This strikes me as especially interesting. I mean, I'd personally theoretically have my reservations about this congress over and above the average congress, but FaceMark refusing strikes me as a deeply telling datum about how he's thinking about this.
Basically no CEOs ever want to testify in front of a congressional committee. There is only downside to such a situation for the company. There is zero upside.
This is not a deeply telling datum. It is a boring and standard one.
While that may be true for regulation/ethics-related appearances, it is not true for things such as funding. For example, both Elon Musk and Tory Bruno have gone before Congress multiple times to discuss private space programs/contracts and to justify their cases. Admittedly, Elon has sent Gwynne Shotwell several times, but she is still the COO of the company.
More accurately there is risk in talking to the US congress when the topic sounds more like an inquisition than when congress is asking for opinions.
The matter then is why is it a risk for Facebook to discuss the CA issue? Are they worried about a witch hunt or a public ethics execution?
testifying in front of congress is just an opportunity for politicians to win points by kicking you around like a ball. very few people (let alone CEOs) stand to gain anything from it, and corporate counsel is paid to put up with abuse.
> ...my reservations about this congress over and above the average congress...
ugh, come on. it's not like the whole congress votes on how you're to be treated, and the questions you'll be asked. the biggest heels on both sides of the aisle are free to harangue you all they want.
> it's not like the whole congress votes on how you're to be treated
Not the whole Congress, but how exactly do you think procedure and parameters for hearings are set and objections during hearings are resolved? Both the specific personalities in leadership positions and the attitudes of the majority matter a lot.
EDIT: It's true that there is a tradition of providing some semblance of balance in committee process, including hearings, with the majority (not necessarily by party) mainly controlling what items are considered and what hearings are held, and ultimately the outcome; but not only is partisanship greater than in the past, but precisely those traditions have noticeably weakened over the last couple decades and particularly in the current Congress.
Anyone else remember Beacon? This is how FB was always designed to work, from the beginning. They've just been toying with PR ways to say it so people accept it without thinking.
"They're not a product company, they're a distraction company."
Correction: they are a surveillance company. Need I remind everyone of Google's and FB's In-Q-Tel CIA partners in crime? They just figured out a way for everyone to willingly report on themselves, and not just on themselves, on others too! FB is bad and should collapse like every other dotcom boom-bust company that uses and abuses its users... but the difference is the level of monopoly over non-technical users that didn't exist in the 90's. Back then the technical community could have dropped a product like hotcakes and watched it bust... but due to the increase of non-technical users who sign any EULA/TOS and don't give a crap about privacy... I expect nothing will happen until something really bad happens at a massive scale.
For those who are younger, consider this the slashdot/digg/reddit cycle. Reddit will die next the closer it gets to IPO too.
It's the beauty of computing though. Every market is ripe for disruption if someone has a good vision and follow-through. The problem is that so many of them use the exact same model and a few years later are the ones dying due to lack of integrity.
>Beacon formed part of Facebook's advertisement system that sent data from external websites to Facebook, for the purpose of allowing targeted advertisements and allowing users to share their activities with their friends. Beacon would report to Facebook on its members' activities on third-party sites that also participate with Beacon. These activities would be published to users' News Feed. This would occur even when users were not connected to Facebook and would happen without the knowledge of the Facebook user. One of the main concerns was that Beacon did not give the user the option to block the information from being sent to Facebook.
Yep. What was even more bizarre is how they never really acknowledged it might be a problem; they just kept taking this arrogant attitude that the plebe users were wrong and just didn't understand this amazing new data feature. It was the first time I recall them using analytics to justify their bullshit, and it pretty much set the tone for where we're at now.
> This is how FB was always designed to work, from the beginning
Not exactly. In all fairness, as Zuck points out, this is a key part of the story, and in theory, why this is different:
> In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent.
This raises the question of what Facebook was doing (if anything) to prevent this sort of action, but the fact that they just took CA at their word that they had deleted this ill-gotten data (of course they hadn't) makes me think they did very little. I think this is just as concerning as any other part of this story. Even if people knowingly hand over data to Facebook (or the devs of some app) in exchange for a service, they wouldn't think that it's a free-for-all and anyone can mine the data for whatever they want.
> It is against our policies for developers to share data without people's consent.
Not to mention that it's arguable whether "consent" was ever really given for Facebook to share the data in the first place. I'd be interested to see some polling results asking whether Facebook users knew what Facebook was up to and whether they feel OK with it.
I'd also be really interested to see screenshots of what the users saw when they clicked "ok." I've been able to find a few screenshots online for other apps, but nothing that indicates what it would have looked like in 2013.
Nice. This doesn't seem to mention sharing data about your friends, though? I wonder if that would have been mentioned separately, to grant access to your friends' data?
Thanks for playing into the story that Facebook created to set the conversation.
This was openly covered last year by BBC when they interviewed Trump's digital campaign manager at the time Theresa Hong [interview linked]. The campaign spent $16M on Facebook. Understandably, Facebook gave them the white glove treatment, even had their own employees embedded in Project Alamo (the headquarters of the campaign's digital arm).
But today Facebook claims they had no idea who one of their multi-million-dollar clients in 2015-2016 was. That it was just some random quiz-making hacker dude selling data to some other random company.
This piece of work posted today by Facebook is what we call damage control. Don't expect the truth from it-- it will contain truths, but it will not be the truth of the matter. And don't let it set your dialogue, man.
Exactly this; no one is sorry, they're sorry they got caught after the fact. This is simply an attempt at ass covering and shifting blame onto Cambridge Analytica so users / investors / governments don't sue Facebook.
You're absolutely right. This is a bullshit non-apology in which nobody is actually sorry for anything. It's crap and does nothing but try to shift blame.
Yet, maybe it serves a purpose? It doesn't matter how sorry they actually feel, nobody is going to feel better from a large, public, self-flagellating apology. Nobody is going to be happier or more mollified or satisfied. All it's going to accomplish is to provide grist for the lawsuit mill - after all, why apologize if you're not guilty?
Again, you're completely right. They're clearly not sorry.
I want to say yes, but there is a problem with suing these sorts of monopolistic companies. If Facebook were sued for, let's say, $100 million in a class action lawsuit, even if they end up paying, it ultimately won't change their behavior. They have enough money to pay that, take a small hit to their earnings in the short term, and then continue as if nothing happened.
I had a very hard time imagining Zuckerberg actually writing these words, it just felt too carefully crafted and full of standard PR tropes. The CNN interview should be interesting, then we can hear the actual words from his mouth.
I always see people say this after any PR crisis... but how would you say it instead, if not like that? Is there a way to be truly creative and not shoot yourself in the foot at the same time?
Sell the premium-quality cow to buy two crappy cows. Sell their crappy milk dirt cheap to undercut the premium-quality milk market. Invest, repeat, and race to the bottom.
2018’s 3P data and tracking tech no longer needs to violate FB policy to do what was done in 2013.
FB provides a platform for ads, sure, but it can be used for way more than just ads.
3P tracking can infer who (even specifically) is viewing ads and it knows the social graph by other means. It can also do effective mass psychometric tests. A/B testing infrastructure can be used for more than just optimizing ads ... all of the psychometric dimensions tested by Kogan’s 2013 app can be expressed as embedded imagery and messaging in ads, and the same types of tests can operate at the same scale by integrating 3P data and tracking (some of which certainly originates from policy-compliant FB apps) which already knows the social graph beyond what FB will allow a single app or ad to draw.
All I want is for Mark Zuckerberg to say: "this isn't a community, we're a multi-billion-dollar company. We have a pretty cool website where you can do a bunch of stuff, and we mine your personal data (and sell it) to pay for it. Sound like a good deal? We think so."
But no, we've got to pretend Facebook is a touchy-feely community. I get the feeling that Facebook is ashamed of the way they make their billions.
Stop repeating this meme. It's not productive and doesn't help drive the conversation forward in any way.
He said those things when he was 19 years old and Facebook was still a random side project in college.
And yes, people were indeed "dumb fucks" to give a random college kid with a side project their personal information. If you went around Walmart asking people for their SSNs with no value proposition, the people who complied would indeed be dumb as well.
The $400 billion company that Facebook has become, and the growth a person goes through over nearly a decade and a half of being a CEO and managing people, are significant.
For you to bring up something the guy said when he was a kid is disingenuous and does more to harm any point you may have had than help it.
Exactly. He was 19 and didn't have an army of PR people/lawyers acting as filters between his brain and mouth. Now his words (especially crisis-time statements) are shaped by a team of professionals. I think I would pay more attention to the former to get insight into his "real" thoughts.
Have you actually talked to a 19yr old recently? Serious question. Because you seem to lack serious perspective about how a 19yo acts, thinks, or behaves when adults aren’t around.
Also, can you explain to me how people submitting personal information to a random form from a college kid is not dumb? Facebook the product didn't even exist at that point, by the way, and the information collected was more like email addresses, nothing really more nefarious than that.
> Have you actually talked to a 19yr old recently?
Yes, most of my intake into the military were 18 and 19. As a society we consider them adult enough to send them into harm's way. I was the Old Man at 31.
Your defence seems to be based on the premise that 19 year olds are children. No aspect of law or society concurs with that.
Law and society at one point were ok with kids under 15 working in mines and factories. Just because we consider 19 year olds right now 'adult enough' doesn't mean that this won't change with e.g. new scientific discoveries. For example neuroscientists believe that your brain is fully developed around 25 years old: http://www.businessinsider.com/age-brain-matures-at-everythi... (random link but there is more literature on this topic)
It's not really selling its user data for money; it's selling access to its users for money. Sure, user data allows some advanced targeting, but the reason they make the profit they do is that people are buying ads for people to see, not data about them. That's an important distinction.
Most people are buying ads for people to see. Some others, however, are using the platform as access to users' extensive 3P data, once those users click and are shuffled through multiple shady ad exchanges gathering and selling data, including the social graph.
Some others, like CA and ilk, also get easy access to users' psychometric data, by embedding the psychometrics into A/B-tested ad content.
The adage “you are the product” has only become more true as FB has advanced, whether by their intended or explicit policies or not.
Facebook has cut down more and more on that as they've decided to be an advertising company instead of a platform. The vast majority of their data issues come from pre-2015 when they were still experimenting with their business model.
Honestly, at the moment, Facebook has huge economic incentives to keep as much data to itself as possible. Facebook being the only place where you can microtarget to such an extent is a huge moat around their business.
Even with their data policies, FB (and any ad platform) enables massive data-mining by the third parties which host the ad exchanges and destinations.
And microtargeting can be abused (or just plain used, depending on your perspective) to infer additional data by incorporating it into the 3P analysis once users click and start loading non-FB content. Microtargeting by its nature leaks information about the target segments ...
Figuring out the data FB uses to microtarget is simply a matter of buying enough ads, or getting in the middle of enough campaign-to-user relationships (as a central 3P ad exchange or tracking service).
I think there are some pretty large differences in scale between "buying ads to target people with specific attributes and saving the situations where they interact with your ad" and "enabling any application to view all of the information about any friend who has connected with the app user".
I wouldn't argue that micro-targeting can't end up with very specific privacy concerns, but I don't think it's nearly the same scale as "you should probably assume that if you signed up to enable the Graph API on Facebook all information about you prior to 2015 is available to people you probably don't trust".
It's a different scale only within the timeframe of a single campaign. 3P trackers have an ongoing, central vantage over many campaigns, giving them data which they accumulate and sell to each other, causing leaked "private" data to accrete and spread in an essentially viral fashion.
The resulting dataset, even over short periods (< 1 yr), is comparable to a total data dump, including an accurate social graph. A "very specific privacy concern" it is not.
I disagree, because Facebook's incentive in an advertising business is to keep all their information to themselves. If they're the only ones who have it, then to get the same form of targeting you have to pay them.
It’s not possible for them to keep targeting data to themselves.
Targeted campaign products leak data about the user’s targeted attributes by their very nature.
If you want FB’s targeting data, simply buy targeted campaigns and associate the target attributes with the users who click once they are on your server. At scale, the targeting data is transparent.
Big ad analytics companies and ad exchanges can and do sit at the center of many campaigns and slurp up the targeting data which naturally leaks from FB by virtue of their selling campaigns based on those targets.
Whether they want to or not, they are selling their data.
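To spell out the mechanism with a minimal sketch (all names and attributes here are made up; nothing is from any real ad platform):

    # Hypothetical sketch: recovering targeting data from your own campaigns.
    # The platform never "sells" the data, but it confirms the targeted
    # attributes for every user who clicks through to your server.

    campaigns = {  # attributes we asked the platform to target, per campaign
        "camp-001": {"age": "25-34", "interest": "mortgages"},
        "camp-002": {"age": "55+", "interest": "cruises"},
    }

    profiles = {}  # user_id -> attributes inferred so far

    def on_click(user_id, campaign_id):
        # Landing URLs are tagged like ?cid=camp-001, so each click tells us
        # which targeted segment this user belongs to.
        profiles.setdefault(user_id, {}).update(campaigns[campaign_id])

    on_click("user-42", "camp-001")
    print(profiles)  # {'user-42': {'age': '25-34', 'interest': 'mortgages'}}

Run enough campaigns with narrow enough segments and the "private" targeting attributes accumulate into exactly the kind of dataset described above.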
Edit: Care to reply instead of downvote? Is everything above not true?
People outside of the tech world, in general, do not really know what the "targeted" in targeted ads means though. They think advertising, they think ads in the NYT or on TV. People do "know", but unless you really think about it, or understand how it works, its invasiveness can be easily overlooked.
I like the part where he says, "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you." Facebook doesn't "serve" consumers, it built an RPG that harvests user data and sells it to the highest bidder.
I just don't get it. I don't get the enormous backlash over this. Users give away their data to a company, the company sells that data to advertisers (or sells ads and targets the ads according to data), and the users can continue using the service they enjoy for free. This was the contract you agreed to when you clicked the sign-up button. I don't get why people are mad at Zuckerberg and not themselves! People act like a company is supposed to offer its services for free. Free service is the business model of the internet. You want to use Google Docs for free? Fine, they'll collect data on what you make. You want to play games for free? Fine, they'll display ads to you. You want to catch up with old friends? Fine, they'll collect data on your interests. No one is forcing Facebook users to use Facebook. You know they're collecting data on you. If you don't want them to, then don't use their "platform."
For one, I don't think most people are acutely aware that this is what is going on. The HN reading crowd is the enlightened technical elite. We know that this is the transaction which occurs whenever we do something for "free" on the internet. Regular people may wonder why something as sophisticated as this is free, but are not aware that the cost is privacy. They just think "oh, this is a cool free fun thing. Awesome!" and be on their way.
Even then, the surprising (and technically not against the TOS) open secret behind every social app that leverages it is the comparative ease with which one could scrape a user's peer nodes' entire public history without explicit permission from those nodes. This is the thing that enables all those "people you may know" invite-suggestion engines.
And this is by design, obviously, as it is literally the only thing that makes facebook valuable.
> I don't get why people are mad at Zuckerberg and not themselves!
Because Trump won and Zuck is the new scapegoat. No one gave a shit when Obama's campaign did the same thing, and they were even public and boastful about it at the time.
"the enormous backlash" in this case is because it looks like the data was used to hack the US election. If it had been used to sell washing powder no one would give a damn.
> This was the contract you agreed to when you clicked the sign-up button. I don't get why people are mad at Zuckerberg and not themselves!
Zuckerberg is the one with control over the terms in a contract of adhesion. A contract of adhesion that gates access to a popular tool, that people want to use and don't bother reading or understanding the terms.
Whether or not you like contracts of adhesion, Facebook is taking advantage of people. When society finds terms in contracts (especially contracts of adhesion) to be immoral, people are allowed to voice that. And sometimes that causes the law to step in and void contracts, or prevent parties from certain actions even with contractual consent.
Also the people in this data scandal took a quiz from a shady company and probably clicked through the prompt to share their info without thinking twice. The only problem was it got some of their friends info... but that bug/feature was fixed years ago.
Personally, I quit Facebook upon signing up for the first time in 2009. I friended one real-life friend, then 20 assholes from high school saw that action on my friend's timeline and tried friending me. Right away I closed my Facebook account and never signed up again.
It was obvious back then Facebook shares my actions I don't want public with anyone they feel like. If you don't want them to do that then you shouldn't use the service.
Well, they sort of promised the FTC they wouldn't just do that without giving you a heads-up, but then, through the API, they let your data out to apps you never got a heads-up about.
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity.
> Second, we will restrict developers' data access even further to prevent other kinds of abuse.
> Third, we want to make sure you understand which apps you've allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data.
None of these points addresses what actually happened. It's not like CA had a live feed of FB's user data. It was harvested. And saved. And data, like diamonds, is forever.
The problem is that FB simply has too much information about us. The #deleteFacebook moment is a wake-up call not so much for FB's tone-deaf execs, but moreso for the masses that (probably until now) never realized how much data FB has on every single one of us.
"... we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications."
How is it at all possible to certify that data has been deleted and no copies were taken..?
This feels strange to those of us from technical spheres, but much of the world works without formal verification of facts. There are in theory severe penalties, both implicit and explicit, that ostensibly act as deterrent to bad actors.
This is usually only noteworthy when it fails, but by and large it works. The friction involved if we did not generally accept someone's (signed, notarized, appropriately formalized) word as bond would cause our world to grind to a halt.
Notarization is another great example of "the real world is strange to technical folks." There are a thousand reasons that notarization is a terrible process. There are minimal requirements to be a notary. Notaries have predictable seals. Ordering a fake notary stamp is pretty easy. There's no technical deterrent to backdating requests. Notaries are frequently employed by the people they're notarizing for. Notaries are not trained to recognize fake IDs.
Even the flimsiest of technological improvements could radically improve the trustworthiness of the notarizations, but by and large the system works, so there's not much of a reason to fix it yet.
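For instance, even a bare-bones hash chain (a hypothetical sketch, not any real notary system) would make backdating and after-the-fact edits detectable, since each record commits to every record before it:

    import hashlib, json, time

    log = []  # append-only list of notarization records

    def notarize(document, notary_id):
        prev = log[-1]["entry_hash"] if log else "0" * 64
        entry = {
            "doc_hash": hashlib.sha256(document).hexdigest(),
            "notary": notary_id,
            "timestamp": time.time(),
            "prev": prev,  # commits to the entire history so far
        }
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        log.append(entry)
        return entry

    notarize(b"We certify all improperly acquired data was deleted.", "notary-0451")

But as noted above: the paper system mostly works, so nobody is in a hurry to replace it.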
> This feels strange to those of us from technical spheres, but much of the world works without formal verification of facts.
Including many technical spheres, and particularly including the vast majority of the software world; formal verification exists in software, of course, but is most notable in not being used for most of it. Yet, even without that, there's usually a release process that includes the moral equivalent of a certification—without formal verification or certain knowledge—that the software does what it is supposed to.
This predisposition to believe our fellow humans is actually hard-coded into our genes. This is one of the reasons it's so easy to spread fake news. The ability to lie to someone without direct consequences to yourself (because, for example, you are thousands of miles away) was not part of our ancestral environment, but it is part of our contemporary one, and some of us are exploiting this newly created ecological niche.
I'm going to have request a source here, because this doesn't make much sense. We've spent very nearly all of our evolutionary time pillaging, raping, and murdering each other - not exactly singing kumbaya. There's tribal instinctual stuff that would include 'in-group' trust, but that's something far and away from trusting a person because they're a person as you're implying.
'Fake news' takes advantage of something much more simple. People believe what they want to believe. In other words the people led on by fake news tend to already want to believe what the news is about. When we see something that confirms our biases, many of us don't question it. And this is made even worse since people tend to work as apologists for fake news from sources they like, so instead of being able to discuss the issue of fake news - it ends up dividing people into splits on the 'fake' topic at hand, which in contemporary times is most often political.
Well, yeah, that's what I was referring to. If you perceive someone as being from your tribe then you are predisposed to believe what they say, and if you don't then you aren't. Nowadays tribes are much bigger than they used to be, and tend to center around ideologies more than geography. But otherwise it's monkeysphere dynamics at work.
Yup. Exactly. The tax system in many parts of the world works on an honor system. We all, as a society, rely on the idea that most of us are honest, good people. There is no way to audit everyone.
To be fair to the previous commenter, they did not explicitly propose cryptocurrency as a solution; they're just asking a question. Also, cryptocurrency != blockchain technology.
That said, I do agree with your sentiment that the blockchain-as-a-solution space is getting a bit ridiculous. I can't imagine a way to translate this problem into a smart-contract-solvable form.
This begins and ends with the fact that if you can see something, you can record it. The only way to prevent this is through on-site access only with a thorough physical search and no electrical devices allowed.
You can do some clever things to figure out who it is that leaks stuff, but that's another issue.
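One of the classic "clever things" is the honeytoken: salt each recipient's copy of the data with unique fake records, so a leaked copy identifies its source. A toy sketch (all data made up):

    import secrets

    real_rows = [{"name": "Alice"}, {"name": "Bob"}]
    canaries = {}  # fake email -> recipient whose copy contains it

    def copy_for(recipient):
        # Every recipient gets the real rows plus one unique fake row.
        fake = {"name": "Zed Quux", "email": secrets.token_hex(4) + "@example.com"}
        canaries[fake["email"]] = recipient
        return real_rows + [fake]

    dev_copy = copy_for("dev-team-7")

    def identify_leaker(leaked_rows):
        # A canary surfacing in leaked data pins down whose copy leaked.
        for row in leaked_rows:
            if row.get("email") in canaries:
                return canaries[row["email"]]

    print(identify_leaker(dev_copy))  # -> dev-team-7

It doesn't prevent the copying, of course; it only helps with attribution afterwards.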
> How is it at all possible to certify that data has been deleted and no copies were taken..?
By, e.g., signing a paper which says that.
You probably meant how can you know that, which is a different issue than how can you certify it, and which amounts to a question of degree of internal accountability and control of data access.
Certification is about assuming responsibility for the truth of the thing certified, which doesn't actually require knowing it (though obviously knowing it to a reasonable degree of certainty makes it more comfortable to certify it.)
They are well aware there is no way to assure data wasn't copied 5 years ago. Pointing that out with rightful indignation is not "being a hater" (which isn't a trope that belongs in reasonable conversation anyway).
On this forum you are surrounded by people with backgrounds in data storage and networking. Any one of them will tell you there can be no reliable "forensic" assurance that the data was not copied somewhere by any number of methods, even for a recent timeframe, let alone 5 years ago. It's a ridiculous idea on its face.
The definition of certify here is to "attest or confirm in a formal statement." So, Cambridge Analytica could be sued if they lied in such a statement.
This may be possible, however it is going to be interesting to see how Facebook gets standing to sue CA. From what I understood from Zuckerberg's statement, the agreement was between Facebook and Kogan. If CA got the data from Kogan, it seems like Facebook wouldn't have standing to go after CA directly. I haven't seen the specific agreements made by each party in this case, so I can't say for sure, but traditionally I think Facebook could only go after Kogan, who then may be able to go after CA depending on the specifics of their agreement.
Along those lines, how is it possible for Facebook to demand that CA deleted the data, when CA is a third party and never had any kind of agreement with facebook(at least, relating to the information Kogan shared)?
If someone leaks facebook data to a journalist, can FB demand that they delete it as well?
I think it's the very fact that they cannot truly verify, which is why Facebook is so willing to go through the motions. In reality, they know many developers will not comply, but they can claim they did everything humanly possible before a grand jury.
It costs Facebook nothing to ask developers to sign a contract written on virtual paper, yet can be used as a defense to cover their butt further down the line.
True, but audits aren't necessarily guarantees of anything either; they also rely on a certain amount of trust and disclosure, i.e. that those being audited aren't actively trying to game the audit.
There's tons of substance to be discussed here without reaching for the global hash table (https://news.ycombinator.com/item?id=9722096). Fortunately the community has mostly been doing that, but let's not spoil it.
@dang - Normally I agree with you, but I do feel that in this instance, especially in regards to him trusting (or once trusting) a third party with his data, the quote is appropriate. It wasn't a troll comment aimed at any HN'er but more an observation of the pot calling the kettle black. Zuckerberg is absolutely 'trusting' that CA has deleted the data they gained from FB. While it may be brought up often on HN, it's one of Zuckerberg's more profound quotes (especially in regards to privacy of information) and speaks to how he sees the world; the difference now is he's on the other side of the coin and we are getting to watch him deal with it.
You've made a better case for it than I'd imagined! But is it necessary? IMO the damage of internet shame-memeing is larger than the local benefit.
The point isn't to defend Zuckerberg, btw; it's to defend the quality of this community. If we're to realize our dreams of doing marginally better than internet median, we need people to check their worst instincts here.
He honestly isn't wrong. People use Google and Facebook every day for free in exchange for their privacy, and they are OK with that. You can ask most people and they'll say they didn't really care about Snowden's leaks. They are happy to use free software and to be under government supervision (never mind the inefficacy of that supervision). Privacy really is a social norm of the past.
I understand your intention and don't mean to criticize, but please consider avoiding the term 'free software' when referring to businesses like Facebook and Google.
The parent says, 'They are happy to use free software and to be under government supervision.' In that context, 'free software' is referring to the previously-mentioned services (Google and Facebook), which do not cost money but are antithetical to the term as it is used by organizations like the FSF.
I worked at a company that had viral apps during the same timeframe mentioned in Zuckerberg's response. Our company pulled data of friends of friends. The amount of data people gave us was staggering.
A few years later the data was much more restricted, much harder to get, and much clearer to the user what was going on.
It's easy to point at FB and say they are evil, but it really seems to me like it's the 3rd parties that are causing most of the issues, and FB has been pretty decent at locking things down over the last several years. Keep in mind that they were the first to hold social data in such large amounts; there was no prior example to learn from.
You're letting them off too easy. The FTC consent decree was in 2011 and this happened in 2013. They settled with the FTC and paid no fines by agreeing not to give users' data away without express consent, and then let the API continue doing exactly that until 2014.
Maybe in 2010 your comment would apply, but when this happened they knew very well they weren't supposed to be doing this.
The 2007 and 2013 paragraphs are pretty scary. Your friend takes a personality quiz from a random app developer and then that developer gets access to _your_ personal data. Obviously Zuckerberg doesn't go into detail about what kind of data that was, but if they needed to rein that in in 2014... I imagine it was potentially pretty personal.
This was the graph api that enabled companies like Zynga to grow virally.
You used to be able to do searches on Facebook like "photos taken by friends of _____" and you could easily stalk them without being friends with them. Today that search will yield photos only if you are friends with them, or they enabled their privacy settings to public.
Graph API and Graph Search were different. The latter was never exposed via a public API.
Neither exposed data that wasn’t available from using the product normally. For example you could visit someone’s profile, see who their friends were, and then look at those friends’ photos.
To me the ugly piece of this is that companies like Zynga really operate in the realm of addiction. So, Facebook is operating as a sort of addictive drug on steroids.
I created an app back in 2009, to grant myself access and look around.
Saw too much information about friends; realized some friends began "using" my app just because it was made by me. Realized all the quizzes and dumb polls are data-gathering apps for the developers.
Realized all the awkward "friend requests" from the other side of the world are fake profiles, made to gather more data.
Made a fake profile that appeared to be from a different part of the continent, with a name that was awkward to us. Facebook recommended I befriend people from that area. Some accepted the friend request. Soon I had 30+ friends on a fake profile, all within the same "web" of friends, within the same region.
Then I made more of those fake profiles, and more fake apps with the same theme, since this was so easy, for this and other regions where I suspected there were hot women; I was 25ish and single at the time. My devious plan was to get outside my own social network and cast my net to catch the women my algorithm would flag: not on the edge of their own social network but a hub, popular, single, etc. But that was too much work to complete. Later Facebook shipped its graph search, where one could query "friends of my friends who are single in this city", but that was still meh compared to what I had planned, since I would be doing the data gathering and analysis myself.
I wanted to catch the women with good privacy settings: those who didn't show up in FB's own searches and didn't install my app, but appeared in the friend lists of those who did accept fake friend requests or used dumb quiz apps. Those who appeared to be recommended by many fake profiles.
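(The "hub" part of that plan is a one-liner these days; something like degree centrality in networkx captures the idea. Illustrative only, the edges here are invented:)

    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("ana", "maria"), ("ana", "petra"), ("ana", "ivana"),
        ("maria", "petra"), ("ivana", "petra"), ("lea", "ana"),
    ])

    # Degree centrality: who is "a hub, popular" rather than sitting
    # on the edge of their own social network.
    for person, score in sorted(nx.degree_centrality(g).items(),
                                key=lambda kv: -kv[1]):
        print(person, round(score, 2))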
It was years ago before they started locking down the data. I put together a web page I was going to post to show people what info they were releasing, but I felt I would get lynched by friends for posting it all.
Basically everything you see about yourself, plus pages you visit and like, groups you belong to, your picture, your friend list, as well as their friends; so it was really three steps out, and it included things not available through FB normally.
It was totally predictable this data would get collected and abused.
I generally don't see the problem with a user being able to share information about themselves with apps they want to use.
That’s a fair product to build and provides lots of great value!
Now, can it be abused? Of course it can; a rogue app developer could try to use this data for whatever, or sell it. That's a problem.
I think FB did a great job over the years though to improve UX for users to make it more clear what is happening and what information is being shared.
Back in the day you could, for example, get a user's friend list (only public profile information, though)... I think they did the right thing in limiting that to only users who also use the app.
It’s what makes apps like Spotify great...the fact that I can easily find my friends on it.
I think there is a tradeoff in anything we build. And 2B people believe it's a good tradeoff! And don't tell me that users are stupid and don't know... because then I am gonna flag you as stupid first :)
We also don't sue Ford, or the state that owns the streets, for every accident that happens... that'd be nuts! We ask drivers and pedestrians to be responsible, and that works for the most part. Of course there is the occasional deadly accident... but we accept that risk for the benefits of mobility.
If your friends can see your posts, the app can see your posts. If your friends can see your birthdate, the app can see your birthdate. And so on, so forth.
There were probably tens of thousands of apps like that. Maybe more. Most made by small teams that definitely don't care about the privacy implications at all. A lot of that data is still out there somewhere being sold.
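For anyone who never saw it, this is roughly what the pre-2014 flow looked like (a from-memory sketch of Graph API v1.0; the friends_* permissions shown did exist back then, but treat the details as approximate):

    import requests

    # Token granted by ONE consenting user, with scopes like
    # friends_likes / friends_birthday (removed in Graph API v2.0, 2014).
    TOKEN = "USER_ACCESS_TOKEN"

    friends = requests.get(
        "https://graph.facebook.com/me/friends",
        params={"access_token": TOKEN},
    ).json()["data"]

    # The friends_* scopes then exposed each friend's data, even though
    # those friends never installed or approved the app.
    for f in friends:
        likes = requests.get(
            "https://graph.facebook.com/%s/likes" % f["id"],
            params={"access_token": TOKEN},
        ).json()

One user's click consented on behalf of their whole friend list; that was the structural problem.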
Circa 2009 I was hired to build a Facebook graph app that did just this, only for dating profiles. Thankfully it was acquired and buried by the shady sociopaths who hired me to build it, before it ever saw the light of day.
However, there were many, many apps that also did the same. A number of dating apps, companies like Zynga, business networks like BranchOut etc. They were just inhaling data wholesale, especially Likes which were fully exposed to all friends in your graph.
I could totally see how you could use that data circa 2014 to build psych profiles and resell it for sophisticated targeting years later.
There is an enormous risk that companies which rose and fell during the Zynga era resold or improperly disposed of FB profile data during their final liquidation. This is just the tip of the iceberg.
Zuckerberg's comments are clear and seemingly accurate as far as the publicly available information goes. The only problem is, they do nothing to address the real elephant-in-the-room question: what is the state of political campaigns powered by the extremely detailed personal data Facebook holds?
Democracy only works when everyone gets an equal vote. What are the consequences when special interest groups, including from abroad, can pay to use the official Facebook APIs and craft targeted messages to shape the public opinion?
We have seen the consequences of this approach. Trump won by the smallest of margins. Brexit was nudged along with 100% false and misleading statements (£350m for the NHS, posters showing refugees from Syria, etc.). Should we be OK with this much power available on tap?
The real questions Zuckerberg needs to answer are: what political campaigns are being run? Who is being targeted with political messages? What are the budgets like? Most importantly, who is ultimately paying for shaping the political opinions in our democracies?
This is a question that strikes at the heart of our society. Facebook is not going anywhere. Google is not going anywhere. People will continue to put all their personal data in cloud services without thinking about any Privacy policies whatsoever.
We should have an open debate about how much digital opinion shaping is acceptable before we have built a tool that sells the democratic decision making to the highest bidder. There is a reason why political spending is so carefully controlled in Europe. Facebook needs to own up and come clean on the state of political advertising taking place on their platform.
> Democracy only works when everyone gets an equal vote. What are the consequences when special interest groups, including from abroad, can pay to use the official Facebook APIs and craft targeted messages to shape the public opinion?
Democracy only works with an educated population. Regulating data might act as a short to medium term buffer against this issue; however, as we learn more about how our psychology works, the more often we'll be manipulated. Combined with the interesting argument that being politically uninformed could be rational:
> If the odds that your vote will be decisive are minuscule—Brennan writes that “you are more likely to win Powerball a few times in a row”—then learning about politics isn’t worth even a few minutes of your time. [1]
... we have an interesting problem on our hands. I personally believe that the long-term solution lies in education, but instead of teaching people "what to think," we ought to be teaching people "how to think." People typically learn only naively how to think: both how to emotionally cope with struggling and how to parse and understand information. At a minimum, I think schools ought to be teaching kids strategies for both arenas.
I personally really enjoy two approaches around "how to think":
1. The now out of print "How to Develop Your Thinking Ability," is a super old, easy read loosely based on Korzybski's controversial "Science and Sanity." It taught me how to approach my perception as gambles (i.e. to avoid universal judgment). The principles, all often obvious, as I recall, are roughly:
a. Up to a point: e.g., that person is annoying, up to a point.
b. To me: that movie is awesome, to me.
c. As far as I know: Warren Buffett is 92, as far as I know.
d. Indexing by time: I like John at this point in time.
e. Indexing by place: I like John when we're at the club, but not when we're at the office.
f. Indexing by subindex: Instead of: Guy_1, Guy_2, Guy_3 are all scum, therefore all men are scum, we use indices to avoid generalization.
2. The Happiness Trap: explains Acceptance and Commitment Therapy (ACT). ACT roughly uses a variety of defusion techniques, mindfulness, and values-oriented behavior to help a person handle their thoughts in constructive ways.
Standard operating procedure for Facebook.
Let violations go on unless and until seriously threatened by PR damage or legal action. Deny knowledge, claim "mistakes were made", claim "We're so sorry. We will make sure it can't happen again", repeat.
Data exfiltrations of roughly this kind have been going on since at least 2009, handled according to the formula above.
I just posted links to two writeups from back then, over in https://news.ycombinator.com/item?id=16640437. ("Big Other: Surveillance Capitalism and Prospects of an Information Civilization" is also well worth reading and keeping in mind in this context.)
"It was installed by around 300,000 people who shared their data as well as some of their friends' data... this meant Kogan was able to access tens of millions of their friends' data."
I wonder why the Facebook of pre-2014 thought that this was an acceptable level of data sharing to developers?
Did they trust that all platform developers would forever have good intentions? Or perhaps they felt the data being collected was harmless in perpetuity (i.e. did not anticipate the political ramifications of the CA abuse)? Or that their legal team could effectively enforce their TOS to prevent abuse?
In hindsight, the pre-2014 policy seems ridiculously careless - but I'd love to understand why they didn't anticipate a breach of this magnitude at the time (or if they did, why they dismissed the concern).
At a guess, I think they didn't really realize how serious this data is at scale. After all this was (I think, anyone know the actual details?) fairly mundane data. Name, age, last check-in date, pages you like, public figures you follow, posts you liked, friends, stuff you're interested in. MySpace/LinkedIn/etc made this stuff public, on FB it was mostly semi-private.
The reason this whole thing became a scandal is because data is powerful at scale. To quote The Guardian's original article:
"(Cambridge Analytica) used them (data) to build a powerful software program to predict and influence choices at the ballot box."
Facebook now have a $50bn business based on using exactly this kind of data "to predict and influence choices" people make. That's why their ad system works now, and didn't just a few years ago. They got very good at using large quantities of otherwise mundane data to guess who will click an ad and sign up to a cookie subscription or who needs a mortgage.
This is the same thing. If you know who's a maker and which seminars they'll like, you know who's a Tory and which conspiracy memes they'll like. FB now knows how powerful that is, but I'm not sure they did then.
Still... this sounds only semi-tangentially related to privacy. It is technically user data, but IDK if individual users were violated that much, at least relative to other breaches. It's not like the iCloud breach, or Ashley Madison. This is generally not data that people would be horrified to find someone knows.
I think the meat of this issue is "power got into the wrong hands." At least that's what The Guardian was concerned with. User privacy is just the nominal violation; power is the problem.
On that note, is FB the "right hands"?
Incidentally, all this is interesting in the context of Zuck running for office, or even just being involved in politics. Whatever Cambridge Analytica could do, Zuck could do a lot more.
On top of that, he heads the biggest and most influential news/media company that ever existed. He should probably be banned from political involvement.
That last point is super critical: the real finding from this is that Zuckerberg could very easily leverage his power to massively shape global politics in a way no person has ever come close to doing before. People talk about the dangers of Fox News, but Murdoch doesn't even come close to the level of power that Zuckerberg has.
On point 1: Facebook are going to be forced into this by law. The EU is introducing a major piece of regulation, entering into effect in late May. The 'right to be forgotten' is an important part of this[1]. It's possible this will become the de facto standard worldwide as a result. The penalties for non-compliance: "In the case of non-compliance with key provisions of the GDPR, regulators have the authority to levy a fine in an amount that is up to the GREATER of €20 million or 4% of global annual turnover in the prior year."
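For a sense of scale, the quoted cap is simple arithmetic (a minimal sketch; the turnover figure below is a made-up example, not Facebook's actual number):

    # Per the quoted provision: fines can run up to the GREATER of
    # EUR 20M or 4% of prior-year global annual turnover.
    def gdpr_fine_cap(global_turnover_eur: float) -> float:
        return max(20_000_000.0, 0.04 * global_turnover_eur)

    # hypothetical company with EUR 40bn annual turnover:
    print(gdpr_fine_cap(40e9))  # 1.6e9 -> a EUR 1.6bn ceiling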
How about additionally,
1) providing an "Advanced Feed" giving each user a log of every data field exfiltrated: date/time, who, IP address, etc. (see the sketch after this list), and
2) requiring every single advert that runs on the system, along with its targeting parameters, to be made available in a fully searchable ad pool for anyone to query. This would allow people to see what is being targeted, journalists to figure out what targeting is happening (much of the Cambridge Analytica nonsense happened under the radar for a long time), and regulators to uncover scams.
Mere transparency. They could probably figure out a way for this to make them more money if they actually looked at it (e.g., supporting an ecosystem of people studying the data and figuring out what works better, to get more out of the platform).
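For concreteness, one record in such an "Advanced Feed" might look like this (a minimal sketch; the field names are my own invention, not any real FB schema):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AccessLogEntry:
        # one row per data field handed to a third party
        field_name: str        # e.g. "friends.likes"
        accessed_at: datetime  # when the field was read
        app_id: str            # which developer/app pulled it
        source_ip: str         # where the request came from

    print(AccessLogEntry("friends.likes", datetime(2014, 6, 1), "app_12345", "203.0.113.7"))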
What good would that do if you wanted to know what someone was doing with a profile they downloaded of you prior to 2015? FB says that any developer giving that data to another organization would violate its TOS, but given the sheer scale of apps that were on Facebook back then, it almost certainly happened many thousands of times.
I'm sure they totally have access to that information, but that will tell you "X organization accessed Y at Z time", and it may be almost impossible to determine what was done with that data after that point. Even if transferring the collected data was against Facebook's ToS, we know of at least one example (Cambridge Analytica) where that happened, and given that there were 8 million total developers on Facebook Graph, it's hard to imagine there weren't thousands of other actors that did the same thing.
Which all leads you to conclude that any Facebook data you had prior to 2015 is probably in the hands of someone you don't really trust. That's not a fun scenario, even if Facebook fixes things from this point forward.
Plus, a lot of the "news feed" articles started as re-shares of microtargeted ads. There absolutely should have been more transparency on virally reshared things that started as microtargeted ads.
Exactly. There should be transparency similar to other media -- all someone has to do to see if a competitor is running an advert against you is look. On FB, it's all secret.
I think the main problem is less that users can't control their data; it's that for most people, the data is already out of the bag. So many developers downloaded all of this data using the Facebook Graph API prior to 2015, when it was locked down. And given that 8 million developers used the API, it's almost a guarantee that someone you don't trust has access to all your data.
When you allow apps to access all the information about people's friends as well, informed consent becomes impossible.
The issue here is not some developer gaining a temporary snapshot of a few million records, it is that Facebook _itself_ is doing and productizing real-time psychological profiling on billions of people.
This was revealed years ago. Facebook was literally running experiments on people on the site, without their consent, to see how they reacted when their feed was made either more negative or more positive. Most people seemed to not care when this was revealed which is, in my opinion, just an indication of the level of addiction people have to the service.
You can find plenty of hits searching for 'facebook run psychological experiment on users.' Here [1] is one of a plethora of sources discussing it.
I think Zuckerberg should have taken this opportunity to more substantially address the larger discussion around data privacy and Facebook. The outrage surrounding the CA revelations is based on a larger trend of continuous privacy abuse by FB.
Is it clear exactly what 'data' was actually obtained by CA on those millions of people? (I am referring specifically to the millions, not the original subset of 200,000 or so).
Is it all public and friends-visible activity? Or is it basics such as name, email, profile photo, etc.? So far I cannot find the coverage addressing exactly what kind of information CA had on a per-person basis.
If someone can share coverage that outlines this I would be very grateful.
The New York Times is reporting that they were also able to collect likes, because this was back when likes could be harvested through the API. Through likes alone, they could supposedly determine race, gender, sexual orientation, and other physical and mental traits/preferences with surprising accuracy.
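The underlying technique is mundane and well within reach of off-the-shelf tools; a toy sketch of the idea (invented data, nothing like the actual CA models):

    # Toy illustration: predict a binary trait from which pages a user likes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    likes = np.array([[1, 0, 1, 0],    # rows = users, columns = pages
                      [0, 1, 0, 1],    # 1 = the user liked that page
                      [1, 1, 1, 0],
                      [0, 0, 0, 1]])
    trait = np.array([1, 0, 1, 0])     # some labeled attribute for training users

    model = LogisticRegression().fit(likes, trait)
    print(model.predict([[1, 0, 1, 1]]))  # guess the trait for an unseen user

At scale, with millions of users and tens of thousands of pages, this kind of model is what turns "mundane" likes into demographic and psychographic predictions.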
The pre-2015 (FB Graph v1.0) API gave apps access to not only all of a user's own data, but also all the friends-only data of their FB friends. This archive.org link shows the sort of information that the Friends API used to provide: https://web.archive.org/web/20130911191323/https:/developers...
> It was always kind of shady that Facebook let you volunteer your friends’ status updates, check-ins, location, interests and more to third-party apps. While this let developers build powerful, personalized products, the privacy concerns led Facebook to announce at F8 2014 that it would shut down the Friends data API in a year. Now that time has come, with the forced migration to Graph API v2.0 leading to the friends’ data API shutting down, and a few other changes happening on April 30.
Yes, sorry, I should have been more specific. But what 'exactly' do they know. Does CA know that each person liked EXAMPLE POST ID #12379 on DATE XX/XX/XXXX and that you also like BRAND1, BRAND2, and that you also posted STATUS UPDATE1 on DATE .... What exactly do they have?
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.
What happens if they find that 50 different organizations had access to and may have saved as much data as CA? If they admit that, that would be the end of Facebook. This basically precludes Facebook from being open and honest here; even if it's only this one bad actor, the fact that the incentives for FB are not to be open would cause any reasonable person to question their efforts.
> What happens if they find that 50 different organizations had access to and may have saved as much data as CA?
Isn't this more than guaranteed? Literally any crappy quiz app or game had access to all of this data when it launched on Facebook Platform. And I'm betting Kogan's app was far from the most popular on Facebook at the time -- the potential for hundreds of thousands of data harvesters to have exploited this is way too high. I disagree that it's "the end of Facebook", though... in general, users just don't seem to care that much about this sort of thing. I can say for certain that my parents don't understand what this means or why this is bad (they had trouble understanding what was so bad about the Equifax leak as well), so your lowest-common-denominator user is never going to budge, even after something like this. I'm just hoping that my more technically-inclined friends start to lean away from Facebook's collection of data aggregators.
I just cleaned out my "connected" apps, and found like 4 quiz apps that I don't remember at all with full access to my personal info and friends. Back when they launched Login with Facebook I used it like a fool (Spotify also ONLY launched with it at first). I knew this had happened, but now the consequences are coming into much sharper focus.
> I disagree that it's "the end of Facebook", though
If they do what they say and publicly shame the apps in a way that people would see, it might. It would be hilarious for them to put up 100 logos of apps at the top and say: oh yeah, by the way, all these apps have been taking your personal information, with lovely trustable names like "e-quiz" and "sofunni".
If they knew about CA obtaining this data in 2015 and sat on it then, no telling what information they may be sitting on now.
Most companies, when they learn of a data breach, take measures to notify their affected users (whether contacting them directly, or publicizing it). Instead, FB's saying, "Here's a tool at the top of your feed, you can check the safety of your data yourself". (And a fat lot of good that'll do people who don't even use FB.) This lets them continue to do business as usual, putting the onus on users to safeguard their privacy, while they stick their heads in the sand.
It's almost certain that they do, and almost certain they'll never be able to validate that this data won't go to actors that you would not want to have access to your information. For anything on your Facebook prior to 2015, the cat is already out of the bag.
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit.
I remember the old days of the graph API - there was a whole lot you could do with someone's account, and it was common for users, especially non-technical users, to blindly hit accept on the permissions screens. If I'm not mistaken, you could even make a developer account without a verified phone number at the time.
I don't think banning some of those apps is any kind of consolation, really. The API sat like that for years. Anyone with nefarious motives already took what they wanted and ran. What repercussions will they face? They will be banned from Facebook? The apps are probably long gone; there was a time when Facebook apps were at their peak, and that fad died off. The data is probably sitting in a database somewhere today. If I'm not mistaken, Facebook Games could still grab the friend-list permission until quite recently.
Also what authority does Facebook have to do any kind of audit? How would that audit even work? If someone copies all the data to an external drive and locks it up, what will the audit reveal? "Yep there's no data here, pinky swear!"
The basic premise of Facebook is flawed. If your private thoughts or photos are posted on a platform where you don't control the data, the data is never safe. A good product that serves the user faithfully and takes this into account is probably closer to a decentralized product - maybe something like Mastodon (I haven't looked into it in much detail). Facebook has already acquired the users - if they can figure out a way to make money and switch to a decentralized model at the same time, they can solve this once and for all. They would arguably even gain users who would now have a reason to trust their service.
I have to wonder if Facebook was even competently exploiting people's data for money. It sounds like they essentially gave away nearly all of their users' profiles for free to any app developer who could get a user to sign up. Facebook didn't even charge app developers for access to the Platform.
This would allow app developers to create a copy of the data and re-sell that. And Facebook wouldn't see a dime. If their crown jewel is people's data, why would Facebook give away a copy of their crown jewels?
This assumes that the profiles are the most valuable data they have, which I tend to doubt. It seems like the real money is in the interactions between users, the engagement with the feed (what gets eyeballs) and the third-party sources like their tracking pixels and referrer data.
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity.
I am skeptical of Facebook's ability to "guarantee" that a third-party developer has not cached any data. I would equate this to finding all N needles in a haystack of size M where M >> N; it's just not a feasible task to complete. Even if you find N-1 needles, the last needle can be used as a sort of "mold" to reproduce the other N-1. My "golden rule" for the internet is: once you give up information, assume it's everywhere and impossible to rescind.
I believe this statement will be eaten up by the press, and Facebook will continue to carry on its operations as if nothing happened. The stock will probably recover, and we'll go back to square one. Give it a few weeks.
The true demise of Facebook requires several things to happen simultaneously; a resonance of perfect conditions, so to speak. You've probably heard that Einstein and Newton discovered their theories because "they were decently smart men in the right place at the right time". A similar effect is also true here. If we want to see Facebook's demise, the world must have a suitable alternative in place, one which already has a critical mass. Until then, Facebook has a monopoly on attention and they will keep it as long as they are producing enough positive externality.
Facebook and Google provide very basic internet services that really should be provided by open systems. But through luck and timing they managed to establish monopolies.
They abuse their position to extract hundreds of billions of dollars by selling their users' limited time on earth to advertisers for peanuts per user.
Feel pity for Mark Zuckerberg and Larry Page that they have such fundamentally corrupt businesses and are apparently incapable of fixing them.
And applaud Apple for having created a business so well aligned with its users. It's very difficult to find examples of Apple failing to do the right thing and very easy for the other two, as in this example.
Startups should reject the rent-seeking Facebook/Google monopoly model and embrace the Apple innovation and direct-to-users model.
I don't understand why HN now finds this super interesting, but when it was reported more than a year ago, it was a lot of hand-waving and "no proof that it actually worked" over here: https://news.ycombinator.com/item?id=13542735
Perhaps on the first subthread that is the case, but reading through the entire thing paints an entirely different picture. Especially if you focus on comments from more frequent HN participants.
> In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends' birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.
Why do we not even have this?! These features sound fantastic -- where the hell are they? The most comical part of the situation with centralized social networks is that we give up our personal & friends' data to advertising monoliths, and we don't even reap any of the benefits from it in terms of improved user experience.
Facebook doesn't give you the best calendar, the best maps, and the best address book. It gives you the Facebook calendar, the Facebook maps, and the Facebook address book. For any of this to actually work from a user's end (never-mind the creepy privacy stuff), Facebook has to design the best-in-class experience for each of these product categories, which they of course have not done because that is an impossible thing to ask.
No one company is actually going to monopolize the social data market, yet every player in the field is trying (and assuming they eventually will) do so. It's maddening. Society will eventually wake up and realize we need a large-scale replacement to W3C focused on standardizing access to the social graph, which should be a public utility like roads. That's all there is to it. Until we figure that out, we're going to be having the same conversation every decade, nothing will change, and we'll keep begging Daddy Zuck to pwetty pwetty pwease fix all of our issues out of the kindness of his heart. Keep doing that, if you want.
It would be nice to think some government would take up the issue and do it well. I think it's more likely that someone will build a craigslist version (lowest tech possible, limited financial upside, widely usable) of Facebook that people will flock to.
I remember Windows Phone 7/8 used to do a lot of that (integrate the built-in apps with Facebook data). It worked fairly well for a time, at least if you didn't have a huge number of FB friends.
I don't agree with people who think these scandals will be the end of Facebook. Instead, Facebook will become more entrenched as it develops advanced features to handle national security issues, allow users more sophisticated control over their data, etc. I like that Zuckerberg explicitly takes personal responsibility for everything that happens on his platform in this post.
I realize you're not the poster I was responding to, but you have to pick one: either this is a no-op statement, or it is an encouraging, commendable thing to say.
It seems that everyone just hates Facebook, and we're collectively trying to nitpick any missteps they make.
I think that the whole issue is not surprising at all. Definitely not newsworthy. I'm pretty sure that everyone and everyone's mum knew that Facebook was doing stuff like this.
What changed is that now we all distrust and dislike Facebook at some deep level... So we respond strongly to even the most mundane criticisms against it.
I think that what's happening is that Facebook is so good with analytics that they are always able to churn out a perfectly optimized statement in response to any situation... But it doesn't change the fact that everyone is eagerly awaiting the next scandal.
> I'm pretty sure that everyone and everyone's mum knew that Facebook was doing stuff like this.
"Everyone" knew Facebook was gathering tons of data on its users and everyone else, but we didn't understand many of the implications until now. People assumed it was just useful for banner-ad type stuff. There are probably even more implications of this data collection that we have still yet to discover.
No it isn't. I only learned of the political implications of FB data during this election cycle. It's gotten a lot more headlines, and the stories have been much more explicit about the implications. So we've proved "everyone" didn't understand the full implications until now.
What is the problem? 50 million people voluntarily published information about themselves on the Internet, and now they have complaints about someone using it? Don't use Facebook then.
And Facebook is not that bad. You can use Facebook with a fake name and without a phone number. But for example in Russia social network sites require a phone number to register. They have much more to leak.
"We have a responsibility to protect your data, and if we can't then we don't deserve to serve you," so Zuck says.
"Protect" is the keyword. He means protect while letting Facebook be profitable. Facebook's business will not let him do that without losing a large chunk of its profits.
I expect a lot of talk but little effect on how they sell their data. The problem is that "free to user" = "sell my data". The model fundamentally works against the everyday user.
The only fix is for people to guard their data. It's your data! Facebook is not your friend so don't act as if it is.
> First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit.
Isn't there a bit of moral hazard here - as in, Facebook potentially getting material business advantages from reading other people's repos? Are they going to put up a Chinese Wall between the audit team and the product team?
"I started Facebook, and at the end of the day I'm responsible for what happens on our platform."
Regardless of my feelings about how this was handled, I appreciate the level of leadership accountability demonstrated in this statement. I wish that more founders, CEOs, and politicians facing difficult circumstances would adopt this perspective. Even if you didn't personally code/negotiate/write it - you made managerial decisions (hiring, culture, review process, etc) that induced it.
Because months earlier, he was making public comments that it was a 'crazy idea' that his company could have influenced elections, shortly after his chief information security officer had been pushing him and Sheryl Sandberg to investigate and disclose Russian activity on the platform. Additionally, his company was actively advertising election-related products.
This man isn't owning responsibility; he's trying to defuse a PR bomb of his own creation.
That is true. Words must be followed by actions to mean anything. I suppose I've gotten so jaded recently about people in power throwing their underlings under the bus (eg VW) that even statements that acknowledge the correct position of accountability seem miraculous. We'll see if he makes good on that perspective.
Once the toothpaste (data) is out of the tube (FB data center), it's hard to put back (manage). An option would be to remove identifying information, or hash it (with a different nonce for every share of data), and add noise to the various metrics gathered. That way, the results of an analysis cannot be used to target individuals.
Short of some breakthrough in homomorphic encryption, sharing data for analysis is problematic. Penalties can be enforced, but the data is out at that point.
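A minimal sketch of the per-share hashing idea from the parent comment (my own illustration, not anything FB actually does):

    import hashlib, os

    def pseudonymize(user_id: str) -> str:
        """Hash an identifier with a fresh nonce for each share, so two
        exported datasets can't be joined back together on the same key."""
        nonce = os.urandom(16)
        return hashlib.sha256(nonce + user_id.encode()).hexdigest()

    # the same user gets a different pseudonym in every export
    print(pseudonymize("user_42"))
    print(pseudonymize("user_42"))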
>> Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in 3 months.
The problem was data that was secretly kept, right? How would removing access change anything? Or am I misunderstanding what happened?
My guess is this might help prevent long-term tracking and profile updating. He does mention this:
> we immediately banned Kogan's app from our platform.
So my guess is the app still existed, and could probably keep querying data to keep user profiles updated. What you like, do, who you interact with; it all changes with time, so the data's value is slowly lost as it diverges from reality.
Moreover, there might be apps I gave access to back in the day, when I didn't know better, that could potentially still access my information. Even if the actors aren't malicious now, this could help prevent Chrome-extension-like situations: apparently legit extensions have been bought in the past by malicious actors with the purpose of exploiting their broad install base and permissions [1]. I could see something similar happening with Facebook apps.
That was one problem. The other problem was that some app permissions were too broad and not clearly communicated. When you gave an app permission to view your data it saw all your friends' data too, even though they never approved anything. Zuck claims they tightened it up a bit but they apparently want to go further.
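Stepping back to the announced 3-month rule itself, it's simple enough to sketch (my own rendering of the stated policy, not FB's implementation):

    from datetime import datetime, timedelta

    def should_revoke(last_used: datetime, now: datetime) -> bool:
        """Per the announced rule: drop an app's access to a user's
        data if that user hasn't used the app in roughly 3 months."""
        return now - last_used > timedelta(days=90)

    print(should_revoke(datetime(2018, 1, 1), datetime(2018, 3, 21)))  # False, ~79 days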
Basically all I see is a blame game against CA and that researcher. When someone is hosting a REST API web server at Facebook, didn't they notice a large number of requests coming from a certain app id (3rd-party dev) for a certain type of data (a user, his data, his friends, their data, and so on)? At that point you knew exactly which users were compromised, 5 or 50 million. And then CA came back to your platform and bombarded those same users (or a subset of them) with their propaganda ads etc. Point being, there were data footprints left when they pulled the data for those specific users, and there were data footprints left when they came back for those users. Facebook is not TechCrunch, where you don't care much about the nature of incoming traffic because it's a blog. Facebook is where people literally share their everyday life activities. I would assume there should be some smart self-governing systems in place on the FB platform that raise flags when this kind of inward/outward traffic gets generated.
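A crude version of the kind of flagging the parent describes might look like this (toy numbers and an invented threshold):

    from collections import Counter

    # friends-profile reads per app id over some window (made-up figures)
    reads = Counter({"quiz_app": 48_000_000, "calendar_app": 12_000, "game_app": 30_000})

    THRESHOLD = 1_000_000  # an app pulling data on far more users than it plausibly has

    for app_id, n in reads.items():
        if n > THRESHOLD:
            print(f"flag {app_id}: {n:,} friend-profile reads")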
Deletion (or confirmed re-deletion) of the data is irrelevant at this point, it is the models created from that data, and their use, which will now persist in usefulness to Analytica. Armed with these models, and future refined/iterated versions, they likely will capture the data more directly from users in the future. Once the genie is out, it doesn't readily go back in.
"We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address" -- Profile photos can be used to infer age, gender and ethnicity.
So the above statement should be "...name, email address, age, ethnicity, gender and countless other things future ML systems can infer from profile pics."
Better than I expected, but still pretty standard damage-control boilerplate. Also, his "timeline" is definitely not the whole story. I'll be curious to see how it squares with that of Sandy Parakilas[1]. In particular, what was the real reason they shut down the friends-of-friends permissions? Parakilas' take was that they didn't care until they realized people could use it to build competitors:
“They were worried that the large app developers were building their own social graphs, meaning they could see all the connections between these people,” he said. “They were worried that they were going to build their own social networks.”
> I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.
It's good that Facebook took steps to secure the platform against third parties crawling the graph and scraping millions of users' personal details, which is what Cambridge Analytica reportedly did, and which was not a violation of their policy at the time. Kudos.
So the consensus seems to be that what Cambridge Analytica did in the Brexit vote and the 2016 US Election was wrong. Now they won't be able to do the same thing via Facebook. But what's keeping Facebook themselves from doing the same thing that Cambridge Analytica did in future elections?
I'm also not convinced that it wasn't Facebook that did most of the work for CA. It was in Facebook's short-term profit motive to provide the best micro-targeting ad tools to potential ad buyers. It's still in the realm of possibility that CA did get rid of all illicit data sources such as Kogan's dataset in 2015, and still had enough tools that FB's ad sales platform team was more than happy to build for them for the easy short-term profits.
Great point. We already know from the 2012 election and from the 2015 Podesta e-mails that Facebook doesn't exactly have a great record of political neutrality.
Of course not - Democrats can do no wrong in Silicon Valley. But seriously, there has been a pretty big lack of mentions of the Obama campaign in the coverage of this story.
Maybe the story wouldn't have been a story if it weren't tied to Trump 2016.
Strange to see that bullet point, specifically, as an advertised Facebook feature in 2018. I absolutely would not want my Facebook friends to know my address, and that's without any specific threat to me such as a domestic abuse situation.
I am glad he said this. I developed an app that would scan your friends to see where they worked and whether there was an ongoing union organizing campaign at their workplace; users could then message their friends with information on how to get involved. It was clear in the TOS that the information could not be stored except to enhance the application, which in my case involved caching. They obviously realized this feature was being abused and turned it off, and thus broke my app. People are making a huge deal all of a sudden out of a feature that was hardly a secret for the last 7 years.
Edit: this feature was also heavily used by the Obama campaign in 2012 and I would venture to guess was designed at their request.
I think Zuckerberg just doesn't get it (or is playing dumb).
The point is this - I don't and will never trust Facebook as a company. There is no way to salvage that. Once you've established yourself as untrustworthy, you can't talk your way out of it, because there is no reason to believe your excuses and trust there aren't omissions.
Consider the microphone accusation (and the legalese they used in their oddly specific denials), Zuckerberg's wish to run for president, the incredibly extensive profiles they construct on their users ("Is the kind of person to look at their phone in awkward situations") without justifying it to the users, and now this.
It's way too late for people like me. I've deleted my facebook.
> Once you've established yourself as untrustworthy you can't talk your way out of it
Can't you? I mean, clearly this is a common opinion; it's the basis of the American belief that all crimes should result in 30-year prison sentences. But I don't see why you should think this way.
Why does Facebook need to give up any of this information to developers in the first place? Presumably these apps are part of the reason Facebook has the engagement it has? Or part of the reason for the user growth it has seen in the past?
Perhaps this will prompt Facebook to start cutting out third-party developers and building more home-grown social apps, leveraging the data it already has on what eyeballs engage with. Maybe they'll only grant API access to large companies, e.g. TripAdvisor, who they can hold accountable and would be worth the time to audit. Perhaps they will start their own internal Cambridge Analytica where no data sharing is even necessary, assuming they haven't already.
That’s answered in the post. It was a vision of socially enabled third-party apps that never eventuated. They cut the feature when it turned out that no one really needed the data for legitimate reasons.
This statement doesn't go far enough. It's clear that Cambridge Analytica misled the people it asked to install the app about the app's purpose. It then used this data to improve advertising targeting during the election.
Facebook needs to come clean with whose data has been abused, and by whom.
This scandal is bigger than Cambridge Analytica. It also covers what the Obama team did in 2012 and 2008, and probably a whole host of others that we don't yet know about.
We have a right to know.
It appears that Facebook may be in breach of many legal obligations that exist throughout the world (e.g. the Australian Privacy Act, the GDPR, etc.).
So what happens if FB learns that CA doesn't have the actual raw social graph data, but CA still uses and sells modeled data and insights built on the original graph data?
Does FB have the contractual power to stop CA from using works built or derived from this data?
I have Google scholar alerts for Kogan and Kosinski; Kogan especially has a lot of citations and continues to work. I would bet that Kogan will never let his work die, even if his OCEAN model ends up being experimentally proved ineffective (my personal opinion = actions are far more valuable than modeled emotions).
I don't actually think the data sharing piece is the most important concern highlighted in recent days. To me, it's Facebook's pernicious effect on those who become addicted to the service and can't stop generating and consuming mindless status updates and photos all day.
The introvert in me wants to say that's a good outcome in that Facebook serves as a honeypot for certain types of personalities which I tend not to like very much. But speaking more broadly, it makes me worry for society.
I haven't used Facebook in a while, so maybe things have changed, but don't they show you a permissions page where you agree to give the developer access to your information?
Yes, but the problem is that Facebook's API also allowed lax access to your friend data, so people who had not consented to the collection of their data through this app ended up with their data harvested as well.
Also, there are various ethical issues with the users consenting to provide data under the guise of academics and then the data being turned over to commercial and political interests.
Meta: Zuckerberg's statement is, unsurprisingly, hosted at "facebook.com" and uses JavaScript to render. My browser is configured to not execute JavaScript from facebook.com because that site has a reputation for undermining privacy. Therefore, I cannot read the statement that I assume at least has some tangential relationship to the way facebook.com handles privacy. Alas.
"I want to share an update on the Cambridge Analytica situation -- including the steps we've already taken and our next steps to address this important issue.
We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there's more to do, and we need to step up and do it.
Here's a timeline of the events:
In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends' birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.
In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who shared their data as well as some of their friends' data. Given the way our platform worked at the time this meant Kogan was able to access tens of millions of their friends' data.
In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the data apps could access. Most importantly, apps like Kogan's could no longer ask for data about a person's friends unless their friends had also authorized the app. We also required developers to get approval from us before they could request any sensitive data from people. These actions would prevent any app like Kogan's from being able to access so much data today.
In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent, so we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.
Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to confirm this. We're also working with regulators as they investigate what happened.
This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.
In this case, we already took the most important steps a few years ago in 2014 to prevent bad actors from accessing people's information in this way. But there's more we need to do and I'll outline those steps here:
First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.
Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in 3 months. We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days.
Third, we want to make sure you understand which apps you've allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you've used and an easy way to revoke those apps' permissions to your data. We already have a tool to do this in your privacy settings, and now we will put this tool at the top of your News Feed to make sure everyone sees it.
Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.
I started Facebook, and at the end of the day I'm responsible for what happens on our platform. I'm serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn't change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.
I want to thank all of you who continue to believe in our mission and work to build this community together. I know it takes longer to fix all these issues than we'd like, but I promise you we'll work through this and build a better service over the long term."
I wonder why, when these APIs were initially opened up, anyone thought giving API consumers kitchen-sink access was a good idea. Even if a developer gets decertified, does it matter? The "bad actors" have already run off with the data. This seems like it will make life harder for legitimate app developers, maybe rightfully so.
IMO, what CA did was more of a phishing attack than anything. People voluntarily gave the "personality quiz" access to their data, and that was abused. The reaction to the data being given to CA wasn't big enough, but beyond that I'm not sure I see what Facebook did wrong.
"Second, we will restrict developers' data access even further to prevent other kinds of abuse. For example, we will remove developers' access to your data if you haven't used their app in 3 months. We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days."
Nah. They need to not offer any identifiable information to 3rd parties. Email is fine. Yes, I know you can figure out ways to identify an individual by their email, but there's nothing more closely tied to one's identity than their name and face.
So you knew that the breach happened in 2015, that they might have already handed off data to someone else, and you didn't bother telling people? Quoting:
"In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people's consent, so we immediately banned Kogan's app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications."
What does banning or disabling an app mean? If you were responsible enough and you cared about user data, you should have made it public right there and then! This is no excuse!
There are two separate issues here: 1) data acquisition, and 2) using such fine-grained psychographic data to politically influence voters at massive scale.
Cambridge Analytica was clearly outside the lines for #1, but they clearly are not the only organization that does such precise campaigns to influence people.
I think #2 is much more important, as the data is not just available from Facebook (although it's an easy source), and, working in adtech, this is basically the pitch for all the modern advertising companies that use psychographic and intent targeting. It really goes back to the wider ad industry having very little oversight while selling mass influence to anyone with a credit card... perhaps we should solve that first.
> Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified.
...so instead of looking into it at all, or vetting our partners, we relied on media to tell us?
Since you can’t “unshare” data, a solution for the breach is only meaningful if it helps to make the improperly-shared data useless. And there isn’t really any way for Facebook to do that since so many of the details they insist on collecting aren’t things that people can change. And even if Facebook could rein in all this data, they can’t undo side effects of having that data, like elections swayed or relationships ruined.
We all need a way to have the equivalent of frequently-rotated encryption keys in our online discourse. If I revoke privileges for some entity, everything they have on me should become useless to them.
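Envelope-style encryption gestures at what this would take (a sketch of the concept only; it helps only if the other party never held the plaintext):

    from cryptography.fernet import Fernet

    # I hold the key; the other party only ever stores ciphertext.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"my profile data")

    # "Revoking" = destroying or rotating the key; their copy becomes unreadable.
    key = None  # without the key, the ciphertext above can no longer be decrypted

The catch, as the parent notes, is that data already shared in the clear can never be made useless this way.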
Bye bye Facebook. I'm gone. Finally made it. Fear of missing out was keeping me from doing it. And for what? I was fearing to miss out on what? Being disconnected from whom? From people that I haven't met in years.
> We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address.
What about localization info such as language?
Are all apps switching to 100% US English? Are they supposed to guess a user's language from their name and email?
My guess (and this is just a guess) is that the users preferred language is already considered “public” on the graph so perhaps apps don’t need special permission to access it.
If that’s the case, this statement is doing a bit of misdirection by calling out name and profile photo, instead of saying “we will only give your email address plus other information we deem public”
I would like to also hear an explanation of how the Obama campaign apparently grabbed data in 2012.
- how was this episode discovered? what was FB's initial reaction and can it be justified?
- are the allegations true? including the one that FB employees collaborated on it. Silence on it only deepens suspicions.
- was this done in a similar way (3rd party dev, etc)? If not, how was it done?
- who were the actors in that episode? What actions were taken then?
- what about the pedigree of those who rightly raised the issue in the 'Cambridge Analytica' episode? Did they not have the information for the 2012 episode? How did they find this out?
> But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.
This is so disingenuous. When almost the entire world has a FB account, ‘fixing it’ now is essentially closing the stable door after the horse has bolted. The data has been harvested, it can’t be unharvested. There is no fix.
Personally, I’m disgusted that my data has probably been compromised just because I have some less technically astute friends who sign up for these idiotic apps.
I could deal with FB having my data, and I could deal with being targeted for ads through tools provided by FB. But to have my raw profile data being handed around and potentially misused isn’t acceptable.
I hope the fallout from this is severe for them. It is absolutely a massive breach of trust.
"Personally, I’m disgusted that my data has probably been compromised just because I have some less technically astute friends who sign up for these idiotic apps."
Now just imagine what happens to your contact info when your friend links their email account or their address book to "find their friends" on all of these services?
Quite. Although it's not quite as extreme as having access to my timeline (for example). What's annoying is that even now, after a week of reporting, I have no way of finding out what, if any, of my non-public content has been compromised, or what scope of data was available at various periods in Facebook's history.
Can 1 billion diverse people be a community? What ties them together? Not place, not creed, not nation, not language; nothing except their humanity (I don't think I'm going out on a limb on that one, except for the bots) and the fact that they use FB. I guess it sounds better than "our users"; community has that 'we're all in this together' vibe. Google would never call its users a community, nor would Apple or Amazon, yet FB can? Is it because it's a social media platform? I think it's a gross sleight of hand, to be honest.
With partial data on 50 million people, CA was able to do that much social damage; what about FB and others sitting on billions of full psychological profiles of users and irresponsibly saying "trust us"?
Sorry, I'm a fan of FB. I use many of the tools they've provided to the open source world.
I know this won't be a popular opinion but I'd welcome some regulation and bureaucracy for technology firms. This way there wouldn't be this weird gray/imaginary line situation going on that we have right now. I think Congress should probably outsource the regulation and bureaucracy similar to how the SEC works...but maybe not tied to the POTUS and instead creating a structure outside the purview of the POTUS.
>“We also made mistakes, there’s more to do, and we need to step up and do it,”
I, for one, am tired of hearing tech companies say that they are sorry - believing that this makes whatever the issue is, immediately a thing of the past so that they can start again from day zero with regard to whatever they failed at.
A company the size of FB must police the usage of their data if they want to be able to promise security of it. That this wasn't being done is a mistake of unforgivable magnitude.
It seems to me that they are planning to restrict what an application can see about users, but what about what a user can take from her friends? Can a bad actor like Kogan harvest private information by using fake profiles, or even famous people with lots of friends? It seems to me that the problem is much worse than it appears, like a fundamental flaw in this particular social network. And if everything becomes an island, how are users going to network?
Would be cool to get something concrete out of this, eg a guarantee, at least, that people who do not use FB at all or are currently logged out are not tracked. Because right now whether you are logged in or not, you are tracked by dozens of different companies, including FB and Google. This non-consensual tracking is creepy AF, and the only reason it exists is because 90% of users are not aware of it, or its ramifications.
So, that's it? A third-party app that users explicitly granted access to their data misused that data and Facebook banned them? How is that a scandal?
If we discovered today that 6 years ago there was some malware on the Google Play Store that stole a bunch of user data and sold it (but the malware itself is long gone, having been removed from the Play Store 4 years ago), would there be a similar level of rage directed at Google?
Wait, so FB Login won't even give us public profile data anymore? That reduces FB Login to little more than a competitor-surveillance tool for Facebook.
Does no one realize that this is a bed that we have made ourselves? That the sea of data the big five swim in is a product of the modern deal that society has with tech, which no one forced us to make? Why is everyone happy to blame Facebook for what is fundamentally modern society's fault? I feel like I'm taking crazy pills.
Everyone realizes that, and it's irrelevant. You know, all the Trump voters who were swayed by propaganda campaigns actually voted for Trump. All the depressed kids who get convinced to buy some product by crafty advertising actually go and buy the product.
> No one forced us
A totally useless sentiment. People weren't generating all this data with the benefit of hindsight and understanding. Effectively ZERO users EVER read even the terms of service, let alone understood the ramifications of giving up all this personal data.
It's hard enough to get people to save enough buffer to get through predictable crises that come up. Get them to be thoughtful and responsible in the face of a system designed to hide every aspect of concern and hesitation??
Facebook is one of several actors here, but it's complete nonsense to suggest that they are just a normal, innocent player in an internet world where people just do things a certain way because of culture. Facebook actively designed things for certain results.
It's not like Facebook is solely to blame, but the situation is NOTHING like you describe. Facebook wasn't just a neutral tool like email protocol and people just used it.
Scraping likes and friend lists is still very easy to do today on Facebook. All of that is pretty out there in the open as long as you have a Facebook account. They can limit their API, but that's really not very consequential with bots and the lax privacy settings most have. But I guess all of this "sounds good".
If he gave two shits about users' rights, the least he could do was sue CA. And he could extend that policy to any developer that went down the same path. Warning perpetrators that all that will happen to them in case of misconduct is a ban is irrelevant; they could simply reapply with a fake ID, a new company, and whatnot.
As a way to discourage Facebook and other companies from doing this in future, a fine per person per data item shared must be imposed.
Since the data that's leaked is more or less permanent, all the people affected should get a recurring, life-long annual payment for each personal leaked data item.
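Back-of-envelope, with entirely made-up rates:

    # hypothetical penalty: EUR 1 per person per leaked data item, paid every year
    people = 50_000_000
    items_per_person = 10
    rate_eur = 1.00
    print(f"EUR {people * items_per_person * rate_eur:,.0f} per year")  # EUR 500,000,000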
In 2014,"apps like Kogan's could no longer ask for data about a person's friends unless their friends had also authorized the app.", so what's matters with 2016(Brexit)&2017(Trump elected)? Do they know Trump will become nominee back in 2014?
Sharing publicly available data is fine. When you are on Facebook, you want to connect, and you want others to find you. That's the purpose of Facebook. I am sorry Trump was elected, but trashing Facebook is not going to replace your president.
"Whoopies, we gave out your information out to some really shadies companies. Sorry, our bad, won't happen again. Like, really, it isn't our fault what those people did with the data."
Facebook, you provided them with that data, you fuckers!
The great irony here is that Facebook dragged someone through the courts when they tried to ask for their personal archive, but is somehow now presenting the exfiltration of 50m people's data as no big deal.
No mention of the fact that FB knew Kogan sold users' FB data to CA, and intentionally didn't inform its users about it until the Guardian's investigative report blew up in the press.
Block apps from requesting any data that isn't absolutely required for the app's functionality and it would stop a lot of abuse... but of course they don't want to do that.
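Concretely, that kind of enforcement could be as simple as an allowlist check on the platform side (a minimal sketch; the app names and field names here are hypothetical, not Facebook's actual API):

```python
# Sketch of platform-side data minimization: an app declares which fields
# its features actually need, and any request outside that allowlist is
# rejected outright. All identifiers below are hypothetical.

DECLARED_NEEDS = {
    "photo_printing_app": {"public_profile", "user_photos"},
}

def authorize_request(app_id: str, requested_fields: set[str]) -> set[str]:
    """Grant only fields the app both declared a need for and asked for."""
    allowed = DECLARED_NEEDS.get(app_id, set())
    excessive = requested_fields - allowed
    if excessive:
        raise PermissionError(
            f"{app_id} requested fields it never declared a need for: {sorted(excessive)}"
        )
    return requested_fields & allowed

# A photo app asking for exactly what it declared passes...
print(authorize_request("photo_printing_app", {"public_profile", "user_photos"}))
# ...while a quiz app asking for friends' likes would fail the check outright.
```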
Seems incredibly naive for a company whose core business is data to grant apps full access to all of a user's friends' data. Did nobody realize that was a bad idea?
I think ambition beat them. The idea was a two-way data exchange; if you use the Facebook platform to build apps with users' data:
a. it might be in your interest to help keep the data updated. Say, if your app depends on an accurate home address, you'll ask the user to update it.
b. you might generate more data or content. Say, to publicize your app you might ask the user to post on Facebook.
Moreover, it's always advantageous to have other developers depend and build on your platform. It makes your platform richer by adding functionality you may not have the resources to develop, and gives you leverage (i.e., Apple's and Google's mobile OSes).
If I recall correctly it wasn't "full access to all of a friend's data", but access similar to what you had on the user? I thought it was very exciting. I could think of a ton of potential apps, and got to use it at a hackathon once before they limited the access. Back then I thought it was a bummer, but the move made a lot of sense.
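For reference, a rough reconstruction of what that pre-2014 access pattern looked like (a sketch from memory; the endpoints and permission names are assumptions about the old v1.0 Graph API, and none of this works on the current API):

```python
# Rough sketch of pre-2014 Graph API friend-data access, reconstructed
# from memory -- not verified against the retired docs.
import requests

TOKEN = "USER_ACCESS_TOKEN"  # token granted with e.g. the old friends_likes permission
BASE = "https://graph.facebook.com"

# One consenting user yielded their whole friend list...
friends = requests.get(f"{BASE}/me/friends", params={"access_token": TOKEN}).json()

# ...and, with the friends_* permissions, data about each friend who had
# never seen or authorized the app themselves.
for friend in friends.get("data", []):
    likes = requests.get(
        f"{BASE}/{friend['id']}/likes", params={"access_token": TOKEN}
    ).json()
    print(friend.get("name"), len(likes.get("data", [])))
```

That one-hop amplification is the whole scandal in miniature: reportedly only ~270,000 people used Kogan's quiz, yet ~50 million profiles were harvested through their friend lists.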
Most of this applied years ago when the API was more open. I built a lot of apps on their platform for big brands. They were all chasing likes and building FB apps.
Their third party app platform really helped with user growth and engagement over the years but now it's come back to bite them.
How many shares did Zuckerberg sell just a few days before all this became known? (Yes, I know that such sales are announced publicly and planned in advance).
I know that it's unreasonable to hope that Facebook will die over such a minor nuisance, but I still hate how soothing the thought sounds.
Anyways, what really makes me wonder is that it's such a huge scandal right now: every newspaper posts updates twice a day, society is "alert", Facebook loses 60B in capitalization overnight. It's a big deal! And yet there seems to be nothing new here, nothing we didn't know a year ago. In fact, I remember hearing about Cambridge Analytica, Facebook, and the US presidential election/Brexit vote a year ago from Jordan Peterson, and psychology professors aren't typically the ones to hear tech-related news first.
The whole operating model of the company is based on invasion of privacy. Blatant invasion of privacy and data collection by Google and FB is a national security issue. No entity should be allowed to snoop on free citizens more than the KGB ever dreamed of. Orwellian!
Will do, should do, going to do. Lots of promises, not a single action.
Since Friday, or whenever they first learned about the incident, they could have conducted the audit, closed any loose ends, and used this opportunity to say "we have already done x and y."
Instead, it's just announcements. Not good enough.
TLDR: We aren't going to change our business model of systematically harvesting your data, but we will at least make it slightly more difficult for other people to do so.
This is such a well-crafted statement, not from Zuck's heart but from a PR team that has been working overtime.
Shame on you, FB. And Zuck, you suck.
If they are so honest, or want to be honest going forward, why can't they make it easy to see all my pics and posts that are public? Why can't they make it easy to see who can see what I posted, or which app or user is using my data? They have billions of dollars and can't implement simple features?
It's just a game they're playing; they'll go back to their old ways in no time when no one's watching, or under a different name.
I was reading FB fanboy Robert Scoble's post and it's pathetic. You almost feel like he's being paid by FB. He used to be a well-respected tech journalist.
I have deleted my FB account and I am so done with this company.
> If they are so honest, or want to be honest going forward, why can't they make it easy to see all my pics and posts that are public? Why can't they make it easy to see who can see what I posted, or which app or user is using my data? They have billions of dollars and can't implement simple features?
You can do all of that... trivially. As you say, those are simple features. Ones they implemented years ago. Perhaps you should be more specific about what you're asking for?
Robert Scoble is a sexual predator. I would not be surprised if he also lets companies pay him for a positive opinion; he is an immoral creep.
> If they are so honest, or want to be honest going forward, why can't they make it easy to see all my pics and posts that are public? Why can't they make it easy to see who can see what I posted, or which app or user is using my data? They have billions of dollars and can't implement simple features?
It's fairly simple to do any of those things? You can even view your profile as certain friends to make sure everybody can see what you want them to see, or vice versa.
There's also always been an app list in the privacy settings, IIRC, where you can revoke their permissions and whatnot. I know this because I've had to delete permissions for dumb apps I installed when I didn't know any better.
Scoble was a fanboy of Rackspace until they stopped paying him, and since then he hasn't had a single good thing to say about them.
Further, there have been plenty of sexual harassment claims against him. I doubt he worries too much about catching some more dirt for praising Facebook these days.
Because they are all awful: no one uses them, and they are UI clones with no features, built by people who think the word "decentralised" is actually a selling point to the general public, rather than having a single innovative thought about what "social" means and then using decentralisation as a tech solution.
This is spot on. I would get rid of it, as the only thing I really use it for is social logins, but there's the other issue of wanting access to the Facebook Developer stuff. I guess I could just make an account only for that?
"The good news is that the most important actions to prevent this from happening again today we have already taken years ago"
What an asshole; if this were true, this wouldn't have happened. I liked Zuck until today. Now I think he's a lying sack of crap, which probably means he'll make a great politician.