
The answer to why people don't like this is simple: if a government like China says "Apple, you're going to add these image hashes to the database and report any device that has them in the next update, or you're going to leave China," what do you think Apple is going to do?

I have read their papers and I understand the system and the safeguards they put in place, but none of them are good enough to justify scanning on my device. Nothing is. On-device scanning for "illicit" content is a box that, once opened, cannot be closed.
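For anyone who hasn't read the papers, the mechanism at issue is roughly this. Below is a minimal Python sketch; it is not Apple's actual NeuralHash/PSI pipeline, and the names, the SHA-256 stand-in for the perceptual hash, and the threshold constant are mine:

    import hashlib
    from pathlib import Path

    # Hypothetical stand-in for the opaque hash database shipped by the
    # vendor. The owner cannot audit what the entries correspond to, which
    # is exactly the "a government could add entries" objection.
    FLAGGED_HASHES: set[str] = set()

    # Apple's announced design required roughly 30 matches before anything
    # left the device; a single threshold stands in for that here.
    REPORT_THRESHOLD = 30

    def image_hash(path: Path) -> str:
        # Real systems use a perceptual hash that survives resizing and
        # recompression; a cryptographic hash keeps this sketch short.
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def scan_library(photo_dir: Path) -> bool:
        # True means the device gets flagged for human review upstream.
        matches = sum(
            1
            for p in photo_dir.rglob("*.jpg")
            if image_hash(p) in FLAGGED_HASHES
        )
        return matches >= REPORT_THRESHOLD

The point is that nothing in that loop knows or cares what the hashes mean; whoever controls FLAGGED_HASHES controls what gets reported.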



They have the whole system at their disposal for that; they don't have to do this. As an example (I know I could be out of date with this one), do you know why there aren't any iMessage bridges that don't require a Mac?

IIRC it's deeply entrenched in the system and no one has reversed their way deep enough to be able to replicate it. Now this might sound silly, but it's just an example, a contrast maybe with the hard work that the people behind Asahi and the huge jailbreak community are putting in. The idea I'm trying to convey is that the playing field is HUGE and they just don't need this.

The one thing I would be 100% concerned about is the investigation process for matches, because that's mainly where human interaction and decision making come into play, and we humans SUCK. We've put people behind bars for years for no reason, and with all this AI crap there have been plenty of news articles about exactly that kind of thing. That's something we should definitely be worried about, but I guess it's less about the tech and more about the people in charge, right?


I don't understand the idea you are trying to convey. iMessage is not impenetrably complex; it is just an ugly API that uses an Apple-provided certificate and a valid serial number as part of its authentication factors.

I also don't agree that the human-in-the-loop part of the process is a/the problem. Are you suggesting that it should just send the findings straight to the FBI... where a human would review it? Or maybe skip all of the messy middle part and, if the model detects enough CSAM, just send an APB to the local police to pick you up and take you straight to prison with no trial?


I was using iMessage as an example of something tedious, not overly complex, but not exactly low-hanging fruit either, that has yet to be completely reversed. The S/N and certificate parts don't even matter: if people had reversed their way through it, there would at least be an option to extract the required parameters out of your hardware and plug them into a standalone server (in fact, IIRC there was a valid S/N generator some time ago that was used to deploy OS X in KVM?).
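To make that concrete, the identity material a standalone bridge would need looks something like this. This is a hypothetical Python sketch; the field names and payload shape are illustrative, since the real protocol is undocumented:

    from dataclasses import dataclass

    @dataclass
    class DeviceIdentity:
        serial_number: str      # burned-in serial, validated server-side
        device_cert_pem: bytes  # Apple-issued certificate for the device
        private_key_pem: bytes  # private key paired with the certificate

    def build_registration_payload(identity: DeviceIdentity) -> dict:
        # Package the parameters extracted from real hardware the way a
        # bridge running on a plain server would have to present them.
        return {
            "serial": identity.serial_number,
            "certificate": identity.device_cert_pem.decode("ascii"),
            # Signing with private_key_pem omitted; every such
            # undocumented step is another place to get stuck.
        }

Each of those steps has to be reversed out of the real client before a bridge can fake it, which is why extracting the parameters from genuine hardware was the practical route.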

So, the idea is that even though it's not an impenetrable fortress, there are still plenty of dark places to introduce surreptitious changes.

As for the human-in-the-loop part, I don't know why you got so snarky. What I was talking about is that this is the layer that should be scrutinized the most, all of those components, because even without the technology those are the people who can put someone in jail with no verifiable evidence.


So your argument is "iOS is complex, they could have just hidden it in there, but they didn't; they told us about it." I'm still not sure why this matters. From the standpoint of interacting with a government, Apple could previously say "we cannot do that and maintain the security of the OS." Now, post-announcement, they have to say "we will not do that."

That is a huge difference.

I got snarky because the human-in-the-loop for decision making is the least concerning part of the process and the alternatives are as ridiculous as I laid out. There will always be a human in the loop in this process - I'd rather it start with Apple's human, then law enforcement, then a prosecutor, then a judge, etc.



