> It wouldn't prevent malicious code going out but at least would require a chain of cooperation between employees, which would be harder to achieve.
No, it just requires a chain of cooperation between authorized accounts. That's a very important distinction, especially here, where the email in question alleged the following:
> This included making direct code changes to the Tesla Manufacturing Operating System under false usernames
As a data point, in most of the ${BIGCORP}s I've worked at, there are also infrastructure roles most people don't see or think about, which have access across much wider environments.
* Storage engineers: Generally have access to most storage (all of dev/test/prod) in their group. Sometimes their access is siloed, sometimes not.
* Backup engineers: Generally have read/write access to _everything_, and all historical versions of it, since backup systems need both read and write. Fairly often there are ways for this access to be "unlogged" too, so the actions aren't captured in any system auditing logs (otherwise it can screw things up). I've not (yet) seen backup engineers' access ever be siloed, but some places might be doing it and I've just never seen it. :) (See the sketch after this list.)
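For what it's worth, "is this role siloed or not" is the kind of thing an access review can check mechanically. A toy sketch only; the role names and policy shape here are invented, not taken from any real IAM product:

```python
# Sketch only: a toy access-review check that flags roles whose grants span
# every environment. Role names and the policy structure are hypothetical.

ENVIRONMENTS = {"dev", "test", "prod"}

# Hypothetical extract of role -> set of environments the role can read/write.
role_grants = {
    "storage-eng": {"dev", "test", "prod"},
    "backup-eng":  {"dev", "test", "prod"},
    "app-dev":     {"dev", "test"},
}

def unsiloed_roles(grants: dict[str, set[str]]) -> list[str]:
    """Return roles whose access covers all environments (i.e. not siloed)."""
    return [role for role, envs in grants.items() if ENVIRONMENTS <= envs]

if __name__ == "__main__":
    for role in unsiloed_roles(role_grants):
        print(f"review: {role} has read/write across all environments")
```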
They pretty much always can. Sure, you can imagine some perfect system that would mitigate it, but no one - definitely not your bank - is doing that.
It very much sounds like that's the case here - production code was edited, and subsequent auditing has found that what should have been deployed and what is actually deployed differ.
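That kind of audit (does what's actually running match what the build pipeline says should be running?) boils down to comparing digests of the deployed files against a build manifest. A minimal sketch, with hypothetical paths and a made-up manifest format:

```python
# Sketch of an after-the-fact audit: compare the digest of each deployed file
# against the digest recorded at build time. The paths and the JSON manifest
# format ({"relative/path": "sha256-hex", ...}) are assumptions for illustration.

import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(deploy_dir: Path, manifest_path: Path) -> list[str]:
    """Return findings for deployed files that differ from the build manifest."""
    expected = json.loads(manifest_path.read_text())
    findings = []
    for rel_path, want in expected.items():
        deployed = deploy_dir / rel_path
        if not deployed.exists():
            findings.append(f"missing: {rel_path}")
        elif sha256_of(deployed) != want:
            findings.append(f"modified after build: {rel_path}")
    return findings

if __name__ == "__main__":
    for finding in audit(Path("/srv/app"), Path("build-manifest.json")):
        print(finding)
```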
Lots of people do work really hard on mitigating this problem. It's a tough and constant battle, but that doesn't mean you have to throw your hands up and not bother working on it. I'm sure you're right that my bank isn't working to mitigate insider threat to the extent I'd like, but Tesla's code is more safety critical than my bank and I think it would be worth their while to work very hard on keeping this from happening with their computer-on-wheels.
And what stops a single member of the staging or deployment team from patching the build scripts or binaries, or just installing their own software on a server?
It would not require any chain of cooperation. I highly doubt the staging team would catch a coefficient used for a particular industrial robot being off by 0.1% in some PR.
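Exactly because a human skimming a diff won't notice that, catching it would take an automated comparison against signed-off baseline values rather than read-only review. A rough sketch of that idea; the parameter names, baseline values and tolerance below are made up:

```python
# Sketch only: flag calibration values that drift from a signed-off baseline
# by more than a tolerance, since a 0.1% nudge is invisible in casual review.
# Names, values and the tolerance are hypothetical.

BASELINE = {
    "robot_17.arm_gain": 1.2500,
    "robot_17.torque_limit": 310.0,
}

RELATIVE_TOLERANCE = 0.0005  # flag anything drifting more than 0.05%

def drifted_parameters(proposed: dict[str, float]) -> list[str]:
    """Compare proposed calibration values against the baseline."""
    findings = []
    for name, baseline in BASELINE.items():
        new = proposed.get(name, baseline)
        if baseline and abs(new - baseline) / abs(baseline) > RELATIVE_TOLERANCE:
            findings.append(f"{name}: {baseline} -> {new}")
    return findings

if __name__ == "__main__":
    # A 0.1% change that a reviewer would almost certainly miss.
    print(drifted_parameters({"robot_17.arm_gain": 1.25125}))
```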
* Programmers: read-write to repository
* Staging team: read-only on repository, read-write to test servers and staging zone
* Deployment team: read-only on repository and staging zone, read-write to production
It wouldn't prevent malicious code going out but at least would require a chain of cooperation between employees, which would be harder to achieve.
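For concreteness, here is a toy sketch of that split, with invented role and resource names. The property being enforced is simply that no single role can write to both the repository and production:

```python
# Sketch only: a toy role -> resource permission table for the split proposed
# above. Role and resource names are invented for illustration.

PERMISSIONS = {
    "programmer": {"repo": "rw", "staging": None, "production": None},
    "staging":    {"repo": "ro", "staging": "rw", "production": None},
    "deployment": {"repo": "ro", "staging": "ro", "production": "rw"},
}

def can_write(role: str, resource: str) -> bool:
    return PERMISSIONS.get(role, {}).get(resource) == "rw"

def single_role_can_tamper_end_to_end() -> list[str]:
    """Roles that could both alter source and push it to production alone."""
    return [r for r in PERMISSIONS
            if can_write(r, "repo") and can_write(r, "production")]

if __name__ == "__main__":
    assert single_role_can_tamper_end_to_end() == []
    print("no single role has write access to both repo and production")
```

Of course, as pointed out elsewhere in the thread, this only constrains the repository-to-production path; it does nothing about someone downstream patching build scripts or binaries, or installing their own software directly on a server.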