Hmm, true. But perhaps you could mitigate this with cookies as OP suggests. Simply don't return anything unless the GET request has a valid intranet cookie?
Or perhaps the client can tell the server what webpage it's fetching from and the security check can be done server-side?
It is just strange to me that this security check has to be done on the client side (in the browser) as opposed to on the web server actually responsible for distributing the content.
You definitely could. But I guess the browser should be secure by default, even if you didn't implement any check on the server.
Because people are lazy, forget to implement the security checks, or are simply unaware of them.
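For what it's worth, the server-side check the parent asks about is possible: browsers attach an `Origin` header to cross-origin requests. A minimal sketch (the allowed origin `http://intranet.example` is a made-up placeholder, and `isAllowedRequest` is a hypothetical helper you would call from a Node HTTP handler):

```javascript
// Sketch of a server-side origin check. Browsers send an Origin header on
// cross-origin requests, so the server can refuse hostile web pages.
// Caveat: Origin is absent on same-origin requests and can be forged by
// non-browser clients, so this only defends against other *websites*.
const ALLOWED_ORIGINS = new Set(['http://intranet.example']); // placeholder

function isAllowedRequest(req) {
  const origin = req.headers['origin'];
  // No Origin header: a same-origin request or a non-browser client.
  if (origin === undefined) return true;
  return ALLOWED_ORIGINS.has(origin);
}
```

In an `http.createServer` handler you would call `isAllowedRequest(req)` and respond with 403 when it returns false.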
For example, say you hacked up a super simple script to display the number of today's users of your startup on the big screen in your office. You would probably want something as simple as possible: the page is just 3-5 lines of code, maybe even a one-liner. No authorization or other security, as it's for the office intranet.
Without the same-origin policy (which CORS selectively relaxes), any website visited by someone from your office could fetch that on-screen number.
Even with CORS, DNS rebinding may be a concern here. I think HTTPS would prevent that, since the certificate wouldn't match the rebound hostname, but in this setup where you want "no other security", the attack would probably work.
The purpose of having engineers write software is that they can transparently demonstrate that it works reliably, and they can be held professionally accountable and learn if it fails.
You're suggesting that reliability should be improved by obfuscating the code through transpilation, or by merit of its being generated by a black box (an LLM).
I really suspect that simply transpiling code to Rust or Ada or some other "safe" language largely wouldn't improve its security. The whole point of these "safe" languages is that they encourage safer practices by design, and porting code to Rust means restructuring the program to conform to those practices (as opposed to just directly re-implementing it).
I haven't seen an LLM that is reliably capable of logic/reasoning or can even reliably answer technical questions, much less synthesize source code that isn't some trivial modification of something it has been trained on. And it's not clear that future models will necessarily be capable of that.
Steam isn't a monopoly. Everyone I know who uses Steam is familiar with GOG, Epic Games, Battle.net, or some other service. You can even distribute your game independently (as Minecraft and some of the most successful PC games of all time did), or distribute it as a web game (increasingly feasible as WebGL, WebGPU, WASM, etc. continue to advance).
Steam is successful because it has a good user experience compared to the alternatives, and it has a lot of major titles.
Sure, it's motivated by a bit of license turnover, as you suggest. But mostly it's a case of securing their OS against adversaries (including their users). You can lock down the system a lot more with a TPM on your side: now you can keep secrets away from users reliably.
I think we are seriously nearing the point of no return. Once manufacturers start implementing trusted computing (TC), that will really hamper reverse engineering efforts. Over time, the side channels will get ironed out.
Enforcing TPM requirements isn't about making users make changes, it's to scare OEMs into including TPMs by default so they don't get complaints from users. Microsoft wants a more controlled hardware environment like Apple does, because it's more profitable for a variety of reasons.
> Enforcing TPM requirements isn't about making users make changes, it's to scare OEMs into including TPMs by default so they don't get complaints from users.
Would any OEM dare to use workarounds to install Windows 11 on hardware that isn't officially supported? I feel like most OEMs would simply upgrade the hardware, no questions asked, because should any problem occur, Microsoft would just tell them: your problem, not ours.
> Once manufacturers start implementing TC, that will really hamper reverse engineering efforts.
Doesn't 90% of the push for this come from media companies wanting to implement DRM?
> Would any OEM dare to use workarounds to install Windows 11 on hardware that isn't officially supported? I feel like most OEMs would simply upgrade the hardware, no questions asked, because should any problem occur, Microsoft would just tell them: your problem, not ours.
Well, on a basic level, if a consumer buys your motherboard or laptop and it doesn't work out of the box (but your competitors' products do), then you are going to have a massive customer-satisfaction problem.
> Doesn't 90% of the push for this come from media companies wanting to implement DRM?
I don't think so. DRM is an old lens for understanding the problem, from the last generation. See https://www.youtube.com/watch?v=HUEvRyemKSg . The new methods use a softer approach. Consider something like iOS, where the platform can just make it very uncomfortable to do something like download a video and watch it: there's no BitTorrent app or p2p file sharing, no real filesystem, and no real standalone video player. So users rely on streaming services to do this for them, and you can charge money to middleman that service.
You don't need to strictly enforce copyright like DRM does; just use trusted computing so that the entire system discourages general-purpose operations (copying files, running programs, etc.) and encourages acting like a thin client to some server. This is a much better model for the vendor, because some small fraction of users still DO need general-purpose computing to make the content in the first place (video editors, musicians, writers, programmers), while the majority of the user base is discouraged from it for a variety of reasons. The more you can separate the creator of information from the user of information, the more you can charge the user to access the creator.
You can't replace the OS or any of the parts of the machine because of trusted computing, so you can't really use reverse engineering to simply break the system (and if you do, it may break the trust chain you now need to access now-networked services). Another example: on a lot of phones and laptops these days you can't add removable storage, so you are heavily encouraged to use cloud storage. And you are discouraged from using cloud services from any third party (usually at the API level, since services provided by the OS vendor can integrate better with the system). Consider how Apple pushes iCloud and Microsoft pushes OneDrive.
> Well, on a basic level, if a consumer buys your motherboard or laptop and it doesn't work out of the box (but your competitors' products do), then you are going to have a massive customer-satisfaction problem.
Ah, I see what you mean. I was over-focused on the fully integrated system with pre-installed Windows.
> I don't think so. DRM is an old lens for understanding the problem, from the last generation. See https://www.youtube.com/watch?v=HUEvRyemKSg . The new methods use a softer approach.
Once manufacturers start implementing TC, that will really hamper reverse engineering efforts.
Imagine the possibilities of capturing the non-Microsoft market by allowing TC to be turned off, or by not including it at all.
When it comes to Microsoft, there is an old business saying that applies: "Never turn away a paying customer", because you will not only lose this sale, you will lose all future sales too.
It's pretty much how every cryptocurrency works, with separation of public (receive) and private (send) keys.
The fact that invoices are temporary in LN is a weakness of the design, not an intentional choice. The Lightning Network represents a regression from the typical use-case of cryptocurrency, because both sender and receiver need to be online to make a payment.
> It's just statecraft and covert influence campaigns
I'm sure that has something to do with it, but such campaigns are catalyzed by China's military aggression in the South Pacific. Morality is an afterthought.
How difficult is it to test your site on the 3 major browser engines? I have done some web development before, and when I'm on Linux I just test my site with Chromium, Firefox, and Epiphany (GNOME Web, which uses WebKit).
I think the onus is on the developer to use standards that are well supported and to avoid niche ones like WebUSB, and to use semantic HTML and such, so that the website fails in a more useful way for the end user when a standard isn't supported.
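For niche APIs, the usual pattern is feature detection with a graceful fallback. A sketch (`supportsWebUSB` is a made-up helper; in a browser you would pass the global `navigator`):

```javascript
// Feature-detect a niche API instead of assuming it exists.
function supportsWebUSB(nav) {
  return Boolean(nav) && 'usb' in nav;
}

// In browser code:
//   if (supportsWebUSB(navigator)) {
//     // offer the WebUSB-based flow
//   } else {
//     // fall back to a flow that works everywhere
//   }
```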
> How difficult is it to test your site on the 3 major browser engines?
Given that one of those 3 requires sending thousands of dollars a year to Apple, I'd say "very".
Also, given that Google is a monopoly, I do place the onus on them to at minimum warn developers that they are deviating from well supported standards.