Yes, we can even call it “snow crash” ;)


Ah yes, a proper morning routine. A bit of coffee, exercise, becoming one with the void…


Melange.


Technically most sources of available energy on or near the planet are the output of fusion in some way, so this tracks.


Everything except geothermal and fission.

Unless you count where the fissionable elements came from, in which case you're only left with the portion of geothermal that's from gravity (residual heat from the earth compacting itself into a planet).


Tidal energy comes from Earth's rotation, so it's neither fusion nor fission.


what set off the spinning?


Earth's spin comes from the parent molecular cloud which formed the Solar System (including any impacts during the protoplanetary phase). And that, ultimately, comes from density fluctuations after the Big Bang and the way they led to the coalescence of galaxies and galaxy clusters.


To be nitpicky, our uranium and thorium were made via r-process (rapid neutron capture), which is not the kind of fusion occurring in the Sun at present.

[1] https://en.wikipedia.org/wiki/R-process


No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application. I would be surprised if the client browser can control this OPTIONS request beyond the URL. I am curious if anyone else has any input on this topic, though.

Maybe there is some side-channel timing that can be used to determine the existence of a device, but not so sure about actually crafting and delivering a malicious payload.


This tag:

    <img src="http://192.168.1.1/router?reboot=1">
triggers a local network GET request without any CORS involvement.


I remember back in the day you could embed <img src="http://someothersite.com/forum/ucp.php?mode=logout"> in your forum signature and screw with everyone's sessions across the web


Haha I remember that. The solution at the time for many forum admins was to simply state that anyone found to be doing that would be permabanned. Which was enough to make it stop completely, at least for the forums that I moderated. Different times indeed.


Or you could just make the logout route POST-only. Problem solved.
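As a sketch (hypothetical names, bare WSGI rather than any particular framework), the route-level fix is just a method check:

```python
# Hypothetical POST-only logout route, written as a bare WSGI app.
# Any framework's method routing accomplishes the same thing.
def logout_app(environ, start_response):
    # An <img>-triggered request is always a GET, so it's rejected here.
    if environ.get("REQUEST_METHOD") != "POST":
        start_response("405 Method Not Allowed", [("Allow", "POST")])
        return [b"logout requires POST"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"logged out"]
```

This defeats the <img> trick, though not an auto-submitting cross-site form; that still needs a CSRF token or SameSite cookies.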


<img src="C:\con\con"></img>


It's essentially the same, as many apps use HTTP server + html client instead of something native or with another IPC.


Exactly. You can also trigger forms for POST or DELETE, etc. This is called CSRF if the endpoint doesn't validate some token in the request. CORS only protects against unauthorized XHR requests. All decades-old OWASP basics, really.


That highly ranked comments on HN (an audience with way-above-average interest in software and security) get this wrong kinda explains why these things keep being an issue.


I'm betting HN is vastly more normal people and manager types than people want to admit.

None of us had to pass a security test to post here. There's no filter. That makes it pretty likely that HN's community is exactly as shitty as the rest of the internet's.

People need to stop treating this community like some club of enlightened elites. It's hilariously sad and self-congratulatory.


I don't know why you are getting downvoted; you do have a point. Some of the commenters appear to know what CORS headers are, but not their purpose or how they relate to CSRF, it seems, which is worrying. It's not meant as disparaging. My university taught a course on OWASP, thankfully; otherwise I'd probably also be oblivious.


If you're going cross-domain with XHR, I'd hope you're mostly sending json request bodies and not forms.

Though to be fair, a lot of web frameworks have methods to bind named inputs that allow either.


This misses the point a bit. CSRF usually applies to people who want only same-domain requests and don't realize that cross-domain is an option for the attacker.

In the modern web it's much less of an issue since SameSite cookies are the default.


> Exactly. You can also trigger forms for POST or DELETE, etc.

You can't do a DELETE from a form. You have to use AJAX, and a cross-origin DELETE needs a preflight.

To nitpick, CSRF is not the ability to use forms per se, but relying solely on the existence of a cookie to authorize actions with side effects.
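A minimal sketch of the token fix (hypothetical names), assuming a server-side session store: the form carries a secret the attacker's page can't read, so a cookie alone no longer authorizes the action.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Store a random token in the server-side session; the same value
    # is embedded in the form. A cross-site page can't read either copy.
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def check_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids a timing side channel.
    return bool(expected) and hmac.compare_digest(expected, submitted)
```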


The expectation is that this should not work: well-behaved network devices shouldn't accept a blind GET like this for destructive operations. There are plenty of other good reasons for that. There's no real alternative unless you're also going to block page redirects and links to these URLs, which trigger a similar GET. That would make it impossible to access any local network page without typing it manually.

While it clearly isn't a hard guarantee, in practice it does seem to generally work, as these have been known issues for decades without apparent massive exploits. That CORS restrictions block probing (no response provided) does help make this all significantly more difficult.


"No true Scotsman allows GETs with side effects" is not a strong argument

It's not just HTTP where this is a problem. There are enough http-ish protocols where protocol smuggling confusion is a risk. It's possible to send chimeric HTTP requests at devices which then interpret them as a protocol other than http.


Yes, which is why web browsers way back even in the Netscape Navigator era had a blacklist of ports that are disallowed.


The idea is, the malicious actor would use a 'simple request' that doesn't need a preflight (basically, a GET or POST request with form data or plain text), and manage to construct a payload that exploits the target device. But I have yet to see a realistic example of such a payload (the paper I read about the idea only vaguely pointed at the existence of polyglot payloads).
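The "simple request" rule can be sketched roughly like this (simplified; it ignores details such as the Range header and header-value validation):

```python
# Simplified model of the CORS "simple request" test: a request that
# fails it triggers a preflight OPTIONS request from the browser.
SIMPLE_METHODS = {"GET", "HEAD", "POST"}
SAFELISTED_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, headers: dict) -> bool:
    if method.upper() not in SIMPLE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFELISTED_HEADERS:
            return True  # any custom header forces a preflight
        if name.lower() == "content-type":
            media_type = value.split(";")[0].strip().lower()
            if media_type not in SIMPLE_CONTENT_TYPES:
                return True
    return False
```

So a cross-origin POST with Content-Type: text/plain sails through with no preflight, while the same POST with application/json would be stopped.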


There doesn't need to be any kind of "polyglot payload". Local network services and devices that accept only simple HTTP requests are extremely common. The request will go through and alter state, etc.; you just won't be able to read the response from the browser.


Exactly. People who are answering must not have been aware of “simple” requests not requiring preflight.


I can give an example of this; I found such a vulnerability a few years ago now in an application I use regularly.

The target application in this case was trying to validate incoming POST requests by checking that the incoming MIME type was "application/json". Normally, you can't make unauthorized XHR requests with this MIME type as CORS will send a preflight.

However, because of the way it was checking for this (checking if the Content-Type header contained the text "application/json"), it was relatively easy to construct a Content-Type header that bypasses CORS:

Content-Type: multipart/form-data; boundary=application/json

It's worth bearing in mind in this case that the payload doesn't actually have to be form data - the application was expecting JSON, after all! As long as the web server doesn't do its own data validation (which it didn't in this case), we can just pass JSON as normal.
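The flaw boils down to a substring match; a sketch with hypothetical function names:

```python
def is_json_request(content_type: str) -> bool:
    # The vulnerable check: substring match over the whole header value.
    return "application/json" in content_type

# A CORS-"simple" content type that smuggles the expected string in
# via the boundary parameter, so no preflight is triggered:
evil = "multipart/form-data; boundary=application/json"

def is_json_request_strict(content_type: str) -> bool:
    # The fix: compare only the media type, ignoring parameters.
    media_type = content_type.split(";")[0].strip().lower()
    return media_type == "application/json"
```

The strict version rejects the smuggled header while still accepting, e.g., application/json; charset=utf-8.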

This was particularly bad because the application allowed arbitrary code execution via this endpoint! It was fixed, but in my opinion, something like that should never have been exposed to the network in the first place.


This is a great example; thanks.


Oh, you can only send arbitrary text or form submissions. That’s SO MUCH.


Correct.


You don't even need to be exploiting the target device, you might just be leaking data over that connection.

https://news.ycombinator.com/item?id=44169115


Yeah, I think this is the reason this proposal is getting more traction again.


Here's a formal definition of such simple requests, which may be more expansive than one might expect: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...


Some devices don't bother to limit the size of the GET, which can enable a DoS attack at least, a buffer overflow at worst. But I think the most typical vector is a form-data POST, which isn't CSRF-protected because "it's on localhost so it's safe, right?"

I've been that sloppy with dev servers too. Usually not listening on port 80 but that's hardly Ft Knox.


It can send a json-rpc request to your bitcoin node and empty your wallet


Do you know of any such node that doesn't check the Content-Type of requests and also has no authentication?


Bitcoin Core if you disable authentication


There's no such thing, short of forking it yourself. You can set the username and password to admin:admin if you want, but Bitcoin Core's JSON-RPC server requires an Authorization header on every request [0], and you can't put an Authorization header on a cross-origin request without a preflight.

[0] https://github.com/bitcoin/bitcoin/blob/v29.0/src/httprpc.cp...


Good to know, I remember you used to be able to disable it via config but looks like I was wrong


I think that is because it is so old that it's basically old news and mostly mitigated.

https://www.kb.cert.org/vuls/id/476267 is an article from 2001 on it.


You’re forgetting { mode: 'no-cors' }, which makes the response opaque (no way to read the data) but completely bypasses the CORS preflight request and header checks.


This is missing important context. You are correct that preflight will be skipped, but there are further restrictions when operating in this mode. They don't guarantee your server is safe, but they do force operation under a "safer" subset of verbs and header fields.

The browser will restrict the headers and methods of requests that can be sent in no-cors mode (silently censoring them, in the case of headers).

Anything besides GET, HEAD, or POST will result in an error in the browser and not be sent.

All headers will be dropped besides the CORS-safelisted headers [0].

And Content-Type must be one of application/x-www-form-urlencoded, multipart/form-data, or text/plain. Attempting to use anything else will see the header replaced by text/plain.

[0] https://developer.mozilla.org/en-US/docs/Glossary/CORS-safel...


That’s just not that big of a restriction. Anecdotally, very few JSON APIs I’ve worked with have bothered to check the request Content-Type. (“Minimal” web frameworks without built-in security middleware have been very harmful in this respect.) People don’t know about this attack vector and don’t design their backends to prevent it.


I agree that it is not a robust safety net. But in the instance you're citing, that's a misconfigured server.

What framework allows you to set up a misconfigured parser out of the box?

I don't mean that as a challenge, but as a server framework maintainer I'm genuinely curious. In Express we would definitely allow people to opt into this, but you have to explicitly make the choice to go and configure body-parser.json to accept all content types via a no-op function for type checking.

Meaning, it's hard to get into this state!

Edit to add: there are myriad ways to misconfigure a webserver to make it insecure without realizing it. But IMO that is the point of using a server framework! To make it less likely devs will footgun, via sane defaults that prevent these scenarios unless someone really wants to make a different choice.


SvelteKit for sure, and any other JS framework that uses the built-in Request class (which doesn’t check the Content-Type when you call json()).

I don’t know the exact frameworks, but I consume a lot of random undocumented backend APIs (web scraper work) and 95% of the time they’re fine with JSON requests with Content-Type: text/plain.


I think you’re making those restrictions out to be bigger than they are.

Does no-cors allow a nefarious company to send a POST request to a local server, running in an app, containing whatever arbitrary data they’d like? Yes, it does. When you control the server side the inability to set custom headers etc doesn’t really matter.


My intent isn't to convince people this is a safe mode, but to share knowledge in the hope someone learns something new today.

I didn't mean it to come across that way. The spec does what the spec does; we should all be aware of it so we can make informed decisions.


Thankfully no-cors also restricts most headers, including setting Content-Type to anything but the built-in form types. So while CSRF doesn't even need a click because of no-cors, it's still not possible to do CSRF against a JSON-only API. Just be sure the server is actually set up to restrict the content type -- most frameworks will "helpfully" accept and convert form data by default.


> No, a preflight (OPTIONS) request is sent by the browser before the request initiated by the application.

Note: preflight is not required for any type of request that browser JS was capable of making prior to CORS being introduced (except for local network).

So a simple GET or POST does not require OPTIONS, but if you set a header it might require OPTIONS (unless it's a header you could set in the pre-CORS world).


It depends. GET requests are assumed not to have side effects, so often don't have a preflight request (although there are cases where it does). But of course, not all sites follow those semantics, and it wouldn't surprise me if printer or router firmware used GETs to do something dangerous.

Also, form submission famously doesn't require CORS.


There is a limited, but potentially effective, attack surface via URL parameters.


I can confirm that local websites that don't implement CORS via the OPTIONS request cannot be browsed with mainstream browsers. Does nothing to prevent non-browser applications running on the local network from accessing your website.

As far as I can tell, the only thing this proposal does that CORS does not already do is provide some level of enterprise configuration control to guard against the scenario where your users are using compromised internet sites that can ping around your internal network for agents running on compromised desktops. Maybe? I don't get it.

If somebody would fix the "no HTTPS for local connections" issue, then IoT websites could use authenticated logins to fix both problems. Non-HTTPS websites also have no access to browser crypto APIs, so roll-your-own auth (the horror) isn't an option either. Frustrating!


I don't believe this is true? As others have pointed out, preflight OPTIONS requests only happen for non-simple requests. CORS response headers are still required to read a cross-domain response, but that still leaves a huge window for a malicious site to send side-effectful requests to local network devices running badly implemented web servers.


[edit]: I was wrong. Just tested that a moment ago. It turns out NOT to be true. My web server during normal operation is currently NOT getting OPTIONS requests at all.

Wondering whether I triggered CORS requests when I was struggling with IPv6 problems. Or maybe it triggers when I redirect index.html requests from IPv6 to IPv4 addresses. Or maybe I got caught by the earlier rollout of version one of this proposal? There was definitely a time while I was developing pipedal when none of my images displayed because my web server wasn't doing CORS. But whatever my excuse might be, I was wrong. :-/


If your navigation software says "Continue on I-50 for 350 miles", you will likely not need to touch the steering wheel for that stretch. If it says "Take exit 123 in 1/2 mile", you grab the wheel, take the exit, and let the comma take over after that decision. It feels more like a competent copilot than a full driver replacement.


What's the chance that a driver will sit there for 350 miles and not pull out their phone, fall asleep, or otherwise drift off while doing literally nothing for hours yet being required to stay fully alert?


Comma uses its driver-facing camera to detect if the driver is paying attention vs. looking at their phone or falling asleep. It chirps at you if you're distracted and will eventually disengage if you're not paying attention.


It removes almost all driving fatigue for me (RAV4) and I do not intend to purchase a car unless it is supported by comma. I needed a new car and specifically bought this RAV4 because of comma compatibility.

Driving is essentially 3 inputs (gas, brake, steer). I use the comma for steering to keep the car centered in the lane, which it does extremely well. My car has built-in radar cruise control which keeps the speed (gas) and distance from the car ahead (brake), so highway/city driving even in traffic is a breeze.

I have not tried the experimental mode that supposedly has some level of end-to-end capability where the comma controls the gas and brake, and have found the current balance absolutely perfect for my needs.


Something that worries me a little is how comma would handle anomalies. Tesla has such scale that they're likely to encounter more anomalies, and their software will learn from them. I'm particularly concerned about the sudden kind of anomaly (e.g. an animal jumping in front of the vehicle, or a getaway car coming from an illegal, ergo uncommon, direction); one that comma may be unable to handle, but that a human would have very little time to take over from.


Their compatibility page calls out which car models will lose their built-in advanced safety features (such as automatic emergency braking) when using comma, and whether comma replaces the built-in adaptive cruise control. Their FAQ includes:

> Do I retain my car factory safety features with openpilot installed?

> When openpilot is enabled in settings, Lane Keep Assist (LKAS), and Automated Lane Centering (ALC) are replaced by openpilot lateral control and only function when openpilot is engaged. Lane Departure Warning (LDW) works whether engaged or disengaged.

> On certain cars, Adaptive Cruise Control (ACC) is replaced by openpilot longitudinal control.

> openpilot preserves any other vehicle safety features, including, but are not limited to: AEB, auto high-beam, blind spot warning, and side collision warning.

The FAQs about comma's automated lane centering and adaptive cruise control also say:

> openpilot is designed to be limited in the amount of steering torque it can produce.

So comma isn't even trying to be the subsystem responsible for handling sudden surprises. It's only trying to upgrade a suitably-equipped car from SAE Level 0 or Level 1 up to Level 2.


> It should be the responsibility of public bodies that levy fees to make sure that people are made aware of the nature of those fees. The ISPs aren't responsible for this stuff, and shouldn't be asked to do more work to further conceal decisions our elected officials are making for us.

What are your thoughts on businesses incorporating and listing the amount of sales tax paid on receipts of transactions at your local grocery/convenience store?

It appears to me that the least surprising place for these things to be listed is where it is most relevant, which is alongside the primary transaction presented as an invoice or receipt. How would you improve on this UX assuming that the fee is definitely going to be incorporated into the cost?

> I generally think most middle-class people aren't taxed enough (yell at me somewhere else about this)

I'll refrain from yelling. Can you expound on this since you thought to mention it?


Sounds great until natural selection kicks in, and because DNA replication is largely a lossy process, suddenly the thing you programmed the organism to do mutates to do something else a whole lot more problematic.

Imagine a software heisenbug, but instead it's a life form that you can't kill -9.

The idea of tailor-made medicines in a vat is awesome, but creating a bacterium to "specially target" certain cells seems like a disaster waiting to happen.


Those are certainly real problems, and I'm not a cell biologist, but I'm not convinced these problems are insurmountable.

For instance, it might be possible to use ECC to get around transcription errors. It could also perhaps be ensured that any rogue "clinical biocomputer" could be easily treated with antibiotics or specifically engineered bacteriophage virus.

Like I said, the technology is very far off from having real-world applications like this. At the moment it feels like we're in the analogue of the 40s and 50s for conventional computing. The field is still just inventing the very basic building blocks. It's going to be very limited in use, wildly dangerous (look up mercury delay lines) and unreliable for decades to come.


Polymerase without error correction has an error rate of about 10^-6 per base. With error correction it's 10^-9 to 10^-12.
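To put those rates in scale, assuming a genome around E. coli's size (~4.6 million base pairs, a figure used here purely for illustration):

```python
genome_bp = 4.6e6  # roughly an E. coli genome; illustrative assumption

for rate in (1e-6, 1e-9, 1e-12):
    expected_errors = genome_bp * rate
    print(f"error rate {rate:g}: ~{expected_errors:g} expected errors per replication")
```

So without error correction you'd expect a handful of errors every replication; with it, roughly one error per hundreds to hundreds of thousands of replications.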


Considering the current treatment in the worst cases (where more targeted treatments don't exist) is to blast the patient with radiation and poison (chemotherapy) and hope it doesn't kill them, I'll take those odds.


Except a rogue bacteria won't just kill you, it could escape beyond you and kill millions.

Chemo/radiation only kills the patients it was given to ( not entirely true - if the treatment caused mutations in the germline and the patient subsequently had children, the effects of the treatment might be passed on - but still very limited ).


Bacterial infections are generally very treatable though. Even when the bacteria aren't engineered. And especially when they are, because why would you leave any antibiotic resistance in an engineered bacterium?

Bacteria are the scariest when they've had the time to develop resistance to multiple different antibiotics.

Additionally, a bacterium that's engineered to be almost completely harmless evolving into a deadly strain in vivo is fairly unlikely in itself, especially if transcriptional errors can be reduced by several orders of magnitude like GGP suggested.

Adding to that the option of hospitalisation or even home isolation to reduce the risk of transmission, the risk of this resulting in some huge lethal epidemic must be pretty minuscule.


It's hubris to think we are at a stage where human scientists are so disciplined and knowledgeable that we can start patching existing life-forms safely enough to target certain types of cells reliably over time and not others.

Software is essentially a cleanroom in the sense that the environment tends to be deterministic and man-made, and that is still riddled with unexpected accidents. Fortunately we can turn it off, fix the bug, and redeploy and the people involved in that tend to survive.

> Additionally, a bacterium that's engineered to be almost completely harmless evolving into a deadly strain in vivo is fairly unlikely in itself, especially if transcriptional errors can be reduced by several orders of magnitude like GGP suggested.

The proposition was to engineer a bacteria that targets and infects a particular type of human cell to kill it. Creating medicines in a vat (like insulin) is different from releasing infectious agents in the wild. I was under the impression that this was obvious, but apparently not.


>It's hubris to think we are at a stage where human scientists are so disciplined and knowledgeable that we can start patching existing life-forms safely enough to target certain types of cells reliably over time and not others.

I never said we're at this stage now or even close to it; in fact, I explicitly said the opposite:

>>>>Like I said, the technology is very far off from having real-world applications like this. At the moment it feels like we're in the analogue of the 40s and 50s for conventional computing. The field is still just inventing the very basic building blocks. It's going to be very limited in use, wildly dangerous (look up mercury delay lines) and unreliable for decades to come

As for comparing creating medicines in a vat to using bacteria as an active treatment, you're the only one making that comparison. The paragraph you responded to wasn't about in vitro drug synthesis at all, so I'm not sure what your point is here. Yes, it's obviously different. I never said otherwise. It's perfectly possible to target bacteria to specific tissues; wild bacteria already do this.

My point was that a bacterium engineered to target a malignant tumour, to be very treatable with antibiotics or bacteriophage, and to have a strongly reduced rate of mutation is extremely unlikely to evolve into a pandemic, and is likely to be much safer than chemotherapy and radiation.


> My point was that a bacterium engineered to target a malignant tumour, to be very treatable with antibiotics or bacteriophage, and to have a strongly reduced rate of mutation is extremely unlikely to evolve into a pandemic, and is likely to be much safer than chemotherapy and radiation.

Did you know bacteria have horizontal gene transfer? I.e., antibiotic resistance isn't just evolved and passed to children ( vertical ), it can be passed to peers horizontally.

It also happens outside bacteria - but bacteria have active mechanisms to enable this - that's how antibiotic resistance spreads - not just from parent to child, but peer to peer like a meme :-).

Safety is a complex topic, and you'd need to consider it on a case-by-case basis - PhD students engineer bacteria every day ( something that had a self-imposed ban in the 1970s, I think ) - however, that's within the context of standard platforms, and each and every one should have a risk assessment.

Don't get me wrong, I think it could be done, but there is a genie and a bottle here and it's best to think twice.

I'd like to see both a kill switch ( beyond antibiotics ) and some sort of external growth dependency - i.e. they need something you provide to survive as well.


> A photon leaves a star, then strikes your eye a billion years later, and those events could be entangled, as far as the photon is concerned, they happened at the same instant in time.

If a photon does not experience time then I find it challenging to imagine that it could have a perspective at all.


It gets worse. Not only do photons not experience time, they don't experience space either. From a photon's point of view, it's always every "where" it needs to be. Which means there's no distinction between places. Which means space doesn't exist, from a photon's POV.


Reminds me of this quote: "A wizard is never late, nor is he early, he arrives precisely when he means to"


I once read somewhere that if you swap time in spacetime with gravity, then the universe turns crystalline, and lots of stuff makes more sense.

In that way, perhaps photons experience other stuff.

