mbauman's comments | Hacker News

I think the biggest thing is that programmers can safely assume that a floating point `x-y` is nonzero if `x != y`. You can actually go farther and know that it's an exact computation (with no error) if the two are close [1]. But both results only hold if subnormals don't flush to or behave like zero.

It's not too hard to imagine how an algorithm might depend upon that — there could be a branch for the case where `x == y` and then a branch that relies upon dividing by `(x-y)` and assumes that it's not a division by zero.

1. https://en.wikipedia.org/wiki/Sterbenz_lemma
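Both properties are easy to check in Python, whose floats are IEEE 754 binary64 with gradual underflow (i.e. no FTZ/DAZ). A quick sketch:

```python
import math
from fractions import Fraction

# Smallest positive subnormal double (about 4.9e-324)
tiny = math.ulp(0.0)

# 1) x != y guarantees x - y != 0.0, even at the very bottom of the range,
#    where FTZ/DAZ would instead flush the difference to zero.
x, y = 3 * tiny, 2 * tiny
assert x != y and (x - y) != 0.0

# 2) Sterbenz: when y/2 <= x <= 2*y, the subtraction incurs no rounding at all.
x, y = 1.5, 1.25
assert Fraction(x) - Fraction(y) == Fraction(x - y)  # exact
```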


I don't think an algorithm that relies on this is particularly well designed. Anything that trusts a float is non-zero is probably some kind of division where you must avoid division by zero. In that case you should explicitly check for the zero condition instead of relying on the semantics of real numbers, since floats are not real numbers.


That’s precisely the misapprehension that makes folks think that -ffast-math is fine. Each and every floating point number is an exact quantity. They are real mathematical objects that identify exact values. They just have limited precision and might round (think snap) to the nearest representable number after each computation. You might not use them that way, but it doesn’t mean others shouldn’t.


For example, floating point without can represent two numbers whose difference is smaller than the number can represent and rounds to zero. Subnormal floating point doesn't fix this, it just moves the impacts of underflow to smaller magnitudes that are less likely to be seen.

Look, -ffast-math isn't always fine, but specifically I'm looking at DAZ/FTZ being enabled, possibly non-deterministically process-wide (which is bad, don't get me wrong!). But part of why this doesn't faze people is that, in practice, programs don't care and the observable effects don't lead to bugs.


> floating point without can represent two numbers whose difference is smaller than the number can represent and rounds to zero

Did you intend to say "without FTZ/DAZ enabled"? If so, that's completely and provenly false.

https://en.wikipedia.org/wiki/Sterbenz_lemma

Perhaps a better example than `z/(x-y)` is `log(x-y)`. Unlike division by 0.0, `log(0.0)` often throws an immediate error whereas `log(5e-324)` is a finite — and meaningful! — result.
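A quick illustration in Python, where `math.log` raises a domain error on 0.0 but the smallest subnormal still yields a finite logarithm:

```python
import math

# The smallest positive subnormal double still has a finite, meaningful log
print(math.log(5e-324))   # about -744.44, i.e. log(2**-1074)

# Flushed to zero, the same computation fails outright
try:
    math.log(0.0)
except ValueError:
    print("log(0.0) raised a domain error")
```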


Wouldn't the compiler optimize (x-y)!=0 to x!=y? Seems like a good optimisation to me, and probably one in accordance with the C standard.

It would also make it impossible to have a decent non-zero check for

if ((x-y)!=0) progress((x-t)/(x-y));


fp contract isn't a uniform "decrease in error" either, though. As a simple example, it introduces error in the straightforward:

     a*b - a*b
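Contracted to an FMA, that expression becomes `fma(a, b, -(a*b))`, which returns the (generally nonzero) rounding error of the product instead of exactly 0.0. You can see that residual with Python's fractions module; the operand values here are arbitrary examples:

```python
from fractions import Fraction

a, b = 0.1, 0.3                       # arbitrary example operands
exact = Fraction(a) * Fraction(b)     # the true real-number product
rounded = Fraction(a * b)             # the double-rounded product fl(a*b)
residual = rounded - exact            # what fma(a, b, -(a*b)) computes

assert residual != 0                  # contracted "a*b - a*b" is NOT zero
print(float(residual))
```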


In the case of flushing subnormals to zero, it's easy to end up with divisions by zero where you wouldn't otherwise. `z/0.0` is `Inf` (or `NaN` when `z` is also zero), but `z/subnormal` can be a perfectly finite value.
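A sketch of the hazard in Python, which keeps gradual underflow, so the flushed case is simulated with a literal 0.0:

```python
import math

tiny = math.ulp(0.0)      # smallest subnormal double
x, y = 2 * tiny, tiny     # distinct, but their difference is subnormal
z = 1e-300

# With gradual underflow the division is well-defined and finite:
print(z / (x - y))

# If (x - y) were flushed to zero, the same expression divides by zero
# (Inf in C; Python raises instead):
try:
    z / 0.0
except ZeroDivisionError:
    print("flushed difference -> division by zero")
```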

In other cases, `-ffast-math` just introduces arbitrary and strange behaviors. Sometimes you end up with higher precision than you expected. Other times you end up with less. Other times it'll helpfully just re-arrange things such that it's a zero. For example, the classical Kahan summation does the following:

    t = sum + y
    c = (t - sum) - y
https://en.wikipedia.org/wiki/Kahan_summation_algorithm

A -ffast-math compiler will see that — algebraically — you can just substitute `sum + y` into the equation for `c` and get 0. It's `sum + y - sum - y`. And that's true for real maths. But it's not true for floating point numbers.
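Here's that compensation in plain Python, where no compiler is allowed to "simplify" `c` away (a small sketch, not tuned for production use):

```python
def kahan_sum(xs):
    total = 0.0
    c = 0.0                   # running compensation for lost low-order bits
    for x in xs:
        y = x - c             # fold in the correction from the last step
        t = total + y
        c = (t - total) - y   # algebraically 0; numerically the lost bits
        total = t
    return total

vals = [0.1] * 10
print(sum(vals))        # 0.9999999999999999 with naive accumulation
print(kahan_sum(vals))  # 1.0 -- the compensation recovers the rounding error
```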

It explicitly destroys any attempt at _working with_ floating point numbers.


Ironically, so many of these users are those _with corporate support_. And demanding "IT" support from the open source project. Without consideration.


Oh, sure. But for me, the lack of self-awareness in "my command line inputs include extremely sensitive identifiers all the time, and this is fine, if it weren't for your optional AI plugins" is especially grating.

So, like, if I ever happen to execute 'history' in any session of yours that I manage to get access to, I hit the jackpot?


We know there's long-cons in action here, though. This PR needn't be the exploit. It needn't be anywhere _temporally_ close to the exploit. It could just be laying groundwork for later pull requests by potentially different accounts.


Exactly. If we assume the backdoor via liblzma as a template, this could be a ploy to hook/detour both fprintf and strerror in a similar way. Get it to diffuse into systems that rely on libarchive in their package managers.

When the trap is in place, deploy a crafted package file that appears invalid at the surface level and triggers this trap. In that moment, fetch the payload from the (already opened) archive file descriptor and execute it, but also patch the internal state of libarchive so that it processes the rest of the archive file as if nothing happened, with the desired outcome also appearing on the system.


Is it even possible to have a video transcript whose copyright has expired in the USA? I suppose maybe https://en.wikipedia.org/wiki/The_Jazz_Singer might be one such work... but most talkies are post-1929. I suppose transcripts of NASA videos would be one category — those are explicitly public domain by law. But it's generally very difficult to create a work that does not have a copyright.

You can say that you have fair use to the work, or a license to use the work, or that the work is itself a "collection of facts" or "recipe" or "algorithm" without a creative component and thus copyright does not apply.


Be prepared for day-of eclipse traffic, too!


Oh god I didn't even think of that. But surely that would be in the opposite direction from me?


There are plenty of stories you can look up from the 2017 event where people did not consider the traffic. Reviewing some of those might give you a better idea of what to expect for places in the path.


Don't expect to be able to leave the eclipse area for hours after.

Traffic will be in all directions. If you can, just chill for several hours and then leave.


What _was_ the routing and speed? That was my very first question and the blog post doesn't really answer it. How much of the flight was supersonic? They talk about avoiding India for the BAH-SIN leg, and trouble over Saudi Arabia, but there's a lotta populated land between BAH-LHR. The flight listing says this:

    SIN-BAH: 3698 miles, 4 hrs  6 mins, Mach 2.02 cruising speed
    BAH-LHR: 3120 miles, 4 hrs 21 mins, Mach 2.02 cruising speed
Those numbers just don't make sense. That's 237 miles short of the great circle distance from SIN-BAH of 3935 miles. And then they talked about adding another 200 miles to go around India. So assuming the flight time itself is accurate, that leg should be:

    SIN-BAH: 4135 miles, 4 hrs 6 mins, Mach 1.3 average speed
But how much of that time _could_ be spent at the listed cruising speed? Mach 2 will travel 4135 miles in just over 2.5 hours! So we're looking at less than half the flight spent supersonic — and this is the leg that's mostly over the Indian Ocean.

The BAH-LHR leg is even trickier.

Anyhow, it's little wonder that a direct non-stop is near the Concorde's time with these restrictions and the refueling stop.


> Those numbers just don't make sense. That's 130 miles short of the great circle distance from SIN-BAH of 3935 miles.

In an aviation context, those are most likely nautical miles (one NM is 1 minute of a degree in the north-south direction, which is why 10,000 km, initially defined as the distance from the equator to the pole, is basically 5,400 NM: 90 degrees from the equator to the pole, times 60 minutes/degree) rather than statute miles, which are some certain number of yards and feet in that quaint customary system still used by some people in the USA, Liberia, and Myanmar.

Indeed, according to the interweb, the distance between Singapore (Singapore Changi Airport) and Manama (Bahrain International Airport) is 3935 miles / 6333 kilometers / 3420 nautical miles.


Aha, of course!

OK, if those are nautical miles this does make much more sense — 3698 nm is 278 nautical miles (320 statute miles) longer than the great circle route. It's a bit longer than the naive calculation I made — increasing the average speed (and time at cruise) a little bit, too.

Then the BAH-LHR leg is more interesting too — its great circle route is 2754 nm, so they're going 366 nm (420 miles) out of their way. Their listed distance is approximately what it'd be if they routed through the Mediterranean over Malta.

https://www.greatcirclemap.com/?routes=BAH-LHR%2C%20BAH-MLA-...
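Checking the arithmetic, with the listed and great-circle distances from the comments above and the standard nm-to-statute-mile conversion:

```python
NM_TO_MI = 1.150779   # statute miles per nautical mile (1 nm = 1852 m)

legs = {
    # name: (listed distance nm, great-circle distance nm)
    "SIN-BAH": (3698, 3420),
    "BAH-LHR": (3120, 2754),
}

for name, (listed, gc) in legs.items():
    extra = listed - gc
    print(f"{name}: {extra} nm ({extra * NM_TO_MI:.0f} mi) beyond the great circle")
```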


Yeah, that's not at all what the actual journal article claimed. It starts super simple:

We know Titan has two things that seem so promising for life as we know it! Organic compounds on the outside, liquid water on the inside.

And then it asks:

Might those two promising things meet? Other work hasn't found the tectonics that would do it; could cratering do it?

And that very particular answer is pretty definitively no.


It's always very fun when you realize something you learned very early by rote (like an alphabet) actually has a clear meaning you should've long been able to understand, had you thought about it.


Etymology as a whole is a hoot for this.

