
$20/hr fulltime is ~40k/yr or ~$3300/mo. As just one benchmark: can you find housing in your area for $1100/mo (a third of that gross income, the usual affordability rule of thumb)?


GOOG411 was actually very helpful in the dumbphone/limited-cell-data era! I'm not sure why I'd use this now.

It also brings back memories of trying random (and known) 800 numbers from payphones.


I spent so much time as a kid in front of a payphone dialing 1-800-XXX-9999 numbers. That and wardialing (NPA) XXX-9999 numbers in my area code.


Words and names are powerful — this is something that I've found over and over again as a programmer. If you can find the right name for something, then the abstractions immediately become clear.

This isn't unique to programming.


My only regret is not making it even more explicit!


Sure should be. But Krebs’ story on it is proof it can happen.

Story: https://krebsonsecurity.com/2024/01/canadian-man-stuck-in-tr...

HN discussion: https://news.ycombinator.com/item?id=39056733


That’s a shocking case.

Several layers of problems there, particularly that a “criminal record” doesn’t mean you’ve been found guilty, only charged, and that the stay of proceedings means an innocent person can never clear their name, allowing the Canadian police to sweep their faulty charges under the rug.


The most valuable "metadata" in this context is typically with whom you're communicating/collaborating and when and from where. It's so valuable it should just be called data.


How is this relevant to private cloud storage?


No point in storing data if it is never shared with anyone else.

Whom it is shared with lets you infer the intent of the data.


Backups?


Yes, you got me there.

But I feel that, in this context (communication/metadata inference), that's missing the forest for the trees.


On the other hand, useful features shouldn't be ignored just because other, almost unrelated things are hard.


Atavising was new to me. From https://nbickford.wordpress.com/2012/04/15/reversing-the-gam... :

> First of all, while I said “Predecessorifier” in the talk, “Ataviser” seems to be the accepted word, coming from “Atavism”, which the online Merriam-Webster dictionary defines as “recurrence of or reversion to a past style, manner, outlook, approach, or activity”.


This is utterly terrifying:

> ... three-day totals that were well above 20 inches at multiple stations. For context, a three-day-long precipitation event in Asheville, N.C., the largest city in the most-affected region, is considered to be a once-in-1,000-year occurrence if it produces 8.4 inches of rain. (A once-in-1,000-year flood is one that has a 0.1 percent chance of happening in any given year.) The longest period that the National Oceanic and Atmospheric Administration calculates that out to is 60 days, for which a rainfall event in Asheville is considered to be a once-in-1,000-year occurrence if it produces 19.3 inches.

I've long known that we'll be seeing "hundred year" and "thousand year" events much more frequently than their names suggest, but I hadn't really fathomed storms this far off the charts.

I'm not sure how you even begin preparing for or shoring up infrastructure against these sorts of extreme events.


The seemingly high occurrence of rare events is an artifact of how much we're measuring. If you have a thousand independent weather stations, every year you'll see a "thousand year" event at one of them. We just don't notice the other 999 that didn't see one.

Same deal as how data centers manage equipment failures. If the drives have a million-hour MTBF, but you have a million drives, you're replacing one every hour. Thousand-year events happen all the time when you have a thousand trials.

This one may have been something wilder, like 1 in 20,000 years; among 1,000 stations you'd expect to see that about every 20 years, which lines up with Katrina now being 20 years ago.
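
To put rough numbers on that, here's a back-of-the-envelope sketch (Python; it assumes fully independent trials, which real weather stations aren't):

    # Chance that at least one of 1000 independent stations sees a
    # "1-in-1000-year" event in a given year:
    p = 1 - (1 - 1/1000) ** 1000
    print(p)  # ~0.632, i.e. roughly two years out of three

    # A 1-in-20,000-year event across 1000 stations: expected about
    # once every 20 years somewhere in the network.
    print(20_000 / 1000)  # 20.0 years

    # And the drive analogy: a million drives at a million-hour MTBF
    # fail about once per hour in aggregate.
    print(1_000_000 / 1_000_000)  # 1.0 failures/hour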

As for how you shore up infrastructure: either you spend a ton of money, or you don't and deal with the cleanup afterwards. Each additional 9 of reliability costs 10x. At some point, cleaning up the one site that fails costs less than shoring up all of them.


> The seemingly high occurrence of rare events is an artifact of how much we're measuring. If you have a thousand independent weather stations, every year you'll see a "thousand year" event at one of them. We just don't notice the other 999 that didn't see one.

That would only be true if all those weather stations were fully independent. They're definitely not.

Drives should be independent, but all drives in a datacenter would similarly share their environment. Run the datacenter too hot and your MTBF will be out of spec.


These are not purely random events; weather stations don't each roll the dice on their own.

If I put 1000 weather stations in my back yard, it doesn't suddenly flood every year.


Some of Benoit Mandelbrot’s seminal work focused on Operational Hydrology. This work informed subsequent developments in extreme value theory. Frequency and level of tail risk events may not be characterizable by moments.

https://courses.physics.ucsd.edu/2016/Spring/physics235/Mand...
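
As a toy illustration of that last sentence (hypothetical numbers, nothing from the paper): a Pareto distribution with shape alpha <= 2 has infinite variance, so the sample variance never settles no matter how much data you gather.

    import random

    random.seed(0)
    alpha = 1.5  # Pareto shape: finite mean (alpha/(alpha-1) = 3), infinite variance
    for n in (10**3, 10**4, 10**5, 10**6):
        xs = [random.paretovariate(alpha) for _ in range(n)]
        mean = sum(xs) / n
        var = sum((x - mean) ** 2 for x in xs) / n
        # The sample mean stabilizes near 3, but the variance refuses to settle:
        print(n, round(mean, 2), round(var, 1))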


The traditional way to harden infrastructure against flooding was to locate almost everything expensive or important on higher ground. It was always the poorest part of town that got built on the flood plain, or next to the swamp, or whatever. And when a major bridge or something had to be built at a low elevation, its construction was often of the "heavy stone, and lots of it" school of durability.


Based on current costs and the increasing rate of these events, we are spending about 0.3% of global GDP on rebuilding after them, and will cross 1% by 2035.

That may not sound like much, but 5%, which at current rates of increase we could see by 2060, is enough to put economic systems into a death spiral.


So where is the even split point? It's [0, 1.5) and [1.5, Inf)! There are `2^62-2^51-1` values in both of those sets. Or `2^30-2^22-1` for 32-bit.

Or perhaps even more interestingly, there's the same number — 2^62 or 2^30 — in [0, 2) as in [1, Inf).
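
You can check counts like these straight from the bit patterns, since reinterpreting a non-negative IEEE float's bits as an integer is monotonic. A sketch for the 32-bit case (I'm counting open intervals here; the off-by-ones at the endpoints depend on whether you count 0, 1.5, and Inf):

    import struct

    def bits32(x: float) -> int:
        # Bit pattern of a non-negative float32 as an integer. For non-negative
        # IEEE 754 floats this reinterpretation is monotonic, so a difference
        # of bit patterns counts the representable values in between.
        return struct.unpack("<I", struct.pack("<f", x))[0]

    below = bits32(1.5) - bits32(0.0) - 1           # values in (0, 1.5)
    above = bits32(float("inf")) - bits32(1.5) - 1  # values in (1.5, Inf)
    print(below, above, below == above)  # both are 2**30 - 2**22 - 1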


But perhaps that's the interesting point — 1/x *cannot be* an exact bijection for floating point numbers, because there's not a one-to-one mapping between the two sets.
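
The same bit-pattern counting makes the mismatch concrete. 1/x maps (0, 1) to (1, Inf) and back, but the two sets have different sizes (again a float32 sketch):

    import struct

    def bits32(x: float) -> int:
        # Monotonic bit-pattern view of a non-negative float32.
        return struct.unpack("<I", struct.pack("<f", x))[0]

    in_0_1   = bits32(1.0) - bits32(0.0) - 1           # 1065353215 values in (0, 1)
    in_1_inf = bits32(float("inf")) - bits32(1.0) - 1  # 1073741823 values in (1, Inf)
    print(in_0_1 == in_1_inf)  # False: no one-to-one pairing is possible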

