Hacker News | sleepydog's comments

I guess you are referring to the TLS requirement? I could see how, on a more restrictive platform like a phone, you might be prevented from accepting alternate CAs or self-signed certificates.


AWS had a similar article a couple months ago:

https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data...

In either case I cannot find out how they dump the heat from the output water before recycling it. That's a problem I find far more interesting.


... or get out.


Authenticated encryption gets you data integrity "for free": if a bit is flipped by faulty hardware, the packet won't authenticate and decryption fails. TCP's 16-bit checksum is not good enough to catch corruption in many cases.
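To make that concrete, here's a small Python sketch (my own illustration, not from any particular stack): the Internet checksum that TCP uses is a ones'-complement sum of 16-bit words, so it is blind to two words being swapped in transit, while a keyed MAC (standing in for the authentication tag in AEAD) catches the change immediately.

```python
import hmac
import hashlib

def internet_checksum(data: bytes) -> int:
    # RFC 1071: ones'-complement sum of 16-bit words, end-around carry.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = b"\x00\x01\x00\x02payload"
corrupted = b"\x00\x02\x00\x01payload"  # two 16-bit words swapped in transit

# The TCP-style checksum is a commutative sum, so it can't see the swap...
assert internet_checksum(original) == internet_checksum(corrupted)

# ...but a keyed MAC over the same bytes detects it.
key = b"session-key"  # placeholder key for the sketch
tag_ok = hmac.new(key, original, hashlib.sha256).digest()
tag_bad = hmac.new(key, corrupted, hashlib.sha256).digest()
assert tag_ok != tag_bad
```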


Interesting. When I read this I was thinking "that can't be right, the whole internet relies on TCP being reliable." But it is right: https://dl.acm.org/doi/10.1145/347059.347561. It might be rare, but an unencrypted RPC packet might accidentally set that "go nuclear" bit. ECC memory is not enough, people! Encrypt your traffic for data integrity!


The EU is better on average, but isn't universally great either. I pay 60 EUR for 200Mbit down/20Mbit up ADSL in Amsterdam, after my 6-month discount ran out. No fiber in my neighborhood yet. There's one gigabit provider in my neighborhood (Ziggo) and they have a bad reputation. For the same price I was getting FiOS gigabit in NYC.


I have 1.1Gb down/100Mb up on Ziggo in Amsterdam. I've had no real issues with them; at least for me the reputation is undeserved. I pay a bit more than that, maybe €20 or so more. They also give me a /57 or something, which is nice (if a weird allocation, but I'm not going to use that many subnets anyway).

I have had a fibre cable poking out of the footpath in front of my apartment for a year or two now, waiting for ODF or whoever to come and install it into the building.


That's crazy, 200M asymmetric for 60 EUR is robbery.


For very hot data centers, evaporative cooling is still popular. This is from 2012 but I doubt much has changed.

https://blog.google/outreach-initiatives/sustainability/gett...


13 years is an incredibly long time for something as fast moving as data center development. I guarantee that a _lot_ has changed. I know AWS in particular has gone through multiple entire revisions of their DC designs, and I recall a talk from some of their engineers saying how AWS actually found it more economical to use less cooling and let their DCs run hotter than they used to.

Here’s a recent article from AWS about using closed-loop systems for their AI data centers: https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data...


Data centers may change but the physics of cooling doesn't.

It may be more economical to run chips hotter, but at the end of the day you still have heat that needs dissipating, and it's hard, if not impossible, to beat evaporative cooling in terms of cost.
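For a sense of why evaporation is so effective, a rough back-of-the-envelope in Python (round textbook numbers, not vendor data): water's latent heat of vaporization is about 2.26 MJ/kg, so rejecting 1 MW of heat purely by evaporation consumes on the order of 1,600 liters of water per hour.

```python
# Rough sketch: water needed to reject 1 MW of heat purely by evaporation.
LATENT_HEAT = 2.26e6  # J/kg, approximate latent heat of vaporization

heat_load_w = 1e6                        # 1 MW of IT load to dissipate
kg_per_second = heat_load_w / LATENT_HEAT
liters_per_hour = kg_per_second * 3600   # 1 kg of water is about 1 L

print(round(liters_per_hour))  # ~1593 L/h per MW, ignoring drift/blowdown
```

Every other heat-rejection path has to move that same 2.26 MJ per kilogram some other way, which is why dry coolers and closed loops trade water savings for more fan and chiller energy.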


This is like someone in 1800 saying “at the end of the day you still have transportation needs and it’s hard if not impossible to beat horses and carriages in terms of cost”.

Literally just do a Google search. There are advancements every day that make evaporative cooling use less and less water and energy, plus alternative methods beyond evaporation entirely.


Bleeding-edge advancements and commercially viable solutions are not apples to apples.


If there were, datacenters would use them ;) There must be a catch, eh?


Are new water-guzzling DCs and nuclear plants built on water sources unlikely to be affected by climate change?


Don't sell yourself short. If you have eyes and a voice, you can learn to read sheet music and hum a tune.


The traffic has a negative effect on more than just car owners: smog, noise, accidents, slower taxis, to name a few. Why should only car owners, who are a minority in Manhattan, vote on a problem that affects everyone?


I'm working on a multicast dns implementation in OCaml. It's a library allowing one to build custom queries/responders, plus a conventional querier that can be used as a stub resolver for .local to get the resolver functionality of avahi.

My main motivation was to implement a service that publishes the addresses of containers and vms that I run on my workstation to my local network, but it gradually has grown into a full-blown implementation of RFC 6762, which has been fun.
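For flavor, here's what the wire format involves. This is a minimal Python sketch (not the OCaml library's API) that builds an mDNS question for an A record following RFC 6762, with the unicast-response (QU) bit set in the question class; `myhost.local` is just a placeholder name.

```python
import struct

def mdns_query(name: str, qtype: int = 1) -> bytes:
    """Build an mDNS question packet for `name` (default QTYPE 1 = A)."""
    # DNS header: ID must be 0 for multicast queries (RFC 6762 section 18.1),
    # flags 0, one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QCLASS IN (1) with the top bit set = unicast-response requested (QU).
    question = qname + struct.pack("!HH", qtype, 0x8001)
    return header + question

packet = mdns_query("myhost.local")
# To actually query, you'd send this to the IPv4 mDNS group:
#   sock.sendto(packet, ("224.0.0.251", 5353))
```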


I used to work for a huge email sender (Constant Contact). Our mail servers needed to perform an absurd number of lookups while receiving and validating mail. When I was there, we used dnscache, running locally, on all our mail servers. But even with a local dnscache, the overhead of making DNS requests and handling their responses was high enough that adding nscd made a noticeable improvement in CPU usage.


I guess this shows that looking up the getent hostname database cache is faster than looking up a local DNS cache because the former uses a simpler data structure?


I didn't dig into it too deeply at the time, but I think part of it was that you don't need to open and write to a socket, so that's avoiding some system calls (socket(), bind(), sendto(), close()). IIRC we had nscd set up so clients directly read from shared memory that nscd kept updated, rather than getting requests over a socket.

There's also probably some savings from not having to convert between the structures used by gethostbyname and DNS questions and answers.
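As a toy illustration of the pattern (not nscd's actual shared-memory protocol), an in-process cache looks like this in Python: a hit is just a dict lookup, avoiding the socket(), sendto(), recvfrom(), and close() calls a DNS round trip would need. The TTL value is an arbitrary choice for the sketch.

```python
import socket
import time

_cache: dict = {}
TTL = 60.0  # arbitrary cache lifetime in seconds for this sketch

def cached_getaddrinfo(host, port):
    # A cache hit costs one dict lookup and no system calls; only a miss
    # pays for the full resolver round trip via socket.getaddrinfo().
    now = time.monotonic()
    hit = _cache.get((host, port))
    if hit is not None and now - hit[0] < TTL:
        return hit[1]
    result = socket.getaddrinfo(host, port)
    _cache[(host, port)] = (now, result)
    return result
```

nscd goes a step further by sharing one such map across every process on the host, which is roughly why it helped even alongside a local dnscache.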

