
Sharing a simple thought experiment that was shared with me years ago that explains (to me at least) why this is an interesting question. Imagine a billiard ball with nonzero velocity bouncing around an enclosed box. When the ball encounters a side of the box, it bounces off elastically. A replay of this ball's path over time is equally plausible if the replay were run forward or reversed. The preceding is also true if one imagines 2 or 3 balls, with the only difference being that the balls may also bounce off each other elastically. Even in this scenario, reversibility of playback holds no matter the configuration of the balls: they could all start clustered or be scattered and the replay would be plausible when played in either time direction.

But this is no longer true when the box (now much larger) contains millions of billiard balls. If the balls start clustered together, they will scatter over time about the box and the replay of their paths has only one plausible time direction. This is because it is extremely unlikely that all the billiards will, simply by chance at some point in the future, collect together so they are contained within a very small volume.

To summarize, in the "few scenario" we can plausibly reverse time but in the "many scenario", we cannot. The only difference between scenarios is the number of balls in the box, which suggests that time is an emergent property. layer8's answer elsewhere in this thread says the same, but more succinctly.
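To make the asymmetry concrete, here is a toy simulation of my own (not from the original comment): point particles bouncing elastically off the walls of a unit box, with ball-ball collisions ignored for simplicity. Starting from a tight cluster, the spread of positions grows quickly, and for many balls it essentially never shrinks back, which is why a reversed replay looks implausible:

    # Toy version of the thought experiment: particles in a unit box with
    # elastic wall bounces; ball-ball collisions are ignored for simplicity.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n_balls, steps=1000, dt=0.01):
        pos = 0.05 + 0.02 * rng.random((n_balls, 2))   # clustered start near a corner
        vel = rng.normal(size=(n_balls, 2))            # random velocities
        spread = []
        for _ in range(steps):
            pos += vel * dt
            over, under = pos > 1.0, pos < 0.0
            pos[over] = 2.0 - pos[over]                # reflect off the walls...
            pos[under] = -pos[under]
            vel[over | under] *= -1.0                  # ...and flip that velocity component
            spread.append(pos.std())                   # crude measure of how scattered
        return spread

    for n in (3, 100_000):
        s = simulate(n)
        print(f"{n:>7} balls: spread {s[0]:.3f} -> {s[-1]:.3f}")

With a handful of balls the spread just wanders; with many balls it climbs toward the uniform-box value and stays there.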

Making systolic array chips (my BitGrid design) to bring provably secure Petaflops to the masses. They're handy for any algorithm that can tolerate extreme pipelining, including deep nets.

Because everything is trivial to reason about, there's nowhere for bugs, or zero-day exploits, to hide.

Oh, and we'd do an open-source data diode product as well.


IIRC, when I was trying to mask the sound with the shower, I noticed that it didn't work when the water was cold or when I wasn't comfortably under it. Then I had a few days of a road trip without the laptop and noticed an improvement when I was driving, sitting up straight.

So I decided to work on this: I bought a keyboard and a mouse and made myself a rule that I would always use the laptop with a stand or an external display, so that I don't lean over the laptop and instead sit up straight the way it is ergonomically recommended, pretty much like it says in articles like this: https://healthandbalance.com.au/workstation-desk-posture-erg...

I also began doing neck exercises recommended to me by an orthopedist (I had some neck pain for a few days, and the orthopedist gave me a couple of movements I should do regularly to increase the strength of my neck muscles; I will leave links to the leaflets of the movements below). I also did the chin-tuck movement (pushing the chin in to push the head back), because although I didn't have a clinically severe situation with my head moving forward, I noticed that in my old photos my head wasn't leaning forward that much.

A week or so after I started sitting right, my tinnitus began to improve rapidly. I even began sleeping the orthopedically correct way and avoiding any stress positions. After some time I tried experimenting with stress positions, like using the laptop the way I used to, and the tinnitus returned in full force until I fixed the posture and did some massages. After a year or two the tinnitus was almost completely gone, and stress positions don't immediately bring it back anymore, so I can use the laptop again; but if I'm not careful and overdo it, get carried away and lean into the screen, it returns.

the leaflets:

https://imgbb.com/XWnTZVB

https://imgbb.com/r6PBbTK


Powerlifting.

Solitary and meditative if you want it to be; social and uplifting if you don't.

It's healthy in a variety of ways (including bone density; physical activity; higher BMR and glucose metabolism; improved cardiovascular function). Also being strong is useful surprisingly often.

Unlike many things in life, your progress is almost entirely dependent on your consistency and the effort invested, with the exception of (hopefully) temporary setbacks like injury. Hitting personal records and milestones feels particularly good because you know you've earned it. It's hard! But it's also not so hard that I'm liable to get discouraged.

Lots of people prefer bodybuilding style training, but there's something magic about the barbell for me. Olympic lifts are also a lot of fun, but they're more technical and you need more gear and space.

Also it's much easier to quantify your strength progress (I have a literal spreadsheet), and it feels less vain than focusing on looks (not that it isn't a significant bonus).

Dunno. Feels good. I'm gonna keep at it. Two thumbs up.


https://qntm.org/mmacevedo

> However, even for these tasks, its performance has dropped measurably since the early 2060s and is now considered subpar compared to more recent uploads. This is primarily attributed to MMAcevedo's lack of understanding of the technological, social and political changes which have occurred in modern society since its creation in 2031. This phenomenon has also been observed in other uploads created after MMAcevedo, and is now referred to as context drift.


Does anyone know of more technically in-depth presentations / blog posts about the inner workings of CDNs? I'm thinking of stuff like:

- How do they distribute a huge load between multiple servers in a DC? (Some kind of layer 2/3 load balancing I don't know about, probably?)

- How is the cached data distributed between POPs? (See the consistent-hashing sketch after this list for one common approach.)

- The whole TLS termination thing is probably an interesting aspect as well. The sensitive key material is needed on every node / POP, but you probably don't want all keys just lying around in clear text on disks? Some kind of distributed HSM-thingy?

- Just practically, how do you manage all these machines. Something like Ansible, or something more like Nomad/K8S?
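On the cache-distribution question, one common building block (a sketch under my own assumptions, not a description of how any particular CDN actually works) is consistent hashing: each POP owns many points on a hash ring, a cache key maps to the next point clockwise, and adding or removing a POP only remaps the keys that hashed to its segments:

    # Toy consistent-hash ring for assigning cache keys to POPs.
    # Real CDNs layer request routing, replication and health checks on top
    # of (or instead of) something like this; the POP names here are made up.
    import bisect
    import hashlib

    class HashRing:
        def __init__(self, pops, vnodes=100):
            # Each POP gets `vnodes` virtual points on the ring for smoother balance.
            self.ring = sorted(
                (self._hash(f"{pop}#{i}"), pop)
                for pop in pops
                for i in range(vnodes)
            )
            self.hashes = [h for h, _ in self.ring]

        @staticmethod
        def _hash(s):
            return int.from_bytes(hashlib.sha256(s.encode()).digest()[:8], "big")

        def pop_for(self, cache_key):
            # First ring point at or after the key's hash, wrapping around.
            i = bisect.bisect(self.hashes, self._hash(cache_key)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["fra", "iad", "sin", "gru"])
    print(ring.pop_for("https://example.com/assets/app.js"))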

Perhaps the fly.io people could write something about this (if they haven't already :-))


On April 4, 1996, Terry Winograd (who I worked with at Interval Research) invited me to sit in on his HCI Group CS547 Seminar where Will Wright was giving a presentation called "Interfacing to Microworlds", in which he gave demos and retrospective critiques of his three previous games, SimEarth, SimAnt, and SimCity 2000.

He opened it up to a question and answer session, during which Terry Winograd’s students asked excellent questions that Will answered in thoughtful detail, then one of them asked the $5 billion question: "What projects are you working on now?"

Will was taken aback and amused by the directness, and answered "Oh, God..." then said he would back up and give "more of an answer than you were looking for."

Then he demonstrated and explained Dollhouse for the first time in public, talking in depth about its architecture, design, and his long-term plans and visions.

I took notes of the lecture, augmented them with more recent information and links from later talking and working with Will, and published the notes on my blog. But all I had to go on were my notes, and I hadn't seen a video of that early version of Dollhouse since.

But only last week I discovered the Holy Grail I'd been searching for 27 years, nestled and sparkling among a huge dragon's hoard of historic treasures that are now free for the taking: Stanford University has published a huge collection of hundreds of Terry Winograd’s HCI Group CS547 Seminar Video Recordings, including that talk and two more by Will Wright!

I'm really grateful to Terry Winograd for inviting me to Will's talk, which blew my mind and changed my life (it overwhelmingly and irresistibly convinced me to go to Maxis to work with Will on The Sims), and to the Stanford University librarians and archivists for putting this enormous treasure trove of historic videos online.

Guide to the Stanford University, Computer Science Department, HCI Group, CS547 Seminar Video Recordings

https://oac.cdlib.org/findaid/ark:/13030/c82b926h/entire_tex...

Will Wright, Maxis, "Interfacing to Microworlds", April 26, 1996:

https://searchworks.stanford.edu/view/yj113jt5999

Will Wright, Maxis, "Games and Simulation", May 2, 2003

https://searchworks.stanford.edu/view/pw467bz3079

Will Wright, Maxis / Electronic Arts, "Laughing Creative Communities: Lessons for the Spore community experience", May 22, 2009

https://searchworks.stanford.edu/view/pd936vc7267

I uploaded the video to YouTube to automatically create closed captions, which I proofread and cleaned up, so it's more accessible and easier for people to find, and you can translate the closed captions to other languages.

https://www.youtube.com/watch?v=nsxoZXaYJSk

And I updated my previous article "Will Wright on Designing User Interfaces to Simulation Games (1996)" to include the embedded video, as well as the transcript and screen snapshots of the demo, links to more information, and slides from Will's subsequent talk that illustrated what he was talking about in 1996.

https://donhopkins.medium.com/designing-user-interfaces-to-s...


Not to criticize, but to clarify:

> encrypted backups of the e-mails which I store in the cloud, the encryption key never leaves my device.

Doesn't it mean that in the (unlikely) case something happens to your device (like a hard drive crash), you won't be able to access the backups?


Data Hubs.

It's queryable like a database, but it doesn't store your data - it proxies all your other databases and data lakes and stuff, and lets you join between them.

Trino is a great example.
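For a feel of what that looks like in practice, here is a rough sketch using Trino's Python client; the coordinator host, catalogs, and table names are all made up:

    # One query joining an operational Postgres catalog against a Hive
    # data-lake catalog, without copying the data into Trino itself.
    import trino

    conn = trino.dbapi.connect(
        host="trino.internal.example",   # hypothetical coordinator
        port=8080,
        user="analyst",
    )
    cur = conn.cursor()
    cur.execute("""
        SELECT o.order_id, o.total, c.segment
        FROM postgres.sales.orders AS o        -- lives in Postgres
        JOIN hive.warehouse.customers AS c     -- lives in the data lake
          ON o.customer_id = c.customer_id
        WHERE o.created_at > DATE '2024-01-01'
    """)
    print(cur.fetchmany(10))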


> They e.g. also do nothing against data exfiltration by popular extensions although they have known that issue for years.

This kind of sentiment compels Mozilla into becoming an Apple-like gatekeeper to a walled garden, because people conflate the trustworthiness of extension authors with Mozilla's trustworthiness, which leads to less software freedom, a single point of failure, and a less diverse ecosystem.


>it will require moving the data fetching logic away from the main component into the decorator

We are not suggesting you do anything like this.

The article was about migrating from mixins to patterns like higher-order components. If you do your data fetching right in the component, we are not suggesting you change anything. It's only mixins that we found problematic, and we are just sharing our experience migrating away from them.

>the whole React.js stack gets more and more bloated these days. As an example, managing state through Redux requires writing actions, reducers and implementing a subscriber pattern in my components, just to fetch some data from the server

Redux has nothing to do with the “React stack”. React is just React; if fetching data in components works great for you, why are you migrating to Redux?

>I mean, if we're writing a version of "Photoshop" for the browser this level of complexity might be warranted, but in most cases we just want to fetch some JSON

Redux is overused. Nowhere in the article do we recommend using Redux. The article is about mixins.

