Interesting idea, but the generic green PCB is a bit of a missed opportunity. Some manufacturers now offer transparent solder masks which emphasize the copper traces and can look really cool with a clean PCB layout.
In addition to some other things, I was responsible for all vehicle simulation in Army of Two. This article is a good starting point. I was glad they mentioned implementing Pacejka’s tire model and the transmission differential in the article - those help a lot. Aside from that, I was surprised (not surprised) by how much an anti-roll bar physics sim and a suspension sim helped make driving feel “fun”.
That’s the most important follow-up. Without it, you’ll notice that the driving feels icy - I see it in the demo video. Most folks who skip the anti-roll bar and suspension wind up with cars that easily flip on turns - so they make the tires slip or they play with the surface friction, which makes the driving experience worse.
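If it helps anyone, here's roughly what that looks like in a raycast-style vehicle sim - a minimal sketch, assuming each wheel exposes a suspension compression value, a grounded flag, and a contact point; the names and the stiffness value are made up for illustration, not the Army of Two code:

    def apply_anti_roll_bar(vehicle, left_wheel, right_wheel, stiffness=5000.0):
        # Airborne wheels count as fully extended (zero compression).
        comp_l = left_wheel.compression if left_wheel.grounded else 0.0
        comp_r = right_wheel.compression if right_wheel.grounded else 0.0

        # Load transfer proportional to the difference in compression between
        # the paired wheels; this is what resists body roll.
        force = (comp_l - comp_r) * stiffness

        # Push the chassis up on the more-compressed side and pull it down on
        # the more-extended side.
        if left_wheel.grounded:
            vehicle.apply_force_at(left_wheel.contact_point, vehicle.up * force)
        if right_wheel.grounded:
            vehicle.apply_force_at(right_wheel.contact_point, vehicle.up * -force)

Most of the work is tuning the stiffness: too low and the car still flips, too high and you kill the body roll that makes cornering feel alive.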
> I’m actually a huge fan of Brett Victor and I felt like he’s kinda missing the dynamic, adaptable nature of AI that allows non-technical people like me to finally access the system layer of computation for our creative ends.
To "grok" something is to understand it on a deep, fundamental level. Following a checklist from an LLM and the thing you're doing eventually working isn't grokking.
To be clear, I'm very glad that you and others can throw together new projects. Your excitement seems genuine, and more excitement in the world is good. And perhaps you'll be one of the minuscule minority who will use LLMs to really get to a deeper level of understanding of new things.
But I wonder if your excitement may be misleading you here and making it harder for you to grok Bret Victor's post - on any level. I don't think Victor is interested in computing in the way you think he is. There's a world of difference between being able to cobble a web project together, and the kinds of philosophical shifts a project like dynamicland is proposing and enacting.
In the interest of people expressing themselves freely, I'd go so far as to say it's particularly surprising to read all this from "an artist". There was a time when being an artist implied the person had reflected and read and thought about larger perspectives across a range of subjects - philosophy, science, religion, etc.
Here, in this instance, I can't help feeling there's some crunchy irony in the fact that a deeply radical (scientifically, artistically, technologically, socially) project like dynamicland is met by an artist excited to be able to plug web services into each other, strongly claiming, from the very heart of the cultural slop wars, that the dynamicland people might be confused and maybe LLMs are the real answer.
Respectfully, consider that maybe the perspective from which they're viewing the problem is simply much deeper than what you've been able to grasp so far. I don't mean it disparagingly or cynically, in fact it's great news, you've vistas to explore here!
I suggest reading more from dynamicland directly, and Bret's website too; watch a few of Bret's talks; Alan Kay is very good; there's tons of stuff if you get into it. Don't neglect the history of computing, it's full of amazing ideas.
The Metropolitan Museum has books of fabric samples for kimonos with colorful and complex patterns but unfortunately I could only find pictures of kimonos:
We once had an awesome unknown band from Belgium come to play at our local club. I was the only person who came to the concert. They held off starting for an hour to wait for more listeners and invited me to their table. No one showed up, so they played for me and my brother, whom I had summoned in the meantime. The best concert I have ever attended.
I can't believe I am learning about tabular figures in fonts from this post... I have always just used a different monospace font for the numbers; I didn't realize it was a feature that some fonts supported.
.NET's Task<T> + the state machine box for it emitted by Roslyn start at about 100B of heap-allocated memory, as of .NET 8/9.
The state machines generated for those by F# don't seem to be far behind either (I tested this before replying; F#'s asynchronous computations, aka async { }, appear to be much less efficient, however, so the guidance is to avoid them in favor of task { } and .NET's regular tasks).
Notably, BEAM's processes each come with their own per-process GC, which adds a lot of additional cost every time a new process is spawned. In a similar vein, Go's goroutines pre-allocate quite a bit of memory for their virtual stacks (60 KiB?).
.NET's tasks, as sibling comment mentions, are stackless coroutines[0] so their memory usage is going to be much lower. They come with a different set of tradeoffs but overall their cost is going to be significantly cheaper because bytecode is JIT/AOT compiled to native, the GC has precise tracking of object liveness and because .NET does not perform BEAM-style preemptive userspace scheduling.
Instead, .NET employs a work-stealing threadpool with hill-climbing thread count scaling to achieve optimal throughput. This way, when the workers cannot advance all submitted work items in time, additional threads are injected, which are then preempted by the kernel thread scheduler. This means that even if other workers are busy, the work items will not wait in the queues indefinitely. That injection is the pathological case, though; usually the thread count stays between 1-2x the physical core count.
This has the downside of potentially worse scheduling fairness, and independent tasks that allocate can and do affect each other w.r.t. GC pause impact. I believe this to be a non-issue because it is more than compensated for by spending <10x CPU time vs BEAM on computing the same result, and significantly less memory too (I don't have hard numbers, but .NET is quite well behaved in terms of allocation traffic). At the end of the day, Task<T> is designed for a much higher granularity of concurrency and parallelism, so it would be quite unusable if it had a greater cost.
If you're curious, I made an un-scientific and likely incorrect but maybe interesting comparison some time ago (it's in Ukrainian but the table is readable enough I hope):
This calculates the CPU time and max MEM RSS usage required to spawn 1M tasks/coroutines/processes/futures that sleep for 5s and await their completion.
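Not the benchmark itself (that was a cross-runtime comparison), but the shape of it is easy to show; here's roughly the equivalent in Python asyncio, just to make the methodology concrete:

    # Shape of the benchmark (not the original code): spawn 1M tasks that each
    # sleep 5s, await them all, then read CPU time and max RSS from the OS.
    import asyncio
    import resource

    async def worker():
        await asyncio.sleep(5)

    async def main():
        tasks = [asyncio.create_task(worker()) for _ in range(1_000_000)]
        await asyncio.gather(*tasks)

    asyncio.run(main())
    usage = resource.getrusage(resource.RUSAGE_SELF)
    print(f"user CPU {usage.ru_utime:.2f}s, sys CPU {usage.ru_stime:.2f}s, "
          f"max RSS {usage.ru_maxrss} KiB")  # ru_maxrss is in bytes on macOS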
[0]: This might stop being true in the pure sense of the word in .NET 10, because task handling is going to change completely: state machines generated by the language targeting .NET will be replaced with specially annotated methods for which the runtime itself implicitly emits the state machines, allowing the cost to be paid only at "true" suspend points. Reference: https://github.com/dotnet/runtimelab/blob/feature/async2-exp...
There is a particularly nice geometric interpretation of attention I just realised recently in a flash of enlightenment, best explained with an interactive Desmos plot (black dot is draggable):
The above assumes the columns of K are normalised, but bear with me. K and V together form a vector database. V holds the payloads, each row containing a vector of data. K describes the position of these points in space, on the surface of a hypersphere. The query vector describes the query into the database: the vector's direction describes the point in space being queried, and its magnitude describes the radius of the query. The result is the weighted average of vectors from V, weighted by their distance from the query vector scaled by the query radius (with a smooth Gaussian falloff). A recent paper from Nvidia I recommend, which derives a significant speedup by normalising vectors to a hypersphere: https://arxiv.org/abs/2410.01131v1
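If it's easier to see in code than on the plot, here's a tiny NumPy sketch of that reading of single-query attention - the shapes and names are mine, and the softmax over dot products is exactly the Gaussian falloff described above:

    import numpy as np

    def attention_lookup(q, K, V):
        """Single-query attention read as a soft lookup into a K/V database.

        K: (n, d) keys, assumed unit-normalised (points on the hypersphere)
        V: (n, d) payload rows
        q: (d,) query; its direction is the point being queried, its magnitude
           acts as an inverse query radius (bigger q -> sharper, narrower query).
        """
        scores = K @ q                        # |q| * cos(angle to each key)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()              # softmax
        # For unit keys, exp(q.k) = const * exp(-|q| * ||q_hat - k||^2 / 2),
        # i.e. a Gaussian falloff in distance on the sphere.
        return weights @ V                    # weighted average of payloads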
> Poverty, illness and starvation are the default state of mankind
A most wretched lie. Systems of entrenched inequality and widespread poverty mostly emerged after agriculture and centralized state systems, not during early human history.
That lie serves to shift the blame for people's struggles away from oligarchs and owners and onto their victims. Do you own a company perchance? ... What do your down-stream labourers earn per hour, compared to you?
... Do you think billionaire CEOs and trust-fund nepo babies work harder than people in sweat shops? In mines? Nurses?
Poverty, illness, and starvation are not inherent states. They are outcomes of structural and economic choices.
> the fact that almost nobody lives in deep poverty any more is because people get up in the morning and work hard to produce value.
Another lie, long debunked, labelled "The Protestant Work Ethic Myth". It can be disproved with Nobel economist research [0], World Bank reports [1], or a simple graph [2], not to mention the first few pages of 'Capital'.
Historical evidence and economic research overwhelmingly show that poverty, illness, and starvation are due to structural forces and political choices, and that poverty reduction comes from systemic changes far more than from individual work ethic.
To say otherwise is to blame the victims of terrible crimes of exploitation, while absolving the perpetrators, despite mountains of evidence.
It's fine and good to work, yes. But we have green power, machines, 120 IQ AI, instant global communication... A world where everyone works 20 hours a week with no reduction in Quality of Life for 99% of us is entirely possible, right now; but the current owners of the world would rather see us all burn than move toward it.
The lifespan is probably not as limitless as you might have imagined; the discs tend to fall off or get stuck. But they are really neat while they are working, especially how they sound.
I was at an office with these flip dot displays, and eventually we dismantled the display. I took some pictures of the pieces, and you can see what stuck discs look like:
> I bet that WhatsApp is one of the rare services you use which actually deployed servers to Australia. To me, 200ms is a telltale sign of intercontinental traffic.
So, I used to work at WhatsApp. And we got this kind of praise when we only had servers in Reston, Virginia (not at aws us-east1, but in the same neighborhood). Nowadays, Facebook is most likely terminating connections in Australia, but messaging most likely goes through another continent. Calling within Australia should stay local though (either p2p or through a nearby relay).
There's lots of things WhatsApp does to improve experience on low quality networks that other services don't (even when we worked in the same buildings and told them they should consider things!)
In no particular order:
0) offline first, phone is the source of truth, although there's multi-device now. You don't need to be online to read messages you have, or to write messages to be sent whenever you're online. Email used to work like this for everyone; and it was no big deal to grab mail once in a while, read it and reply, and then send in a batch. Online messaging is great, if you can, but for things like being on a commuter train where connectivity ebbs and flows, it's nice to pick up messages when you can.
a) hardcode fallback IPs for when DNS doesn't work (not if)
b) set up "0rtt" fast resume, so you can start getting messages on the second round trip. This is part of Noise pipes (or whatever they're called) and TLS 1.3
c) do reasonable-ish things to work with MTU. In the old days, FreeBSD reflected the client MSS back to it, which helps when there's a tunnel like PPPoE that only modifies outgoing SYNs and not incoming SYN+ACKs. Linux never did that, and afaik FreeBSD took it out. Behind Facebook infrastructure, they just hardcode the MSS for (I think) 1480 MTU (you can/should check with tcpdump). I did some limited testing, and really the best results come from monitoring for /24's with bad behavior (it's pretty easy if you look for it --- you never see any large packets, and packet gaps are a multiple of MSS minus the space for TCP timestamps) and then sending back client MSS - 20 to those; you could also just always send back client - 20. There's a rough sketch of that detection heuristic after this list. I think Android finally started doing pMTUD blackhole detection stuff a couple years back; Apple has been doing it really well for longer. Path MTU Discovery is still an issue, and anything you can do to make it happier is good.
d) connect in the background to exchange messages when possible. Don't post notifications unless the message content is on the device. Don't be one of those apps that can only load messages from the network when the app is in the foreground, because the user might not have connectivity then
e) prioritize messages over telemetry. Don't measure everything, only measure things when you know what you'll do with the numbers. Everybody hates telemetry, but it can be super useful as a developer. But if you've got giant telemetry packs to upload, that's bad by itself, and if you do them before you get messages in and out, you're failing the user.
f) pay attention to how big things are on the wire. Not everything needs to get shrunk as much as possible, but login needs to be very tight, and message sending should be too. IMHO, http and json and xml are too bulky for those, but are ok for multimedia because the payload is big so framing doesn't matter as much, and they're ok for low volume services because they're low volume.
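Since (c) is the least obvious one, here's a rough sketch of that /24 monitoring heuristic - the field names, thresholds, and the 12-byte timestamp figure are illustrative, not WhatsApp's actual code:

    from collections import defaultdict

    TCP_TIMESTAMPS_LEN = 12  # the "space for tcp timestamps" mentioned above

    class SubnetStats:
        def __init__(self):
            self.samples = 0
            self.saw_full_sized_segment = False
            self.mss_multiple_gaps = 0

    stats = defaultdict(SubnetStats)

    def record_segment(subnet_24, seg_len, gap_len, client_mss):
        """Feed this from passive per-connection accounting."""
        s = stats[subnet_24]
        s.samples += 1
        if seg_len >= client_mss:
            s.saw_full_sized_segment = True
        # Gaps that are exact multiples of the effective MSS are the signature
        # of full-sized packets being silently dropped (a PMTU blackhole).
        effective_mss = client_mss - TCP_TIMESTAMPS_LEN
        if gap_len and gap_len % effective_mss == 0:
            s.mss_multiple_gaps += 1

    def mss_to_send_back(subnet_24, client_mss):
        """Clamp the MSS we advertise to suspect subnets ("client - 20")."""
        s = stats[subnet_24]
        if (s.samples > 50 and not s.saw_full_sized_segment
                and s.mss_multiple_gaps > 0):
            return client_mss - 20
        return client_mss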
Fun to see ternary weights making a comeback. This was hot back in 2016 with BinaryConnect and the TrueNorth chip from IBM Research (disclosure: I was one of the lead chip architects there).
The authors seem to have missed the history. They should at least cite BinaryConnect or Straight-Through Estimators (not my work).
Helpful hint to authors: you can get down to 0.68 bits / weight using a similar technique, good chance this will work for LLMs too.
This was a passion project of mine in my last few months at IBM research :).
I am convinced there is a deep connection between understanding why backprop is unreasonably effective and the result that you can train low-precision DNNs. For those not familiar, the technique is to compute the loss w.r.t. the low-precision parameters (e.g. projected to ternary) but apply the gradient to a high-precision copy of the parameters (known as the straight-through estimator). This is a biased estimator and there is no theoretical underpinning for why this should work, but in practice it works well.
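For concreteness, this is the whole trick in a few lines of PyTorch - a sketch of the straight-through estimator with a ternary projection; the threshold and shapes are just for illustration, not anyone's production code:

    import torch

    class TernarySTE(torch.autograd.Function):
        """Forward: project weights to {-1, 0, +1}. Backward: pass the gradient
        straight through to the full-precision weights (the biased STE trick)."""

        @staticmethod
        def forward(ctx, w, threshold):
            return torch.where(w.abs() < threshold,
                               torch.zeros_like(w),
                               torch.sign(w))

        @staticmethod
        def backward(ctx, grad_output):
            # Ignore the (zero almost everywhere) gradient of the projection and
            # hand the incoming gradient to the latent float weights unchanged.
            return grad_output, None

    # The optimizer updates the float copy; the forward pass only ever sees the
    # ternary projection.
    w_fp32 = torch.randn(256, 256, requires_grad=True)
    w_ternary = TernarySTE.apply(w_fp32, 0.05)
    loss = w_ternary.sum()     # stand-in for a real loss
    loss.backward()            # w_fp32.grad is populated via the STE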
My best guess is that it encourages the network to choose good underlying subnetworks to solve the problem, similar to the Lottery Ticket Hypothesis. With ternary weights it is just about who connects to whom (i.e. a graph), and not about the individual weight values anymore.
Really exciting times for WebRTC in general right now!
If you are new to WebRTC and want to learn more about the protocol, check out [0] (would love people's feedback). It is used in lots of unexpected places, like streaming (added to OBS) [1] and embedded [2].
I am especially excited with new implementations popping up like [3] and [4].
I think the answer is: because Microsoft let it. I'm a big fan of modern .NET, but my biggest complaint is that Microsoft has always viewed the CLR as the C# Language Runtime rather than the Common Language Runtime.
For example, see the relationship between F# and C#. The CLR is constantly getting features that exist only to support features in C#, leaving F# in a position where it either doesn't get the feature, can't add the feature, or begrudgingly adds the feature to keep up compatibility with C#, which is something it does take seriously. But this has the effect of "dirtying up" the F# language, either by adding features that don't really belong in the language or by keeping features out.
The other thing is that C# consistently adds features inspired by F#, since F# already implements those features on the CLR and thus shows their viability. So C# continually becomes a more bloated language, with a subset of it being a poor copy of F#. But then F# gets dragged along towards having a small subset of C# in it for compatibility purposes. So it's simultaneously making both languages worse.
Even the Iron languages project that led to IronPython, IronRuby, etc. was a bit of a Trojan horse to test out and exercise the CLR and .NET, with no intention of ever providing long-term support for those projects. The DLR, which was implemented to support them, appears to be maintained by just a skeleton crew of people invested in it, probably those interested in keeping IronPython up and running.
I do not understand why Microsoft takes this approach. It is myopic, shows a misunderstanding of their own technology in the CLR, and ultimately turns C# into another C++, leaving dead languages and projects in its wake.
From the picture, the logic chips are all in SOIC packages. The use of surface-mount components with a 4-layer PCB should already significantly boost routing density compared to a breadboard with DIP chips. All the chips can be tightly packed together.
Furthermore, both the ALU and the Control Unit are implemented entirely in EEPROMs. The ALU uses 7 ROMs [2], the Control Unit uses 3 ROMs [3], the program counter uses 5 ROMs [4], and the bit shifter uses another ROM [5], so I already see 16 EEPROMs in total. This means all the discrete components needed for random logic are largely eliminated, consolidating possibly hundreds (or thousands?) of gates into just a few chips and some lookup tables to program. In fact, another maker has already demonstrated that a functional CPU can be built entirely from RAM and ROM with just 15 chips in total. [6]
Programmers usually think of ROMs as data storage devices, but they are also the most rudimentary form of programmable logic: they transform x bits of address input into arbitrary y-bit data outputs, so they can implement arbitrary combinational logic. In fact, lookup tables are the heart of modern FPGAs. As a result, you may argue that any ROM-based design contains ad-hoc FPGAs (especially since EEPROMs became so large after the 1980s: 64 K for 16-bit chips). But the use of mask ROMs and PLAs in control units has always been a legitimate and standard way to design CPUs, even back in the 70s, so I won't call it "cheating" (and using ROMs for the ALU or Control Unit wouldn't really be much different from using a pre-made 74181 or AMD Am2900 anyway).
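To make the "ROM as combinational logic" point concrete, here's a toy example in Python that builds a ROM image implementing a 4-bit adder with carry - illustrative only, not the actual contents of this project's EEPROMs:

    # Address = {carry_in, a[3:0], b[3:0]} (9 bits), data = {carry_out, sum[3:0]}.
    rom = bytearray(1 << 9)
    for carry_in in range(2):
        for a in range(16):
            for b in range(16):
                addr = (carry_in << 8) | (a << 4) | b
                total = a + b + carry_in
                rom[addr] = ((total >> 4) << 4) | (total & 0xF)  # carry_out in bit 4

    # Any combinational function of 9 inputs -> 8 outputs fits in this one chip;
    # "running" the logic is just a read: rom[addr].
    assert rom[(1 << 8) | (0xF << 4) | 0x1] == 0b1_0001  # 15 + 1 + carry = 17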
Raymond Chen writes great stuff but gives a very one-sided picture of compatibility -- he doesn't mention all of the times that it was the other way around, with Windows doing something lame and application authors having to work around it.
Like, for instance, the time they decided that LaunchAdvancedAssociationUI(), the previously officially recommended way to show UI letting the user associate file types with a program, just wouldn't work anymore in Windows 10. Instead of opening up the Default Programs UI in Settings, it just displays a dialog telling the _user_ to go there -- one which is even modal, so they can't refer to it while doing so. No compatibility shim or grandfathering for old programs; they just broke every application that used it the way Microsoft originally said good programs should for Windows 8.
Or the case of Dark Mode in Windows, for which they've inexplicably dragged their heels on implementing barely any Win32 support at all -- not even a simple call to query whether it is enabled. The current silly recommendation is to obtain the foreground color through WinRT and do a dot product on it to compute luma, to determine whether it is a dark or light color:
https://learn.microsoft.com/en-us/windows/apps/desktop/moder...
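Stripped of the WinRT plumbing, that recommendation boils down to a weighted sum of the foreground color's channels; something like this sketch in Python (the integer weights here are a common cheap luma approximation -- see the linked page for the exact values Microsoft uses):

    def is_color_light(r: int, g: int, b: int) -> bool:
        # Green dominates perceived brightness, red next, blue least.
        return 5 * g + 2 * r + b > 8 * 128

    def system_uses_dark_theme(foreground_rgb) -> bool:
        # Light foreground text implies a dark theme, and vice versa.
        return is_color_light(*foreground_rgb)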
Or the fact that the official way of reporting bugs in the Windows APIs is the Feedback Hub, which is completely unsuitable for the task.
I don't have sympathy for the Windows team anymore. Their lack of developer support is partially responsible for all of the hacks that applications have to do to ship.
In a similar vein, this poem by Pedro Pietri:
Telephone Booth (number 905 1/2)
woke up this morning
feeling excellent,
picked up the telephone
dialed the number of
my equal opportunity employer
to inform him I will not
be into work today
Are you feeling sick?
the boss asked me
No Sir I replied:
I am feeling too good
to report to work today,
if I feel sick tomorrow
I will come in early
Many graph search algorithms can be expressed as a generic graph search algorithm where you have a "white set" of unvisited nodes, a "black set" of visited nodes, and a "grey set" of nodes that you've encountered but have yet to visit. The structure of the grey set determines the algorithm:
Queue = BFS
Stack = DFS
Priority queue by distance from start = Dijkstra's algorithm
Priority queue by distance + heuristic = A*
Bounded priority queue by heuristic = Beam search
Priority queue by connecting edge weights = Prim's algorithm
Pointer reversal = Mark & sweep garbage collector
Check on whether it points into from-space = Copying garbage collector.
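That framing drops almost directly into code. A sketch with the grey set as a pluggable "frontier" (the class names and API are mine):

    from collections import deque
    import heapq

    def generic_search(start, neighbors, frontier):
        """White/grey/black search: `frontier` holds the grey set, `black` the
        visited set, and everything else is still white."""
        black = set()
        frontier.push(start)
        order = []
        while frontier:
            node = frontier.pop()
            if node in black:
                continue
            black.add(node)
            order.append(node)
            for nxt in neighbors(node):
                if nxt not in black:
                    frontier.push(nxt)
        return order

    class FifoFrontier:                # queue -> BFS
        def __init__(self): self.q = deque()
        def push(self, x): self.q.append(x)
        def pop(self): return self.q.popleft()
        def __bool__(self): return bool(self.q)

    class PriorityFrontier:            # priority queue -> Dijkstra / A* / Prim
        def __init__(self, key):
            self.key, self.heap, self.n = key, [], 0
        def push(self, x):
            heapq.heappush(self.heap, (self.key(x), self.n, x)); self.n += 1
        def pop(self):
            return heapq.heappop(self.heap)[2]
        def __bool__(self):
            return bool(self.heap)

Swap FifoFrontier for a stack and you get DFS; key the PriorityFrontier on distance for Dijkstra, on distance + heuristic for A*, and so on (the real algorithms also update priorities when a shorter path is found, which this sketch glosses over).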
Yeah, this. As humans and as the inventors of these technologies, we get to decide what kind of values we uphold and how our politics and policies should reflect them.
These think pieces that subscribe to an unspoken underlying technological determinism are so disgusting to me; they have a really narrow and pathetic view of what it means to be human (it mostly boils down to "economic agent"). Half the time I wonder if the authors themselves even have the requisite humanity, and I also wonder if the obsession with digital technology is partially driven by our having made machines of ourselves in the first place (all we care about is economy, work, etc. We've lost the old notion of the "human spirit" at great cost).
More people in power and decision-making positions need to start broadening their reading lists with Goethe, William Blake, Novalis, and other awakened poets, and start letting some of these imaginative ideas about humanity's potential drive their lives more than purely economically motivated, crapped-out think pieces.
The creator of de_dust (amongst others) did some great write-ups [0] on how and why he made maps the way he did. It’s a lost art now that everything is squeezed through playtesting and data analysis.
https://arc-anglerfish-washpost-prod-washpost.s3.amazonaws.c...