What is the max token throughput when batching? Lots of agentic workflows (not just vibe coding) run many inferences in parallel.
It seems like every time someone does an AI hardware “review” we end up with figures for just a single instance, which simply isn’t how the target demographic for a 40k cluster is going to use it.
Jeff, I love reading your reviews, but can’t help but feel this was a wasted opportunity for some serious benchmarking of LLM performance.
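For anyone wondering what I mean, here's a rough sketch of the measurement I'd want to see. It assumes an OpenAI-compatible HTTP endpoint (llama.cpp, vLLM, Ollama, etc.); the URL and model name are placeholders for whatever the review box is serving:

```python
# Toy concurrent-throughput benchmark: fire N identical requests in
# parallel and report aggregate completion tokens per second.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
MODEL = "llama-3.1-70b"                            # placeholder model name
CONCURRENCY = 32                                   # parallel "agent" requests

def one_request(_):
    r = requests.post(URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Summarize the plot of Hamlet."}],
        "max_tokens": 256,
    }, timeout=600)
    r.raise_for_status()
    return r.json()["usage"]["completion_tokens"]

start = time.time()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    tokens = list(pool.map(one_request, range(CONCURRENCY)))
elapsed = time.time() - start

print(f"{CONCURRENCY} parallel requests: {sum(tokens)} tokens "
      f"in {elapsed:.1f}s = {sum(tokens)/elapsed:.1f} tok/s aggregate")
```

Sweeping CONCURRENCY from 1 up to 64+ is the interesting part: the single-stream number tells you almost nothing about where aggregate throughput saturates, and that's the figure a cluster buyer actually cares about.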
I note the lack of human portraits in the example cases.
My experience with all of these solutions to date (including whatever Apple is currently using) is that, when viewed stereoscopically, the people end up looking like 2D cutouts against the background.
I haven't seen this particular model in use stereoscopically so I can't comment as to its effectiveness, but the lack of a human face in the example set is likely a bit of a tell.
Granted, they do call it "Monocular View Synthesis", but I'm unclear as to what its accuracy or real-world use would be if you can't combine two views to form a convincing stereo pair.
I'm not sure how the depth estimation alone translates into the view synthesis, but the current on-device implementation is definitely not convincing for literally any portrait photograph I have seen.
True stereoscopic captures are convincing statically, but don't provide the parallax.
Good monocular depth estimation is crucial if you want to make a 3D representation from a single image. Ordinarily you have images from several camera poses and can create the Gaussian splats using triangulation; with a single image you have to guess the z position for them.
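To make the geometry concrete, here's a minimal sketch of the single-image case: given a (possibly estimated) depth map and pinhole intrinsics, each pixel is back-projected to a 3D point, and the depth value is exactly the "guessed" z that multi-view triangulation would otherwise pin down. The intrinsics here are placeholder values:

```python
# Back-project a depth map to 3D points with a pinhole camera model.
# fx, fy, cx, cy are placeholder intrinsics; `depth` would come from a
# monocular depth estimator in the single-image case.
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # pixel column -> camera-space X
    y = (v - cy) * z / fy   # pixel row    -> camera-space Y
    return np.stack([x, y, z], axis=-1)  # (h, w, 3) candidate splat centers

depth = np.full((480, 640), 2.0)  # pretend everything is 2 m away
points = unproject(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(points.shape)  # (480, 640, 3)
```

With two calibrated views you would solve for z from correspondences instead; with one view, any error in the estimated depth shows up directly in the stereo result, and a whole region assigned nearly constant z is exactly the "cardboard cutout" effect described above.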
For selfies, I think iPhones with Face ID use the TrueDepth camera hardware to measure Z position. That’s not full camera resolution, but it will definitely help.
Sure you can. A relay operator absolutely can censor what goes through their relay. More to the point, you can't even prove that such censorship has occurred.
Nostr is censorship resistant in that you can publish to multiple relays, but that is far from censorship-proof.
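That resistance is literally just "send the same signed event to more than one relay". A minimal sketch of the client side, assuming the `websockets` library and an already-signed NIP-01 event (the relay URLs are placeholders, and the id/signing step is omitted):

```python
# Publish one signed Nostr event to several relays; censorship by any
# single relay only hides the event from that relay's readers.
import asyncio
import json
import websockets

RELAYS = [
    "wss://relay.example-one.com",
    "wss://relay.example-two.com",
    "wss://relay.example-three.com",
]

async def publish(relay_url, event):
    try:
        async with websockets.connect(relay_url) as ws:
            await ws.send(json.dumps(["EVENT", event]))  # NIP-01 publish
            print(relay_url, await ws.recv())            # relay's OK/NOTICE reply
    except Exception as e:
        print(relay_url, "failed:", e)  # one relay down/censoring != event gone

async def main(event):
    await asyncio.gather(*(publish(r, event) for r in RELAYS))

# asyncio.run(main(signed_event))  # signed_event built and signed elsewhere
```

The flip side is exactly the fragmentation point below: readers only ever see the union of whatever relays they happen to query.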
The problem is that (to use the comparisons given in the article) Nostr is a statically peered superpeer.
All the "downsides" of a superpeer (as the article says, "centralisation with extra steps"), but without the benefit of dynamic peering, which results in incomplete routing.
i.e. by its nature, Nostr produces a fragmented network, which ends up looking very much like a federated network, albeit a more interconnected one.
That's not necessarily a bad thing, but it's a bit of a confused article, IMHO.
That's true. The hope is that users will favor generalist / unbiased relays (less fragmentation by design) rather than heavily biased / restricted ones. Maybe even fund them: I will pay you as long as you don't start banning large swathes of the network just because you don't like what they say.
Users you follow can also advertise relays behind the scenes, so it's more probable that, if you follow a coherent set of users, you will converge on a coherent subset of relays that doesn't really feel fragmented.
I believe so? A domain can go unrenewed for many reasons, such as the death of the registrant. The domain can then be reregistered and the email addresses effectively "hijacked", leading to impersonation of the original owners.
A reliable email provider with a policy of never recycling email addresses would mean that scenario won't happen. Obviously they could change that policy, but if that happens while I'm still around, I can tell everyone to migrate to a new address.
This is an attempt to protect against a legitimate security concern.
A registrar isn't going to keep your domain active if you don't renew.
Maybe you are confused about what I mean by email service provider.
I am referring to an email provider that uses its own domain, and provides you with an email account - like gmail, live, hey (the examples I have given). I thought I made that clear when I said: "It would be nice to have a memorable user-part, so nothing oversubscribed would be ideal."
> I am referring to an email provider that uses its own domain
Well, where do you think they get their domain from? The same place you do, a registrar. You're just adding a layer.
For example, you mention hey.com.... do a `dig soa hey.com` and you'll see they're registered w/ cloudflare. If you register with cloudflare too, you will have the same chance of having your domain ripped away from you as hey.com does.
The email service provider isn't particularly special in that sense. That said, it is true that there's a lot about infrastructure that people can use help with.
So, if you're not familiar w/ technicalities such as these I wouldn't blame you for outsourcing. It's a big world and we can't do it all ourselves. Good luck!
> do a `dig soa hey.com` and you'll see they're registered w/ cloudflare
Sorry, this should be a whois lookup to see their registrar; the dig will show you who provides their DNS. In hey.com's case both are the same... cloudflare.
My point remains the same though. The worry of losing your address should remain largely the same because email depends on dns.
> The worry of losing your address should remain largely the same
You should actually worry more about losing your address because now there are two people who can screw you... the ESP (email service provider) _and_ their registrar.
If you hire the ESP to host email on your own domain though (or self host), then you can screw yourself (this is always a possibility) or the registrar can screw you... but you can always just switch ESPs if they're criminals or incompetent. This is what I was referring to when I said this:
> just shuffling things around (probably in the wrong direction).
... in the first reply. Phew... what a long strange trip! I hope the picture is clearer now.
Now I have to leave you to your own devices, sorry.
Don't worry about it - clearly I am not explaining it well enough for you to understand. It is a well-documented security concern, so feel free to do your own research on why, as we are just going in circles here.
VACUUM FULL is about cleaning up things above the level of a single page. Moving stuff around within a page doesn't let you reclaim space at the OS level, nor does it "compact" tuples onto fewer pages.
But it's important for normal vacuum to compact the tuples on the page, otherwise the space of deleted tuples couldn't effectively be reused. Imagine a page that's entirely filled with 100-byte tuples, then every other tuple is deleted. Then, after a vacuum, a single 108-byte tuple should be inserted onto the page. Without compacting the space in the page during the vacuum, there would not be any space for that larger tuple.
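A toy model of that example, just to illustrate the arithmetic (the page layout is heavily simplified; real PostgreSQL pages have headers, line pointers, and alignment padding):

```python
# Toy page model: tuples live at byte offsets in a fixed-size page.
# Deleting alternate 100-byte tuples leaves many 100-byte holes; without
# compaction, no single hole fits a 108-byte tuple, even though the page
# has plenty of total free space.
PAGE_SIZE = 8192

tuples = [(off, 100) for off in range(0, 8100, 100)]  # (offset, size), page nearly full
tuples = tuples[::2]  # "delete" every other tuple -> 100-byte gaps everywhere

def largest_hole(tuples, page_size):
    free, cursor = [], 0
    for off, size in sorted(tuples):
        if off > cursor:
            free.append(off - cursor)
        cursor = off + size
    if cursor < page_size:
        free.append(page_size - cursor)
    return max(free, default=0)

print("before compaction, largest hole:", largest_hole(tuples, PAGE_SIZE))   # 100

# "Vacuum" compaction: slide all live tuples to the front of the page.
compacted, cursor = [], 0
for _, size in sorted(tuples):
    compacted.append((cursor, size))
    cursor += size
print("after compaction, largest hole:", largest_hole(compacted, PAGE_SIZE))  # 4092
```

Plain VACUUM does this slide within each page (so the 108-byte insert fits); VACUUM FULL is what it takes to move tuples between pages and hand whole pages back to the OS.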
> Wireguard and remove Apple and Tailscale from the equation entirely
I agree you could send them a preconfigured Pi, but can we stop pretending Tailscale is just WireGuard: there is a lot of convenience in the NAT traversal that you'd otherwise need router config and/or a publicly routable server to achieve.
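Concretely, a bare-bones WireGuard setup needs at least one side with a stable, reachable endpoint. A sketch of the remote peer's config, with placeholder keys, addresses, and hostname:

```ini
# /etc/wireguard/wg0.conf on the remote Pi (all values are placeholders)
[Interface]
PrivateKey = <pi-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
# This is the catch: plain WireGuard needs a publicly reachable endpoint
# (or port forwarding on your router) for the handshake.
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# Keep the NAT mapping alive so the other side can reach back in.
PersistentKeepalive = 25
```

Tailscale's value is precisely that it makes the `Endpoint` line (and the DERP relays behind it when traversal fails) someone else's problem.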
> A natural question is how this differs from something like Tor. In a nutshell, Tor offers identity protection before you enter a P2P network, and Dandelion offers identity protection inside the P2P network.
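For intuition, Dandelion's relay decision is tiny: in the "stem" phase a node forwards a transaction to a single peer, and at each hop it may switch to the "fluff" phase and diffuse it to everyone. A toy sketch; the 0.1 switch probability and uniform-random peer choice are simplifications of the real protocol (Dandelion++ / BIP 156):

```python
# Toy Dandelion relay: in the "stem" phase a transaction hops to ONE
# random peer (hiding its origin inside the P2P graph); in the "fluff"
# phase it diffuses to all peers like normal gossip.
import random

FLUFF_PROB = 0.1  # per-hop chance of switching from stem to fluff (simplified)

class Peer:
    def __init__(self, name):
        self.name = name
        self.peers = []  # wired up after construction

    def receive(self, tx, in_stem_phase):
        if in_stem_phase and random.random() > FLUFF_PROB:
            nxt = random.choice(self.peers)
            print(f"{self.name} stems {tx} -> {nxt.name}")
            nxt.receive(tx, in_stem_phase=True)
        else:
            print(f"{self.name} fluffs {tx} to {len(self.peers)} peers")
            # real diffusion would now gossip tx onward to all peers

nodes = [Peer(f"n{i}") for i in range(8)]
for n in nodes:
    n.peers = [p for p in nodes if p is not n]
nodes[0].receive("tx-abc", in_stem_phase=True)  # origin hidden along the stem
```

The anonymity comes from the stem: by the time the transaction is broadcast, it has taken a random walk away from its creator, so the first "loud" node is not the origin.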