I wrote an extended essay on “Rosencrantz and Guildenstern Are Dead” in high school… 1998-1999 period. I loved his screenplays even though I’m not a great fan of theatre in general. 88 is a ripe old age but it’s still deeply saddening.
You're not losing much; the film is really good, and features Oldman... among other very good performances. There is an elitist cohort bent on signaling how they cannot stand films and how live theater is superior, blah blah. Well, usually the films are vastly superior to stage productions, in both interpretation of the script and complexity of execution. The film starring Oldman is an instance of that, I think. But it's all secondary to reading the text itself.
R&G is a nice play, but honestly it doesn't come off the page nicely. The same is true of late Beckett. I'm a huge fan of these guys, but I never understood the obsession literature teachers have with only a handful of plays, like R&G or Waiting for Godot. These are very specific, nerdy, I would even go as far as to call them superficial, pieces of art. At any rate, Stoppard is best appreciated when read off the page, or on radio; he's just one of those guys. Indian Ink is really good.
Please don’t. There’s enough AI slop sloshing around already. What’s the endpoint of this? Machines pretending to be human speaking to machines that pretend to ignore the fact they’ve guessed they’re speaking to a machine?
For the iOS world, ideally the Messages app would be mandated to become a generic interface receptive to protocol plug-ins, interoperating with all messaging networks; and it, too, would be replaceable in that role.
I'm not sure Apple can be a good steward of such an open, plug-in-based solution - they would always put in some restrictions to make the process very complicated and inaccessible to other platforms and developers.
At the same time, making it possible to choose WhatsApp as the default messaging app has been a great relief for those not locked into Messages.
Well the thing is: if they were particularly obnoxious about their implementation, they could be replaced. I’m looking forward to a multi-protocol messaging client that implements other protocols as plugins. If and when such a thing arrives, I’m setting it to be my default.
A complication is that iMessage supports a ton of collaboration features that don't (and largely can't) exist across other messaging apps. The messaging bits will have the same nerfed interface as SMS/RCS because of missing capabilities.
Despite having the appearance of a messaging app, iMessage operates as a backbone for a lot of OS capability that is surprisingly deep.
That seems like a good idea in the sense that it's better than separate apps for everything, but it's also probably the wrong level of abstraction. For example: what happens if you try to create a group chat containing an RCS user, a WhatsApp user, and a Telegram user? Ideally it would just work, but I don't see how that's possible without support for such a thing at a deeper level than just the UI layer.
Usually when you’re running at a loss you generate positive externalities (e.g. the old national railway and telecoms firms in the UK), but these guys seem remarkable in that they’re not even generating excess value for their customers.
It's unpredictable what will get announced tomorrow morning. They leverage that space. It's like running the Manhattan Project, but in the private sector, which is very rare. So the known frames don't fit.
Appreciate the honesty — this is exactly the feedback I need.
Core app is free, subscription is just for trends. But yeah, I'm not married to the model. If enough people prefer one-time purchase, I'd consider adding that option.
Okay, so here’s the thing: the ‘freemium’ model is always going to cause friction. The user will be using the app and suddenly run into an apparently arbitrary paywall, and that will almost always elicit some kind of ire along the lines of “I want to see this thing, this thing is already there, but now they’re asking me to pay for what is already constructed and therefore has zero marginal cost to the developer” (or some inchoate variation thereof). Basically it generates frustration.
My take is that this is a fair app for the use case you posit: determining sunlight exposure in regions where not much is available. Other use cases come to mind: for example, beach-goers who are keen to make sure they don’t overexpose themselves but gradually build up a tan. It’s data they could piece together themselves numerically or (to be perfectly honest) that we, being humans who have evolved for millennia under sunlight, can kind of intuit ourselves.
I’d say it’s roughly a 1.99 euro purchase fee for the ‘trends’ feature. It may even be 1.99 euros for the app itself rather than half-free half-paid, but it’s definitely not something I want a large recurring subscription for. I can look at the sky and I can look at my skin, and I can figure out the rest. The only value is in quantifying it, and so the whole thing is meaningless unless it tells me something I don’t intuitively already know.
I like reversing statements just to put them in context. For example, “up to ten thousand people are affected” can be converted to “at most ten thousand people were affected”, and in this case it becomes “physicists (except one guy speaking to the Daily Mail) don’t speculate that consciousness may be part of the universe”.
About a month ago I had a rather annoying task to perform, and I found an NPM package that handled it. I threw “brew install npm” or whatever onto the terminal and watched a veritable deluge of dependencies download and install. Then I typed in ‘npm ’ and my hand hovered on the keyboard after the space as I suddenly thought long and hard about where I was on the risk/benefit curve, and then I backspaced and typed “brew uninstall npm” instead, and eventually strung together an old-school unix utilities pipeline with some awk thrown in. Probably the best decision of my life, in retrospect.
This is why you want containerisation or, even better, full virtualisation. Running programs built on node, python or any other ecosystem that makes installing tons of dependencies easy (and thus frustratingly common) on your main system where you keep any unrelated data is a surefire way to get compromised by the supply chain eventually. I don't even have the interpreters for python and js on my base system anymore - just so I don't accidentally run something in the host terminal that shouldn't run there.
Why not? Make a bash alias for `npm` that runs it with `bwrap` to isolate it to the current directory, and you don't have to think about it again. Distributions could have a package that does this by default. With nix, you don't even need npm in your default profile, and can create a sandboxed nix-shell on the fly so that's the only way for the command to even be available.
Most of your programs are trusted, don't need isolation by default, and are more useful when they have access to your home data. npm is different. It doesn't need your documents, and it runs untrusted code. So add the 1 line you need to your profile to sandbox it.
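Concretely, that one line could look something like the following (a sketch, written as a shell function for readability; the bind mounts are assumptions that vary by distribution, and the DNS/TLS binds are only there so the registry stays reachable):

```bash
# Sketch only: confine npm so the only writable path is the current
# project directory. Adjust the read-only binds for your distribution.
npm() {
  bwrap \
    --ro-bind /usr /usr \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --ro-bind /etc/resolv.conf /etc/resolv.conf \
    --ro-bind /etc/ssl /etc/ssl \
    --tmpfs /tmp \
    --dev /dev \
    --proc /proc \
    --bind "$PWD" "$PWD" \
    --chdir "$PWD" \
    --unshare-all \
    --share-net \
    /usr/bin/npm "$@"
}
```

--unshare-all drops every namespace bwrap can unshare; --share-net adds networking back so installs still work. Install scripts still run, but all they can see or write is the project itself.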
The right way (technically) and the commercially viable way are often diametrically opposed. Ship first, ask questions later, or, move fast and break things, wins.
Here I go again: Plan9 had per-process namespaces in 1995. The namespace for any process could be manipulated to see (or not see) any parts of the machine that you wanted or needed.
I really wish people had paid more attention to that operating system.
The tooling for that exists today in Linux, and it is fairly easy to use with podman etc.
K8s's choices cloud that a little, but take VS Code completions as an example: I have a pod that systemd launches on request.
I have nginx receive the socket from systemd, and it communicates with llama.cpp through a socket on a shared volume. As nginx inherits the listening socket from systemd, it doesn't have internet access either.
If I need a new model I just download it to a shared volume.
Llama.cpp has no internet access at all, and is usable on an old 7700k + 1080ti.
People thinking that the k8s concept of a pod, with shared UTS, net, and IPC namespaces, is all a pod can be confuses the issue.
The same unshare call that runc uses is very similar to how clone() can drop the parent’s IPC namespace, etc.
I should probably spin up a blog on how to do this, as I think it is the way forward even for long-lived services.
The information is out there but scattered.
If it is something people would find useful please leave a comment.
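In the meantime, the shape of it is roughly this (a sketch: the names, image, and paths are placeholders, and my real setup drives all of it through systemd units rather than ad-hoc commands):

```bash
# Shared volume that holds the models and the unix socket.
podman volume create llm-shared

# The pod gets no network namespace at all; the only way in or out
# is the unix socket on the shared volume that nginx proxies to.
podman pod create --name llm --network none

# llama.cpp server runs inside the pod, reachable only via /srv.
podman run -d --pod llm --name llama -v llm-shared:/srv \
  localhost/llama-cpp:latest

# Downloading a new model is just a copy onto the volume.
podman cp ./new-model.gguf llama:/srv/
```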
Plan9 had this by default in 1995, no third party tools required. You launch a program, it gets its own namespace, by default it is a child namespace of whatever namespace launched the program.
I should not have to read anything to have this. Operating systems should provide it by default. That is my point. We have settled for shitty operating systems because it’s easier (at first glance) to add stuff on top than it is to have an OS provide these things. It turns out this isn’t easier, and we’re just piling shit on top of shit because it seems like the easiest path forward.
Look how many lines of code are in Plan9 then look at how many lines of code are in Docker or Kubernetes. It is probably easier to write operating systems with features you desire than it is to write an application-level operating system like Kubernetes which provide those features on top of the operating system. And that is likely due to application-scope operating systems like Kubernetes needing to comply with the existing reality of the operating system they are running on, while an actual operating system which runs on hardware gets to define the reality that it provides to applications which run atop it.
You seem to have a misunderstanding of what namespaces accomplished on plan9, and of the fact that it was extending Unix concepts and assembling them in another way.
As someone who actually ran plan9 over 30 years ago, I assure you that if you go back and look at it, the namespaces were intended to abstract away the hardware limitations of the time, to build distributed execution contexts out of a large assembly of limited resources.
And if you have an issue with Unix sockets you would have hated it, as it didn’t even have sockets and everything was about files.
Today we have a different problem, where machines are so large that we have to abstract them into smaller chunks.
Plan9 was exactly the opposite: when your local CPU was limited you would run the cpu command and use another host, and guess what, it handed your file descriptors over to that other machine.
The goals of plan9 were dramatically different from isolation.
But the OSes you seem to hate so much implemented many of the plan9 ideas, like /proc, union file systems, message passing etc.
Also note I am not talking about k8s in the above, I am talking about containers and namespaces.
K8s is an orchestrator; the kernel functionality may be abstracted by it, but K8s is just a user of those plan9-inspired ideas.
Netns, pidns, etc. could be used directly, and you can call unshare(2)[0] directly, use an OCI runtime like crun, or use podman.
Heck, you could call the ip(8) command and run your app in an isolated network namespace with a single command if you wanted to.
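To make that concrete (the wrapped commands are placeholders):

```bash
# util-linux unshare(1): fresh PID and mount namespaces, no network.
sudo unshare --net --pid --mount --fork --mount-proc -- ./untrusted-tool

# Or with ip(8): create an empty network namespace and run inside it.
sudo ip netns add isolated
sudo ip netns exec isolated curl https://example.com  # fails: no network
```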
Kubernetes is an operating system on top of an operating system. Its complexity is insane.
The base OS should be providing a lot/all of these features by default.
Plan9 is as you describe out of the box, but what I want is what plan9 might be if it were designed today, and with a little work it could be that. Isolation would not be terribly difficult to add to it. The default namespace a process gets could limit it to its own configuration directory, its own data directory, and standard in and out; the sketch below shows roughly what that view would contain. And imagine every instance of that application getting its own distinct copy of that namespace: none of them can talk to each other or scan any disk. They only do work sent to them via stdin, as dictated in the srv configuration for that software.
Everything doesn’t HAVE to be a file, but that is a very elegant abstraction when it all works.
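You can approximate that default on Linux today with bwrap, which gives a feel for what such a per-instance namespace might contain (someapp and its paths are hypothetical):

```bash
# Hypothetical default view for one instance of an app: its config,
# its data, the binaries it needs, and stdio. Nothing else exists.
bwrap \
  --unshare-all \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --dev /dev \
  --proc /proc \
  --ro-bind "$HOME/.config/someapp" /cfg \
  --bind "$HOME/.local/share/someapp" /data \
  /usr/bin/someapp
```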
> call the ip(8) command and run your app in an isolated network namespace with a single command if you wanted to.
I should not have to opt in to that. Processes should be isolated by default. Their view of the computer should be heavily restricted; look at all these goofy NPM packages running malware, capturing credentials stored on disk. Why can an NPM package see any of that stuff by default? Why can it see anything else on disk at all? Why is everything wide fucking open all the time?
Because containers on Linux will never be able to provide this: they are fundamentally insecure from the kernel layer up, and adding another OS stack on top (k8s) will never address the underlying mess that Linux containers are.
The fact that tools like docker, podman and bubblewrap exist and work shows that the OS supports it; it's just that using the OS APIs directly sucks. Otherwise the only "safe" implementations of such features would need a full software VM.
If using software securely were really a priority, everyone would be rustifying everything, and running everything on separate physical machines with restrictive AppArmor, SELinux, TOMOYO and Landlock profiles, with mTLS everywhere.
It turns out that in Security, "availability" is a very important requirement, and "can't run your insecure-by-design system" is a failing grade.
> The fact that tools like docker, podman and bubblewrap exist and work shows that the OS supports it
Only via virtualization in the case of macOS. Somehow, even Windows has native container support these days.
A much more secure system can be made I assure you. Availability is important, but an NPM package being able to scan every attached disk in its post-installation script and capture any clear text credentials it finds is crossing the line. This isn’t going to stop with NPM, either.
One can have availability and sensible isolation by default. Why we haven’t chosen to do this is beyond me. How many people need to get ransomwared because the OS lets some crappy piece of junk encrypt files it should not even be able to see without prompting the user?
This sounds very interesting to me. I'd read through that blog post, as I'm working on expanding my K8s skills - as you say, the knowledge is very scattered!
That can only go so far. Assuming there is no container/VM escape: most software is built to be used. You can protect yourself from malicious dependencies in the build step, but at some point you are going to do a production build that needs to run on a production system, with access to production data. If you do not trust your supply chain, you need to fix that.
If you'll excuse me, I have a list of 1000 artifacts I need to audit before importing them into our dependency store.
Containers don't help much when you deploy malware into your systems. Containers are not, and never will be, security tools on Linux; they lack many of the primitives needed to pull off that type of functionality.
It's funny because techies love to tell people that common sense is the best antivirus, don't click suspicious links, etc. only to download and execute a laundry list of unvetted dependencies with a keystroke.
The lesson, surely, is 'don't use web tech, aimed at solving browser incompatibility issues, for local scripting'.
When you're running NPM tooling you're running libraries primarily built for those problems, hence the torrent of otherwise unnecessary complexity (polyfills and the like), all running on a JS engine that doesn't have a browser attached to it.
I'm sorry, but this is just incorrect. Have you ever heard of ljharb[0]? The NPM ecosystem is rife with polyfills[1]. I don't know how you can make a distinction about which libraries would be used for "local scripting", as I don't think many library authors make that distinction.
[0] - TC39 member who is self-described as "obsessed with backwards compatibility": https://github.com/ljharb
Yes. I'm on TC39 as well, and I've talked to Jordan about this topic.
It's true that there are a few people who publish packages on npm including polyfills, Jordan among them. But these are a very small fraction of all packages on npm, and none of the compromised packages were polyfills. Also, he cares about backwards compatibility _with old versions of node_; the fact that JavaScript was originally a web language, as the grandparent comment says, is completely irrelevant to the inclusion of those specific polyfills.
Polyfills are just completely irrelevant to this discussion.