> i dont know what the solution here is other than stop using npm
Personally I think we need to start adding capability based systems into our programming languages. Random code shouldn't have "ambient authority" to just do anything on my computer with the same privileges as me. Like, if a function has this signature:
function add(a: int, b: int) -> int
Then it should only be able to read its input, and return any integer it wants. But it shouldn't get ambient authority to access anything else on my computer. No network access. No filesystem. Nothing.
Philosophically, I kind of think of it like function arguments and globals. If I call a function foo(someobj), then function foo is explicitly given access to someobj. And it also has access to any globals in my program. But we generally consider globals to be smelly. Passing data explicitly is better.
But the whole filesystem is essentially available as a global that any function, anywhere, can access. With full user permissions. I say no. I want languages where the filesystem itself (or a subset of it) can be passed as an argument. And if a function doesn't get passed a filesystem, it can't access a filesystem. If a function isn't passed a network socket, it can't just create one out of nothing.
I don't think it would be that onerous. The main function would get passed "the whole operating system" in a sense - like the filesystem and so on. And then it can pass files and sockets and whatnot to functions that need access to that stuff.
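A toy sketch of what that could look like, in TypeScript for familiarity. Everything here (the `FileSystem` and `Network` interfaces, the wiring) is hypothetical, and TypeScript can't actually enforce any of it today, since any function can simply `import fs` behind your back; real enforcement needs language-level support:

```typescript
// Hypothetical capability interfaces -- not a real API.
interface FileSystem {
  readFile(path: string): string;
}
interface Network {
  connect(host: string, port: number): void;
}

// No capabilities in the signature: all this function can do
// is compute on its arguments.
function add(a: number, b: number): number {
  return a + b;
}

// This function is explicitly handed a (possibly narrowed)
// filesystem capability. No argument, no disk access.
function loadConfig(fs: FileSystem): string {
  return fs.readFile("config.json");
}

// Only main() ever holds "the whole operating system"; it passes
// narrowed capabilities down to the code that needs them.
function main(fs: FileSystem, net: Network): void {
  console.log(add(2, 3));      // needs no capabilities
  console.log(loadConfig(fs)); // filesystem explicitly granted
}

// A stub filesystem standing in for the real one.
const stubFs: FileSystem = {
  readFile: (path) => `contents of ${path}`,
};
main(stubFs, { connect: () => {} });
```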
If we build something like that, we should be able to build something like npm but where you don't need to trust the developers of 3rd party software so much. The current system of trusting everyone with everything is insane.
I couldn't agree with you more. The thing is, our underlying security models protect systems from their users, but do nothing to protect user data from the programs they run. A capability-based security model would fix that.
Only on desktop. Mobile has this sorted. Programs have unrestricted access to their own files, and can access the shared file space only through files the user specifically selects.
I think there's 2 kinds of systems we're talking about here:
1. Capabilities given to a program by the user. Eg, "This program wants to access your contacts. Allow / deny". But everything within a program might still have undifferentiated access. This requires support from the operating system to restrict what a program can do. This exists today in iOS and Android.
2. Capabilities within a program. So, if I call a function in a 3rd party library with the signature add(int, int), it can't access the filesystem or open network connections or access any data that's not in its argument list. Enforcing this would require support from the programming language, not the operating system. I don't know of any programming languages today which do this. C and Rust both fail here, as any function in the program can access the memory space of the entire program and make arbitrary syscalls.
Application level permissions are a good start. But we need the second kind of fine-grained capabilities to protect us from malicious packages in npm, pip and cargo.
I would also say there is a 3rd class, which are distributed capabilities.
Look at a mobile program such as GadgetBridge, which synchronizes data between a phone and a watch, and consider the number of permissions it requires: contacts, bluetooth pairing, notifications, yadda yadda, the list goes on.
Systems like E-Lang wouldn't bundle all these up into a single application. Your watch would have some capabilities, and those would interact directly with capabilities on the phone. I feel like if you want to look at our current popular mobile OS's as capability systems, the capabilities are pretty coarse grained.
One thing I would add about compilers and npm, pip, cargo: compilers are transformational programs. They really only need read and write access to a finite set of inputs and outputs. In that sense, even capabilities are overkill, because honestly they only need the bare minimum of IO; a batch processing system could do better than our mainstream OS security model.
Ironically, any c++ app I've written on windows does exactly this. "Are you sure you want to allow this program to access networking?" At least the first time I run it.
Yeah, but if that app was built using a malicious dependency that only relied on the same permissions the app already uses, you’d just click “Yes” and move on and be pwned.
And if you use bun or nodejs, you also have out of the box access to an HTTP server, filesystem APIs, gzip, TLS and more. And if you're working in a browser, almost everything in jquery has since been pulled into the browser too. Eg, document.querySelector.
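For instance, things that once needed third-party packages now ship as Node built-ins (a small sketch using the `node:zlib` and `node:http` modules):

```typescript
import { createServer } from "node:http";
import { gzipSync, gunzipSync } from "node:zlib";

// gzip round-trip with no external dependency
const compressed = gzipSync(Buffer.from("hello"));
const restored = gunzipSync(compressed).toString();
console.log(restored); // "hello"

// an HTTP server, also straight from the standard library
const server = createServer((_req, res) => {
  res.end("ok");
});
```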
Of course, web frameworks like react aren't part of the standard library in JS. Nor should they be.
What more do you want JS to include by default? What do java, python and go have in their standard libraries that JS is missing?
> When people say "js doesn't have a stdlib" they mean "js doesn't have a robust general purpose stdlib like C++ ...
It does though! The JS stdlib even includes an entire wasm runtime. It's huge!
Seriously. I can barely think of any features in the C++ stdlib that are missing from JS. There's a couple, like JS is missing std::priority_queue. But JS has soooo much stuff that C++ is missing. It's insane.
That's what I assume people mean, because they can't mean trivial stuff like "left-pad" and "is-even" because why would that be part of any language's standard library?
Weird that the JS community relies entirely on external libraries with arbitrarily deep and fragile dependency trees, whose default failure mode is wrecking the entire web, because JS "doesn't have a stdlib" for this sort of thing then. ¯\_(ツ)_/¯
It is. Though in their defence, I think this API was added after the leftpad fiasco.
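Right, `String.prototype.padStart` (standardized in ES2017, after left-pad was pulled in 2016) covers the same use case:

```typescript
// The built-in padStart does what the left-pad package did:
// pad the start of a string out to a target length.
const padded = "5".padStart(3, "0");
console.log(padded); // "005"
```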
Also not many people seem to know this, but in the aftermath of leftpad being pulled from npm, npmjs changed their policy to disallow module authors from ever pulling old packages, outside a few very exceptional circumstances. The leftpad fiasco can’t happen again.
I've worked in plenty of javascript shops and unfortunately it's not so far off the mark. It's quite common to see JS projects with thousands of transitive dependencies. I've seen the same in python too.
It's funny how Py has less of this reputation just because the package manager is so broken that you might have a hard time adding so many deps in the first place. (Maybe fixed with uv, but that's relatively new and not default.)
> They're an end-run around the underlying version control system
I assume by "underlying version control system" you mean apt, rpm, homebrew and friends? They don't solve this problem either. Nobody in the opensource world is auditing code for you. Compromised xz still made it into apt. Who knows how many other packages are compromised in a similar way?
Also, apt and friends don't solve the problem that npm, cargo, pip and so on solve. I'm writing some software. I want to depend on some package X at version Y (eg numpy, serde, react, whatever). I want to use that package, at that version, on all supported platforms. Debian. Ubuntu. Redhat. MacOS. And so on. Try and do that using the system package manager and you're in a world of hurt. "Oh, your system only has official packages for SDL2, not SDL3. Maybe move your entire computer to an unstable branch of ubuntu to fix it?" / "Yeah, we don't have that python package in homebrew. Maybe you could add it and maintain it yourself?" / "New ticket: I'm trying to run your software in gentoo, but it only has an earlier version of dependency Y."
No, other trusted repositories are legitimately better because the maintainers built the software themselves. They don't purely rely on binaries from the original developer.
It's not perfect and bad things still make it through, but just look at your example - XZ. This never made it into Debian stable repositories and it was caught remarkably quickly. Meanwhile, we have NPM vulnerability after vulnerability.
Npm is all source based. Nobody is compiling binaries of JavaScript libraries. Cargo is the same.
I’m not really sure what you think a maintainer adds here. They don’t audit the code. A well written npm or cargo or pip module works automatically on all operating systems. Why would we need or want human intervention? To what? Manually add each package to N other operating systems? Sounds like a huge waste of time. Especially given the selection of packages (and versions of those packages) in every operating system will end up totally different. It’s a massive headache if you want your software to work on multiple Linux distros. And everyone wants that.
Npm also isn’t perfect. But npm also has 20x as many packages as apt does on Ubuntu (3.1M vs 150k). I wouldn’t be surprised if there is more malicious code on npm. Until we get better security tools, it’s buyer beware.
But do they audit the code? I say mostly no. They grab the source, try to compile it. Develop patches to fix problems on the specific platform. Once it works, passes the tests, it's done. Package created, added to the repo.
Even OpenBSD, famous for auditing their code, doesn't audit packages. Only the base system.
While I haven't audited line by line everything that I've uploaded in Debian, I do look around and for new versions I check the diff with the old version.
nix is designed to support many versions of your dependencies on the same system by building a hash of your dependency graph and using that as a kind of dependency namespace for the various applications you have installed. The result is that you can run many versions of whatever application you want on the same system.
> Nobody in the opensource world is auditing code for you
That's still true of nix. Whether you should trust a package is on you. But nix solves everything else listed here.
> I want to use that package, at that version, on all supported platforms...
Nix derivations will fail to build if their contents rely on the FHS (https://refspecs.linuxfoundation.org/FHS_3.0/fhs/index.html), so if a package tries to blindly trust that `/bin/bash` is in fact a compatible version of what you think it is, it won't make it into the package set. So we can each package our own bash script, and instead of running on whatever "bash" is on the path, each will run on the precise version of bash that we packaged with it. This goes for everything though: compilers, linkers, interpreters, packages that you might otherwise have installed with pip or npm or cargo... nix demands a hash for it up front. It could still have been malicious the whole time, but it can't suddenly become malicious at a later date.
> ... Debian. Ubuntu. Redhat. MacOS. And so on. Try and do that using the system package manager and you're in a world of hurt.
If you're on NixOS, nix is your system package manager. If you're not, you can still install nix and use it on all of those platforms (not Windows; certain heroic folk are working on that, though WSL works).
> "Oh, your system only has official packages for SDL2, not SDL3. Maybe move your entire computer to an unstable branch of ubuntu to fix it?"
I just installed SDL3, nix put it in `/nix/store/yla09kr0357x5khlm8ijkmfm8vvzzkxb-sdl3-3.2.26`. Then I installed SDL2, nix put it in `/nix/store/a5ybsxyliwbay8lxx4994xinr2jw079z-sdl2-compat-2.32.58`. If I want one or the other at different times, nix will add or remove those from my path. I just have to tell nix which one I want...
$ nix shell nixpkgs#sdl2-compat
$ # now I have sdl2
$ exit
$ nix shell nixpkgs#sdl3
$ # now I have sdl3
> "Yeah, we don't have that python package in homebrew. Maybe you could add it and maintain it yourself?"
All of the major languages have some kind of foo2nix adapter package. When I want to use a python package that's not in nixpkgs, I use uv2nix and nix handles enforcing package sanity on them (i.e. maps uv.lock, a python thing, into flake.lock, a nix thing). I've been dabbling with typescript lately, so I'm using pnpm2nix to map typescript libraries in a similar way.
The learning curve is no joke, but if you climb it, only the hard problems will remain (deciding if the package is malicious in the first place).
Also, you'll have a new problem. You'll be forever cursed to watch people shoot themselves in the foot with inferior packaging, you'll know how to help them, but they'll turn you down with a variant of "that looks too unfamiliar, I'm going to stick with this thing that isn't working".
Yep. This was the biggest thing that turned me off Go. I ported the same little program (some text based operational transform code) to a bunch of languages - JS (+ typescript), C, rust, Go, python, etc. Then compared the experience. How were they to use? How long did the programs end up being? How fast did they run?
I did C and typescript first. At the time, my C implementation ran about 20x faster than typescript. But the typescript code was only 2/3rds as many lines and much easier to code up. (JS & TS have gotten much faster since then thanks to improvements in V8).
Rust was the best of all worlds - the code was small, simple and easy to code up like typescript. And it ran just as fast as C. Go was the worst - it was annoying to program (due to a lack of enums). It was horribly verbose. And it still ran slower than rust and C at runtime.
I understand why Go exists. But I can't think of any reason I'd ever use it.
> I understand why Go exists. But I can't think of any reason I'd ever use it.
When you want your project to be able to cross-compile down to a static binary that the end user can simply download and run without any "installation" on any mainstream OS + CPU arch combination
From my M1 Mac I can compile my project for Linux, MacOS, and Windows, for x86 and ARM for each. Then I can make a new Release on GitHub and attach the compiled binaries. Then I can curl the binaries down to my bare Linux x86 server and run them. And I can do all of this natively from the default Go SDK without installing any extra components or system configurations. You don't even need to have Go installed on the recipient server or client system. Don't even need a container system either to run your program anywhere.
You cannot do this with any other language that you listed. Interpreted languages all require a runtime on the recipient system + library installation and management, and C and Rust lack the ability to do native out-of-the-box cross compilation for other OS + CPU arch combinations.
Go has some methods to implement enums. I never use enums in my projects so idk how the experience compares to other systems. But I'm not sure I would use that as the sole criterion to judge the language. And you can usually get performance on par with any other garbage collected language out of it.
When you actually care about the end user experience of running the program you wrote, you choose Go.
Rust gets harder with codebase size, because of the borrow checker.
Not to mention most of the communication libraries decided to be async only, which adds another layer of complexity.
I strongly disagree with this take. The borrow checker, and rust in general, keeps reasoning extremely local. It's one of the languages where I've found that difficulty grows the least with codebase size, not the most.
The borrow checker does make some tasks more complex, without a doubt, because it makes it difficult to express something that might be natural in other languages (things including self referential data structures, for instance). But the extra complexity is generally well scoped to one small component that runs into a constraint, not to the project at large. You work around the constraint locally, and you end up with a public (to the component) API which is as well defined and as clean (and often better defined and cleaner because rust forces you to do so).
I work in a 400k+ LOC codebase in Rust for my day job. Besides compile times being suboptimal, Rust makes working in a large codebase a breeze with good tooling and strong typechecking.
I almost never even think about the borrow checker. If you have a long-lived shared reference you just Arc it. If it's a circular ownership structure like a graph you use a SlotMap. It by no means is any harder for this codebase than for small ones.
Disagree. Having dealt with 40k+ LoC rust projects, the borrow checker is not an issue.
Async is an irritation but not the end of the world... You can write non-async code, I have done it... Honestly I am coming around on async after years of not liking it... I wish we didn't have function colouring but yeah... Here we are...
Funny, I explicitly waited to see async baked in before I even started experimenting with Rust. It's kind of critical to most things I work on. Beyond that, I've found that the async models in rust (along with tokio/axum, etc) have been pretty nice and clean in practice. Though most of my experience is with C# and JS/TS environments, the latter of which had about a decade of growing pains.
I still regularly use typescript. One problem I run into from time to time is "spooky action at a distance". For example, it's quite common to create some object and store references to it in multiple places. After all, the object won't be changed and it's often more efficient this way. But later, a design change results in me casually mutating that object, forgetting that it's being shared between multiple components. Oops! Now the other part of my code has become invalid in some way. Bugs like this are very annoying to track down.
It's more or less impossible to make this mistake in rust because of how mutability is enforced. The mutability rules are sometimes annoying in the small, but in the large they tend to make your code much easier to reason about.
C has multiple problems like this. I've worked in plenty of codebases which had obscure race conditions due to how we were using threading. Safe rust makes most of these bugs impossible to write in the first place. But the other thing I - and others - run into all the time in C is code that isn't clear about ownership and lifetimes. If your API gives me a reference to some object, how long is that pointer valid for? Even if I now own the object and I'm responsible for freeing it, it's common in C for the object to contain pointers to some other data. So my pointer might be invalid if I hold onto it too long. How long is too long? It's almost never properly specified in the documentation. In C, hell is other people's code.
Rust usually avoids all of these problems. If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
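A minimal TypeScript sketch of that "spooky action at a distance" (the `Config` shape is made up for illustration):

```typescript
interface Config {
  retries: number;
}

const shared: Config = { retries: 3 };

// Two components quietly hold references to the same object.
const componentA = { config: shared };
const componentB = { config: shared };

// Later, a "local" tweak inside component A...
componentA.config.retries = 0;

// ...silently changes component B's behaviour too.
console.log(componentB.config.retries); // 0, not 3
```

In rust, mutating a value while shared references to it exist is a compile error, so this class of bug never makes it to runtime.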
I wholeheartedly concur based on my experience with Rust (and other languages) over the last ~7 or so years.
> If I call a function which returns an object of type T, I can safely assume the object lasts forever. It cannot be mutated by any other code (since it's mine). And I'm not going to break anything else if I mutate the object myself. These are really nice properties to have when programming at scale.
I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information: who owns an object, what other callers can do with that object, the lifetime of that object in relation to other objects. And critically, in safe Rust, these are _guarantees_, which is the essence of real abstraction.
In large and/or complicated codebases, this kind of information is critical in languages without garbage collection, but even when I program in languages with garbage collection, I find myself wanting this information. Who is seeing this object? What do they know about this object, and when? What can they do with it? How is this ownership flowing through the system?
Most languages have little/no language-level notion of these concepts. Most languages only enforce that types line up nominally (or implement some name-identified interface), or the visibility of identifiers (public/private, i.e. "information hiding" in OO parlance). I feel like Rust is one of the first languages on this path of providing real program dataflow information. I'm confident there will be future languages that will further explore providing the programmer with this kind of information, or at least making it possible to answer these kinds of questions easier.
> I rarely see this mentioned in the way that you did, and I'll try to paraphrase it in my own way: Rust restricts what you can do as a programmer. One can say it is "less powerful" than C. In exchange for giving up some power, it gives you more information
Your paraphrasing reminds me a bit of structured vs. unstructured programming (i.e., unrestricted goto). Like to what you said, structured programming is "less powerful" than unrestricted goto, but in return, it's much easier to follow and reason about a program's control flow.
At the risk of simplifying things too much, I think some other things you said make for an interesting way to sum this up - Rust does for "ownership flow"/"dataflow" what structured programming did for control flow.
I really like this analogy. In a sense, C restricts what you can do compared to programming directly in assembly. Like, there's a lot of programs you can write in assembly that you can't write in the same way in C. But those restrictions also constrain all the other code in your program. And that's a wonderful thing, because it makes it much easier to make large, complex programs.
The restrictions seem a bit silly to list out because we take them for granted so much. But it's things like:
- When a function is called, execution starts at the top of the function's body.
- Outside of unions, variables can't change their type halfway through a program.
- Whenever a function is called, the parameters are always passed using the system calling convention.
- Functions return to the line right after their call site.
Rust takes this a little bit further, adding more restrictions. Things like "if you have a mutable reference to a variable, there are no immutable references to that variable."
I think it depends on the patterns in place and the actual complexity of the problems in practice. Most of my personal experience in Rust has been a few web services (really love Axum) and it hasn't been significantly worse than C# or JS/TS in my experience. That said, I'll often escape hatch with clone over dealing with (a)rc, just to keep my sanity. I can't say I'm the most eloquent with Rust as I don't have the 3 decades of experience I have with JS or nearly as much with C#.
I will say, that for most of the Rust code that I've read, the vast majority of it has been easy enough to read and understand... more than most other languages/platforms. I've seen some truly horrendous C# and Java projects that don't come close to the simplicity of similar tasks in Rust.
Rust indeed gets harder with codebase size, just like other languages. But claiming it is because of the borrow checker is laughable at best. The borrow checker is what keeps it reasonable, because it limits the scope of how one memory allocation can affect the rest of your code.
If anything, borrow checker makes writing functions harder but combining them easier.
> it was annoying to program (due to a lack of enums)
Typescript also lacks enums. Why wasn't it considered annoying?
I mean, technically it does have an enum keyword that offers what most would consider to be enums, but that keyword behaves exactly the same as what Go offers, which you don't consider to be enums.
It’s trivial to switch based on the type field. And when you do, typescript gives you full type checking for that specific variant. It’s not as efficient at runtime as C, but it’s very clean code.
Go doesn’t have any equivalent to this. Nor does go support tagged unions - which is what I used in C. The most idiomatic approach I could think of in Go was to use interface {} and polymorphism. But that was more verbose (~50% more lines of code) and more error prone. And it’s much harder to read - instead of simply branching based on the operation type, I implemented a virtual method for all my different variants and called it. But that spread my logic all over the place.
If I did it again I’d consider just making a struct in go with the superset of all the fields across all my variants. Still ugly, but maybe it would be better than dynamic dispatch? I dunno.
I wish I still had the go code I wrote. The C, rust, swift and typescript variants are kicking around on my github somewhere. If you want a poke at the code, I can find them when I’m at my desk.
That wouldn't explain C, then, which does not have sum types either.
All three languages do have enums (as normally defined), though. Go is only the odd one out in using a different keyword. As these programs were written as carbon copies of each other rather than to the idioms of each language, it is likely the author didn't take the time to find out what features are available. No enum keyword was assumed to mean the feature doesn't exist at all, I guess.
C has numeric enums and tagged unions, which are sum types without any compile time safety. That’s idiomatic C.
Go doesn’t have any equivalent. How do you do stuff like this in Go, at all?
I’ve been programming for 30+ years. Long enough to know direct translations between languages are rarely beautiful. But I’m not an expert in Go. Maybe there’s some tricks I’m missing?
Here’s the problem, if you want to have a stab at it. The code in question defines a text editing operation as a list of editing components: Insert, Delete and Skip. When applying an editing operation, we start at the start of the document. Skip moves the cursor forward by some specified length. Insert inserts at the current position and delete deletes some number of characters at the position.
Eg:
enum OpComponent {
Skip(int),
Insert(String),
Delete(int),
}
type Op = List<OpComponent>
Then there’s a whole bunch of functions which use operations - eg to apply them to a document, to compose them together and to do operational transform.
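For reference, here is roughly how that looks as a TypeScript discriminated union, with a minimal `apply` (names and details are my own, not the original code):

```typescript
type OpComponent =
  | { type: "skip"; n: number }
  | { type: "insert"; text: string }
  | { type: "delete"; n: number };

type Op = OpComponent[];

// Apply an operation to a document string, walking a cursor
// forward from the start of the document.
function apply(doc: string, op: Op): string {
  let pos = 0;
  for (const c of op) {
    switch (c.type) {
      case "skip":
        pos += c.n; // move the cursor forward
        break;
      case "insert":
        doc = doc.slice(0, pos) + c.text + doc.slice(pos);
        pos += c.text.length;
        break;
      case "delete":
        doc = doc.slice(0, pos) + doc.slice(pos + c.n);
        break;
    }
  }
  return doc;
}

console.log(apply("abcdef", [
  { type: "skip", n: 2 },
  { type: "insert", text: "XY" },
  { type: "delete", n: 1 },
])); // "abXYdef"
```

Inside each `case`, the compiler narrows `c` to the matching variant, which is the per-variant type checking mentioned earlier.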
C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
> How would you model this in Go?
I'm committing the same earlier sin by trying to model it from the solution instead of the problem, so the actual best approach might be totally different, but at least in staying somewhat true to your code:
type OpComponent interface { op() }
type Op = []OpComponent
type Skip struct { Value int }
func (s Skip) op() {}
type Insert struct { Value string }
func (i Insert) op() {}
type Delete struct { Value int }
func (d Delete) op() {}
op := Op{
Skip{Value: 5},
Insert{Value: "hello"},
Delete{Value: 3},
}
> C has unions, but they're not tagged. You can roll your own tagged unions, of course, but that's moving beyond it being a feature of the language.
This feels like a distinction without a real difference. Hand-rolled tagged unions are how lots of problems are approached in real, professional C. And I think they're the right tool here.
> the actual best approach might be totally different, but at least in staying somewhat true to your code: (...)
Thanks for having a stab at it. This is more or less what I ended up with in Go. As I said, I ended up needing about 50% more lines to accomplish the same thing in Go using this approach compared to the equivalent Typescript, rust and swift.
I wish I'd kept my Go implementation. I never uploaded it to github because I was unhappy with it, and I accidentally lost it somewhere along the way.
> the actual best approach might be totally different
Maybe. But honestly I doubt it. I think I accidentally chose a problem which happens to be an ideal use case for sum types. You'd probably need a different problem to show Go or C# in their best light.
But ... sum types are really amazing. Once you start using them, everything feels like a sum type. Programming without them feels like programming with one of your hands tied behind your back.
> As I said, I ended up needing about 50% more lines to accomplish the same thing in Go
I'd be using Perl if that bothered me. But there is folly in trying to model from a solution instead of the problem. For example, maybe all you needed was:
type OpType int
const (
OpTypeSkip OpType = iota
OpTypeInsert
OpTypeDelete
)
type OpComponent struct {
Type OpType
Int int
Str string
}
Or something else entirely. Without fully understanding the exact problem, it is hard to say what the right direction is, even where the direction you chose in another language is the right one for that language. What is certain is that you don't want to write code in language X as if it were language Y. That doesn't work in programming languages, just as it does not work in natural languages. Every language has its own rules and idioms that don't transfer to another. A new language means you realistically have to restart finding the solution from scratch.
> You'd probably need a different problem to show Go or C# in their best light.
That said, my profession sees me involved in working on a set of libraries in various languages, including Go and Typescript, that appear to be an awful lot like your example. And I can say from that experience that the Go version is much more pleasant to work on. It just works.
I'll agree with you all day every day that the Typescript version's types are much more desirable to read. It absolutely does a better job at modelling the domain. No question about it. But you only need to read it once to understand the model. When you have to fight everything else beyond that continually it is of little consolation how beautiful the type definitions are.
You're right, though, it all depends on what you find most important. No two programmers are ever going to ever agree on what to prioritize. You want short code, whereas I don't care. Likewise, you probably don't care about the things I care about. Different opinions is the spice of life, I suppose!
Yes I think I mentioned in another comment that that would be another way to code it up. It’s ugly in a different way to the interface approach. I haven’t written enough go to know which is the least bad.
What are you “fighting all day” in typescript? That’s not my experience with TS at all.
What are the virtues of go, that you’re so enamoured by? If we give up beauty and type safety, what do you get in trade?
I don't become enamoured by languages. I really don't care if I have to zig or zag. I'll happily work in every language under the sun. It is no more interesting than trying to determine whether Milwaukee or Makita makes a better drill. Who cares? Maybe you have to press a different button, but they both do the same thing in the end. As far as I'm concerned, it's all just 1s and 0s at the end of the day.
However, I have found the Go variant of said project to be more pleasant because, as before, it just works. The full functionality of those libraries is fairly complex and it has had effectively no bugs. The Typescript version on the other hand... I am disenchanted by software that fails.
Yeah, you can blame the people who have worked on it. Absolutely. A perfect programmer can write bug-free code in every language. But for all the hand-wringing about how complex types are supposed to magically save you from making mistakes that keeps getting trumpeted around here, I shared it as a fun anecdote to the opposite — that, under real-world conditions where you are likely to encounter programmers that aren't perfect, Go actually excelled in a space that seems to reflect your example.
But maybe it's not the greatest example to extol the virtues of a language. I don't know, but I am not going to start caring about one language over another anyway. I'm far more interested in producing great software. Which brand of drill was used to build that software matters not one bit to me. But to each their own. Different opinions is the spice of life, I suppose!
There's a lot of ecosystem behind it that makes moving off of Node.js sensible for specific workloads, in a way that isn't as easily done in Rust.
So it works for those employers and employees who need more performance than Node.js, but can't use C for practical reasons, or can't use Rust because the specific libraries they need aren't as readily available or supported by comparison.
Sounds like exactly the sort of thing the IETF's IPv6-only network is trying to shake out.
I went to IETF a few years ago and ran into issues on their IPv6 only network because I host some stuff from home, and my residential ISP doesn't support IPv6 at all. It made me really want to get all that fixed.
Urgh I wish it were like that here in Australia! We have a fast, modern fiber internet connection in inner Melbourne. But my ISP still doesn't support IPv6 at all. I file a ticket about once a year, and I'm always met with more or less the same response - essentially that there's no demand for it.
I'd love to test all the internet services I host to make sure everything works over IPv6, but I can't. At least, not without using a 4to6 relay of some sort - but that adds latency to everything I do.
I just checked - apparently my ISP is "evaluating IPv6" because they're running out of IPv4 addresses and want to use CGNAT for everyone. I suppose it's not the worst reason to switch to IPv6. But they've been making excuses for years. I really wish they'd get on with it.
> when you start using rust in the real world to get real work done a lot of the promises that were made about safety have to be dropped because you need to boot your computer before the heat death of the universe.
Safe rust isn't slow like Python, Go or Fil-C. It gets compiled to normal native code just like C and C++. It generally runs just as fast as C. At least, almost all the time. Arrays have runtime bounds checks. And ... that's about it.
> The result will be that we end up with something about as safe as C is currently - because CPUs are fundamentally unsafe and we need them to work somehow.
Nah. Most rust is safe rust. Even in the kernel, not much code actually interacts directly with raw hardware. The argument in favour of moving to rust isn't that it will remove 100% of memory safety bugs. Just that it'll hopefully remove most of them.
Saying "appeal to authority" doesn't refute the point made above. Expertise is real. Someone with 25 years of experience with the linux kernel will know a lot more about linux and C than the average HN commenter. Almost certainly more than me.
It's possible that you might be right about whatever point you're trying to make. But if you are, I can't tell that from your comments. I can't even find a clear claim in your comments, let alone any substantive argument in support of that claim.
Personally I think we need to start adding capability-based systems into our programming languages. Random code shouldn't have "ambient authority" to just do anything on my computer with the same privileges as me. Like, if a function has this signature:

    function add(a: int, b: int) -> int

Then it should only be able to read its input, and return any integer it wants. But it shouldn't get ambient authority to access anything else on my computer. No network access. No filesystem. Nothing.

Philosophically, I kind of think of it like function arguments and globals. If I call a function foo(someobj), then function foo is explicitly given access to someobj. And it also has access to any globals in my program. But we generally consider globals to be smelly. Passing data explicitly is better.
But the whole filesystem is essentially available as a global that any function, anywhere, can access. With full user permissions. I say no. I want languages where the filesystem itself (or a subset of it) can be passed as an argument. And if a function doesn't get passed a filesystem, it can't access a filesystem. If a function isn't passed a network socket, it can't just create one out of nothing.
I don't think it would be that onerous. The main function would get passed "the whole operating system" in a sense - like the filesystem and so on. And then it can pass files and sockets and whatnot to functions that need access to that stuff.
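A rough sketch of what that could look like in TypeScript. All the names here (ReadOnlyDir, countLines, makeInMemoryDir) are invented for illustration, not from any real library, and real capability systems differ in the details; the point is just that a function's reach is limited to the capabilities it's explicitly handed:

```typescript
// A capability: the holder can read files under one directory, nothing else.
interface ReadOnlyDir {
  read(path: string): string;
}

// This function can only touch what it was passed. No ambient filesystem,
// no network - its signature is the full extent of its authority.
function countLines(dir: ReadOnlyDir, path: string): number {
  return dir.read(path).split("\n").length;
}

// In a real system, main would be handed "the whole OS" and carve off
// narrower capabilities to pass down. Here we simulate one in memory.
function makeInMemoryDir(files: Record<string, string>): ReadOnlyDir {
  return {
    read(path: string): string {
      const data = files[path];
      if (data === undefined) throw new Error(`no such file: ${path}`);
      return data;
    },
  };
}

const dir = makeInMemoryDir({ "notes.txt": "a\nb\nc" });
console.log(countLines(dir, "notes.txt")); // → 3
```

Note that countLines can't even see files outside the directory it was given, let alone open a socket; the caller decides how much of the world each callee gets.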
If we build something like that, we should be able to build something like npm but where you don't need to trust the developers of 3rd party software so much. The current system of trusting everyone with everything is insane.