As I understand it, it's related to the decision to make the entire interpreter part of the public C extension interface, which drove everyone to write C extensions for anything that was remotely performance-sensitive. The breadth of that interface, and the pervasiveness of C extensions across the ecosystem, then made it really difficult to make substantial backwards-compatible performance improvements in the interpreter. And since so much of the ecosystem is C, Python package management has had to figure out how to distribute arbitrary versions of C software to arbitrary target systems, which is really hard, in large part because C projects have no standard build tooling or dependency management. As far as I know, no one has figured out how to do this well under Python's constraints; the best language package managers simply absolve themselves of Python's original sin of pushing the ecosystem toward FFI rather than making the core runtime model fast enough for native code.
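To make the coupling concrete, here's a minimal sketch of a CPython extension module in C (the fastmod module and fast_sum function are invented for illustration). Even this toy code manages the interpreter's reference counts by hand and compiles against CPython's object representation, which is exactly why the interpreter can't change much underneath it:

    #include <Python.h>

    /* Sum a sequence of Python ints in C. Note that even trivial
     * extension code takes over the interpreter's reference counting
     * (Py_DECREF) and is compiled against CPython internals. */
    static PyObject *
    fast_sum(PyObject *self, PyObject *args)
    {
        PyObject *seq;
        if (!PyArg_ParseTuple(args, "O", &seq))
            return NULL;

        Py_ssize_t n = PySequence_Length(seq);
        if (n < 0)
            return NULL;

        long total = 0;
        for (Py_ssize_t i = 0; i < n; i++) {
            PyObject *item = PySequence_GetItem(seq, i); /* new reference */
            if (item == NULL)
                return NULL;
            total += PyLong_AsLong(item);
            Py_DECREF(item);                             /* manual refcounting */
            if (PyErr_Occurred())
                return NULL;
        }
        return PyLong_FromLong(total);
    }

    static PyMethodDef methods[] = {
        {"fast_sum", fast_sum, METH_VARARGS, "Sum a sequence of ints."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef fastmod = {
        PyModuleDef_HEAD_INIT, "fastmod", NULL, -1, methods
    };

    PyMODINIT_FUNC
    PyInit_fastmod(void)
    {
        return PyModule_Create(&fastmod);
    }

Shipping even that one file implies the whole distribution problem: a C compiler, Python headers, and an ABI-compatible binary for every OS/Python combination, none of which C's ecosystem standardizes.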
None of the above explains why pip, Poetry, virtual envs, and half a dozen other tools and methods are needed to manage packages in Python. I'd go so far as to say this isn't even a language-level issue so much as an implementation detail, albeit a rather important one. For some context, though: Python first came out in 1991. Back then, nobody was really focused on making anything run well on multiple cores; the best you could get was multiple CPUs in somewhat esoteric systems. The design decisions behind a programmer-friendly (and hardware-hostile) language from the 90s are far easier to understand in that light.
You can also easily guess which of these two got i18n right earlier.
AOLServer was made possible by Tcl's decisions to provide an API for creating specific versions of interpreters (including highly constrained safe ones), to force extensions to keep per-interpreter state, and to make interpreters communicate by message passing at the Tcl level. The result was that if your extension worked with several interpreters under single-threaded execution, it would also work with several threads running in parallel. The same goes for plain Tcl code, which doesn't even know which thread it is executed on.
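A rough sketch in C of what that discipline looks like (the command name and struct are invented; a real extension would also call Tcl_InitStubs and Tcl_PkgProvide). The extension's mutable state is handed to each interpreter as ClientData rather than stored in a process-wide global, so interpreters on different threads never share it:

    #include <tcl.h>

    /* Per-interpreter state: allocated once per interp, never shared. */
    typedef struct {
        int counter;
    } ExtState;

    static int
    CounterCmd(ClientData cd, Tcl_Interp *interp,
               int objc, Tcl_Obj *const objv[])
    {
        ExtState *state = (ExtState *)cd;  /* this interp's private copy */
        state->counter++;
        Tcl_SetObjResult(interp, Tcl_NewIntObj(state->counter));
        return TCL_OK;
    }

    static void
    FreeState(ClientData cd)
    {
        ckfree((char *)cd);
    }

    int
    Counter_Init(Tcl_Interp *interp)
    {
        ExtState *state = (ExtState *)ckalloc(sizeof(ExtState));
        state->counter = 0;
        /* Tie the state to this interpreter, not to a global:
         * no locks needed, and thread-parallel interps just work. */
        Tcl_CreateObjCommand(interp, "counter", CounterCmd, state, FreeState);
        return TCL_OK;
    }

Because nothing here is global, running one interpreter per thread needs no extra locking; cross-thread communication happens by sending messages between interpreters (e.g., the Thread package's thread::send) rather than by sharing memory.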
Evidently, the Python developers did not do much looking around.
Tcl also has a simple but effective package management architecture, which required specifying package versions from the start, so there's no version hell.
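For flavor, a sketch of how versions thread through even Tcl's C API (the "example" package here is hypothetical): an extension declares the version it provides and the versions it requires, and Tcl's package machinery resolves them:

    #include <tcl.h>

    int
    Example_Init(Tcl_Interp *interp)
    {
        /* Require at least Tcl 8.6 (exact = 0 accepts any
         * compatible version >= 8.6). */
        if (Tcl_PkgRequire(interp, "Tcl", "8.6", 0) == NULL)
            return TCL_ERROR;

        /* Advertise this package under an explicit version. */
        if (Tcl_PkgProvide(interp, "example", "1.0") != TCL_OK)
            return TCL_ERROR;

        return TCL_OK;
    }

Script-level code does the same with package require, so every dependency edge carries a version from day one.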
Later, a fully functional multiplatform binary package server/client was developed for Tcl [0]. It died of neglect, while Pythonistas have spent (and are still spending) years trying to build something satisfactory.
Yeah, this matches up pretty much exactly with my understanding of the issue. Python as a language is "pretty good" but far from perfect, and it's really unfortunate that C's packaging/building ecosystem is in such a horrible state.
> it's really unfortunate that C's packaging/building ecosystem is in such a horrible state.
This is a recurring theme in my career. It's one of my cornerstone grievances with Nix as well (any time I need to write a Nix package I end up having to package some obscure C dependency that my package transitively depends on). :)
It is related, but not in a way that matters much. My main issues with Python are with the language itself, primarily duck typing, the lack of good lambdas, import side effects, etc.