Because that's like using a bash script to configure and launch a C++ application and then calling the whole thing a bash script. Python is not a high-performance language; it isn't meant to be, and its strengths lie elsewhere. One of its great strengths is interop with C libraries.
Your assertion was that NumPy etc. will be faster than the alternatives despite being Python:
> Try writing a matmul operation in C++ and profile it against the same thing done in Numpy/Pytorch/TensorFlow/Jax. You’ll be surprised.
No. When I write Tensorflow code I write Python. I don’t care what TF does under the hood just like I don’t care that Python itself might be implemented in C. Though I got to say TF is quite ugly and not a good example of Python’s user friendliness. But that’s another topic.
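For concreteness, here is the kind of comparison the quoted comment describes, as a minimal sketch: a naive pure-Python matmul against NumPy's `@`, which dispatches to optimized BLAS kernels. This assumes NumPy is installed; the variable names and the 100x100 size are illustrative, and exact timings depend on the machine and BLAS build.

```python
import time
import numpy as np

def matmul_pure(a, b):
    """Naive triple-loop matrix multiply on nested lists -- pure Python."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

n = 100
rng = np.random.default_rng(0)
A, B = rng.random((n, n)), rng.random((n, n))

t0 = time.perf_counter()
C_py = matmul_pure(A.tolist(), B.tolist())
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
C_np = A @ B  # same math, but executed by compiled BLAS code
t_np = time.perf_counter() - t0

assert np.allclose(C_py, C_np)
print(f"pure Python: {t_py:.4f}s  NumPy: {t_np:.6f}s")
```

Both sides compute the same result; the gap in the printed timings is entirely down to which runtime does the inner loops.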
That's a known and widely publicised trait of Python.
In the early days, the Python tutorial warned against building up strings with "+", even though it works, because each concatenation performs a new allocation and a copy.
What you were advised to do instead was use fast, optimized C-based primitives like "\n".join(list_of_strings) etc.
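A minimal illustration of the two patterns (variable names are made up for the example); both produce the same string, but join makes a single C-level pass instead of reallocating on every iteration:

```python
parts = ["alpha", "beta", "gamma"]

# Discouraged pattern: each "+" builds a brand-new string (allocation + copy)
s = ""
for p in parts:
    s = s + p + "\n"
s = s[:-1]  # drop the trailing newline

# Recommended pattern: one pass, sized up front, done in C
joined = "\n".join(parts)

assert s == joined
```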
Basically, Python is an "ergonomic" language built in C. Pointing out that some particular feature is implemented in C at a lower level is moot, because all of Python is.
Yes, looping over large data sets in pure Python is slow. Which is why the stdlib provides itertools (again, C-based functions).