Julia adds some pretty amazing stuff with multiple dispatch and run-time compilation. What this means is that you can glue code together in ways that are impossible in most other languages.
One example is a system that I built using three libraries. One was a C library from Postgres for geolocation, another was Uber's H3 library (also C), and a third was a Julia-native library for geodesy. From Julia, I was able to extend the API of the H3 library and the Postgres library so that all three libraries would inter-operate transparently. This extension could be done without any modifications to the packages I was importing.
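A minimal sketch of that glue pattern, with two hypothetical stand-in modules (`GeoLib` and `HexLib` are made up here, not the actual packages): because Julia lets you add new methods to another package's functions for another package's types, the glue lives entirely in your own code.

```julia
# Stand-in for a wrapped geodesy library exposing its own point type.
module GeoLib
    struct LatLon
        lat::Float64
        lon::Float64
    end
    # Placeholder metric; a real library would compute geodesic distance.
    distance(a::LatLon, b::LatLon) = hypot(a.lat - b.lat, a.lon - b.lon)
end

# Stand-in for an H3-style hex-index library with its own cell type.
module HexLib
    struct HexCell
        id::UInt64
    end
    # Placeholder lookup; a real library would decode the cell's center.
    center(c::HexCell) = (0.0, 0.0)
end

# The glue: teach GeoLib.distance about HexLib's type by adding a method.
# Neither module is modified; multiple dispatch picks the right method.
GeoLib.distance(a::HexLib.HexCell, b::GeoLib.LatLon) =
    GeoLib.distance(GeoLib.LatLon(HexLib.center(a)...), b)

GeoLib.distance(HexLib.HexCell(0x1), GeoLib.LatLon(3.0, 4.0))  # 5.0
```

The key point is that the new `distance` method is defined outside both modules, so the two "foreign" libraries interoperate without either one knowing about the other.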
Somewhat similarly, if you have a magic whizbang way of looking at your data as a strange form of matrix, you can simply implement a few optimized primitive matrix operations, and the standard linear algebra libraries will then use your data structure. Normal languages can't really do that.
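A small sketch of that idea (the `OuterProduct` type here is a made-up example, not from any real package): by implementing only the `AbstractMatrix` primitives `size` and `getindex`, the generic routines in the standard `LinearAlgebra` library work on the new type, and you can then add optimized methods where it pays off.

```julia
using LinearAlgebra

# A lazy rank-one matrix representing v * v' without storing n^2 entries.
struct OuterProduct{T} <: AbstractMatrix{T}
    v::Vector{T}
end

# The two primitives every AbstractMatrix needs.
Base.size(A::OuterProduct) = (length(A.v), length(A.v))
Base.getindex(A::OuterProduct, i::Int, j::Int) = A.v[i] * A.v[j]

# An optimized primitive: matrix-vector product in O(n) instead of the
# generic O(n^2), using A*x == v * (v ⋅ x).
Base.:*(A::OuterProduct, x::AbstractVector) = A.v .* dot(A.v, x)

v = [1.0, 2.0, 3.0]
A = OuterProduct(v)
tr(A)    # generic trace from LinearAlgebra: 1 + 4 + 9 = 14.0
A * v    # our O(n) method: [14.0, 28.0, 42.0]
```

Generic code like `tr`, `sum`, or iteration falls back to `getindex`, while hot paths hit the specialized `*` method, which is exactly the "implement a few primitives, inherit the whole library" effect described above.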
More on that second case and the implications in the following video:
Julia is generally higher level than Fortran, with syntax inspired by Python/R/Matlab. We've been able to reliably hire Math PhDs and quickly get them productive in Julia, which would take much longer with Fortran.
Fortran does quite well on almost any major CPU since the 1950s, and on GPUs as well.
Actually, one of the reasons CUDA won the hearts of researchers over OpenCL is that Khronos never cared about Fortran, and even C++ was late to the party.
I attended one Khronos webinar where the panel was puzzled by a question from the audience regarding a Fortran support roadmap.
NVIDIA is sponsoring the work on the LLVM Fortran frontend, so the same applies there.
“sponsoring” in this case means writing nearly all of it ourselves (although we’ve had lots of help from Arm and some others on specific areas like OpenMP).