
Don't forget that a sequence of numpy operations will likely each allocate their own temporary arrays. Numba can often fuse these operations together, so although the implementation behind numpy is compiled C, you end up with fewer allocations and less memory pressure, and you get your results even faster. Numba also offers OpenMP-style parallel tools. I have a nice sequence of simulations in my Higher Performance Python course going from raw Python through numpy and then to Numba, showing how this all comes together. Try writing a function with a = np.fn1(x); b = np.fn2(a); c = np.fn3(b) and so on, then compile it with @jit, and you should see a performance improvement. You may also be able to turn on the OpenMP parallelizer.
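A minimal sketch of that pattern (the ufuncs here are just placeholders for the np.fn1/fn2/fn3 chain above, and it assumes Numba is installed for the compiled path, falling back to plain numpy otherwise):

```python
import numpy as np

def simulate(x):
    # Plain numpy: each call below allocates its own temporary array,
    # so this function walks over memory several times.
    a = np.sin(x)
    b = a * a
    c = np.sqrt(b + 1.0)
    return c

# Numba can compile the same function; especially with parallel=True
# it can fuse the array expressions, cutting the temporaries and
# enabling the OpenMP-style parallelisation mentioned above.
try:
    from numba import njit
    simulate_fast = njit(simulate)  # compiled on first call
except ImportError:
    # Numba not installed: fall back to the uncompiled numpy version
    simulate_fast = simulate

x = np.linspace(0.0, 1.0, 1_000)
assert np.allclose(simulate(x), simulate_fast(x))
```

The compiled version gives the same answers; the win is in memory traffic and (with parallel=True plus prange) multicore speedup, which you'd confirm by timing both on your own arrays.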

