Hacker News

DuckDB is a columnar database, so computing the sum of one column should be really fast. Yet it takes more than a minute for 280k rows? That seems far too slow, so it is probably just a limitation of the shared file system.


It's metadata, so comparing count(*) performance is pretty pointless.


Especially since SQLite would give a near-instant reply if there were an index on the table. The entire example is wrongly reasoned from start to finish.
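A quick sketch of the index point, using Python's stdlib sqlite3 (table and index names are made up for illustration): without an index, SQLite's query plan for count(*) is a full table scan; once any index exists, SQLite answers count(*) by scanning the smallest covering index instead, which touches far less data than a table with wide rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
# 280k rows with a wide-ish payload, mirroring the example's row count.
con.executemany(
    "INSERT INTO t VALUES (?, ?)",
    ((i, "x" * 100) for i in range(280_000)),
)

# Without an index, COUNT(*) must scan the whole table.
plan_no_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM t"
).fetchall()

con.execute("CREATE INDEX idx_id ON t(id)")

# With an index, SQLite counts by scanning the narrow covering index.
plan_with_index = con.execute(
    "EXPLAIN QUERY PLAN SELECT count(*) FROM t"
).fetchall()

print(plan_no_index)
print(plan_with_index)
```

The second plan should mention a covering index scan, which is why the reply is near-instant on real hardware even when the table itself is large.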


I agree that the article is not very good.



