
The thing with the HDD beating SSDs is that this is pure sequential writes, with no seeking involved, which is quite rare in real life. If these were random, smaller writes, the HDD would be awful.
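
For anyone who wants to see the gap for themselves, here's a rough sketch of the two workloads (filename and sizes are made up; on a machine with lots of RAM you'd want O_DIRECT or a much larger file, since the page cache absorbs buffered writes):

    import os, random, time

    PATH = "bench.dat"            # illustrative scratch file
    SIZE = 256 * 1024 * 1024      # 256 MiB working set
    BLOCK = 4096                  # 4 KiB writes, where seek cost dominates on an HDD

    buf = os.urandom(BLOCK)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o644)
    os.ftruncate(fd, SIZE)
    offsets = [i * BLOCK for i in range(SIZE // BLOCK)]

    def run(offs):
        t = time.perf_counter()
        for off in offs:
            os.pwrite(fd, buf, off)
        os.fsync(fd)
        return SIZE / (time.perf_counter() - t) / 1e6

    seq = run(offsets)            # in order: the head barely moves
    random.shuffle(offsets)
    rnd = run(offsets)            # shuffled: each write may cost a full seek (~10 ms at 7200 rpm)
    os.close(fd)
    print(f"sequential: {seq:.0f} MB/s, random: {rnd:.0f} MB/s")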


Maybe it's rare for you to ingest large files or a large dataset, but I'd bet it's not that uncommon, especially if you work with photography, virtual machine images, (raw) video files, and so on.

In my opinion, none of the cheap SSDs are fit for purpose for any of this.


I would say: show me the data (and with a file system on top). Log-structured file systems don't overwrite data in place, so the randomness of writes becomes moot; they convert random writes into sequential appends.
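
To make that concrete, the core of a log-structured design is just an append-only log plus an index; all names here are illustrative:

    import os

    class LogFS:
        BLOCK = 4096

        def __init__(self, path):
            self.fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
            self.tail = 0          # next free offset at the end of the log
            self.index = {}        # logical block number -> latest log offset

        def write_block(self, lbn, data):
            assert len(data) == self.BLOCK
            # Every write, whatever its logical address, lands at the tail:
            # one sequential append.
            os.pwrite(self.fd, data, self.tail)
            self.index[lbn] = self.tail
            self.tail += self.BLOCK    # the old copy of lbn is now garbage

        def read_block(self, lbn):
            return os.pread(self.fd, self.BLOCK, self.index[lbn])

The catch, of course, is that stale copies pile up and have to be cleaned eventually, which is where the next reply's point comes in.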


I wouldn't expect using a log-structured filesystem to help much, because the SSD is already running its own log-structured filesystem internally.

A log-structured filesystem doesn't magically turn a random write workload into sequential writes for free; it incurs more or less the same overhead that the SSD's FTL does, with read-modify-write (cleaning) cycles that cause write amplification.
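
A standard first-order model (greedy cleaning, uniform utilization) makes that cost visible: if segments are reclaimed while a fraction u of their blocks are still live, freeing one block means copying u/(1-u) live blocks, so write amplification is roughly 1/(1-u):

    # WA = 1 + u/(1-u) = 1/(1-u), a rough model rather than any specific FTL
    for u in (0.5, 0.8, 0.9):
        print(f"{u:.0%} live data in cleaned segments -> ~{1/(1-u):.0f}x write amplification")

So a 90%-full log rewrites roughly ten blocks for every block the application writes, whether the log lives in the filesystem or in the SSD's firmware.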


Fragmentation? Cluster sizes?



