
Sure, but that is backwards compatible / a superset of ASCII.

What I meant to say is that it's entirely possible that, in the coming few centuries, even basic ASCII won't be readily understood. That is, the character mapping in modern systems could break down, i.e. int 97 would no longer be 'a' but some glyph from a language not yet conceived.

We take backwards compatibility for granted. Just because ASCII has been readable for the past 60 or so years doesn't mean it will continue to be for the next 60.
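A minimal Python sketch of the mapping being assumed here, using only built-in ord/chr and the standard codecs (the codec names are the usual CPython aliases):

  # Today's systems agree that code point / byte value 97 is 'a'.
  assert ord('a') == 97
  assert chr(97) == 'a'

  # The single byte 97 decodes to 'a' under every common text encoding in use today.
  for codec in ('ascii', 'latin-1', 'utf-8'):
      assert bytes([97]).decode(codec) == 'a'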



We do not take it for granted.

Instead, we deliberately design systems to be backwards compatible, because it makes many things much simpler.

This is the reason why UTF-8 has basically "won" over UTF-16 or UCS-4 when it comes to encoding Unicode characters.
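A minimal Python sketch of that ASCII compatibility, using only the standard library codecs (encoding names are the usual CPython aliases):

  text = "hello"

  # Pure-ASCII text has byte-for-byte identical ASCII and UTF-8 encodings...
  assert text.encode('utf-8') == text.encode('ascii') == b'hello'

  # ...while UTF-16/UTF-32 spend 2 or 4 bytes even on plain ASCII characters.
  assert len(text.encode('utf-16-le')) == 2 * len(text)
  assert len(text.encode('utf-32-le')) == 4 * len(text)

  # Non-ASCII characters still round-trip in UTF-8, just with more bytes per character.
  assert "é".encode('utf-8') == b'\xc3\xa9'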

If anything, with the amount of data we have today, backwards compatibility will be maintained with the computers of the future (if they still exist), unless there is a big reason to re-encode all historical data. Such a reason would probably be political, and those already exist today, e.g. China not wanting to use a Unicode transformation based on the American Standard Code for Information Interchange. Yes, even if we move their bytes to be 13-qubit qubytes :D

To elaborate on the cost: re-encoding all data from 2050 is probably not going to be too expensive in 2400, but by then you'll also need to re-encode everything produced up to 2400. To me this seems like a reason backwards compatibility will be worth keeping: there is not much to be gained by breaking it. The UTF-8 approach has shown us the best way forward.

The trickiest part is going to be keeping all the video/audio encoding algorithms around, especially as they are patent-encumbered.



