> typing performance in our testing is around 30-70% faster compared to Draft.
At how many lines of code? I am skeptical that the scaling characteristics from 10k to 100k to 1 million lines are linear, so I'm curious what 30-70% actually means.
For example, VSCode tried a new text buffer and benchmarked it at different source text sizes. There is a critical point beyond which their PieceTree implementation keeps scaling, whereas the line-based text buffer approach does not.
Mostly referring to high-traffic surfaces here, like Facebook Feed or Messenger. The percentage varies depending on the environment and the previous Draft.js implementation (and its plugins), but overall it's been very positive, especially on low-end devices that went way over the 16ms response-time budget per keypress.
This is my point. Facebook surfaces include stuff like editing a post, sending a message. All of these have relatively small character limits versus a text editor that handles millions of lines with tens of millions of characters.
In the case of text editors, one-object-per-line implementations work fine for small files. They'll even edge out the overhead of more esoteric tree-based implementations: lookups by line number are O(1), versus O(log n) in the depth of the tree. But insertions get slow, because you have to shift all the subsequent array elements. Again, for a small number of lines, that's fine. For small data (tens of thousands of characters), even a single giant string will do fine, and within tens of milliseconds, too.
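To make that trade-off concrete, here is a minimal sketch of a line-array text buffer (the class and method names are mine, not from Lexical, Draft.js, or VSCode). Lookup by line number is plain array indexing, O(1); insertion uses `splice`, which must shift every element after the insertion point, so it degrades linearly with the number of lines:

```typescript
// Minimal line-array text buffer sketch (hypothetical, for illustration).
// A piece tree or rope would make both operations O(log n) instead,
// which is why it wins at millions of lines.
class LineArrayBuffer {
  private lines: string[];

  constructor(text: string) {
    this.lines = text.split("\n");
  }

  // O(1): direct array index.
  getLine(n: number): string {
    return this.lines[n];
  }

  // O(number of lines): splice shifts the entire tail of the array.
  insertLine(n: number, text: string): void {
    this.lines.splice(n, 0, text);
  }

  lineCount(): number {
    return this.lines.length;
  }
}

const buf = new LineArrayBuffer("alpha\nbeta\ngamma");
buf.insertLine(1, "inserted");
// buf.getLine(1) === "inserted", buf.lineCount() === 4
```

For a Facebook post this insertion cost is negligible; for a million-line file, shifting the tail of the array on every Enter keypress is exactly the kind of pathology the VSCode benchmarks exposed.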
Facebook Blog / Article posts may get relatively long, but still not into the megabytes (metadata, not images).
My point of contention is the claim and marketing that the library is suitable as a high-performance text editor implementation. That depends on the constraints and the use case. For Facebook posts and small editor widgets, I'll believe it. There is a hint that it can serve as the foundation for an IDE, but likely only a small snippet editor or a toy IDE implementation. The devil is in the details for a "serious" IDE / text editor like VSCode or Ace. It's an apples-to-bananas performance claim.
> low-end devices that went way over 16ms response time per keypress
Shaving milliseconds off keypresses for users on low-end devices is mostly a Facebook and FANG concern. That performance concern is invisible and unimportant for someone building an IDE whose users are on powerful machines. For people editing huge files, the bottlenecks are caused by data-structure choices, which I don't think will be mitigated by the high-level, convenient programming paradigm in the case of Lexical.