
It depends. Have you ever read a parser made using combinators? It would be composed of hundreds of small, compact functions called "combinators" that all take the same input (a character or token stream) and produce the same output (a partial parse tree).

They are readable because you can look at any one part and understand what is going on. Referential transparency is important here, as you can be assured each combinator has no strange side effects. Composability is also important, as it gives you an idea of what each function call does without having to look it up.
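To make that concrete, here is a minimal parser-combinator sketch. It is in Python rather than Haskell purely for brevity, and the names `char`, `seq`, and `alt` are hypothetical; the point is that every combinator is a pure function with the same interface, taking an input string and a position and returning either a result and a new position or `None`.

```python
# Hypothetical minimal parser combinators. Each parser is a pure function
# (input, position) -> (value, new_position) or None on failure.

def char(c):
    """Parser that matches a single literal character."""
    def parse(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        return None
    return parse

def seq(*parsers):
    """Run parsers one after another, collecting their results."""
    def parse(s, i):
        results = []
        for p in parsers:
            r = p(s, i)
            if r is None:
                return None
            value, i = r
            results.append(value)
        return results, i
    return parse

def alt(*parsers):
    """Try alternatives in order, returning the first success."""
    def parse(s, i):
        for p in parsers:
            r = p(s, i)
            if r is not None:
                return r
        return None
    return parse

# Combinators compose into larger parsers: match "ab" or "ac".
p = alt(seq(char("a"), char("b")), seq(char("a"), char("c")))
print(p("ac", 0))  # (['a', 'c'], 2)
```

Because each combinator is referentially transparent, you can read `alt(seq(...), seq(...))` and know exactly what it parses without inspecting any function body.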

If your functions have side effects and 100 different interfaces, then yes, that's a mess and hard to understand. If your functions have predictable interfaces and are referentially transparent, then that's a completely different scenario.

I mean, pretty much every Haskell program ever is composed of a large number of small, clean functions. But to someone who understands Haskell it's hardly unreadable. Quite the opposite.
