
The question of computation vs. interaction is a fascinating one, but I think there's another (perhaps related) question about the applicability of "rich" types to practical programming, and that is the role of types and how far they should be stretched. There is little question that "simple" type systems (like those in C/C++/Java/C#) aid in developing large systems. But much of the fascination with type theory (as the article states) has to do with the Curry-Howard isomorphism, and the ability of programs to carry proofs of some of their properties. This is believed to be a powerful tool in software verification. The question then becomes: to what extent should this tool be used? What properties can be proven? What properties are programmers willing to try to prove? How much harder is it to prove a property than to write a possibly-incorrect algorithm without proof?

The benefit of proofs in code would only be worthwhile if it were relatively easy for common programmers to prove the absence of bugs that would have been more costly to find and fix using other methods, and if that ease did not come at the cost of other properties (such as performance). So far I have a feeling that this is not being delivered. Your run-of-the-mill clever use of types usually prevents very "cheap" bugs, and hard bugs are even harder to prevent with types.

I'll give a concrete and recent real-world example. A few months ago, a bug was discovered in Python's and Java's sequential sorting algorithm[1], Timsort. It was introduced to Python in 2002, and its adoption by Java in 2011 has probably made it the most used sorting algorithm in the world. Yet it contained a fatal bug that could cause it to crash by overflowing an internal array. There are a few interesting observations one could make about the bug and its discovery in relation to type systems.

1. The bug was discovered by a verification tool[2] that does not rely on type-based proofs in the code. The output from the verification system quickly showed where the bug actually was. AFAIK, the tool required no assistance with proving the correctness of the algorithm, only that the invariants the algorithm seeks to maintain be stated.

2. Proving the invariants required by the (rather complicated) algorithm with types would have required dependent types, and would have probably been a lot harder than using the verification program used. It is also unclear (to me) whether the problem and the fix would have been so immediately apparent.

3. The interaction of "type-proofs" with program semantics (namely, the requirement of referential transparency) would have made the type-proven algorithm much slower, and this particular algorithm was chosen just for its speed.

4. The corrected version ended up not being used because it could have adversely affected performance, and it was noticed that the "incorrect" algorithm with a slightly larger internal array would still always produce correct results for the input sizes currently supported in Java.
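
For reference, the invariant at the heart of the bug can be stated quite directly. Below is a sketch in Kotlin (my own, not the JDK code): Timsort keeps a stack of pending run lengths and is supposed to maintain that each run is longer than the next two combined.

```kotlin
// Sketch (not the JDK implementation): the invariant Timsort's merge
// step is supposed to maintain over its stack of pending run lengths.
// The bug was that the invariant was only re-checked near the top of
// the stack, so it could silently break further down, eventually
// overflowing the fixed-size run-length array.
fun invariantHolds(runLen: List<Int>): Boolean =
    runLen.indices.all { i ->
        (i < 2 || runLen[i - 2] > runLen[i - 1] + runLen[i]) &&
        (i < 1 || runLen[i - 1] > runLen[i])
    }
```

For example, `invariantHolds(listOf(120, 80, 25))` holds, while `invariantHolds(listOf(120, 80, 70))` does not (120 is not greater than 80 + 70).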

This example of a purely computational real-world bug shows some more limitations of the type-system approach to verification, I think.

Going back to effects and interactions, pure-FP approaches have a hard time dealing with timing properties. I already gave the same example in another discussion today, but the Esterel language (and languages influenced by it) has been used successfully to write 100% verifiable programs for safety-critical domains in the industry for a long time now. It is an imperative language (though not a Turing-complete one) with side effects and a lot of interactions, yet it has a verifier that uses Pnueli's temporal logic to guarantee behavior, without requiring the programmer to supply proofs.

To summarize, type systems have shown their usefulness in reducing the cost of software development, but they need to be considered as a component in a larger verification space, and they need to strike a balance between usefulness and ease of use.

[1]: http://www.envisage-project.eu/proving-android-java-and-pyth...

[2]: http://www.envisage-project.eu/key-deductive-verification-of...



> The benefit of proofs in code would only be worthwhile if it were relatively easy for common programmers to prove the absence of bugs that would have been more costly to find and fix using other methods, and if that ease did not come at the cost of other properties (such as performance).

I don't understand why proofs need to be universally easy to write/understand to be useful. Why can't they be useful for a subset of programmers on a subset of projects?

> So far I have a feeling that this is not being delivered. Your run-of-the-mill clever use of types usually prevents very "cheap" bugs, and hard bugs are even harder to prevent with types.

Types can be used to encode complicated invariants of software systems, which will automatically be machine-checked throughout the entire lifetime of your project. There is a vast gradient from simple properties like 'this value is a number' to 'your data migrations are guaranteed to be completely reversible without any information loss' to 'the sequential composition of a set of procedures will terminate within 600 cycles'.

Types are not only for preventing bugs. They model invariants, help design and allow easy reasoning over large software projects.
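As a minimal illustration of the cheap end of that gradient (my own sketch, in Kotlin): a list that is non-empty by construction, so "a first element exists" is checked by the compiler rather than at runtime.

```kotlin
// Sketch: a non-empty list type. Constructing one requires a head
// element, so first() is total and callers never handle an empty case.
class NonEmptyList<T>(val head: T, val tail: List<T> = emptyList()) {
    fun first(): T = head               // no exception path
    fun toList(): List<T> = listOf(head) + tail
}
```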

> This example of a purely computational real-world bug shows some more limitations of the type-system approach to verification, I think.

No, it doesn't. You've picked one example that didn't really work out for some reason. That tells you exactly zero about whether type-based verification could be useful in the general case (or in a specific subset of verification cases).

> To summarize, type systems have shown their usefulness in reducing the cost of software development, but they need to be considered as a component in a larger verification space, and they need to strike a balance between usefulness and ease of use.

Of course they need to. Luckily that's what smart people are working on every day, making types easier to use.


> Why can't they be useful for a subset of programmers on a subset of projects?

They can, but that would hardly make a deep impact on the industry. And my question would then be why do you think they'll be useful only for a subset of programmers and a subset of projects?

> Types can be used to encode complicated invariants of software systems, which will be automatically be machine checked throughout the entire lifetime of your project.

I know they can, but so can other verification mechanisms that are less intrusive, or even pluggable types, like those that have recently been added to Java 8.

One point of difficulty -- the one the article is talking about -- is precisely those invariants that are really hard to enforce, like "every request X will be handled within 300 ms", or "the server will never respond Y if the user has never asked for Z", or "the system will never contact the database if the operator has turned on the offline flag". Similar (though less sophisticated) kinds of guarantees are made and verified by Esterel, without requiring the programmer to supply the proof. Other approaches include behavioral programming[1] (made by the same people behind the research that led to Esterel), which simply lets you specify a rule: "if the offline flag is on, disable calls to the database", which will be enforced no matter how much lower-priority rules try to access the database.
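
The rule-override idea can be sketched as follows (hypothetical names; this is not the behavioral programming API, just the shape of the mechanism): every rule gets a chance to veto a requested action, so a high-priority guard wins no matter what lower-priority code asks for.

```kotlin
// Hypothetical sketch of veto-style rules; names are illustrative.
// A rule returns true to allow an action, false to veto it.
typealias Rule = (action: String, state: Map<String, Boolean>) -> Boolean

val offlineGuard: Rule = { action, state ->
    !(action == "db-call" && state["offline"] == true)
}

fun allowed(action: String, state: Map<String, Boolean>, rules: List<Rule>): Boolean =
    rules.all { it(action, state) }   // any single rule can veto
```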

Types are just one approach to verification. There are invariants they excel at proving, and others that they don't handle so well. People who research type systems should, of course, explore anything that's possible with them, but software developers should realize that types are not the be-all and end-all of writing verifiably correct programs, and that for some program properties there are better approaches than types.

> They model invariants, help design and allow easy reasoning over large software projects.

I agree in theory. But I also think they may help up to a certain point and then start hurting. The problem is, this has never been put to the test.

[1]: http://www.wisdom.weizmann.ac.il/~bprogram/more.html


> But I also think they may help up to a certain point and then start hurting. The problem is, this has never been put to the test.

You're right that the current cutting edge of types in programming hasn't been put to the test yet. Scala and OCaml and Haskell all get varying degrees of industry use, but it's nothing compared to C++, Java, Python, etc. So it's clear that the industry hasn't come anywhere close to using proper types at large.

I guess my point is this: Why then do you think at a certain point they start hurting? It's not like the industry has even gotten close to using types to their full potential. We've certainly never reached that "certain point" where they start hurting.


> Why then do you think at a certain point they start hurting

Oh, it's just a guess, but partly because of Haskell and its success at being continuously rejected by the industry for two decades now (it was touted as the next big thing when I was at university over 15 years ago). Few if any languages as famous as Haskell have tried so clearly to break out of academia yet seen so little use in the industry.

And the problem is less with types themselves than with the referential-transparency-everywhere and non-strict-evaluation approach. The result is that Haskell allows -- nay, demands -- a lot of reasoning about the program upfront, at the expense of reasoning about it as it runs, with a debugger and a profiler (because of the complicated execution model). I also think Haskell places an unreasonable burden of thinking about the types vs. the algorithm and domain.

As to Scala, because it's such a "multi-paradigm" language, it's pretty hard to know exactly how it's used in the industry. There are certainly those that use it anywhere from Haskell-like with scalaz to Java-like, only without having to write constructors and getters (in fact, Spark is written, I believe, closer to the latter style). But even if Scala is to be counted as an advocate for rich typing, its success record is pretty spotty (and, again, I wouldn't necessarily blame that on the type system).

I could, of course, be very wrong. One thing is certain: what a type system can do is a comp-sci question; how useful it is is a psychological question, and trying to argue about the usefulness of type systems (or any programming approach) as if it were a mathematical problem is just silly.

> We've certainly never reached that "certain point" where they start hurting.

I think Haskell is beyond that point, although I still believe a language with a type system as sophisticated as Haskell's might not be (because getting more for the same price might be worth it). By that I mean that Haskell types -- just as the article said -- waste a lot of mental power in the wrong places (i.e. on proving trivial stuff). I don't know what that imagined language would look like, but it would probably have some effect types (and, as the author of the article has noted, there's a huge difference between being able to simulate effect systems and effect systems controlling actual effects).


> I also think Haskell places an unreasonable burden of thinking about the types vs. the algorithm and domain.

When you are encoding your logic in types the distinction between types and "the algorithm and domain" is very blurred if not the exact same thing. I wouldn't want to be programming in any other language if I knew very little about the domain.


But "encoding your logic in types" is often non-trivial and very different from thinking about the algorithm itself. Of course, most algorithms require dependent types (at the very least!) to be fully encoded, so you can't really encode them in Haskell...


Far and away the most obvious is null checking. It's not a big deal in a thousand line program, but as things grow, the need to always remember to check for null becomes a real problem.

Related to null: you only get one null. The classic assoc problem: if I look up a key in a hash, does the hash lack the key, or is the value of the key null? Optional/Maybe is just wonderful for this case.
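
Sketched in Kotlin (where map lookup exhibits exactly this ambiguity), with a hypothetical wrapper playing the role of Optional/Maybe:

```kotlin
// One null, two meanings: both lookups below evaluate to null.
//   m["a"]  ->  null  (key present, value is null)
//   m["b"]  ->  null  (key absent)
val m: Map<String, String?> = mapOf("a" to null)

// A hypothetical Maybe-style wrapper distinguishes the two cases.
sealed class Lookup<out T>
data class Found<T>(val value: T) : Lookup<T>()
object Missing : Lookup<Nothing>()

fun <K, V> Map<K, V>.lookup(k: K): Lookup<V> =
    if (containsKey(k)) Found(getValue(k)) else Missing
```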

Semantic meaning of common types.

    update(String name, String email, String address, ...)
vs

    update(Name name, Email email, Address address, ...)
Strings get overloaded for lots of uses, and it's so easy to swap the order when you have multiple arguments.

Status codes, stuff like Email vs VerifiedEmail. Encoding those random flags in types makes it so much easier to enforce rules.
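
In Kotlin, for instance, this costs almost nothing. A sketch (`Name`, `Email`, `VerifiedEmail`, and the functions are my own illustrative names):

```kotlin
// Sketch: inline value classes give distinct compile-time identities
// to values that share a runtime representation (String).
@JvmInline value class Name(val value: String)
@JvmInline value class Email(val value: String)
@JvmInline value class VerifiedEmail(val value: String)

// Swapping arguments no longer compiles, and a function can demand a
// VerifiedEmail so the "is it verified?" flag can't be forgotten.
fun sendNewsletter(to: VerifiedEmail) { /* ... */ }

// Hypothetical verification step: the only way to get a VerifiedEmail.
fun verify(e: Email): VerifiedEmail? =
    if ("@" in e.value) VerifiedEmail(e.value) else null
```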

Yes, they are all "easy", but we've all lost hours or days to those problems. The sooner we can stop worrying about boneheaded forgot-to-check-for-null errors, the sooner we can concentrate on real problems. They're trivial problems, but we waste a big fraction of our lives dealing with them. Just taking those issues off the table makes it possible for average programmers to find bugs in Timsort. The cognitive load of these dozens of details detracts from the ability of average programmers (like me) to actually detect the hard cases.

Typing isn't a panacea, but it is a help. It's akin to a socket driver vs. a wrench: sockets abstract away the need to remove, rotate and refit the wrench for every stinking quarter turn. You can do it with a wrench, but it takes so much longer.

There are no silver bullets. But there are regular bullets, and a good type system is one of them.


You don't need a rich type system for null safety. Something like what Kotlin offers[1] is more than enough (in fact, I think it's better than what languages with far richer type systems offer).

My point isn't that types are not helpful -- they are extremely helpful. My point is that on the continuum between no types and "types try to prove everything", there is a point that is the most useful, and that point is probably far from either extreme (I don't know if it's equally far from both, but I think it's pretty far from both).

[1]: http://kotlinlang.org/docs/reference/null-safety.html


I guess it's hard to get a handle on the rare cases; because they are rare, each appears unique. Structurally eliminating the common problems frees up a lot of mental power to face the next most common problem.

I think I'd rather have a type system that's too powerful and sometimes misapplied than have the "easy" but common problems I listed above. It makes the next level of error accessible to mere mortals.

I think you have a great deal of insight, but I don't think computer science is remotely close to being able to say: a type system only needs X power; beyond that we use this other tool.


> I don't think computer science is remotely close to being able to say: a type system only needs X power; beyond that we use this other tool.

That's because that's not a problem for computer scientists to solve but for language designers. Programming language design is 5% theory and 100% psychology. The question of what's useful cannot be answered by math, but by psychology (well, it's determined by psychology, but could be answered with "clinical research").


> in fact, I think it's better than what languages with far richer type systems offer

Can you explain why? I don't know Kotlin, but from this page it seems to divide types into nullable and non-nullable (correct me if I'm wrong). Is it possible to have a type "T??" that has three possibilities - "null", "wrapped null" and "T"? If not, this approach will not help in the assoc problem mentioned by the parent poster.


The "assoc problem" is completely separate from null safety (it also demands a solution in a language without null).

Kotlin's handling of null safety is especially nice because of Kotlin's guarded casts. I.e., if you have a variable x of type A, and B <: A, then within any block where the value of x is known to be of type B, x can be treated as if it were of type B. So, because A <: A?, if you have x: A? and any test of the kind (x != null) -- it can be in an if, a subexpression of an if, a while, a when -- then x can be treated as non-nullable. This, combined with the handling of nulls coming from Java[2], and how all this interplays with method and property access and the other operators, makes up a particularly nice way of handling nulls.

[2]: http://kotlinlang.org/docs/reference/java-interop.html
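
A quick sketch of that behavior (my example, not from the Kotlin docs):

```kotlin
// After the null test, x is smart-cast from String? to String inside
// the block, so its members are usable without ?. or !!
fun lengthOrDefault(x: String?): Int {
    if (x != null) {
        return x.length   // x : String here
    }
    return 0
}
```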




