
As a programmer, I don't know if it's still relevant to make a strict separation between programming paradigms. You can use immutable types, pure functions, closures and so on in most languages. Conversely, you can define mutable types and imperative code in most functional programming languages.

I'm always surprised, reading comments on these topics, by people saying they don't grasp FP. But don't we use higher-order functions, closures, combinators all the time in most mainstream languages? How hard can it be to learn OCaml or Clojure for someone who uses closures all over the place in JS?

Monads have a steeper learning curve, but besides Haskell, they aren't that pervasive. And there are constructs with similar flavor in mainstream languages too (result types in Rust...)



True, the conceptual difference between (pure) functional and imperative programming is disguised by the many functional patterns most mainstream languages have absorbed. While these patterns are useful, there is more to say about pure functional programming.

I recently gave a talk to some colleagues about that, which was divided into two parts: Practical functional programming patterns we can use today in our codebases (we use mainly C++, Python) and a more abstract part about pure functional programming.

The first part basically boils down to using functions as first-class "objects", while the point of the second part was that there can be no implicit state in a pure functional language. The consequence is that there is no strict order of execution, in direct contrast to imperative programming, which is all about lists of statements executed one after another.

I presented small code examples in Haskell and showed corresponding execution graphs to emphasise that the compiler can easily optimise the execution order.
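
To give the flavor (a minimal sketch in the spirit of those examples, not one from the talk): below, `a` and `b` have no data dependency on each other, so the compiler is free to evaluate them in either order, or not at all if `result` is never demanded.

    example :: Int -> Int
    example n = result
      where
        a = n * 2        -- independent of b
        b = n + 10       -- independent of a
        result = a + b   -- only this dependency constrains evaluation order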

I like that POV, because it clearly distinguishes imperative from functional programming. Starting from there, it's also easy to understand the motivation behind monads or to elaborate on architectural patterns like "functional core, imperative shell".


I don't understand the bit about the execution order, nor how it's relevant in your day-to-day programming.


When you execute a function in a mainstream language, the arguments are usually evaluated first, then the function is called with the result of that evaluation.

E.g. the following in JS will always print the confirm message. But in a lazy language like Haskell, as the x parameter is never used, it'll never be printed.

  (function(x) { return "bar"; })(confirm("foo"));
This can help with implementing things like infinite streams (e.g. iterate[0] in Haskell), which can be a pleasant pattern for solving some problems. From this example alone, I wouldn't call it highly relevant for day-to-day work, but there may be more interesting use cases I'm not familiar with (I'd guess there are interesting applications in concurrent settings).

[0]: https://hackage.haskell.org/package/base-4.20.0.1/docs/Prelu...
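
(For the curious, a quick hypothetical GHCi session showing what iterate gives you: an infinite stream that laziness lets you consume piecemeal.)

    ghci> take 5 (iterate (* 2) 1)          -- first five powers of two
    [1,2,4,8,16]
    ghci> takeWhile (< 30) (iterate (+ 7) 0)
    [0,7,14,21,28]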


Mainstream languages are not expression oriented like true FP languages are. Most people working in mainstream languages aren't aware of the significance of this and wonder why FP seems awkward in their language, despite it having closures, some immutable types, etc.


One thing that struck me while learning nushell is that instead of:

    $ echo "hello world"
    hello world
I can do:

    $ "hello world"
    hello world
Is this an indication that it is expression oriented? (just checking that I've understood the phrase).


This is the statement/expression distinction; `echo "hello world"` is a statement that, when evaluated, has the side effect of printing "hello world" to stdout (or wherever), and has no value (for the purposes of this example).

`"hello world"` is an expression with the literal string value "hello world"; your REPL probably prints out that expression's value once it is evaluated for convenience.

FWIW I'm not actually familiar with nushell and am speaking fairly generally about statements vs. expressions.


Yes.

Essentially, statements have no value (or return no value). Expressions have a value.
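
To illustrate in Haskell terms (a generic sketch, not nushell): in an expression-oriented language, even control flow is an expression with a value.

    -- if/then/else is an expression, so it can sit anywhere a value can:
    label :: Int -> String
    label n = (if n < 0 then "negative" else "non-negative") ++ " number"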


I will add to that the other big difference is how much the compiler enforces the style.


I don't think we should think of things as having a strict separation, but certainly some languages push you harder than others toward certain programming paradigms, and some make other paradigms difficult or awkward to use.

For example, while I can do FP in Rust, I would not really call Rust a FP language. It does have some features that make doing FP possible, but a lot of the code I see (and write) is a mix of imperative and FP.

But if I'm writing Scala, I'm going to be mostly writing FP code, and the language helps me there. Recent versions of Java make it easier to write FP code if you so choose, though you'll still see a lot of people write more imperative code in Java.

(I think if Rust had Scala's for-comprehensions, I'd write more FP-ish Rust. I know there are crates that emulate them, but they're not the most ergonomic, and I don't want to write what ends up being unidiomatic Rust code.)


For-comprehensions are actually not functional programming; their raison d'être is to embed traditional imperative programs in expression-based, referentially transparent FP programs.

I think if Rust had Scala's for-comprehensions, it would be a strictly worse language than it is now. FP languages' do- and for-comprehensions are highly unergonomic, not least because of all the ways they can be overloaded. Haskell has tons of extensions; Scala's is basically desugaring to flatMap calls. That gets abused a lot in the wild, and its reliance on implicits causes practical problems when you write such code.

Scala's new wave of direct syntax, and its general direction away from monads towards algebraic effects, is more sane in how it presents compiler messages, and opens the way to better performance.


Monads are pervasive: async-await.

We just don’t call them monads.


How is async-await a monad? Isn't it just syntax sugar over callbacks?


Consider async-await a syntactic sugar over Promises (from JavaScript). Then, Promises constitute an instance of the Monad typeclass where monadic `bind` or (>>=) is `Promise.then()`, and `return` is `Promise::resolve()`.

Here is a translation of a modification of the example given in [1]:

    const promise1 = Promise.resolve(123);

    promise1.then(v => v * 2).then((value) => {
      console.log(value);
      // Expected output: 246
    });
into Haskell:

    ghci> let promise1 :: IO Int = return 123
    ghci> promise1 >>= (return . (* 2)) >>= print
    246
One key discrepancy worth pointing out is that in the `Promise.then()` API of JavaScript, the function provided to `then` (e.g. `v => v * 2` above) is implicitly composed with a call to `::resolve` in order to turn that function's pure return value, the `Number` 246, into a Promise resolving into the `Number` 246; in Haskell, this operation must be made explicit (hence the composed function `return . (* 2)` passed to the first application of (>>=) or `bind`).

You could say that the instance method `Promise.then()` expects an argument of type (a -> b), with the return value of type `b` being implicitly and automatically wrapped into a Monad `m b`, whereas Haskell's bind expects an argument of type (a -> m b) on that operator's right-hand side, with the return value explicitly wrapped into a Monad `m b` by the provided function argument itself.

[0] https://wiki.haskell.org/Monad

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Maybe I'm confused, but I don't see how Promise.then() corresponds to bind. If I understand correctly, the point of the bind function is that you pass a callback which itself returns the monad type. But the Promise.then() callback should not return a monad, just the regular result value of invoking the callback.

So in essence Promise.then() is like Array.map() while bind is like Array.flatMap()

Edit: It seems you are correct; you can indeed return a promise in the callback and it composes correctly. But in your example the callback does not return a promise and is therefore not an example of monadic bind.


I had a caveat in my original post re. the implicit wrapping of the `Promise.then()` callback's (pure) return value in a new Promise, and how this differs from Haskell's monadic bind; I had hoped to make a loose analogy while pointing out the differences for the sake of illustration. However it is indeed also possible to return a Promise from `.then()`'s callback, which is closer to Haskell's bind: [0]

    The behavior of the returned promise (call it p) depends on the handler's
    execution result, following a specific set of rules. If the handler function:

    * returns a value: p gets fulfilled with the returned value as its value.

    [...]

    * returns an already fulfilled promise: p gets fulfilled with that promise's value as its value.
... then you can obtain a solution closer to the Haskell translation by using the behaviour of the second cited bullet point from the MDN article:

    const promise1 = Promise.resolve(123);

    promise1.then(v => Promise.resolve(v * 2)).then((value) => {
      console.log(value);
      // Expected output: 246
    });
[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


This article argues that while promises support the required operations for monad, they do not support the monad laws:

https://www.siawyoung.com/promises-are-almost-monads/
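
For reference, the three monad laws in question, in Haskell notation:

    -- Left identity:   return a >>= f    =  f a
    -- Right identity:  m >>= return      =  m
    -- Associativity:   (m >>= f) >>= g   =  m >>= (\x -> f x >>= g)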


It's a good point. `Promise::resolve()` "flattens nested layers of promise-like objects (e.g. a promise that fulfills to a promise that fulfills to something) into a single layer." [0]

The example was meant to be more of an illustrative analogy than an exact correspondence.

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


At that moment, the student became enlightened.


Callbacks are not the same as monads if that is what you are implying.


That's right, but only because it's a 1-to-many comparison error.

Callbacks are one particular monad, not monads in general: https://jsdw.me/posts/haskell-cont-monad/


But then CPS machinery is often used to implement Monads.

> Programming with monads is strongly reminiscent of continuation-passing style (CPS), and this paper explores the relationship between the two. In a sense they are equivalent: CPS arises as a special case of a monad, and any monad may be embedded in CPS by changing the answer type.

https://cs.uwaterloo.ca/~david/cs442/monads.pdf


Promises in JavaScript are not monads as far as I can tell. While they have a somewhat similar interface, they do not appear to obey the monad laws.


There is one particular edge case in which they do not satisfy the laws. That happens to make them much more practical in day to day coding than a strict interpretation would be.


So having Promises be monads would make them less useful? This is an interesting point.

It seems to confirm the argument that monads are not that useful or pervasive outside of Haskell.


We might consider promises “applied monads” or “engineered monads”. Monadic in inspiration, and they solve the same core problem, but they aren't straitjacketed into the ivory-tower “laws” in absolutely every edge case (they do satisfy them in the vast majority of cases). Which is good, because it means we never need to write things like “await await await await await foo()”.


> Which is good, because it means we never need to write things like “await await await await await foo()”

But what you're describing is the actual value proposition of monads! It's literally the construct you use to avoid excessive unwraps.

Look at the type signature of bind:

  m a -> (a -> m b) -> m b
It returns m b. Not m (m b). Not m (m (m b)).
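
A concrete illustration with Maybe (a standard Haskell example, not promise-specific): chaining binds never builds up nesting.

    half :: Int -> Maybe Int
    half n = if even n then Just (n `div` 2) else Nothing

    -- Each bind yields Maybe Int, never Maybe (Maybe Int):
    --   Just 8 >>= half >>= half            evaluates to Just 2
    --   Just 8 >>= half >>= half >>= half   evaluates to Just 1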

> but they aren’t straightjacketed into the ivory tower “laws” in absolutely every edge case

In other words, you can't abstract over them. If you have libraries that manipulate monads, they will be incompatible or buggy, because they will assume (by design) behaviour that a non-monad will not live up to.


I should not need to explain to you that the problem is when b itself is internally an (m b’). Or when it is a sum of many types, of varying m-nests.

I can assure you that JS libraries work with Promises just fine.


This is all beside the root topic though, which is that CPS and monads are isomorphic. Which is true. Promises are a different thing.


I was not actually thinking of Javascript (or any particular language) when I mentioned async-await. I was just referring to the concept.


Async-await in Rust isn't a Monad.


> they don't grasp FP. But don't we use higher-order functions, closures, combinators all the time in most mainstream languages?

We don't really use them in mainstream languages. They exist as idioms for shallow application, but one doesn't go full-on combinators in JavaScript, not least for performance reasons. What we do in mainstream languages is as FP as walking stairs compared to rope climbing.

That said I think deep FP makes no sense and only entertains bored intellectuals by being a subspecies of code golf.


"People who closures all over the place in JS" have IME much to gain in learning about and using FP.

- understanding paradigms of state handling enables you to make conscious choices about what to aim for, how to design your data model, etc

- learning the benefits of having all data as values, except at the edges of the system (instead of object refs encompassing mutable state and/or IO handles)

- e.g. the resulting ability to reason about your system in a principled way (this can still be hard to mentally reach, even given all the ingredients, if you're very used to the "poking the mystery ball" way of relating to your app)

- how it comes together so you can just test a subset of your code at your REPL, being able to paste the input data structure there (because it's just values, not active objects) without having your app framework up and running in a stateful context in a browser or server environment.

Another strength of FP is parallel programming. The bottleneck in parallelizing code across multiple threads is concurrency bugs, which come from mutable data. Immutable data and referential transparency are close to a silver bullet for this. (But JS programmers are less likely to appreciate this, since JS is nearly always single-threaded.)
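
As a sketch of that last point (assuming the parallel package from Hackage, and a build with -threaded): because `expensive` is pure, farming the list out across cores cannot introduce data races.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    expensive :: Int -> Int
    expensive n = sum [1 .. n]   -- stand-in for real work; pure, so race-free

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [10000, 20000 .. 100000]))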


There is an interesting trend where things with a strong mathematical definition tend to have the advantage. "Functional programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar in that they don't seem to actually mean anything. People seem to want to describe something (in today's article, data flow) but they don't quite have the language to do it.

If you go to https://en.wikipedia.org/wiki/Functional_programming right now you will see "functional programming is a programming paradigm where programs are constructed by applying and composing functions" which is a bit of a ... it is quite hard to program without doing that. Even SQL is basically applying and composing functions. There are languages that are branded together because they have similar communities or techniques, but the brand isn't describing something beyond people thinking that things belong in a basket together.


> "Functional Programming" doesn't have a precise definition at all

It may be that different people mean different things when they talk about FP. I come from the Haskell corner and from my point of view that's a completely insane claim, at least when it comes to pure FP.

[ ] Are you writing a procedure that just processes its arguments into a return value?

[ ] Does the procedure also operate on a non-static context?

[ ] Does the procedure generate additional side effects?

If you answer yes|no|no, you are programming purely functionally.

The formal litmus test is the question of whether your procedure actually fulfills the definition of a mathematical function with regard to the relationship between the arguments and the return value. If it also does not generate any side effects, it is a purely functional procedure!

Then you can apply the principles of mathematical composition of functions to the construction of programs. Purely functional languages force you to program this way by default and to use special constructs for everything that has to do with side effects: IO monad, algebraic effects, The Elm Architecture.
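
A minimal Haskell sketch of that separation (hypothetical functions, just to show how the type records the checklist answers):

    -- Pure: answers yes|no|no above; the type Int -> Int guarantees it.
    double :: Int -> Int
    double x = 2 * x

    -- Impure: the side effect forces the IO type, visible to every caller.
    doubleAndLog :: Int -> IO Int
    doubleAndLog x = do
      putStrLn ("doubling " ++ show x)
      pure (2 * x)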

How are you supposed to program like this? This is where (in addition to the concepts mentioned above) the higher-order functions that everyone knows come into play: map, fold, filter, flatMap, ... If you only know these and their use in impure languages, as if they were nothing formally thought through but just a programming style for hipsters, then I can understand how the impression arises that there are no precise definitions here.

In practice, the concepts are often very distorted or not understood at all. For many programmers, OOP mainly seems to mean that you get the drop-down list with completions in Rider when you press “.”.


I also come from the Haskell corner and I agree with roenxi. I don't think that FP is a well-defined concept, nor do I understand your characterization of pure FP. Let's take a simple program

    main = do
      v <- newIORef 0

      let procedure = do
            n <- readIORef v
            print n
            modifyIORef v (+ 1)

      ...

and have a look at the conditions you laid out

* Are you writing a procedure that just processes its arguments into a return value?

  No, it has no arguments and its return value is just (), but it does more besides.
* Does the procedure also operate on a non-static context?

  Yes, it operates on v.
* Does the procedure generate additional side effects?

  Yes, it prints to the terminal.
So I have answered no | yes | yes, the exact opposite of what I should have to be classed as doing pure functional programming.

You might say "Ah, but the return value of `procedure` is not (), it's IO ()", but I don't see that that changes anything. I can write my entire program in IO if I want, in a way that's hard to distinguish from having written it in, say, Python. Is that not pure functional programming, despite the fact that it's being carried out in Haskell? Then you might say "Ah, but the difference is that you could have written your program purely, without IO". But again I fail to see how that differs from Python. I can write programs that don't do any IO in Python too.

So what is it that makes Haskell a pure functional language and Python not?

My response to all this is that the notion of "pure" is very unhelpful and the correct way to describe this property that Haskell has is "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs.
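
A trivial concrete instance of that schema:

    a = let x = 1 + 2 in x * x   -- 9
    b = (1 + 2) * (1 + 2)        -- also 9: same result, rhs inlined twice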


main is never pure. Your `procedure` is also not pure (and also not referentially transparent). You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

> but I don't see that that changes anything

It changes everything, but I can't explain it any better than I did above.

> So what is it that makes Haskell a pure functional language and Python not?

If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure. If f1 evaluates x to y once, I know that it ALWAYS evaluates x to y. If a main program in which f1 is used generates a screen output, for example, I can safely exclude f1 as the source of this side effect. If you give me a function `f2 :: Int -> IO Int` instead, I can't draw all these conclusions. In Python, these conclusions would be invalid from the start.

Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it. But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

Of course you can program purely functionally in Python. Just as you can program in a dynamically typed language as if you were dealing with static data types. Or in Java without throwing exceptions and null references around everywhere. Or in C without segmentation faults.

These are not normative or moral comparisons, but other examples of the fact that you can of course program in such a way that the code has certain properties, even if the compiler does not secure these properties. The question is whether you want that.


> main is never pure. Your `procedure` is also not pure

Ah, it's possible I misunderstood you. I missed that you said "Purely functional languages force you to program this way by default". I mistakenly assumed you were saying that in a purely functional language you can only program purely.

So you are saying that Haskell is a pure functional language, but you can also write non-pure things in it? So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

> (and also not referentially transparent)

Interesting. Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

> You can write impure code in Haskell, which your example demonstrates very well. But then it is necessarily IO code. Non-IO code, on the other hand, is always pure (and referentially transparent).

That's interesting. How do you distinguish these two pieces of code:

    f = do
      v <- newIORef 0
      modifyIORef v (+1)
      readIORef v

    g = do
      v <- newSTRef 0
      modifySTRef v (+1)
      readSTRef v
is the former impure and the latter pure? Or are both impure?

> So what is it that makes Haskell a pure functional language and Python not?

> If you give me a Haskell procedure `f1 :: Int -> Int` without showing me the content, I still know that it is referentially transparent and pure ... In Python, these conclusions would be invalid from the start.

Do you mean because of the type? If not, I don't understand what distinction you're drawing, and what makes the conclusions invalid for Python. If so, then does a pure language have to be typed? Would it be impossible for a pure language to be untyped?

> Now you can say, “So what? Who cares?” But it helps me enormously when designing and structuring programs, understanding programs, localizing program behavior, etc. If you don't see any added value in this distinction (ensured by the compiler), we don't have to argue about it.

We don't have to argue about it. I program in Haskell every day and reap the benefits :)

> But I simply cannot agree with you that this does not represent a conceptual and formally justified difference between Haskell and Python.

I agree there's a conceptual and formally justified difference between Haskell and Python. I just don't agree with you about how to characterize it :)

To reiterate my characterization, it is that Haskell has "referential transparency", that is

    let x = rhs in ... x ... x ...
has the same result as

    ... rhs ... rhs ...
regardless of how many times (or zero) x occurs. That's it! Nothing to do with "purity", "IO", "processing arguments", "non-static contexts" or "side effects". Now I may be wrong. That's why I'm trying to understand what you think the characterization is.

But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?


This will be a bit more detailed. I apologize in advance.

> So being a "pure language" is more a case of what sort of style is encouraged rather than what sort of style is possible?

The crucial point is that the (potentially) impure code is clearly and explicitly differentiated from pure code (IO code vs. non-IO code). Haskell uses the type system to draw this distinction.

>> (and also not referentially transparent)

> Could you explain what you mean by that? (The definition of referential transparency that I know doesn't apply to functions but to languages.)

Functions are expressions in Haskell (and in Python). Applications of functions to their arguments are also expressions. I use "procedure" as a synonym for "function".

As I understand it, referential transparency is a property of expressions. But if we understand a language as the total set of expressions that result from its alphabet and its production rules, then we can say that a language is referentially transparent if and only if all its expressions are referentially transparent. Consequently, the views are compatible.

An expression is referentially transparent if it can be replaced by its value (what the expression evaluates to) without changing the behavior of the program.

This implies that if side effects are emitted during the evaluation of an expression, then the expression cannot be referentially transparent, because those side effects would disappear if the expression were simply replaced by its value in the program. If a dynamic (i.e. changeable) context is also processed during the evaluation of an expression, then the expression cannot be guaranteed to be referentially transparent over time either.

A function (i.e. procedure) is pure if it (1.) always returns the same return value for the same arguments and (2.) does not produce any side effects. Note that the definition is also compatible with functions that do not process arguments. The requirement is the same: the same return value must always be returned.

This implies that a function is pure if the relation between its arguments (even if there are 0 arguments) and its return value is like that of a mathematical function. I already wrote that above. It simply boils down to the fact that you can execute the function as many times as you want: it will always return the same value (for the same arguments). This is only guaranteed if the function does not operate on a dynamic context in addition to its arguments (including system time, values from a random generator, etc.).

Pure functions are just stupidly simple things that deterministically transform their arguments into some value. I want these things to make up as much of my code as possible, because they combine like Lego and are pretty foolproof to handle. The value proposition of purely functional programming languages is that they semantically distinguish these pure functions from impure ones. In Haskell, I only have to look at the type of a function. If it is something like `t1 -> t2 -> ... -> IO tn`, then the function is potentially impure, because it evaluates to a value that is wrapped in the IO monad, so to speak. For any other function in Haskell, I know for sure that it is pure.

So I can write as much pure code as possible and then add a bit of IO code to interface the pure code with the execution context, or with other subsystems of the program where it is unavoidable to work with shared state. I can also do all that in Python. But in Haskell, the language semantics ensure that code intended to be pure really is pure, and is recognizably different from impure code.

To answer your question: purity implies referential transparency. I can understand that from the definitions. The converse implication does not hold, though. I don't understand exactly why, but so far it hasn't been a priority for me to understand this.

> is the former impure and the latter pure?

Correct: the former is impure, the latter pure. You can recognize this by the data types:

    f :: IO Integer
    g :: GHC.ST.ST s Integer

You use do notation in both cases, but that has nothing to do with it. You can write not only IO code with the do notation, but all kinds of other code:

    foo :: String
    foo = do
      s <- "hello"
      pure s
The do notation is only syntax to avoid the explicit bind operator. Otherwise your functions would look like this:

    f :: IO Integer
    f =
      newIORef 0 >>= \v ->
      modifyIORef v (+1) >>
      readIORef v

    g :: GHC.ST.ST s Integer
    g =
      newSTRef 0 >>= \v ->
      modifySTRef v (+1) >>
      readSTRef v
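
(Aside: the point of ST is that the local mutation cannot leak, so `runST` from Control.Monad.ST can discharge g into a pure value:)

    import Control.Monad.ST (runST)

    gPure :: Integer
    gPure = runST g   -- evaluates to 1; g's mutation is sealed inside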
> ...

> Do you mean because of the type?

Exactly. I can see from the data type that f1 is pure and f2 is impure.

> what makes the conclusions invalid for Python.

Consider this code:

    def add(a: int, b: int) -> int:
        print("hello")
        return a + b

The function is impure because of the print. Without the print, it would be pure. But the type annotation would be exactly the same. The example is a bit silly, because type annotations are not taken into account in Python anyway (unless you use typeguard).

In Haskell you cannot simply print something in a procedure of type `Int -> Int -> Int`. The type checker would reject it. If you want to do that, you have to type it as `Int -> Int -> IO Int`. You can then no longer simply use it as if it were typed as `Int -> Int -> Int`. It is then simply a completely different procedure: an impure one.

> does a pure language have to be [statically] typed?

Very good question! I don't know. I don't think it's formally a requirement, but I only know languages that handle it via the type system (IO, algebraic effect handlers, TEA).

> But so far I have never discovered a benefit of programming "purely" in Haskell that is not ultimately a consequence of what I call "referential transparency" above. Do you know one?

I think you're right about that. Purity and referential transparency somehow seem to be almost the same thing but not 100% congruent. I have a gap in my understanding at this point.

https://stackoverflow.com/questions/4865616/purity-vs-refere...

This seems to suggest that I won't be able to close this gap any time soon. In the end, it probably doesn't matter. I assume we both mean the same thing.


> This will be a bit more detailed. I apologize in advance.

No need to apologize. Thank you for your detailed response!


It's arguable whether SQL is "basically applying and composing functions," given that its mathematical underpinnings lie in the relational algebra (hence relational databases). Further, while it may be technically correct to make statements such as "all programming languages are based on lambda calculus, a universal model of computation" [1], this is about as precise a statement as saying that all mathematics is based on set theory. While it may be true, it may be too low a level of abstraction, and there are probably higher-order constructs or machinery that get closer to your actual domain of study or application (e.g. real numbers, functions, vectors).

I have mentioned downthread [0] a few characteristics that distinguish FP from other paradigms - namely, first-class functions and higher-order functions (the ability to pass functions to functions); referential transparency, "pure functions" without side effects, and equational reasoning (the ability to replace expressions with their values and keep the behaviour of the program identical); and a tendency to prefer parametric and ad-hoc polymorphism (generics and trait/typeclass bounds) to subtype polymorphism, the "inheritance" boogeyman of OOP.

Again as mentioned in [0] the "expression problem" [2] is a good model for discussing the ergonomics and tradeoffs of FP vs. OOP.

[0] https://news.ycombinator.com/item?id=41450851

[1] https://news.ycombinator.com/item?id=41452258

[2] https://wiki.c2.com/?ExpressionProblem


The distinguishing characteristics don't distinguish FP from anything. Those are features like first class functions that I would expect to find support for in Python or C++, for example. And it isn't possible to write a useful program that doesn't have side effects because there would be no IO.

It is a case of the paradigm not really meaning anything. It is an arbitrary bunching of language features and code properties that doesn't provide a particularly useful guide. Anything that is a good idea in one paradigm is also a good idea in any other paradigm too.


It's worth noting that both languages cited (Python, C++) are multi-paradigm and therefore contain aspects of both OOP and FP; there are very few languages that are exclusively object-oriented or functional. The naming of the functools [0] stdlib package for Python's higher-order functions is illustrative in this respect, and there is an entire HOWTO for "implementing programs in a functional style" [1] in the Python docs.

If you want to see object-orientation taken to the extreme, and how it makes writing programs in a functional style extremely unergonomic, you can read old-school Guava code from Java 6 and try to decipher all the line noise; there is even a caveat in its official documentation [2] that states,

    functional programming without Java 8+ requires awkward and verbose use of anonymous classes.

    Excessive use of Guava's functional programming idioms can lead to verbose, confusing, unreadable,
    and inefficient code. These are by far the most easily (and most commonly) abused parts of Guava,
    and when you go to preposterous lengths to make your code "a one-liner," the Guava team weeps.
> It is a case of the paradigm not really meaning anything. It is an arbitrary bunching

This is a case of throwing the baby out with the bathwater; just because we can't come up with a perfectly precise definition doesn't mean that we should dispense with rough heuristics and (perhaps crudely drawn) boundaries that serve as a useful clustering of related features. Try asking a machine learning or computer vision algorithm for the criteria it uses to categorize something as a "dog;" it's unlikely that you'll come up with something as precise as the definition of the Platonic ideal of a dog, but we can still use its model to recognize dogs with a reasonable degree of accuracy.

[0] https://docs.python.org/3/library/functools.html

[1] https://docs.python.org/3/howto/functional.html#small-functi...

[2] https://github.com/google/guava/wiki/FunctionalExplained


But if we take a holistic view here, languages that treat paradigms as a thing have been trounced in the marketplace of ideas. And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The attempts from decades past to draw a distinction between OOP and FP paradigms have largely failed. Consensus seems to be that whatever is concise and easy to understand is best, and attempting to force code into a paradigm is inferior to looking at specific individual features and techniques in the context of the problem being tackled.


> "Functional Programming" doesn't have a precise definition at all as far as I know, so it is likely it doesn't refer to a real thing. Most of the paradigms are similar as they don't seem to actually mean anything.

> The distinguishing characteristics don't distinguish FP from anything.

> The attempts from decades past to draw a distinction between OOP and FP paradigms has largely failed. Consensus seems to be that whatever is concise and easy to understand is best and attempting to force code into a paradigm is inferior

This last statement is a marked departure from your first assertion, which was the descriptive claim that functional programming, and indeed all paradigms, don't actually exist or mean anything; here you instead make the normative claim that "attempting to force code into a paradigm is inferior," on the practical grounds of concision and comprehensibility.

Moreover, if paradigms don't actually exist as per your first assertion, I find it difficult to understand what exactly it is we are "forcing code into" in your second statement. Clearly there exists some cluster of related features you are referring to when you point out the pitfalls of trying to pigeonhole your problem into a certain style of solution. As mentioned, Python even has a name for this cluster, and it's called the `functools` module, which lives alongside other implementation styles (OOP, procedural, etc.) in its multi-paradigm space of solutions.

> And to find an example of a non-functional programming language your first instinct was to talk about Java 7 - where there is a strong consensus that if you were going to create a new program today it should not be used. We're up to Java 22.

The claim was basically that OOP and other paradigms don't exist; to refute the claim I supplied a counter-example of a language that leans so heavily towards the OOP side of the spectrum that writing FP code within it is prohibitive. Just because that language has been updated to incorporate techniques from other paradigms doesn't mean that the OOP paradigm ceased to exist once Java 8 came out. Just because following a "purist" FP or OOP approach is impractical does not mean that such categorizations don't exist, even if only as a limit or extreme to which varying languages tend to with varying degrees of resemblance. Java 22 can be considered mixed-paradigm, but it's probably still more OOP than Racket Lisp.

There is a difference between claiming that a distinction does not exist, and that drawing distinctions is an inferior or impractical approach.

In general I agree with the sentiment that one should be comfortable working with multiple paradigms, because as per the "expression problem" a strictly FP or OOP approach comes with tradeoffs, and which tradeoffs should be made depends on the particular problem being tackled.


Here's a good enough definition: "The same input yields the same output".

> "functional programming is a programming paradigm where programs are constructed by applying and composing functions"

This only seems like a useless truism if you take 'function' to mean 'method' or 'procedure'. If you nail down 'function' to same input -> same output, then it starts to make more sense.


No, you missed the meaning. The paradigm is referring to the fact that the ENTIRE program must be constructed this way.

So let's say you have this in your code:

   x = 1
   b = x + 2
   x = b + x
You have composed three procedures here in order. This is illegal. You must ONLY construct programs by composing functions; there can be no other way.


Not so fast. This next is legal Haskell (the nesting matters; a single let block may not bind x twice):

  let x = 1 in
  let b = x + 2 in
  let x = b + x in
  <do something with b and x>
The third line creates a second binding of x which shadows the first binding (though note that, because let is recursive in Haskell, the x on that line's right-hand side already refers to the new binding), and shadowing is considered by most people to be within the functional paradigm. What is not considered functional by most people is assigning a variable in a loop, although even here you can do it in Haskell. Specifically, you can call readIORef and writeIORef every loop iteration, but last time I tried, that was about 1000 times less efficient than in an imperative language.


The intent of my syntax was to show mutation and procedures, not variable shadowing and let-in.


I don't think I missed anything. You presented non-functional code as an example of code which isn't functional. There's no contradiction.

And good riddance, too! I'm so sick of seeing this anti-pattern at $DAYJOB: First, create a thing which isn't what you called it (e.g. 'result'), and then mutate it in place until it is what you called it.


It's funny you say that, because I'd say FP has a much better defined mathematical foundation than OOP. In fact, before it became as dominant as it is today, it was often criticized as being too scholarly and theoretical. It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

I don't think OOP ever had nearly as strong a mathematical/scholarly foundation as FP has.


> It's built mostly on the foundations of Lambda Calculus (and maybe some Type Theory).

Can you describe a programming language that isn't based on lambda calculus? It is a universal model of computation.


In that sense, every programming language (as well as lambda calculus) is based on Java, since Java is a universal model of computation. So I don't think it's very useful to take "is based on" to be as broad as "is interpretable using".


Conspicuously absent in that thought is an alternative way to interpret "is based on". Because I suspect that in most senses that FP is based on lambda calculus we probably can claim it is also based on a subset of Java. Everything useful is multi-paradigm these days, programming paradigms turned out to not be a useful lens for developing useful programming languages.


> Conspicuously absent in that thought is an alternative way to interpret "is based on".

"Has designers who consciously adopted significant ideas from" seems good enough for me.


That is a path-dependent property though. So if we interpret it that way we can't tell if a language is based on lambda calculus without interviewing the author.

Indeed, the definition admits that we could have two almost identical languages but only one of them is based on lambda calculus. I don't think it is a reasonable way to interpret "is based on", it requires too much knowledge of history. It is more proper to have a technical definition that is rooted in the language itself.


C'est la vie: it's always going to be a very fuzzy concept when applied, short of broadening it until it's unrecognizable. Not every property of a thing needs to be perfectly decidable with no room for guesswork.

To put it pithily, "This popular notion is provided 'as is', without warranty of any kind, express or implied, including but not limited to fitness for a particular purpose." If you really want a property that's fully intrinsic, I'd suggest disregarding fuzzy 'based on'-ness and instead considering some particular aspect you care about, e.g., "How difficult is it to express such-and-such a technique in this language?"


FP is math. There's not much distinction. The language of math is FP.


> I don't know if it's still relevant to make a strict separation between programming paradigms.

Consider whether the same input to your code will result in the same output (a.k.a. functions); that's a good place to draw the line.

You can stick fancy 'FP' types and closures into a method; that doesn't turn it into a function.


Most JS codebases I see (frontends, typically in React) have only a basic sense of functional programming/closures. It is a massive paradigm shift to move to Clojure from where modern JS is today. It was probably less so in the jQuery days, funnily enough.


I wouldn't think of paradigms as strictly separate, but there are definitely clusters of related techniques that go well together. Higher order functions work best with immutability and pure functions, and struggle without them. Currying is somewhat useful by itself, but much more valuable with function composition or HOFs.

It's also important to teach the techniques together because functional programming tools are typically (deliberately) less powerful, which makes them easier to analyze, but means you need more tools.
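
For instance (a standard Haskell illustration of that synergy): currying gives you partially applied functions, and composition chains them into bigger tools.

    add :: Int -> Int -> Int
    add x y = x + y

    addTen :: [Int] -> [Int]
    addTen = map (add 10)            -- currying: add 10 is itself a function

    process :: [Int] -> [Int]
    process = filter even . addTen   -- composition of two small pieces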



