Disclaimer: I don't know much about this programming language or about Effects, so there may be a better way to do this already
Something I'd sometimes like to do when profiling complex code is to have an (essentially) global variable tracking the total time a function took to execute across all invocations.
I am guessing that mutating the global counter would count as an effect, and I wouldn't really want to add the effect all the way through the call graph. I think this is something where the handling ought to be similar to how you're handling Debug.
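For comparison, in a language without an effect system this pattern is a couple of lines with an atomic global. A rough Rust sketch (the names `WORK_NANOS` and `work` are made up for illustration):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::Instant;

// Global accumulator: total nanoseconds spent in `work` across all invocations.
static WORK_NANOS: AtomicU64 = AtomicU64::new(0);

fn work(n: u64) -> u64 {
    let start = Instant::now();
    let result: u64 = (0..n).sum(); // the code being profiled
    WORK_NANOS.fetch_add(start.elapsed().as_nanos() as u64, Ordering::Relaxed);
    result
}

fn main() {
    for i in 0..1000 {
        work(i);
    }
    println!("total time in work: {} ns", WORK_NANOS.load(Ordering::Relaxed));
}
```

In an effect-typed language, that `fetch_add` is a mutation of global state, which is exactly the kind of thing the effect system would want to surface in `work`'s signature.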
We think that functional programmers should be able to write e.g. `List.count(x -> x > 5, l)` (or e.g. use pipelines with |>) and have it run as fast as an ordinary imperative loop with a mutable variable. The Flix compiler gives them that -- but it requires the program to undergo transformations that may move expressions around or eliminate them. It is dangerous to perform such optimizations under incorrect assumptions about types or effects. How to support that together with print-debugging is the challenge.
For systems in production, we have the `Logger` effect and associated handlers.
And yet modern optimizers don't actually seem to have a problem with a transformation like that, as you must know. Try `list.iter().filter(|&&x| x > 5).count()` in Rust.
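For what it's worth, here's that pipeline as a runnable snippet; with optimizations enabled, rustc typically compiles it down to a plain counting loop:

```rust
fn main() {
    let list = vec![1, 7, 3, 9, 2];
    // `iter()` yields `&i32`, and `filter` passes each item by reference,
    // so the closure sees `&&i32`; the `&&x` pattern dereferences twice.
    let n = list.iter().filter(|&&x| x > 5).count();
    println!("{n}"); // 7 and 9 pass the filter
}
```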
And yes, Rust doesn't have an effect system yet, but others have mentioned Haskell, how it handles tracing and logging, and the limitations that show up when effect systems interact with such things.
It was a simple example; whether a specific optimization applies is very tricky. We have to look at the details. When can Rust move or eliminate a binder? Does Rust support automatic parallelization? What happens if you use unsafe blocks to lie to their type and ownership system? I think many of the same issues will surface.
To me, the interesting question is: What happens when you lie to the type (and effect or ownership) system?
No, and there's no indication that automatic parallelization is at all worth the effort versus having the author explicitly annotate which things need parallelization (e.g. Rayon is a drop-in for many tasks where you know you'll need it). Otherwise you're at the mercy of heuristics baked into the language, which in practice never work out well and also slow down the non-multithreaded use cases.
> To me, the interesting question is: What happens when you lie to the type (and effect or ownership) system?
As others have said, just having the print get elided if the operation gets optimized out would be fine. That's what Haskell does. It's a weird choice to look at the challenges of effect systems and conclude that the effect system idea is perfect and it's the programmers who are wrong, rather than admitting that the effect system has gaps and solving them in other, less surprising ways.
Oh I hadn't heard of Ante before. This looks very close to the language I wanted out of Rust. Haskell's module system, row polymorphism, linear types, no sepples glyph soup. That's an instant bookmark save. Will be watching that space very closely.
The counter-point is the following: Functional programming is great for working with lists and trees. But functional programming (and imperative programming) struggle with succinctly, correctly, and efficiently expressing queries on graphs. Datalog, on the other hand, is excellent for working with graphs. It is simple, expressive, and (can be) very fast. It is a power tool. Most of the time it should not be used, but when it fits the problem domain its benefit can be 10x or 100x. It is also worth pointing out that Datalog is strictly more powerful than SQL (modulo various extensions).
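To make the comparison concrete, the classic Datalog reachability program is just two rules, `Path(x, y) :- Edge(x, y).` and `Path(x, z) :- Path(x, y), Edge(y, z).`. Here's a sketch of what they roughly amount to when hand-written (in Rust, as a naive fixpoint, ignoring the semi-naive evaluation and indexing a real Datalog engine would use):

```rust
use std::collections::HashSet;

// Naive fixpoint: keep deriving Path(x, z) from Path(x, y) and Edge(y, z)
// until no new facts appear.
fn transitive_closure(edges: &[(u32, u32)]) -> HashSet<(u32, u32)> {
    // Rule 1: Path(x, y) :- Edge(x, y).
    let mut path: HashSet<(u32, u32)> = edges.iter().copied().collect();
    loop {
        let mut new_facts = Vec::new();
        // Rule 2: Path(x, z) :- Path(x, y), Edge(y, z).
        for &(x, y) in &path {
            for &(a, z) in edges {
                if y == a && !path.contains(&(x, z)) {
                    new_facts.push((x, z));
                }
            }
        }
        if new_facts.is_empty() {
            return path; // fixpoint reached
        }
        path.extend(new_facts);
    }
}

fn main() {
    let edges = [(1, 2), (2, 3), (3, 4)];
    let path = transitive_closure(&edges);
    println!("{} reachable pairs", path.len());
}
```

Two declarative lines versus a page of loops and bookkeeping is roughly the succinctness gap being claimed.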
The goal of Flix -- and typically of any high-level programming language -- is to provide powerful abstractions and constructs that make programming simple, concise, and (often) less error-prone. Here Datalog fits perfectly.
Now that said -- looking through the Flix documentation -- I think we need to do a better job at selling the use case for Datalog. Partly by adding arguments such as the above and partly by adding better examples.
But lists and trees aren't really "built into" languages. They're part of the standard library, not the language itself. Maybe one could say "foreach" syntax builds lists into a language, but in many languages you can foreach non-lists, others have foreach as a method instead of syntax, and it still doesn't say anything about trees.
I do see your point, I'm just not so sure it'd be the right thing to do. It feels too big and arbitrary to be a core language feature, and better left to a library. If there are some core features that would make building such a library easier, I'd focus on those rather than on the logic programming itself. Something like how Rust did async. (Though, contrarily, I think Rust should have built async into the language, since it's pervasive and it's hard to interop different implementations. Unlike async, logic programming is typically self-contained, and there would rarely be a need to interop multiple implementations.)
Anyway, great work so far. I look forward to seeing it progress.
Going a little further, what I'd really like to see is the core concepts implemented in a way that allows a standard implementation to follow straightforwardly from the type system, but also allow for doing nonstandard things. Like, "here's a logic language built in" is kind of boring. What's more enticing is "Here are the components. Here's a nominal implementation based on these components. Here's a weird thing you can do that we hadn't actually designed for, but could fit certain use cases."
And sure, now I know it's all kind of fancy but fairly trivial stuff you can do with monads (and I guess free monads in the LINQ-to-SQL case), but it was fascinating to me at the time.
So yeah, for "selling" purposes, I think rather than selling Datalog built into the language as a front-page feature, a series of "how to build a Datalog" posts would go further in showing off the power of the language components it's built from.
(And FWIW I do like the way C# has built-in support for "important" monads like iterators (foreach), generators (yield), async (await), optional (null propagation operators), etc., even though a language purist would argue against it. I think it provides an easier on-ramp for newer developers, and helps make common things more concise and readable. So it'd be interesting to see where that line would best get drawn for logic programming, what gets special-but-extensible syntax support, and what is purely implementation and functions).
In the uncommon case, some stack frames must be heap allocated.
This is unavoidable when (a) the runtime environment, here the JVM, does not support tail calls, and (b) the language wants to guarantee that _any_[1] tail call does not grow the stack.
[1] Any call. Not just a call to the same function.
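One standard way to get that guarantee on such a runtime is a trampoline: each tail call is reified as a heap-allocated thunk and run by a constant-stack driver loop. A rough sketch of the general technique (not necessarily what Flix emits):

```rust
// Each step either finishes or returns the next call, boxed on the heap
// instead of pushed on the stack.
enum Step {
    Done(u64),
    Call(Box<dyn FnOnce() -> Step>),
}

// Mutually tail-recursive parity check: returns 1 if n is even, 0 otherwise.
fn even(n: u64) -> Step {
    if n == 0 { Step::Done(1) } else { Step::Call(Box::new(move || odd(n - 1))) }
}

fn odd(n: u64) -> Step {
    if n == 0 { Step::Done(0) } else { Step::Call(Box::new(move || even(n - 1))) }
}

// The driver loop runs in constant stack space; the "frames" live on the heap.
fn trampoline(mut step: Step) -> u64 {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::Call(f) => step = f(),
        }
    }
}

fn main() {
    // Mutual recursion a million calls deep, without stack overflow.
    println!("{}", trampoline(even(1_000_000)));
}
```

Note these are calls between two different functions, so a simple self-tail-call-to-loop rewrite would not cover them; that's why the frames end up on the heap.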
What do you mean? In Flix, if a function has "Bool" as a return type then it can only return a Boolean value. That's what a type system ensures. Similarly, in Flix, if a function has the "ReadsFromDB" effect then it can call operations that cause "ReadsFromDB" -- but it cannot cause any other effect. In particular, if there is also a "WriteToDb" effect then the function cannot perform it.
This is not just aspirational. It is an iron-clad guarantee; it is what is formally called "effect safety" and it has been proven for calculi that model the Flix type and effect system.
To sum up, in Flix:
- If a function is pure then it cannot perform side-effects.
- If a function has the Console effect then it can only perform operations defined on Console.
- If a function has the Console and Http effect then it can only perform operations defined on Console and Http.
But you have user-defined effects, don't you? E.g. say I define an effect ReadsFromDB; it doesn't necessarily do what it says on the tin, and there is no way a compiler can check that it does. It could read from the DB and also send some rockets into space. So a consequence of that is that these "effect systems" just amount to giving names to blocks of code. That's not necessarily a bad thing.
If you define a variable called number_of_apples, there is no way for the compiler to check that it actually contains the number of apples. How is that different?
It's different. Effect systems claim to be a "major evolution" that "enforces modularity". But they don't really enforce anything beyond the modularity already provided by standard OOP classes or, in an FP language, modules of functions.
Do you have some specific special effects in mind?