Simon Peyton Jones - Haskell is useless

By: bunidanoable


Uploaded on 12/18/2011

Simon Peyton Jones talking about the future of programming languages

Comments (6):

By zengid    2017-09-20

I feel like the author didn't do this as a way to bash Clojure but to try and progress Common Lisp. Anecdotally, I remember Hickey saying something about a stagnation in the CL community, so if Clojure inspires cross-pollination of ideas then that's a good thing.

For instance, Simon Peyton Jones describes [1] how they took the ideas around STM from 'unsafe' languages and put it into Haskell, developed it further, and then later the 'unsafe' languages took the Haskell innovations and put it back into their languages.
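To make the STM point concrete, here is a minimal sketch of what that looks like in Haskell (using the stm package; the transfer example and its names are mine, not from the talk):

    import Control.Concurrent.STM

    -- Move an amount between two accounts in one atomic transaction:
    -- no other thread can ever observe the intermediate state.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      balances <- atomically ((,) <$> readTVar a <*> readTVar b)
      print balances   -- (60,40)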

Clojure isn't perfect, and Common Lisp isn't perfect, but hey, they're better than programming in assembly. We should be stoked about someone trying to innovate; why do these posts always turn into language wars?

[1] https://youtu.be/iSmkqocn0oQ around 3:50, but the video is short so it's worth watching the whole thing.

Original Thread

By klodolph    2017-09-20

Let's get this out of the way: Rust is great. Rust apologia like this article is not so great.

> As an aside, remember that the only difference to C/c++ is that if you write a “basic linked list” in them, all of your code will be unsafe.

There's a bit of mental gymnastics going on here. The word "unsafe" is performing double duty: it means "memory safety not guaranteed by the compiler" in Rust, and it means something else entirely when you are talking about C++, where memory safety was never guaranteed by the compiler in the first place. The other problem with this statement is that linked lists in C or C++ aren’t really that hard to get right; in fact, they’re easy. Maybe you draw out a diagram with pen and paper before you write the code, but you’re unlikely to be facing segfaults.

I admit I’m biased here, because I’ve been using Haskell for something like 15 years now, but I feel like the Haskell community acknowledges that Haskell’s type system sometimes gets in the way of useful, interesting work, and that even a great library ecosystem isn’t enough to fully overcome that. That’s how safety generally works: it’s harder to write programs that do useful things, but in exchange, it’s also harder to write programs that behave unpredictably or do dangerous things. Because Rust and Haskell put you in such restrictive type systems, sometimes you have to break out to get real work done.

Haskell’s pitch, in my mind, is, “Let’s make it easy to reason about side effects and value semantics.” From the article, Rust’s pitch could be, “Let’s make it easy to reason about control- and data flow.” These are both evolutionary steps in the development of programming languages, all programming languages being somewhat flawed. Future languages will steal ideas from Rust the same way modern languages have stolen ideas from Haskell.

But apologia still leaves a bad taste in my mouth. The article says, “Is this a problem with Rust? Not at all.” There’s a worrying unwillingness to acknowledge that Rust is flawed, and the article describes Rust users as “Rustaceans” and makes broad generalizations about how they behave. This reminds me of the excesses of 2000s-era object-oriented programming. The comment about “Rust’s facilities for code reuse” could have been taken straight out of a press release for Java back in the late 1990s for all I know.

Rust is great, but this article is further cementing my distaste for the Rust community.

By comparison, here is Simon Peyton Jones talking about how Haskell is useless: https://www.youtube.com/watch?v=iSmkqocn0oQ

Original Thread

By dwenzek    2018-03-25

I find this post a bit wordy but interesting. It points out well how the two approaches to dealing with a system's state, versioning it or overwriting it in place, have been in competition for a long time and across various domains of computer science.

Alan also cites a paper, [Worlds: Controlling the Scope of Side Effects](http://www.vpri.org/pdf/tr2011001_final_worlds.pdf), a hot topic in the functional programming world. That paper reminds me that ideas from FP and OO can improve each other (an idea well developed in this short video: https://www.youtube.com/watch?v=iSmkqocn0oQ).

Original Thread

By anonymous    2017-09-20

As a preface, it's not "the IO Monad", despite what many poorly-written introductions say. It's just "the IO type". There's nothing magical about monads. Haskell's Monad class is a pretty boring thing - it's just unfamiliar and more abstract than what most languages can support. You don't ever see anyone call IO "the IO Alternative," even though IO implements Alternative. Focusing too much on Monad only gets in the way of learning.
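For reference, the class in base looks roughly like this (defaults and the laws elided); it really is just a small interface:

    class Applicative m => Monad m where
      (>>=)  :: m a -> (a -> m b) -> m b
      (>>)   :: m a -> m b -> m b
      return :: a -> m a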

The conceptual magic for principled handling of effects (not side-effects!) in pure languages is the existence of an IO type at all. It's a real type. It's not some flag saying "This is impure!". It's a full Haskell type of kind * -> *, just like Maybe or []. Type signatures accepting IO values as arguments, like IO a -> IO (Maybe a), make sense. Type signatures with nested IO make sense, like IO (IO a).

So if it's a real type, it must have a concrete meaning. Maybe a as a type represents a possibly-missing value of type a. [a] means 0 or more values of type a. IO a means a sequence of effects that produce a value of type a.
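To make that concrete, here's a small sketch (the names are my own invention) showing IO values behaving like any other values: sitting in a list, being passed to a function, and being nested. Nothing runs until they are sequenced into main:

    module Main where

    -- IO actions stored in an ordinary list; nothing runs yet.
    greetings :: [IO ()]
    greetings = [putStrLn "hello", putStrLn "world"]

    -- A function that takes an IO value as an argument: run it twice
    -- and pair the results.
    twice :: IO a -> IO (a, a)
    twice act = do
      x <- act
      y <- act
      pure (x, y)

    -- Nested IO: an action that returns another action without running it.
    deferred :: IO (IO ())
    deferred = do
      line <- getLine
      pure (putStrLn ("You typed: " ++ line))

    main :: IO ()
    main = do
      sequence_ greetings      -- run each stored action in order
      (a, b) <- twice getLine  -- pass an IO value to a function
      inner  <- deferred       -- get back an IO action...
      inner                    -- ...and run it
      putStrLn (a ++ " / " ++ b)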

Note that the entire purpose of IO is to represent a sequence of effects. As I said above, they're not side effects. They can't be hidden away in an innocuous-looking leaf of the program and mysteriously change things behind the back of other code. Instead, effects are rather explicitly called out by the fact that they're an IO value. This is why people attempt to minimize the part of their program using IO types. The less that you do there, the fewer ways there are for spooky action at a distance to interfere with your program.

As to the main thrust of your question, then - a complete Haskell program is an IO value called main, and the collection of definitions it uses. The compiler, when it generates code, inserts a decidedly non-Haskell block of code that actually runs the sequence of effects in an IO value. In some sense, this is what Simon Peyton Jones (one of the long-time authors of GHC) was getting at in his talk Haskell is useless.
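A minimal sketch of that shape (my own toy example, not from the talk): all the interesting logic lives in a pure function with no IO in its type, and main is just the IO value whose effects the generated code runs:

    module Main where

    -- Pure logic: no IO in the type, so no effects can hide in here.
    summarize :: String -> String
    summarize input =
      "lines: " ++ show (length (lines input)) ++
      ", words: " ++ show (length (words input))

    -- The whole program is this one IO value plus the definitions it uses.
    main :: IO ()
    main = do
      contents <- getContents         -- effect: read stdin
      putStrLn (summarize contents)   -- pure work, wrapped in one output effect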

It's true that whatever actually executes the IO action cannot remain conceptually pure. (And there is that very impure function that runs IO actions exposed within the Haskell language. I won't say more about it than that it was added to support the foreign function interface, and that using it improperly will break your program very badly.) But the point of Haskell is to provide a principled interface to the effect system, and hide away the unprincipled bits. And it does that in a way that's actually quite useful in practice.

Original Thread

By anonymous    2018-01-29

Haskell functions are not the same as computations.

A computation is a piece of imperative code (perhaps written in C or assembly, and then compiled to machine code directly executable on a processor) that is by nature effectful and even unrestricted in its effects. That is, once it is run, a computation may access and alter any memory and perform any operations, such as interacting with the keyboard and screen, or even launching missiles.

By contrast, a function in a pure language, such as Haskell, is unable to alter arbitrary memory and launch missiles. It can only alter its own personal section of memory and return a result that is specified in its type.

So, in a sense, Haskell is a language that cannot do anything. Haskell is useless. This was a major problem during the 1990s, until IO was integrated into Haskell.

Now, an IO a value is a link to a separately prepared computation that will, eventually, hopefully, produce an a. You will not be able to create an IO a out of pure Haskell functions. All the IO primitives are designed separately and packaged into GHC. You can then compose these simple computations into less trivial ones, and eventually your program may have any effects you wish.
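As a rough illustration (my own toy example), here is how a couple of GHC-provided primitives compose into bigger IO computations, first with >>= directly and then with do-notation:

    module Main where

    -- Two primitive actions composed with >>=: read a line, then echo it.
    echoOnce :: IO ()
    echoOnce = getLine >>= \line -> putStrLn ("echo: " ++ line)

    -- The same kind of composition with do-notation, run three times.
    echoThree :: IO ()
    echoThree = do
      echoOnce
      echoOnce
      echoOnce

    main :: IO ()
    main = echoThree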

One point, though: pure functions are separate from each other; they can only influence each other if you use them together. Computations, on the other hand, may interact with each other freely (as I said, they can generally do anything), and therefore can (and do) accidentally break each other. That's why there are so many bugs in software written in imperative languages! So, in Haskell, computations are kept in IO.

I hope this dispels at least some of your confusion.

Original Thread
