Friday, December 19, 2008

Haskell for C# Programmers Part 3: Visualizing Monads

Today I'll explain what monads are, what they're good for, and how to build one of your own. One of the interesting things about monads is that they're relatively new.  They were first used in programming languages about 20 years ago and are only just now making their way into languages you have much chance of getting paid to use.

"Why do I need to learn about them?"

There is very little you can do in Haskell without understanding monads.  As I explained last time, Haskell uses them for IO, without which programs are frightfully dull.  Unfortunately it is rather hard to learn a new programming language and a completely foreign concept at the same time, which is why learning Haskell can be daunting.  C# programmers have a leg up though.  They use monads all the time.  Surprisingly, C# has a built-in syntax for working with monads: Linq.

"Linq's for querying data.  What does that have to do with monads?"

On the contrary, Linq is just a general syntax for constructing monads.  It was made to look like SQL in order to make it feel more familiar to developers.

"You sir, are blowing my mind."

It's true.  In the previous post I showed that Linq expressions could be used to create and manipulate Haskell's IO monad.  Using Linq statements we composed several procedures that performed IO operations into one large procedure and then ran it.  That's not exactly what you normally think of as a query is it?

"I suppose not.  So what are monads?"

If you've written object-oriented code you've constructed objects with functions.  If you've written functional code like a Linq query then you've manipulated functions with other functions.

Monads are functions that are constructed with functions.*

"But Linq programs manipulate IEnumerable objects, not functions."

Don't be fooled.  The IEnumerable object is just a convenient wrapper for the IEnumerator.MoveNext method, which does the actual work.  Although not strictly necessary, wrapper objects are usually defined to encapsulate monadic functions.  This is useful because it allows us to give the monad a readable type name that conveys its purpose (e.g. IEnumerables are for enumerating).  Creating wrapper objects also makes it easy to use static typing to prevent attempts to compose two monads that aren't meant to be composed (e.g. IEnumerable monads shouldn't be combined with IO monads).
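To make that concrete, here is a minimal sketch (not from the original post) of an iterator method.  The CountFrom name is made up for illustration; the point is that the IEnumerable the compiler produces is really just a wrapper around a MoveNext function that does the work on demand.

static IEnumerable<int> CountFrom(int start)
{
    // The compiler generates an object whose MoveNext method resumes
    // this loop each time the caller asks for another value.
    while (true)
    {
        yield return start++;
    }
}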

The following animation demonstrates how several IEnumerable monads are composed into one big IEnumerable monad with a Linq query, and then shows what happens when the composed monad is run.

Monads have been likened to onions and matryoshka dolls.  From the animation above you can see why.  When you bind two monadic functions together you get a new one that encapsulates them.  When the composed function is run, each function runs its inner function until the innermost one executes.  That function returns a result.  Then each monad applies some transformation to the result of its inner monad and returns the new value.  This continues until the result finally emerges from the outermost monad.
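Here is a tiny illustration of that layering using plain delegates (nothing monadic yet, and not from the original post): each outer function wraps the one inside it and transforms its result when run.

Func<int> innermost = () => 1;
Func<int> middle = () => innermost() * 10;   // wraps innermost
Func<int> outermost = () => middle() + 5;    // wraps middle

Console.WriteLine(outermost());              // runs all three layers and prints 15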

"Why would I want to build a function this way?"

When you bind two monads of the same type together you get...another instance of the same monad.  Think about it.  If I select from an IEnumerable I get...an IEnumerable.  When you bind two IO monads together you get an IO monad and so on and so forth.  This is a very elegant, predictable way of structuring a program and a great way of controlling the complexity of your code. 
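For example, here's a small snippet (not from the original post) showing that joining two IEnumerables yields yet another IEnumerable, which could itself be queried further:

IEnumerable<int> evens = new[] { 2, 4 };
IEnumerable<int> odds = new[] { 1, 3 };

// The result of binding two IEnumerables is...another IEnumerable<int>.
IEnumerable<int> sums = from e in evens
                        from o in odds
                        select e + o;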

"So how do I know if a function is monadic?"

The first time I read the mathematical criteria for a monad I was awfully confused.  I will attempt to describe the rules for determining whether a function is monadic in plain English, and provide examples, so as not to perpetuate this uncomfortable state.

A function is monadic if*:

1.  There exists a function which can construct it.

IEnumerable<int> numbers = new []{2};

2.  There exists a function which can retrieve a value from it.

int[] number = numbers.ToArray();

3.  There exists a function that can bind two monads together and produce another instance of the monad.

public static IEnumerable<R> SelectMany<T, R>(this IEnumerable<T> that, Func<T, IEnumerable<R>> func)
{
    foreach (var item in that)
    {
        foreach (var nestedItem in func(item))
        {
            yield return nestedItem;
        }
    }
}

The first two should be straightforward because everyone knows how to get data into and out of an IEnumerable.  The last function might strike you as a bit funny.  This SelectMany function is a Linq function, and although you are probably unaware of it, you use it all the time.  The compiler calls it under the hood when you join two lists.  It turns out that the bind function for two IEnumerables is a join.  After all, when you join two IEnumerables you get...well, a new IEnumerable.  Sound familiar?

Take the following Linq query...

var pairs = from left in Enumerable.Range(0, 3)
            from right in Enumerable.Range(3, 6)
            select new { left, right };

The query above is roughly equivalent to...

Enumerable.Range(0, 3).SelectMany(
    left => Enumerable.Range(3, 6).Select(
        right => new { left, right }));

Take a minute to absorb this.  Nested functions can be very difficult for developers who are accustomed to imperative development.
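If it helps, here is a small, hypothetical usage of the pairs query above; iterating it yields every (left, right) combination from the two ranges.

foreach (var pair in pairs)
{
    // Prints 0..2 paired with 3..8, one combination per line.
    Console.WriteLine("{0}, {1}", pair.left, pair.right);
}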

"Okay...I think I get this but I'd like to see another example."

Okay.  Let's take a look at the code required to build Haskell's IO monad in C#.  First we'll define the monad.  No need for a wrapper class; a delegate type will do just fine.

delegate T IO<T>();

Okay, that was easy enough.  Since our monad is just a delegate, constructing one is simple...

IO<object> helloMonad = () => { Console.WriteLine("Hello monad."); return null; };

Getting a value out (in this case null) is even easier.

helloMonad();

Now comes the hard part.  We need to write the bind function.  The Bind function uses a transformation function to create a new IO monad that encapsulates an existing IO monad.

public static IO<R> Bind<T, R>(this IO<T> io, Func<T, IO<R>> func)
{
    // Defer all work until the composed monad is actually run.
    return () => func(io())();
}

As you can see, Bind returns a new IO action that, when run, runs the original monad, passes its result to the transformation function, and then runs the monad that the function returns.  Deferring the work this way means no side effects happen until the composed monad itself is run.  Unfortunately we're not done.  The C# compiler wants us to define one more overload.  If we want to transform the output of our composed monad with another function, the compiler calls the following overload to avoid adding another layer of nesting.

public static IO<V> Bind<T, U, V>(this IO<T> io, Func<T, IO<U>> io0, Func<T, U, V> io1)
{
    return io.Bind(
        value0 => io0(value0).Bind(
            value1 => new IO<V>(() => io1(value0, value1))));
}

Strictly speaking, the overload above adds nothing new; it exists so the compiler can avoid creating an extra layer of nesting.  Any code you can write with the three-argument version can also be written with the two-argument one.
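Before moving on, here's a hedged sketch of using Bind directly, without any Linq.  The goodbyeMonad action is made up for the example; helloMonad is the one defined above.

IO<object> goodbyeMonad = () => { Console.WriteLine("Goodbye monad."); return null; };

// Nothing runs yet; we've only built a bigger IO action.
IO<object> both = helloMonad.Bind(_ => goodbyeMonad);

both();   // now both lines print, in order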

"Umm...I'm not sure I follow."

I know.  It's complicated.  The bind operation is perhaps the most difficult part of monads to understand.  Just keep in mind that bind takes a monadic function f and a transformation g and composes them into a new monadic function h.  Running h is the same as running f, passing its result to g, and then running the monad that g returns.

"So how does Linq know how to compose my IO monad?  It doesn't implement IEnumerable."

Good observation.  In fact query comprehensions don't actually rely on the IEnumerable interface.  They rely on method name patterns.  For example the "select" keyword causes the C# compiler to look for a method named Select on the object it is manipulating.  If you attempt to bind two monads together the C# compiler converts that into a call to SelectMany. 
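For instance, if we ever wanted the select keyword to work on our IO type by itself, this is a hedged sketch of the Select method the compiler would look for (the query below happens to get by with SelectMany alone):

public static IO<R> Select<T, R>(this IO<T> io, Func<T, R> selector)
{
    // Build a new IO action that runs the original and transforms its result.
    return () => selector(io());
}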

Therefore all we have to do to compose our IO monad with Linq is rename the Bind functions to SelectMany!  Then we can construct our IO functions using query expressions.

static IO<object> RealMain(string[] args)
{
    return from address in GetAddressFromConsole()
           from html in GetHtml(address)
           from _ in WriteToConsole(html)
           select _;
}

Notice that nothing has actually happened yet.  We've just constructed a new function that can be run and will return a value.
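To actually kick things off, a hypothetical Main might look like the following (GetAddressFromConsole, GetHtml, and WriteToConsole are the helpers used above, presumably from the previous post):

static void Main(string[] args)
{
    IO<object> program = RealMain(args);   // composes the IO action; no side effects yet
    program();                             // running it performs the IO
}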

"This is really cool and all but when am I gonna learn some Haskell?"

I'm going to suspend this series here for two reasons:

The first is that there are many more good resources on the web for learning Haskell today than there were when I started this series many months ago.  My intent was to take advantage of the fact that C# and Haskell have so many similarities (anonymous types, lambda functions, and Linq, which actually comes straight from Haskell) to get C# developers off the ground.  Hopefully you've not only gotten a good introduction to monads and Haskell syntax, but you've also gained a newfound respect for C#.

The other reason I'm suspending this series is that I believe most .NET developers are more interested in Microsoft's new functional language, F#, which is available today and will be released with the next version of Visual Studio.  F# will sit alongside C# and VB.NET as a fully supported .NET language.  The good news is that everything you've just learned about Haskell and functional programming applies to F#.  The languages have a very similar syntax and share the same idioms.  I look forward to posting about F# in the near future.

*I'm aware that in explaining what a monad is I've managed to be at once too broad and too narrow.  Please know that I'm aware of this and have taken some creative license to make things as clear as possible.

Tuesday, December 2, 2008

Clojure. Wow.

Someone finally did it. They created a LISP that I want to use. Too bad it's for the JVM. :-)

Most of my complaints about LISPs are the same as everyone else's. Few libraries, less documentation, and poor interoperability. I never found a LISP that had everything I wanted. I'm repelled by the complexity of Common Lisp and vexed by Scheme's completely avoidable verbosity. I also find that languages like F# and Haskell do a better job of encouraging practices that minimize mutable state. Putting all that aside, even if a free Lisp with standard libraries as extensive as those of .NET or Java existed, I just don't have time to relearn how to print "Hello world," nor should I have to.

Enter Clojure. It runs on the JVM and as such can use any Java library. As for syntax, it takes a very conservative approach, adding just enough to make a big difference in readability. Clojure has tremendous potential. It encourages the use of sequences, has lots of handy immutable data structures, and some nice syntax for creating them. It's weird, but Clojure programming feels almost as much like F# as it does Scheme.

I have to call out one syntactic decision in particular. The choice of lambda syntax is brilliant:

(reduce #(+ %1 %2) (range 100))

Clojure's got macros (of course), optional type annotations, and even a working Software Transactional Memory implementation(!). Having done some simple benchmarking, I was pleasantly surprised to find that it's no slouch either.

I have a minor complaint, which is that I wish Clojure had adopted JScheme's dot notation for referencing Java class members. I don't find Clojure's approach nearly as readable or natural to a Java programmer. Also there's no way to write mutually recursive functions that run in constant stack space, but that's the JVM's fault.

It's too early to predict whether Clojure will get adopted. My concern is that it will lose out to more syntactically approachable dynamic JVM languages like Jython and JRuby. Frankly I'm not sure I can deal with any more disappointment. I got really excited about Arc. Of course you remember Arc: Paul Graham's LISP flavor which, after years in the making, took the software development industry by storm. Not.

I don't want to pile on Paul Graham. He's gotten enough flak over the Arc debacle. There is a lesson here though. If your language doesn't target one of the major VMs you might as well just write a paper and submit it to LtU.

There's no good reason that Clojure couldn't be ported more or less faithfully to .NET. Take the DLR and add some of the classes in F#'s standard library and you're 70% there. It's not like Lisp parsers are hard to write, after all. Any takers? I'd do it myself if I weren't consumed by charting :-).

About Me

I'm a software developer who started programming at age 16 and never saw any reason to stop. I'm working on the Presentation Platform Controls team at Microsoft. My primary interests are functional programming, and Rich Internet Applications.