Originally published at Inside the Mind of the G Machine. You can comment here or there.

I started learning Haskell around the summer of 2015, and to be honest, I found it very difficult. As a representative of functional programming, it requires a very different mode of reasoning, one that I had not really exposed myself to before. I used very light features of Haskell at a real job for testing purposes, and with “very light,” `IORef` is obviously disqualified. I haven’t touched it for over a year, and I am quite rusty. Still, since I had diligently worked through example code of various Haskell constructs and patterns, to the point where I could follow what was going on without much difficulty, I expect I can retrain up to the level I had previously been at without too much trouble.

I believe I first learned of functional programming a few years before I started with Haskell, with, memorably, the introduction of the two-line quicksort. At that time, I basically didn’t have a clue how programming really worked, so I might actually have believed that it was the real quicksort. Maybe I wasn’t even fully aware of quicksort’s important in-place property, which is what renders it superior to mergesort, whose asymptotic complexity is the same. It’s pretty non-trivial to implement the actual quicksort in Haskell, because Haskell, as a functional (no mutable state) programming language, makes mutable arrays somewhat difficult to work with. Of course, the “no mutable state” associated with functional programming is an ideal that is too good to be true. What happens, roughly, is that mutation of memory, which is necessary for any program to run, is done under the hood; it is the higher-level language that protects against dangerous mutation of memory. Though I read the code for it long ago, I still cannot say that I ever actually understood how the real quicksort is implemented in Haskell. Even now, given how rusty I am, it feels like a bit too much to put on my plate. So instead, I’ll go over bubble sort, with the hope that quicksort will be elaborated on this blog not long after. The resource I am basing this off of is from another blog.
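For reference, that famous two-line definition looks something like the following — elegant, but not the true in-place algorithm, since it allocates fresh lists at every recursive step:

```haskell
-- The well-known two-line Haskell "quicksort." It has quicksort's
-- partition-and-recurse shape, but it is not in place: each step
-- builds new lists rather than mutating an array.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]
```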

We start by introducing `IORef`, which gives us a mutable reference to a type. It is *not* a pure operation, with every operation on it involving the `IO` monad. The functions for manipulating `IORef` are:

```haskell
data IORef a

newIORef    :: a -> IO (IORef a)
readIORef   :: IORef a -> IO a
writeIORef  :: IORef a -> a -> IO ()
modifyIORef :: IORef a -> (a -> a) -> IO ()
```
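As a quick sanity check of this API, here is a minimal sketch of my own (the `counterDemo` name is mine, not from the resource) that allocates a cell, writes it, modifies it, and reads it back:

```haskell
import Data.IORef

-- Allocate a mutable cell, mutate it two ways, and read the result.
counterDemo :: IO Int
counterDemo = do
  ref <- newIORef (0 :: Int)  -- new cell holding 0
  writeIORef ref 10           -- overwrite the contents with 10
  modifyIORef ref (+ 1)       -- apply a function to the contents: 11
  readIORef ref               -- yields 11
```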

I will assume the reader is aware of how bubble sort works. Now let’s brainstorm how we would implement it using `IORef`. First of all, what is its type? It should be

```haskell
bubbleSort :: [Int] -> IO [Int]
```

From this, we can easily guess that we ought to transform the input `[Int]` into a `[IORef Int]`. A difficulty here is that the constructor for `IORef`, `newIORef`, wraps the `IORef` inside `IO`. Applying `map` would thus give us a list of `IO` actions, the results of which we wish to collect. For this, we use `mapM`.

So our code will have something like

```haskell
xs <- mapM newIORef input
```
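To see why `mapM` rather than `map`: `map newIORef input` has type `[IO (IORef Int)]`, a list of unperformed actions, while `mapM newIORef input` performs them and collects the references. A small sketch of my own (the `toRefs`/`backAgain` names are hypothetical):

```haskell
import Data.IORef

-- mapM newIORef turns [Int] into IO [IORef Int], running each
-- newIORef action and collecting the resulting references.
toRefs :: [Int] -> IO [IORef Int]
toRefs = mapM newIORef

-- Round trip: build the references, then read every cell back out.
backAgain :: [Int] -> IO [Int]
backAgain input = do
  xs <- toRefs input
  mapM readIORef xs
```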

Now, we want to perform, n times (where n is the length of the input, the number of passthroughs needed to guarantee bubble sort completes), the passthrough operation that swaps adjacent elements when they are out of order. How do we characterize that passthrough, in code? It consists of a compare-and-maybe-swap at each index. Each index maps to an operation, and these operations can be collected in a sequential fashion. In code, this would be along the lines of

```haskell
\j -> do
  let ix = xs !! j
  let iy = xs !! (j + 1)
  x <- readIORef ix
  y <- readIORef iy
  when (x > y) $ do
    writeIORef ix y
    writeIORef iy x
```
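One can check a single step of this action in isolation; here is a hypothetical `swapDemo` wrapper of mine that runs the compare-and-swap at index 0 of a two-element list:

```haskell
import Control.Monad (when)
import Data.IORef

-- Build IORefs for [2, 1], run the compare-and-swap at index 0,
-- and read the (now ordered) contents back.
swapDemo :: IO [Int]
swapDemo = do
  xs <- mapM newIORef [2, 1 :: Int]
  let j  = 0
  let ix = xs !! j
  let iy = xs !! (j + 1)
  x <- readIORef ix
  y <- readIORef iy
  when (x > y) $ do
    writeIORef ix y
    writeIORef iy x
  mapM readIORef xs   -- [1, 2]
```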

For this we can use `mapM` again, but we can do better stylistically. The output of the operation, which is `IO ()` (or, when combined, `IO [()]`), contains no information, so it can be discarded. There is a variant of `mapM` for that: `mapM_`. And since an anonymous function (in this case, one mapping an index to a do block) is somewhat bulky, we’d like to have it as the latter argument; `forM_` is `mapM_` with the arguments swapped.
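The stylistic point can be seen in a toy example of my own (the `sumDemo` name is an assumption, not from the resource): with `forM_`, the list comes first and the bulky body reads like a loop.

```haskell
import Control.Monad (forM_)
import Data.IORef

-- forM_ [..] $ \i -> ... reads like a loop; the equivalent
-- mapM_ (\i -> ...) [..] buries the list after a bulky lambda.
sumDemo :: IO Int
sumDemo = do
  total <- newIORef (0 :: Int)
  forM_ [1 .. 10] $ \i ->
    modifyIORef total (+ i)  -- each step yields IO (), discarded by forM_
  readIORef total            -- 55
```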

Finally, we pull out the values from the `IORef`s with

```haskell
mapM readIORef xs
```

Putting all this together (with the necessary imports) gives us

```haskell
import Control.Monad (forM_, when)
import Data.IORef

bubbleSort :: [Int] -> IO [Int]
bubbleSort input = do
  let ln = length input
  xs <- mapM newIORef input
  forM_ [0 .. ln - 1] $ \_ -> do
    forM_ [0 .. ln - 2] $ \j -> do
      let ix = xs !! j
      let iy = xs !! (j + 1)
      x <- readIORef ix
      y <- readIORef iy
      when (x > y) $ do
        writeIORef ix y
        writeIORef iy x
  mapM readIORef xs
```
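Using it from a `main` might look like this (repeating the definition so the sketch stands alone):

```haskell
import Control.Monad (forM_, when)
import Data.IORef

bubbleSort :: [Int] -> IO [Int]
bubbleSort input = do
  let ln = length input
  xs <- mapM newIORef input
  forM_ [0 .. ln - 1] $ \_ -> do
    forM_ [0 .. ln - 2] $ \j -> do
      let ix = xs !! j
      let iy = xs !! (j + 1)
      x <- readIORef ix
      y <- readIORef iy
      when (x > y) $ do
        writeIORef ix y
        writeIORef iy x
  mapM readIORef xs

main :: IO ()
main = do
  sorted <- bubbleSort [5, 1, 4, 2, 8]
  print sorted  -- prints [1,2,4,5,8]
```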

Finally, we note an issue we ignored for the sake of simplicity, since it is not the focus of this article. One might have noticed the `!!`. What is that?

```haskell
Prelude> :t (!!)
(!!) :: [a] -> Int -> a
```

It’s the list indexing function, which takes linear time, unlike constant-time array access. I will, hopefully, dive into how arrays with constant-time reads and writes are implemented in Haskell in some detail. For a high-level explanation, see this answer on StackOverflow. Basically, there is a Haskell runtime system with primitive operations for memory reads and writes, which is linked with a Haskell program as part of compilation to produce the executable. See here for a manual of the runtime system, with all the runtime system options one can set for the executable.
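For a taste of what constant-time mutable access looks like in practice, here is a hedged sketch of mine using `IOUArray` from `Data.Array.IO` (in the `array` package shipped with GHC); the `arrayDemo` name is an assumption, not from the original post:

```haskell
import Data.Array.IO

-- IOUArray gives O(1) reads and writes backed by real mutable memory,
-- unlike the O(n) list indexing performed by (!!).
arrayDemo :: IO Int
arrayDemo = do
  arr <- newListArray (0, 4) [10, 20, 30, 40, 50] :: IO (IOUArray Int Int)
  writeArray arr 2 99  -- constant-time write at index 2
  readArray arr 2      -- constant-time read: 99
```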