In this post I want to share what I have learned during the first four weeks of my functional programming course. For this reflection, I was given a functional programming exercise in Haskell. You can view my solution here.
1. There are no for and while loops
In Haskell there are only declarations. Once we declare that i = 1, that value cannot change over the course of our program's execution. A side effect of this is that we cannot have traditional loops that use counters, like:
function sum(A) {
  let total = 0
  for (let i = 0; i < A.length; i++) {
    total += A[i]
  }
  return total
}
Instead we have to rely solely on recursion for this kind of repetition. For example, the sum function can be defined as:
sum [] = 0
sum (x:xs) = x + sum xs
Here we define the sum of an empty list as zero, and then define the sum of a non-empty list recursively as its head plus the sum of its tail. There are some downsides to relying on recursion; one of them is that by default recursion takes up stack space. Using the above definition, sum will take O(n) space, compared to O(1) in the JavaScript example.
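To see why, here is roughly how sum [1,2,3] expands: each pending + has to wait for the recursive call to finish, so the unfinished additions pile up on the stack until we reach the base case.
sum [1,2,3]
= 1 + sum [2,3]
= 1 + (2 + sum [3])
= 1 + (2 + (3 + sum []))
= 1 + (2 + (3 + 0))
= 6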
Thankfully Haskell gives us the option to reduce the amount of stack space our recursive functions use if we define them as tail recursive functions, which are recursive functions whose recursive step ends with a call to the function itself. Looking at the above example, we can see that the function is not tail recursive, because after calling sum xs we still have to add x to the result. We can fix this by defining a helper function that takes two parameters: the list itself and an accumulator.
sum xs = go xs 0
  where
    go [] acc = acc
    go (x:xs) acc = go xs (acc + x)
As we can see, the helper function's recursive step first adds x to the accumulator and then calls itself, so Haskell can optimize the function such that no extra stack space is used for the recursion.
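Tracing go [1,2,3] 0 makes this concrete (assuming the accumulator is evaluated as we go, which GHC's optimiser usually arranges here): the running total is carried along, and nothing is left waiting on the recursive call.
go [1,2,3] 0
= go [2,3] 1
= go [3] 3
= go [] 6
= 6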
Even though recursion is the only alternative to full-blown for and while loops, you might not need it as often as you would think, because Haskell has many utility functions for working with lists, such as map, filter, foldr & foldl (also known as reduce in JavaScript and other languages), scanl & scanr, and many others that all have efficient implementations.
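As a quick sketch (not part of the original exercise), the sum function from earlier can be written with a left fold; foldl' from Data.List is the strict variant, which avoids building up unevaluated thunks on long lists.
import Data.List (foldl')

-- Folds (+) over the list, starting from 0.
sum' :: [Int] -> Int
sum' = foldl' (+) 0
In GHCi, sum' [1,2,3] evaluates to 6, just like our recursive versions.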
2. Functions are values
This means that all functions can be used as return values and parameters to other functions. For example, the built-in map function can be used to transform the elements of a list. The map function has the type signature:
> :t map
map :: (a -> b) -> [a] -> [b]
It takes two parameters: a mapping function which transforms an a into a b, and a list of a's which will then be transformed into a list of b's. For example, here is a definition of a function that maps a list of Strings to their respective lengths:
lengthOfStrings :: [String] -> [Int]
lengthOfStrings xs = map length xs
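Calling it in GHCi:
> lengthOfStrings ["hi", "hello"]
[2,5]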
Functions like map, which return another function or accept another function as a parameter, are called higher-order functions.
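We can also write our own higher-order functions. As a small illustration (applyTwice is just a made-up name, not something from the Prelude), here is a function that takes another function and applies it twice:
-- Apply the function f to x, then apply f again to the result.
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)
So applyTwice (+ 3) 10 evaluates to 16.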
3. Every function is a 1-parameter function
In Haskell we can define a function
> add x y = x + y
add :: Int -> Int -> Int
and we can invoke the function quite easily
> add 1 2
3
Ignoring the somewhat confusing type signature, it looks like we have defined add as a function that takes two parameters. However, in reality add is a 1-parameter function that takes an Int and returns a function which takes an Int and returns an Int, i.e. Int -> Int. Now the arrows in the type signature might make a little more sense. Essentially, add 1 2 is equivalent to (add 1) 2. Haskell's syntax makes it much cleaner to have only functions with one parameter. Under the same constraint in JavaScript, you would have to define add like so:
> const add = x => y => x + y;
> add(1)(2);
3
Even with the modern JavaScript syntax it's still quite noisy, especially when invoking functions with many parameters.
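One consequence worth noticing: because add is really a chain of 1-parameter functions, add 1 on its own is a perfectly good value that we can name or hand to other functions (a GHCi sketch; increment is just an illustrative name):
> increment = add 1
> increment 41
42
> map (add 1) [1,2,3]
[2,3,4]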
We now know that Haskell's syntax makes it possible to have a language where every function is a 1-parameter function. But besides having it because it's possible, are there any benefits to such a constraint? It turns out that having only 1-parameter functions can make functions even more powerful. For example, in Haskell we can define things like:
> add1 = (+ 1)
> add1 3
4
Which might not seem particularly useful at first glance, but when combined with Haskell's higher-order functions and function composition it can be quite powerful. For example, we can define a function which doubles and then adds 1 to every element of a list like so:
> doubleThenAddOne xs = map ((+ 1) . (* 2)) xs
> doubleThenAddOne [1,2,3]
[3,5,7]
This constraint also helps us define functions more concisely; for example, the above definition can be rewritten as:
> doubleThenAddOne = map ((+ 1) . (* 2))
> doubleThenAddOne [1,2,3]
[3,5,7]
Because map, partially applied to a function of type c -> d, returns a function of type [c] -> [d].
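Asking GHCi for the type confirms this (the exact type variables and the Num constraint in the printed type may look slightly different depending on your GHC version):
> :t map ((+ 1) . (* 2))
map ((+ 1) . (* 2)) :: Num b => [b] -> [b]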
4. Infinite lists are awesome
In Haskell, values aren't evaluated until they are needed. This means that we can have expressions like [1..], which represents the list of all natural numbers in increasing order. The elements of the list will only be evaluated when another function requires them. For example, when we want to print the first six numbers:
naturalNumbers = [1..]
main = print (take 6 naturalNumbers) -- Only first 6 elements evaluated.
What's more interesting is that we can also define lists recursively, either by self-reference or by using recursive functions. For example:
An infinite list of ones
ones = 1:ones
A list containing the infinite Fibonacci sequence
fibs = 0:1:(fibs `add` tail fibs)
  where
    add = zipWith (+)
A list of all prime numbers
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p > 0]
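None of these lists are ever computed in full; we just take as many elements as we need:
> take 5 ones
[1,1,1,1,1]
> take 10 fibs
[0,1,1,2,3,5,8,13,21,34]
> take 10 primes
[2,3,5,7,11,13,17,19,23,29]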