Chapter 4. Detour – Functional Programming

In the beginning of this book, we saw that an algorithm is a sequence of steps to achieve a result. This way of solving a problem by following a sequence of instructions is called imperative programming; each statement in the program can be thought of as an imperative sentence asking the computer to do something. However, this is not the only way of looking at it. Functional programming views an algorithm as a composition of components rather than as a sequence of steps: a problem is broken into smaller problems of the same kind, and instead of using a loop, we combine the solutions of those smaller problems. The basic building block of functional programming is recursion, which is nothing but solving the same problem for a smaller input and then composing the result with something else to obtain the solution for the given input size. This has far-reaching implications for how easy a program is to read and understand, which makes it very important to study.

There are really two worlds of programming paradigms. The imperative style is favored by C-like languages, such as C, C++, and Java. On the purely functional side, there are languages such as Lisp, Haskell, and Scheme. Apart from these, some languages, such as Python and Scala, try to have the best of both worlds. This is easier said than done: having features to support both paradigms is one thing, but using them together effectively is truly an art.

So, if Java is imperative in nature, why are we talking about functional programming in this book? Well, as I pointed out, sometimes it is better to mix both concepts and get the best of both worlds. Recently, the Java community has taken note of this fact and, starting with Java 8, has introduced lambdas to provide some level of functional programming support. Our intention is not to program entirely in a functional style, as that is not Java's preferred idiom, but to use it just enough to make our programs more elegant and to aid our understanding of algorithms.
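To give a first taste of the lambda feature mentioned above (covered properly later in this chapter), here is a minimal, self-contained sketch; the class and variable names are purely illustrative:

```java
import java.util.function.IntBinaryOperator;

public class LambdaTaste {
    public static void main(String[] args) {
        // A lambda is an anonymous function value; here it implements
        // the single abstract method of IntBinaryOperator.
        IntBinaryOperator add = (a, b) -> a + b;
        System.out.println(add.applyAsInt(2, 3)); // prints 5
    }
}
```

The point is that a function can now be treated as a value: stored in a variable, passed to a method, and composed with other functions.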

This chapter will introduce this rather foreign concept to you and provide some basic tools that are commonly used in functional programming. You will learn the following concepts:

  • Recursive algorithms and the immutability of variables
  • Monads
  • Aggregations on monads
  • Java's support for functional programming, that is, lambdas

Recursive algorithms

As I have already pointed out, recursive algorithms are a different way of thinking about solving a problem. For example, say our problem is to write a program that, given a positive integer n, returns the sum of the numbers from zero to n. The familiar imperative way of writing it is simple:

public int sum_upto(int n) {
  int sum = 0;
  for (int i = 0; i <= n; i++) {
    sum += i;
  }
  return sum;
}

The following would be the functional version of the problem:

public int sum_upto_functional(int n) {
  return n == 0 ? 0 : n + sum_upto_functional(n - 1);
}

That's it: just a one-liner! This is probably nothing new to Java programmers, who do understand recursive functions; an imperative programmer, however, would use recursion only when nothing else worked. But this is a different way of thinking. How do we justify that this is equivalent to solving the problem for a smaller input and then composing the result with something else? Well, we first compute the same function for an input that is smaller by one, and then simply add n to it.

There is one more thing to note here: in the imperative version, we update the variable sum on each iteration of the loop variable i. In the functional version, we do not update any variable. What do we gain by that? When a variable is updated multiple times in a program, the program becomes hard to understand and debug, because you need to keep track of the variable's latest value. When that does not happen, it is far easier to see what is going on. In fact, it becomes simple enough that we can even prove the correctness of the program completely formally, something that is rather hard to do for imperative programs whose variables change values.
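One refinement worth knowing about (not part of the chapter's code, and with illustrative names) is the accumulator-passing, or tail-recursive, form of the same function: the running total is carried along in a parameter, so nothing remains to be done after the recursive call returns.

```java
public class SumUpto {
    // Accumulator-passing form: the running total travels with the call,
    // so no work is left after the recursive call returns.
    static int sumUpto(int n, int acc) {
        return n == 0 ? acc : sumUpto(n - 1, acc + n);
    }

    // Public entry point hides the accumulator from the caller.
    static int sumUpto(int n) {
        return sumUpto(n, 0);
    }

    public static void main(String[] args) {
        System.out.println(sumUpto(10)); // prints 55
    }
}
```

Note that Java does not perform tail-call optimization, so a very large n can still overflow the stack; the value of this form here is stylistic, keeping every variable immutable.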

Let's check out another example. This one is about choosing r objects from n objects, where the order does not matter. Let A be a set containing a finite number of objects, and let the number of objects in A be n. How many subsets B of A have exactly r elements? Of course, the maximum number of elements any subset of A can have is n; hence r ≤ n. We will call this function choose, and write it as choose(n,r). Now, the only subset with as many elements as A is A itself, so choose(n,n) equals 1.

How do we break this problem into subproblems of a similar nature but with smaller input? To find out how many subsets B have r elements, we first think of the set A as the combination of a subset C with n-1 elements and one particular element a, so that A = C ∪ {a}. Now, we consider two disjoint cases: either the element a is a member of the subset B, or it is not. When a is not a member of B, B is also a subset of C. The number of such subsets B with exactly r elements is choose(n-1,r), since C has n-1 elements. On the other hand, when a is a member of B, then B can be thought of as the union of two sets: a set D containing all the elements of B except a, and the singleton {a}. So, B = D ∪ {a}. Now, you can see that D is a subset of the C we defined. How many such subsets D of C are there? There are choose(n-1,r-1) of them, since C has n-1 elements and D has r-1 elements. So, the number of subsets B that do contain a is choose(n-1,r-1). This means that the total number of sets B, with or without a as an element, is choose(n-1,r) + choose(n-1,r-1). In the recursive calls, r and n keep getting reduced until r equals zero or n equals r. We have already considered the case when n equals r: choose(n,n) equals 1. When r equals zero, B is the null set; since there is only one null set, choose(n,0) equals 1. Now, we put this in code:

public long choose(long n, long r) {
  if (n < r) {
    return 0;
  } else if (r == 0) {
    return 1;
  } else if (n == r) {
    return 1;
  } else {
    return choose(n - 1, r) + choose(n - 1, r - 1);
  }
}

That was a little complex, but note that we did not merely compute the value of the choose function; along the way, we also, in effect, proved that it works. The choose function is the binomial coefficient function for integral exponents.

The aforementioned implementation is not efficient. To see why, consider what the two recursive calls of the choose function evaluate to when they both fall into the final else case: choose(n-1,r) = choose(n-2,r) + choose(n-2,r-1) and choose(n-1,r-1) = choose(n-2,r-1) + choose(n-2,r-2). Note that choose(n-2,r-1) is evaluated in both cases, and it in turn makes its own recursive calls, so the same subproblems are solved over and over. This duplication significantly increases the asymptotic complexity. We'll defer the analysis of this complexity to the end of this chapter.
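One standard way to avoid this duplicated work, sketched here only as an illustration (the cache design and class name are assumptions, not part of the chapter's code), is memoization: remember each choose(n,r) value the first time it is computed and reuse it on every later request, so each distinct (n,r) pair is solved only once.

```java
import java.util.HashMap;
import java.util.Map;

public class MemoizedChoose {
    // Cache of already-computed values, keyed by the (n, r) pair.
    private static final Map<String, Long> cache = new HashMap<>();

    public static long choose(long n, long r) {
        if (n < r) {
            return 0;
        } else if (r == 0 || n == r) {
            return 1;
        }
        String key = n + "," + r;
        Long cached = cache.get(key);
        if (cached != null) {
            return cached;  // reuse: this (n, r) was computed before
        }
        long result = choose(n - 1, r) + choose(n - 1, r - 1);
        cache.put(key, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(choose(5, 2)); // prints 10
    }
}
```

Since there are at most n·r distinct subproblems, the running time becomes polynomial instead of exponential. Note the trade-off: the cache is a mutable map, so this version gives up some of the purity that made the recursive definition easy to reason about.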
