Chapter 13. Functions

WHAT'S IN THIS CHAPTER?

  • Reasoning about Functions

  • Understanding Type Restriction

  • Using First Class Functions

  • Partially Applying Functions

The way you think about functions is one of the things that most strongly differentiates F# from the other .NET languages. This doesn't come down to how functions are represented under the hood. Instead, it has more to do with the way functions are used conceptually.

When programming in F#'s functional style, functions are thought of as just another data type much like any object. They are frequently both passed into and returned from other functions. They also often have data stored inside of them through partial application or closures.

Although this can be done in other languages, it's not a frequently used feature and can take some getting used to. However, in time you will see that by leveraging these functional features more often, you can bring much to the table in terms of code succinctness and clarity. Indeed, much of F#'s power comes from this style of avoiding objects and focusing on functions.

TRADITIONAL FUNCTION CALLS

Traditionally, in mainstream imperative languages like C, C++, and C#, we treat functions as something completely different than the data that flows through our programs. They take a set of data, do some work on or with that data, and, most likely, change the state of our program in some way. They are like the channels through which the execution of our program flows, changing various data states along the way.

Unfortunately, this approach makes it difficult to build reusable components. Without the ability to pass functions, it is difficult to compose new structures at runtime, swap out components for testing or have a subcomponent communicate back to its parent. Over the years various techniques have emerged to mitigate this.

One of the first was to use function pointers (aka delegates). Function pointers allow us to pass the location of functions in memory so that they can be called later. However, these are cumbersome to define and often lack type safety. Also, because languages that use this technique often have only compile-time type checking, ad-hoc runtime type systems often need to be constructed.

Another common technique is to use abstract inheritance. The general concept of this technique is that you define an abstract class that requires certain member functions are filled in. You then pass your function in terms of a concrete implementation of this abstract class. However, this approach requires quite a lot of additional code and, as the inheriting class may have hidden private variables, dependencies on unspecified behavior emerge.

Yet another is Eventing. Events allow us to easily inject callbacks into our existing code via a special mechanism for calling sets of function pointers. However, in addition to the runtime type problems inherent in using function pointers, it has a whole slew of other issues due to the dependence on subscribers. How can we be sure the correct subscribers are being called? How can we be sure our subscribers are unsubscribing correctly? Also, ordering often cannot be guaranteed, especially when many asynchronous calls are being made. The combination of these factors can make event-driven programming a tangled mess of dependencies.

As you'll find out in Chapter 14, many of the problems in using these constructs stem from compile-time-only type systems or the use of shared state. However, even ignoring these, we find that much additional code must be written even for a single additional composable call to be used. For this reason, imperative programmers often avoid these language features as they can add a significant amount of bulk to their program. Instead they opt to write rigid code and use cut and paste as their primary form of code reuse.

MATHEMATICAL FUNCTIONS

In functional programming, functions are considered first-class language constructs. Like in mathematics, functions can be partially filled in and assigned to variables. Functions can be passed into or returned from other functions. They can even be written inline inside of other functions sharing parents' input variables.

This makes writing code in F# much more like math than other languages. Frequently you compose functions out of other functions at runtime. You then can push your data through the resulting function to obtain your result.

Instead of designing in terms of the behavior of your object, which has been composed of other objects, you design in terms of the behavior of your function that has been composed of other functions. These ideas are fundamentally quite similar, but without having to constantly define objects, you generate much less structural code. The other main difference is, unlike objects, functions have immutable internal state. Given the same set of arguments, they always have the same behavior. This makes them much easier to test and significantly more likely to do exactly what you intended.
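
For a small taste of this style, here is a quick sketch (the function names are chosen purely for illustration) that builds a new string-normalizing function by composing two existing ones with F#'s composition operator:

let stripWhitespace (s: string) = s.Replace(" ", "")
let toUpper (s: string) = s.ToUpper()

// Compose the two functions into a new one; no class or interface is required.
let normalize = stripWhitespace >> toUpper

// normalize "hello world" evaluates to "HELLOWORLD"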

COMING FROM C#

If you are deeply familiar with C# lambda expressions, you are way ahead of the game. C# lambda expressions are arguably first-class language citizens. They can capture in-scope variables. They even have some limited type inference. F# functions (and lambda expressions) support all these features and more.

First, they have much better type inference. Only rarely will you find yourself needing to define the input and output types of an F# function. This is because the F# compiler uses the powerful Hindley-Milner type inference algorithm (http://www.codecommit.com/blog/scala/what-is-hindley-milner-and-why-is-it-cool). This alone greatly reduces code size.

Second, F# functions can have arguments passed in one at a time and can return tuples containing multiple data types. This means there is no need to define intermediate classes used simply to move data between functions.

Third, F# functions support tail recursion optimization. This means that, if written so that the recursive call always occurs as the final step before returning, F# functions will not overflow the stack. This allows you to leverage recursion in many more cases and be confident that your recursive functions will execute as expected.

Perhaps the biggest feature over C# lambda expressions is that F# functions do not need to be contained within an explicit class. Using F#'s interactive window, you can compose your program one function at a time, playing with different ideas. This makes development much faster because a stiff object-oriented design doesn't need to be in place in order to simply try a new idea. Object-oriented architecture can be applied at a later time, once the underlying ideas of the program have solidified. One of the biggest benefits in this style of writing software is that tests end up being function level and so don't need to constantly be rewritten for architecture changes.

FUNCTION ARGUMENTS AND RETURN VALUES

Function arguments and return values are a bit different in F# than in other .NET languages. They have a different syntax and use type inference by default. Both of these features come directly from F#'s ML heritage.

However, as F# compiles to IL just like C# or VB, it also is ultimately limited to the same underlying type representations. More than in any other .NET platform language, it is important to understand this type system and the limits of its capability to both generalize and restrict. Similarly, learning to transcend these limitations by leveraging F#'s inline keyword to enhance type generalization and restriction can be a great boon.

Automatic Generalization and Restriction

In F# the type of each function argument is automatically resolved for you through context whenever possible. The compiler examines how the arguments are used and attempts to provide a type that satisfies the most general case allowed by the underlying CLR type system. This process is called automatic generalization.

Consider this function which simply returns the minimum of two arguments:

> let min arg1 arg2 = if arg1 < arg2 then arg1 else arg2;;

val min : 'a -> 'a -> 'a when 'a : comparison

Here the type inference engine saw that arg1 and arg2 were compared with the less-than operator inside of the min function and so inferred that the arguments to min must have the same restrictions as the arguments to less-than. That is, they both must be the same type and have the comparison constraint. The less-than operator could just as well be another function. In that case the argument types would be inferred based on the signature of that other function.
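
For example, in this small sketch (describe is just an illustrative helper of ours) the argument to wrap is inferred to be int simply because wrap passes it along to describe:

let describe (n: int) = sprintf "value: %d" n

// x is inferred as int, and the return type as string, from describe's signature.
let wrap x = describe x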

There are a few important caveats to this. The first is that numerical operators are resolved to int if not observed being used in another context. For example, if you define a simple add function without any context you might expect it to generalize to all of the potential inputs of the plus operator. This is not the case; instead it will take and return integers by default.

> let add a b = a + b;;

val add : int -> int -> int

If you give the compiler an external context for this function, it will resolve the arguments in terms of that context.

> let add a b = a + b
add 2.0 3.0;;

val add : float -> float -> float

However, as the function signature has now been solidified to take two floats and return another, if you now attempt to use this function with the int type, it will fail.

> add 2 3;;

  add 2 3;;
  ----^

C:\Users\Rick\AppData\Local\Temp\stdin(6,5): error FS0001: This expression
was expected to have type
    float
but here has type
    int

Of course, this is less than ideal. Why should you need to write a specific version of the add function for each basic data type? Thankfully, you don't. There is a solution to this problem: the inline keyword and statically resolved type parameters.

The inline Keyword

Much like in other languages supporting this keyword, when a function is defined with inline the resulting function is injected directly into the locations from which it is called at compile-time. In older languages this was done for the sake of execution speed. In a tight loop, function calls can have significant overhead.

However, with modern processors this has become much less of an issue. These days our processors are often sitting idle, starved for information. The cost of a function call is usually a small drop in the overall bucket. Also, modern compilers are much better at optimization and will often do automatic inlining when appropriate. For these reasons the C# and VB.NET languages opted to not include this feature.

In F#, inline is mainly used for its effect on type inference. An inlined function supports much more robust type inference as it is not bound to the rules of the CLR as tightly. It can infer arguments in terms of statically resolved type parameters.

So, whereas in the previous section we saw that the add function was automatically restricted to a single type of input parameter, if we define the same function as inline it can accept both int and float, along with any other type that supports a suitable plus operator.

> let inline add a b = a + b;;

val inline add :
   ^a ->  ^b ->  ^c
    when ( ^a or  ^b) : (static member ( + ) :  ^a *  ^b ->  ^c)

> add 2.0 3.0;;
val it : float = 5.0

> add 2 3;;
val it : int = 5

In this case the compiler is generalizing on the plus operator even though it is not generic. As long as the passed arguments support a plus operator that takes the other type, it will resolve correctly. The only caveat is that this type of inference can only happen at compile-time. Functions defined as inline will not be usable from other .NET languages.

Type Annotations

It is sometimes necessary to explicitly describe your type parameters in F#. Most often, when type inference appears to go wrong, it is because of a bug elsewhere in your program. However, occasionally the compiler cannot determine the type of a parameter for you or will infer a type you don't expect. Whatever the case may be, understanding type annotations is essential to writing F# effectively.

Basic type annotation syntax is simple; you just wrap the argument in parentheses, add a colon after the argument name and express the type after it.

> let plusOne (x: int) = x + 1;;

val plusOne : int -> int

In this example we are annotating x but the return type is still inferred. To annotate the return type, place a colon after the argument list and follow it with the to-be-returned type.

> let plusOne x : int = x + 1;;

val plusOne : int -> int

The return type, as well as each argument, may or may not be annotated individually.

> let plus x (y: double) = x + y;;

val plus : double -> double -> double

It is often necessary to annotate only one argument to ensure type inference works correctly, or to find that pesky bug causing your program to fail compilation. It ends up working quite a lot like dominoes: when one type is correctly identified, many others fall into place as well. Once the types around it are resolved, incorrect code will stand out like a sore thumb.
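
For instance, in the following sketch (scale is just an illustrative name of ours) annotating only factor is enough; the multiplication then forces x and the return type to be float as well:

// Annotate one argument and the rest falls into place.
let scale (factor: float) x = factor * x

// inferred as float -> float -> float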

Generics and Type Constraints

F#'s type system isn't limited to just basic types. It supports the same full range of generics and type constraints as other .NET languages. In fact, its type system can infer most of these automatically.

For example if you were to write a function that compares two arguments with the equals operator, the arguments automatically generalize to a generic type with the equality comparison constraint.

> let areEqual arg1 arg2 =
    arg1 = arg2;;

val areEqual : 'a -> 'a -> bool when 'a : equality

However, it is also possible to explicitly mandate these constraints when making type annotations. This can be done through application of the when keyword.

> let areEqual<'a when 'a : equality> (arg1: 'a) (arg2: 'a) =
    arg1 = arg2;;

val areEqual : 'a -> 'a -> bool when 'a : equality

In some cases you may want to have multiple constraints on a type. To do this, separate each constraint with the and keyword.

> let isNull<'a when 'a : equality and 'a : null> (arg: 'a) =
    arg = null;;

val isNull : 'a -> bool when 'a : equality and 'a : null

A list of commonly used generic type constraints follows.

CONSTRAINT           EXAMPLE
Type (or Parent)     <'a when 'a :> Object>
Nullable             <'a when 'a : null>
New()                <'a when 'a : (new : unit -> 'a)>
Value Type           <'a when 'a : struct>
Reference Type       <'a when 'a : not struct>
Comparison           <'a when 'a : comparison>
Equality             <'a when 'a : equality>
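
As a quick, hedged sketch of one of these in use, the New() constraint allows a generic factory function to be written (makeDefault is a name chosen here for illustration):

// Any type with a parameterless constructor can be created generically.
let makeDefault<'a when 'a : (new : unit -> 'a)> () = new 'a()

// For example, this builds a StringBuilder via its parameterless constructor.
let builder = makeDefault<System.Text.StringBuilder>()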

Statically Resolved Type Parameters

As mentioned in the previous section on the inline keyword, F# also supports a much richer set of compile-time-only type constraints. To use them you must trade in generic type parameters for something called statically resolved type parameters. Parameters defined in this way are only available to F# functions marked as inline.

One particularly useful example is the member restriction type constraint. This constraint allows you to generalize a type on methods, properties, and even operators.

type PresentFromTheGods(isGood) =
     member x.IsGood : bool = isGood

let inline IsItGood< ^a when ^a : (member IsGood : bool)> (container: ^a)  =
    let isgood = (^a : (member IsGood : bool ) container)
    isgood

In this example the IsItGood function may take any object that has a gettable IsGood property that returns a bool. When passed an argument of a type defined by the member restriction statically resolved type parameter, some special syntax must be used to extract the value of that member. Here the value of the container's IsGood member is extracted into the isgood variable and then returned.

> IsItGood (new PresentFromTheGods(true));;
val it : bool = true

At compile time inline code will be generated in place of the function call. This allows for shared code that is fast and has liberal constraints.

However, as you can see from this example, statically resolved type parameters have syntax that is quite esoteric. In most cases it's best to stick with a combination of inline inference and CLR supported type constraints. This way, you can be sure that they will hold at runtime and that your functions will be available when calling F# assemblies from other languages.

PARTIAL APPLICATION

Partial application is the passing in of only some arguments to a function. This allows the arguments to be stored within the function and passed around implicitly with it. The remaining arguments can then be applied later, when they are available or when you want the function to be executed.

In the following example only one of add's two arguments is passed in. After the value 1 is applied, the x argument is fixed to 1. The result is a new function, addOne, which adds one to any integer passed in.

> let add x y = x + y
let addOne = add 1;;

val add : int -> int -> int
val addOne : (int -> int)

> addOne 5;;
val it : int = 6

Partial application has many benefits. As you'll see in Chapter 17, it allows functions to be composed much more readily. Also, you no longer need to pass around a separate set of arguments along with your function. You can bake the repeatedly used arguments right inside and just pass the partially applied function around. This reduces the size of your code by making it unnecessary to build container classes that contain only the repeatedly used inputs of a given function or class.
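
As a small illustration of baking arguments in (the names here are purely illustrative), a logging function can have its level fixed once and the resulting functions passed around or reused:

let log level message = printfn "[%s] %s" level message

// Partially apply the level; each result is a function of the message alone.
let logError = log "ERROR"
let logInfo  = log "INFO"

logError "Something went wrong"   // prints: [ERROR] Something went wrong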

Currying

In F#, partial application is done through a process called currying. A curried function is a function that internally has been broken down into a series of one parameter functions. When a parameter is passed in, another function is returned whose argument is the next parameter. This occurs until all parameters are filled in and the function is executed. This is done automatically for you in F#, but let's take a look at what it looks like conceptually.

> let explicitCurryAddTwoNumbers x =
    function y -> x + y;;

val explicitCurryAddTwoNumbers : int -> int -> int

Here we have a function which takes a single argument, x, and returns another function of a different argument, y. This returned function has x captured inside of it, ready for use later. This capturing of variables bound in a parent is called a closure.

> let plusOne = explicitCurryAddTwoNumbers 1;;
val plusOne : (int -> int)
> plusOne 2;;
val it : int = 3

After passing a value into the function, the internal function is returned with x bound. This function can then have another argument applied to it. This ends up being functionally equivalent to the implicit currying in F#.

> let addTwoNumbers x y = x + y
let plusOne = addTwoNumbers 1;;

val addTwoNumbers : int -> int -> int
val plusOne : (int -> int)
> plusOne 2;;
val it : int = 3

As you can see by comparing the two examples above, explicit currying requires an additional nested function for each parameter. For example, three arguments would require two nested functions.

> let explicitCurryAddThreeNumbers x =
    function y -> function z -> (x + y + z);;

val explicitCurryAddThreeNumbers : int -> int -> int -> int

Although this is possible to do in languages that don't support currying, simulating it with closures becomes tedious quickly. You need to explicitly nest each function, and each time a function is called, you are restricted to passing in only a single argument at a time. F# supports currying by default, so neither of these steps are necessary.

Restrictions on Functions and Methods

As useful as they are, for the sake of efficiency and interoperability F# has imposed a few restrictions on functions and curried methods. Not knowing about these beforehand can cause quite a bit of difficulty because they are somewhat nonintuitive.

First, functions defined outside the scope of a class cannot be overloaded. This makes sense because conceptually a free floating function is bound to a name that does not include its full type signature.

> let square x: int = x * x
let square x: double = x * x;;
error FS0037: Duplicate definition of value 'square'

Just as with data values, you cannot have multiple instances that have the same name within the same scope. However, you can shadow an existing name with a new binding in a nested scope (or in a later F# Interactive submission), even one with a different type signature. This restriction does not apply to class methods. As in other .NET languages, class methods may be overloaded.

> type SquareHelper =
    static member square (x: int) = x * x
    static member square (x: double) = x * x;;

type SquareHelper =
  class
    static member square : x:int -> int
    static member square : x:double -> double
  end

However, curried class methods cannot be overloaded. And any member that takes more than a single space-separated argument (rather than a tuple) will be curried.

> type BadMultHelper =
    static member multiply (x: int) (y: int) = x * y
    static member multiply (x: double) (y: double) = x * y;;

      static member multiply (x: int) (y: int) = x * y;
  ------------------^^^^^^^^

The method 'multiply' has curried arguments but has the same name as another
method in this type. Methods with curried arguments cannot be overloaded.
Consider using a method taking tupled arguments.

The currying of class methods can be prevented by using tuple syntax for function arguments.

> type GoodMultHelper =
    static member multiply (x: int, y: int) = x * y
    static member multiply (x: double, y: double) = x * y;;

type GoodMultHelper =
  class
    static member multiply : x:int * y:int -> int
    static member multiply : x:double * y:double -> double
  end

Don't be concerned about the possible performance impact of creating a tuple just to pass in. In actuality, F# tupled arguments turn into normal .NET calls with discrete arguments under the hood. Both types of functions will look the same from other .NET languages. The only real difference is in how they are used in the F# language syntax.

FUNCTIONS AS FIRST CLASS

According to Structure and Interpretation of Computer Programs (http://mitpress.mit.edu/sicp/full-text/book/book.html), a language construct is considered to be first class if it has no more restrictions than other constructs of that language. In particular, the following properties are listed:

  • It can be passed as a parameter.

  • It can be returned as a result.

  • It can be stored in variables and data structures.

If you consider function pointers, then you might say even humble C fulfills most of these requirements. However, like most imperative languages that inherit syntax from it, C lacks the capability to create new functions at runtime. Functions cannot be composed or partially applied. In this way they are clearly inferior to even a struct data type.

With the introduction of anonymous delegates in C# 2.0, C# functions also have all the properties previously listed and, while somewhat cumbersome to use, may be considered first-class language constructs. The main difference with F# is that treating functions as first-class is simple to do, requires little syntax, and is leveraged just about everywhere when writing in the idiomatic language style.

Recursive Functions

Much as a data type can have a reference to itself, a true first class function should be able to as well. In fact, it should be able to call itself using that reference just as it might any other function. A function that does this is called a recursive function and the process of using recursive functions is called recursion.

To write recursive functions in F#, you must define the function with the rec keyword. This keyword binds the function in such a way as to be visible to itself.

let rec pow x n = if n <> 0
                  then x * pow (x) (n - 1)
                  else 1.0

This pow function raises its argument x to the nth power. It does this by calling itself repeatedly, each time reducing the value of its n parameter by one and multiplying x by the result of the last call to pow. However, in F# this is a poor implementation of pow.

To understand why, you first need to understand how this function will execute at runtime. Each time pow calls itself, another stack frame is generated. In the ideal case, n eventually becomes zero and pow will return 1 instead of calling itself again. Each layered call will then return, and the result will be multiplied by x. Finally, the result of all of those multiplications will be returned.

However, each function call takes up additional stack memory, and the stack is only a finite size. If n is too large, eventually your program will run out of stack memory and throw a StackOverflowException.

> pow 2.0 100000;;
Process is terminated due to StackOverflowException.

This, combined with the overhead of each additional function call, makes for a convincing argument against recursion in other .NET languages. However, F# supports two features that make it very good at recursion. First, F# has been heavily optimized to allow for deep recursion.

> pow 2.0 1000;;
val it : float = 1.071508607e+301

Second, it has a feature called tail call optimization that allows the compiler to turn some recursive calls into fast loops under the hood. The caveat to this is that it only applies to recursive functions whose final call leaves no work left to be done. If after calling itself the function has to do anything other than return, it can't be optimized. This may sound like it might make tail call recursion all but useless, but as you'll see here and in following chapters, it is profoundly important for both speed and safety.

To make the above example tail recursive, you need to define the same problem in a way that does not cause any work to be done after the recursive call is made. This is most easily done by introducing a new argument to hold whatever it was you were relying on the stack to hold previously. A function argument used solely to pass itself data while performing recursion is called an accumulator.

let rec tailpow x n r = if n <> 0
                        then tailpow x (n - 1) (r * x)
                        else r

In this better example r acts as the accumulator. It holds the result of the previous computation. Instead of performing the computation lazily as the recursive calls return, they are performed at each step and the result is passed into r. Finally, when n is equal to zero, the result is returned.

It might seem as though having the additional parameter is a big problem. However, it's a simple matter to embed this function inside another and so hide the accumulator.

let pow x n =
    let rec tailpow (x:float) n r = if n <> 0
                                    then tailpow x (n - 1) (r * x)
                                    else r
    tailpow x n 1.0

In fact, using the techniques mentioned in the "Creating Functions at Runtime" section makes this even easier and opens up even more possibilities for recursion.

Higher Order Functions

Higher order functions are functions that use one particular property of a first class function. That is, they take and/or return other functions. As an example, take a look at a simple function that takes another function of no arguments.

> let square x = x * x
let performAndAddOne func = func() + 1;;

val square : int -> int
val performAndAddOne : (unit -> int) -> int

> performAndAddOne (fun () -> square 2);;
val it : int = 5

The function we pass in takes no arguments and returns the square of 2. When this is passed in to performAndAddOne it is executed, 4 is returned and 1 is added to it. What is interesting about this is not what is being done, but how. The passed in function, as well as the function it is passed in to, could just as well manipulate complex binary data.

Another slightly more advanced example of this from the .NET framework would be List<T>'s generic ForEach function. ForEach takes a function and applies it to each member in a data set. Popular functional examples include map, reduce, and fold, which are discussed in Chapter 16.
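
To give a flavor of these before Chapter 16, here is a small sketch using two of the built-in higher order functions:

// Apply a function to each element, and fold the list down to a single value.
let doubled = List.map (fun x -> x * 2) [1; 2; 3]            // [2; 4; 6]
let total   = List.fold (fun acc x -> acc + x) 0 [1; 2; 3]   // 6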

Beyond the simple idea of what higher order functions are, is the much more complex idea of what they enable. Ultimately, they allow the programmer to define a program in terms of a series of discrete data transformations instead of a series of state changes. Indeed, this idea is at the core of functional programming.

In languages without higher order functions, while, do-while, for, and foreach must be implemented in the compiler. The same would go for a generalized map, reduce, and fold. Even when using a somewhat imperative style, higher order functions enable you to write these on your own.

> let rec forEach (func: 'a -> unit) (collection: list<'a>) =
    if (Seq.isEmpty collection) then
        ()
    else
        func (List.head collection)
        forEach func (List.tail collection)

let printNum num = printfn "%i" num

let testList = [ 0; 1; 2; 3; 4 ];;

val forEach : ('a -> unit) -> 'a list -> unit
val printNum : int -> unit
val testList : int list = [0; 1; 2; 3; 4]

> forEach printNum testList;;
0
1
2
3
4
val it : unit = ()

Without some of the more advanced constructs like recursion, higher order functions provide little value other than syntactic sugar. Truly, it is the combination of the ideas presented in this part of the book that, when used together, makes functional programming so useful.

One important idea to take away from this is that, like many functional programming constructs, higher order functions enable language-oriented programming. By leveraging them you can now accomplish much of what you would have previously needed a new compiler to do.
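
For example, here is a hedged sketch of a homemade control construct, a "do this n times" function (times is an illustrative name), built with nothing more than recursion and a function parameter:

// Calls action with n, n - 1, ..., 1.
let rec times n (action: int -> unit) =
    if n > 0 then
        action n
        times (n - 1) action

times 3 (fun i -> printfn "iteration %i" i)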

Storing Functions

Although it is possible to do a great deal with only the ability to pass and return functions from other functions, storing functions in the same way as other data types grants even more power. For example, this technique is often leveraged to reuse a partially applied function over and over within the same scope.

> let processEachDataset (data1, data2, data3) =
    let transform = getDataTransform()
    let newData1 = transform data1
    let newData2 = transform data2
    let newData3 = transform data3
    (newData1, newData2, newData3);;

val processEachDataset : 'a * 'a * 'a -> 'a * 'a * 'a

It is also possible to perform some work with a set of functions that you have previously accumulated.

> let processDatasetWithCurrentTransforms data =
    let transforms = getDataTransforms()
    let rec applyTransforms data transforms =
        match transforms with
        | [] -> data
        | thisTransform :: otherTransforms ->
            applyTransforms (thisTransform data) otherTransforms
    applyTransforms data transforms;;

val processDatasetWithCurrentTransforms : 'a -> 'a

One particularly strong example of this is F#'s asynchronous workflows. Using the Async module you can take a collection of functions and send them off to separate threads to execute with little code. Without the ability to put functions into collections, powerful constructs like this would be impossible.
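
As a minimal sketch of the idea (not the full workflow syntax covered later), a list of asynchronous computations can be treated as ordinary data and fanned out with Async.Parallel:

// Three independent computations stored in a list, then run together.
let jobs =
    [ async { return 1 * 1 }
      async { return 2 * 2 }
      async { return 3 * 3 } ]

let results = jobs |> Async.Parallel |> Async.RunSynchronously
// results : int [] = [|1; 4; 9|]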

Creating Functions at Runtime

The ability to pass and return functions from other functions is powerful alone. Greatly enhancing this is the ability to create new functions at runtime.

In F# this can be accomplished in a number of ways that are each useful for a wide range of practical applications. Closures provide the ability to bind variables out of the scope in which a function is defined. First-class functional composition is used to build new functions out of others.

Closures

Lexical closures are by far the easiest way to create new functions at runtime. Simply put, when a function is defined within another, it binds the values that are defined before it in the parent function's scope. This can be done with any first-class language construct: values, objects, and functions.

Think back to school and the approximate derivative of a function. You might recall that it can be expressed as (f(x + dx) − f(x − dx)) / (2 * dx), where dx is a number close to zero. Without higher order functions, this is quite difficult to calculate in a nonspecific way. Given higher order functions, it's possible to calculate the derivative of a function for a single value of x in the following way:

let approxDerivative f dx x = (f(x + dx) - f(x - dx)) / (2.0 * dx);;

However, without a way to create a function, this can only find values of the derivative at certain points. With closures you can create a function that creates the approximate derivative function of a given function.

let approxDerivative f dx =
    let derivative x = (f(x + dx) - f(x - dx)) / (2.0 * dx) in
    derivative

In this example the function and dx values are passed into the parent function and are captured by the derivative function. This new function can now be returned containing these bound values.

The simplest implication of this is that, when repeatedly calling the same function, you no longer have the need to pass the same arguments again and again. Instead, you can simply create an embedded function that closes over the unchanging arguments and call it. This is quite a boon, especially when dealing with mutable variables. You no longer need to consider that they might change the bound function's behavior. Once bound, values will not change unless mutated by reference.

Lambda Expressions

A lambda expression is simply special syntax for a function without a name. It can be defined completely inline. It can also still be curried and close over variables just like any other function. It can even be assigned to a variable. However, it need not be.

Lambda expression syntax is simple, just use the fun keyword followed by a list of arguments and then the function contents. Going back again to the derivative example, we can make the closure version look much cleaner if we use a lambda expression:

let lambdaDerivative f dx = fun x -> (f(x + dx) - f(x - dx)) / (2.0 * dx)

In this example the function is returned directly without any need for intermediate constructs. The syntax is clean and easy to understand, ideal for passing to or returning from other functions.

This feature is particularly useful when calling the many F# library functions that take another function as a parameter. The resulting code reads much more easily than a separate function definition when written in this way.
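
For instance, a lambda expression passed straight into List.filter reads quite naturally:

// Keep only the even numbers.
let evens = List.filter (fun x -> x % 2 = 0) [1 .. 10]   // [2; 4; 6; 8; 10]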

However, do be careful when using multiline lambda expressions. They can easily get out of hand and be difficult to differentiate from their surrounding context. They are also difficult to write tests against. For these reasons, if your lambda expression is longer than one or two lines, it is often better to define it in a separate function instead.

Composition with Partial Application

As previously discussed, currying allows you to partially apply functions by supplying only some of the parameters they require. In the case of the derivative example, you can use partial application to build a derivative function without using closures. Consider a variant of the initial example:

let calcApproxDerivative f dx x = (f(x + dx) - f(x - dx)) / (2.0 * dx)

Given that this function already exists, we can use partial application to build a specific derivative function quite easily.

let square (x: float) = x * x
let derivativeOfSquare = calcApproxDerivative square 0.001;;
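
As a quick, illustrative use of the partially applied function, the slope of x * x at x = 3.0 should come out very close to 6.0 (slopeAtThree is a name chosen here for the example):

// Approximately 6.0, subject to the small error introduced by dx.
let slopeAtThree = derivativeOfSquare 3.0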

So, while not quite as flexible as closures, in many cases it is possible to leverage partial application to do the same thing.

SUMMARY

Your first task in getting acclimated to F# is to get a handle on the type inference system. It's not difficult; you just have to learn how the type inference system makes the decisions it does. When things go wrong, use Visual Studio to mouse over your value and check out its inferred type. It will soon be obvious where type annotations are needed.

Next, try playing with closures, partial application, and lambda expressions. Although entirely avoidable in C# and VB.NET, one of the most important steps in getting familiar with F# is becoming used to treating functions as first-class language citizens.

These ideas are much different than what you have been used to in idiomatic C#. Becoming adept at leveraging these constructs primarily requires practice. Don't be afraid to try things out in the interactive window and see what happens.
