Hour 18. Code Efficiency


What You’ll Learn in This Hour:

• How to profile code to find the bottlenecks

• How to vectorize code

• What initialization is and how it makes code more efficient

• How to handle memory usage

• The basics of Rcpp


Up to this point we have thought a lot about the data analysis workflow in R: how we can read in data, analyze it, and produce professional graphics. However, we have not really considered the impact of what we are doing and how long our code will take to run in practice. Although we have already looked at packages such as dplyr and data.table that help us work with data more efficiently, we should do more to ensure our code is performant and robust. In this hour, we are going to look at some of the techniques we can use to improve the efficiency and, importantly, the professionalism of our R code.

Determining Efficiency

Before we dive in and start spending large amounts of time making our code more efficient, it’s worth thinking about where we should start on improving our code and how we know if a change has made a difference. We will start by looking at ways in which we can profile code to find out where the slow points are and then look at functions we can use to see how long it takes to run our code.


Tip: Making Accurate Changes

As well as making updates that ensure that our code runs faster, we also need to ensure that any changes do not impact the accuracy. Although it would be great to have a function that is 1,000 times faster, it is no use if this adversely changes what the function does. At a basic level, we can simply compare the output of different variants of the function. For more professional and robust code, we can use a unit test framework such as testthat to continuously check our changes. See Hour 20, “Advanced Package Building,” for more information on unit testing.
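
At its simplest, this comparison might look like the following sketch, using the f1 and f2 variants that we will define later in this hour (Listings 18.1 and 18.2) and resetting the random seed before each call so that both variants see the same random numbers:

> set.seed(42)
> out1 <- f1(100)
> set.seed(42)
> out2 <- f2(100)
> identical(out1, out2)   # should be TRUE if the change has not altered the behavior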


Profiling Code

Profiling allows us to determine where the bottlenecks in our code are, that is, what is actually slowing us down, by showing which lines or functions account for most of the running time. The benefit of this is that by knowing where our code is slowest, we can spend our time increasing the efficiency of the right components. After all, there is little point in optimizing a line of code that accounts for only a tiny percentage of the overall running time; you may as well put your time and effort into making changes in the right place.

A number of different packages are available for profiling R code, but here we will use the Rprof function available in base R. We wrap the code we want to profile between a call to Rprof, which starts the profiler, and a call to Rprof(NULL), which stops it. While the profiler is running, R checks at a specified interval which function is currently being executed. The output is written to a file, which we can then analyze to determine where our code spent most of its time. In more recent versions of R, this output can be collected at the line level so that we can see which lines of code took the most time.

As an example, we will profile the function in Listing 18.1 shown in the next section. Because this function will run quite quickly, we will run the function a number of times using replicate.

> tmp <- tempfile()
> Rprof(filename = tmp, line.profiling = TRUE)
> replicate(100, f1(100))
> Rprof(NULL)
> summaryRprof(filename = tmp, lines = "show")
$by.line
   self.time self.pct total.time total.pct
#9      0.06      100       0.06       100

In this example, we have included line profiling, which makes it much easier to see which line most of the time was spent on. In this particular case, the output returns only one line (line 9), which indicates that most of the time is spent performing the ifelse inside the for loop (see Listing 18.1 in the next section). This suggests that this is the component we should focus on trying to improve. Note that the specific output you see will depend on exactly how long the code takes to run, which in turn depends on the machine and operating system used, among other things.

Benchmarking

If we are going to start making changes to code, we want to know that they are making a difference and actually speeding up our functions. Benchmarking tools let us time the running of code, typically at nanosecond resolution. Just as with profiling, there are a number of ways of doing this, but here we will use the microbenchmark package, which is widely used for this kind of analysis. Using the microbenchmark function, we can pass any number of function calls to be run. Each will be run a specified number of times, as defined by the times argument. We need to run a function more than once to determine an average running time, because there will be faster and slower runs, which would distort our results if we compared on the basis of a single run. The microbenchmark function handles this for us and returns a series of statistics, such as the median and the upper and lower quartiles of all the times.

As an example of benchmarking, we will start with the function defined in Listing 18.1. This is a simple function that samples 0 and 1 to give a vector of the length specified by the argument len. We will use this function as an example throughout this hour to show how we can improve our code. We can use this function in the microbenchmark function by simply passing the function call—for example, f1(100). By default, this will be replicated 100 times.

> microbenchmark(f1(100))
Unit: microseconds
    expr     min      lq     mean median       uq     max neval
 f1(100) 597.087 616.146 731.2236 624.21 662.5125 2026.94   100

As you can see, the output of this function is a series of summary statistics for the running time of each replicate. The main value to look at is usually the median, though in some instances the spread may also be of interest.

LISTING 18.1 Sampling Function


 1: f1 <- function(len){
 2:
 3:   x <- NULL
 4:
 5:   for(i in seq_len(len)){
 6:
 7:     s <- runif(1)
 8:
 9:     x[i] <- ifelse(s > 0.5, 1, 0)
10:
11:   }
12:
13:   x
14:
15: }



Tip: How Fast Is Fast Enough?

Before you start to make changes to your code, it is worth setting a target for how much faster it needs to be. How long are you prepared to wait for your code to run? There are many small changes you can make to improve efficiency, but these will typically cost more of your time than the speed-up is worth. Having a target allows you to focus on the changes that will actually get you there rather than endlessly tweaking for minimal gains.


Initialization

When you first start writing code, particularly if you have a background in other languages such as C++, you are likely to write lots of loops. You have already seen functions, such as the apply family, that allow you to express many of these actions in an alternative way that is recommended for production code.

However, sometimes you do just need to use a for loop. One of the common pitfalls when you do this is to create an empty object and then simply append to it each time around the loop. You can see an example of this in Listing 18.1. On line 3 an object called x is created, and then inside the for loop, on line 9, we append to it on each iteration. In R, this makes our code much slower because a copy of the vector is made at each iteration so that it can be extended.

A very simple way to speed this up is to prevent R from making the copy each time. We can do this via initialization, or pre-allocation. This simply means that we create the object (in this case, a vector) before we start our loop as an object of the appropriate type and size (for instance, a numeric vector of length 10 or character vector of length 5). Now each time we work around our loop, we simply overwrite the values. This alternative implementation can be seen in Listing 18.2.

LISTING 18.2 Initialized Sampling Function


 1: f2 <- function(len){
 2:
 3:   x <- numeric(len)
 4:
 5:   for(i in seq_len(len)){
 6:
 7:     s <- runif(1)
 8:
 9:     x[i] <- ifelse(s > 0.5, 1, 0)
10:
11:   }
12:
13:   x
14:
15: }


Let’s compare this to the original version of the function using microbenchmark.

> microbenchmark(f1(100), f2(100))
Unit: microseconds
    expr     min       lq     mean   median      uq      max neval
 f1(100) 582.059 616.6960 637.9074 631.3575 651.883  744.434   100
 f2(100) 532.576 567.5805 642.1922 583.8910 602.401 2666.544   100

You can see that this has made the function faster, though in this case there is still significantly more we can do to improve its efficiency.


Tip: Creating the Correct Type

In this example, we have used the function numeric to create a numeric vector of 0s. We can also create character and logical vectors with the functions character and logical, respectively. The advantage of this is that the vector is of the correct type before we start, which prevents R from having to convert the object to a different type. It can also save us work, because we only need to overwrite the values that should differ from the defaults: 0 for numeric, "" for character, and FALSE for logical.
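
For example, each of these functions returns a vector pre-filled with the default value for its type:

> numeric(3)
[1] 0 0 0
> character(3)
[1] "" "" ""
> logical(3)
[1] FALSE FALSE FALSE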


Vectorization

As stated in the previous section, one of the common pitfalls when starting to write R code, especially for those coming to R from other programming languages, is to use a for loop to perform an action over a vector of values. In R this is often unnecessary and makes our code run much more slowly. Instead, we can use R’s vectorization to perform a whole series of operations at once. This not only makes the code much faster but is also a much more professional approach to writing R.

What Is Vectorization?

Vectorization allows us to perform an action on an array of values, such as a vector, simultaneously. As an example, suppose we wanted to multiply the values 1 to 10 by 4. Rather than first multiply 1 by 4 and then 2 by 4 and so on, we can use vectorization to perform all 10 calculations at the same time. In R we would do the following:

> 4 * (1:10)
[1]  4  8 12 16 20 24 28 32 36 40

As you can see, we were able to perform 10 calculations with just a single expression and no need for any loops. This will significantly speed up the code, as you will see when we look again at the example from Listing 18.1.

Note that in this particular example the brackets are not strictly necessary, but they help to make your code much clearer to read, particularly for someone picking up your code for the first time. Because we are looking at efficiency, it is worth mentioning that brackets will slow down your code very slightly, so where your preference is for very fast code, you may want to remove them. However, this will not generally be the primary cause of slow-running code, and in the latest versions of R the difference is barely measurable.

How Code Can Be Vectorized

Vectorization in R is very simple because most functions have been designed to accept a vector of values as input rather than a single, scalar value. As an example, think about the paste function introduced in Hour 6, “Common R Utility Functions.” We actually made use of vectorization there to create a vector of values that were the strings of fruits with numeric values pasted together:

> fruits <- c("apples", "oranges", "pears")
> nfruits <- c(5, 9, 2)
> paste(fruits, nfruits, sep = " = ")
[1] "apples = 5"  "oranges = 9" "pears = 2"

So rather than having to loop around and paste each fruit to its number in turn, we do it all in one step. Some functions have even been written as vectorized versions of functions you already know. For instance, the function ifelse used in the examples in this hour is a vectorized version of the if/else structure introduced in Hour 7, “Writing Functions: Part I.” Other examples include pmin and pmax, which return the element-wise (parallel) minimum and maximum, respectively, across their input vectors. Here’s an example:

> pmin(0, -1:1)
[1] -1  0  0
> pmax(-1:1, 1:-1)
[1] 1 0 1

Let’s now return to our sampling function that we have been improving. You saw how we could initialize this function in Listing 18.2, but we can actually remove the loop here altogether by vectorizing the code. There are multiple ways we can do this, and two are shown in Listing 18.3.

LISTING 18.3 Vectorized Sampling Function


 1: f3 <- function(len){
 2:
 3:   s <- runif(len)
 4:
 5:   x <- ifelse(s > 0.5, 1, 0)
 6:
 7:   x
 8:
 9: }
10:
11:
12: f4 <- function(len){
13:
14:   x <- numeric(len)
15:
16:   s <- runif(len)
17:
18:   x[s > 0.5] <- 1
19:
20:   x
21:
22: }


In the first of these functions, f3, we have used the ifelse function. Rather than generate a single value from a uniform distribution, we have generated a complete vector of values that we will use in a single step (line 3). We can then use the vectorized ifelse (line 5) to test all values and return the appropriate 1 or 0 for each value in the vector. Before we look at the second way of doing this, let’s compare f3 to our previous implementations:

> microbenchmark(f1(100), f2(100), f3(100))
Unit: microseconds
    expr     min       lq      mean   median       uq      max neval
 f1(100) 570.696 593.6045 999.40998 601.1185 616.8795 32061.20   100
 f2(100) 524.512 533.8590 598.32525 550.7200 562.4485  1758.27   100
 f3(100)  30.056  32.2560  47.34957  33.7220  36.8370  1211.40   100

Just looking at the median values here, you can see that this is a massive improvement over the original version, and even over the initialized version. This approach already gives us huge improvements in the running of our code, but in actual fact the second way of vectorizing this function will make even greater gains.

Take a look at the function f4 that we defined in Listing 18.3 (starting on line 12). In this example, we again initialize the vector that we will return. Just like in f3, we generate our uniform samples in a single step, but rather than using ifelse, we subscript the vector x directly based on the values in the vector s. You might also notice that we only do this for the values that need to be 1. This is because the initialization creates a vector of 0s, so we can cut out a step by making only the single change that is needed. If we compare the two vectorized versions, we will see that the second is faster still.

> microbenchmark(f3(100), f4(100))
Unit: microseconds
    expr    min     lq     mean median     uq    max neval
 f3(100) 28.956 29.690 31.40153 30.057 30.973 59.012   100
 f4(100)  9.530 10.264 11.19091 10.630 11.363 50.583   100

Although vectorized functions such as ifelse give a huge speed-up compared with non-vectorized code, it is sometimes faster still to work directly on the vector using basic subscripting.


Tip: Don’t Remove Error Handling

Vectorized convenience functions such as ifelse, pmin, and pmax are slower than direct subscripting because they accept a variety of arguments and perform checks on the data types they are given. As you can see, the direct version is much faster, but that doesn’t mean we should start removing all error handling from our functions. If you are sharing your code, it is much better practice, and key to production-level code, to keep the error handling and make other parts of your code more efficient with the methods you have seen here.
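
As a rough sketch of how you could measure this trade-off yourself (the vector x below is simply illustrative data), we can compare pmin against the equivalent direct subscripting:

> x <- rnorm(1000)
> microbenchmark(
+   pmin(x, 0),                    # vectorized function with argument checks
+   {y <- x; y[y > 0] <- 0; y}     # direct subscripting, no checks
+ )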


Using Alternative Functions

Often we don’t actually need to do much to our code other than use an alternative function that has solved the problem for us or is more specific in its implementation. It is quite possible that someone has already solved the problem you are working on, so it is always worth searching the available resources for an alternative function or package. As a reminder of some of the ways in which you can search for functions and packages, take a look at Hour 2, “The R Environment.”

The example we have been using in this hour is a great illustration of such a case. The function we wrote in Listing 18.1 is designed to randomly sample a series of 0s and 1s. In Hour 6, you were introduced to the sample function. Clearly someone has already implemented a solution to exactly this problem, and it is likely that they have put in the effort to make it as efficient as possible. A final version of this function, f5, is given in Listing 18.4, where we have simply changed the implementation to use the sample function. Let’s compare this final implementation to all the other variants we have seen in this hour.

> microbenchmark( f1(100), f2(100), f3(100), f4(100), f5(100))
Unit: microseconds
    expr     min       lq      mean   median       uq      max neval
 f1(100) 574.727 582.4245 672.98853 596.7200 616.8795 1895.354   100
 f2(100) 524.146 545.4050 638.65877 554.0190 568.3130 1768.899   100
 f3(100)  30.423  32.6220  36.03099  33.7220  39.0365   78.806   100
 f4(100)  10.263  10.9970  23.79963  11.5465  12.0965 1211.766   100
 f5(100)   6.231   7.5145   9.31053   8.4310  10.4470   16.862   100

LISTING 18.4 Using the sample Function


 1: f5 <- function(len){
 2:
 3:   sample(0:1, size = len, replace = TRUE)
 4:
 5: }


Obviously, if you don’t know that the function exists, you can’t use it. A great way to find functions that can help you solve a problem is to read other people’s code and take a look online at the ways in which people solve similar problems to your own. Many resources are available that can help you out, and we have tried to introduce many useful functions to you in the appropriate places in this book.

Managing Memory Usage

When it comes to memory usage in R, there is actually very little we need to do to manage it ourselves. Memory taken up by temporary objects is automatically made available again when it is needed, so there is no need for us to free memory manually on a regular basis. The main thing we do need to do is consider what objects we have created and how we will work with them.

Suppose we are working with big data sets. The packages you saw in Hour 12, “Efficient Data Handling in R,” have been designed to use memory in an efficient manner, so they are strongly recommended in this case. If you do find that you are getting errors due to a lack of available memory, the first thing to do is to take a look at what objects you have created in your current R session, the size of those objects, and whether you can remove them.

In RStudio, this is made simple with the environment pane. This pane gives us summary information about all the objects in our environment, what each object is, and, importantly, its size.


Tip: Checking the Size of Objects

To see the size of an object in the environment pane, you will need to use the grid view. In the top-left corner of the pane, you will see a menu labeled either “Grid” or “List.” If it says “List,” you can use this menu to switch your view. If you are not using RStudio, you will need to use the object.size function on each object. Remember that you can use a function such as sapply to do this for a number of objects at the same time.
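
A minimal sketch of this approach, which lists the size in bytes of every object in the global environment, might look like this:

> sapply(ls(), function(obj) object.size(get(obj, envir = globalenv())))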


We can remove objects from our session either using the interface in RStudio or programmatically using the function rm. For example, to remove object x, we would run

> rm(x)

If the object is large, we may want to force R to make the memory available again. We do this in R by using the function gc, for garbage collection. This usually happens automatically when needed, without the need for us to intervene.
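
For example, assuming bigObject is the name of a large object we no longer need:

> rm(bigObject)
> gc()   # ask R to release the memory straight away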


Tip: Restart to Clear Completely

If you have been working on an analysis and creating objects to test out your method, you may want to restart R to completely clear the workspace of any unused objects, including classes and unused packages or functions. If you have been writing a script, it will be easy to re-run all of your code and get back to where you were in a completely clean environment.


Integrating with C++

We have been looking at some of the ways in which you can rewrite your code in R to make it more efficient, but in some instances it is simply not possible to improve the speed of your code using R. In those instances, you may want to turn to other tools that are more suitable for the task. In R, one of the simplest ways to extend code with much faster alternatives is by using C++, and more specifically the Rcpp package.

C++ is a statically typed language, which means we have to specify object types when they are created; it is also compiled, which tends to make it a much faster language than R. Although it has always been possible to integrate C and C++ code in R, the Rcpp package has made this much more accessible; you only need to take a look at the length of the list of reverse dependencies to see how popular it now is.

When to Think about C++ and Rcpp

Adding C++ code to your R packages obviously requires that you start to learn another programming language, so it may not always be the answer. The overhead in learning C++ in the first place may be larger than the gains it will give you. However, if you already know C++ or you find that there are a number of cases where your code could benefit from being written in C++, you may find that it is worth the effort.

There are two main cases when C++ will be beneficial to your code:

• When you have no choice but to use a for loop, for example, when each value depends on the previous value in the loop (see the sketch after this list).

• When what you want to do has already been implemented efficiently in C++.
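
As a brief sketch of the first case (the object names are ours, purely for illustration), each value below depends on the value calculated in the previous iteration, so the loop cannot simply be replaced by a single vectorized expression:

> x <- numeric(10)
> x[1] <- runif(1)
> for(i in 2:10) x[i] <- 0.5 * x[i - 1] + runif(1)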

The advantage of using Rcpp for your C++ implementations is that it has solved many problems for you in terms of passing data between R and C++, handling the memory usage, and providing many commonly used R functions to C++. This means that rather than having to learn how to do all of these things yourself in C++, you can simply use existing, well-tested functionality.

A Basic Function

We won’t go into lots of detail here on how to start writing C++ code, but we will introduce some of the basics with the aim of demonstrating how you can use the Rcpp package to integrate your C++ code with R in an easy way. To continue the theme of this hour, we will implement the sampling function. This actually uses a number of features specific to C++, so it’s a helpful introduction. You can see an example of this implementation in Listing 18.5.

LISTING 18.5 Implementing with Rcpp


 1: #include <Rcpp.h>
 2: using namespace Rcpp;
 3:
 4: // [[Rcpp::export]]
 5: IntegerVector sampleInC(int len){
 6:
 7:   // Initialize x to create output
 8:   IntegerVector x(len);
 9:
10:   // Initialize and create s by using the Rcpp runif function
11:   NumericVector s = runif(len);
12:
13:   // Loop to do sampling, using if...else...
14:   for(int i = 0; i < len; ++i) {
15:
16:     if(s[i] > 0.5)
17:       x[i] = 1;
18:     else
19:       x[i] = 0;
20:   }
21:
22:   // Explicitly return x
23:   return x;
24: }


Differences Between R and C++

First of all, you should be aware of the key differences between R and C++ that you will come across when defining functions:

• You must declare the types of all objects, including the type of input and output objects and the type of any intermediate objects created.

• All expressions end with a semicolon.

• You define for loops in a different way, specifying the start value, the end condition, and the increment.

• Counting of indexes starts at zero in C++.

You saw all of these features in the code in Listing 18.5.

Writing a Function

We can write a C++ function directly in R using cppFunction; however, once our C++ function is more than a line or two long, this becomes awkward, so it is much more sensible to write the function in a C++ script and then source it using sourceCpp. This is the approach we take here, so the code in Listing 18.5 should be saved in a file ending in .cpp.
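
As a minimal sketch, assuming we have saved the code in Listing 18.5 in a file called sampleInC.cpp (a file name we have chosen here), we could compile it and call it from R as follows:

> library(Rcpp)
> sourceCpp("sampleInC.cpp")
> sampleInC(10)   # returns an integer vector of ten 0s and 1s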


Tip: Rcpp and RStudio

Support for Rcpp is well integrated with RStudio. If you open a new script and instead of selecting R select “C++ File,” you will get the template structure for Rcpp. You can then source this by using the Source button at the top of the script, which will run sourceCpp for you.


The first four lines of Listing 18.5 need to be at the top of any C++ script in which you want to use Rcpp. These lines make the functionality of Rcpp available to C++. The // [[Rcpp::export]] attribute on line 4 also tells R that this is a function you want to make available in R.

Data Types

Starting on line 5 of Listing 18.5 we have our function definition. You will notice that in C++ we do not use the function keyword, but we have stated IntegerVector before the function name (sampleInC). This indicates to C++ that the return value of the function will be an integer vector. It is very important in C++ to get this correct. You will also notice that we have specified that the argument len will be of type int, which means we will pass an integer to the function. All of this is done for us in R, so we need to remember to include it when we write C++. The definitions of the various data types for scalars, vectors, and matrices are shown in Table 18.1. Note that some of these types are specific to Rcpp and are not the standard type definitions for C++.

           Scalar   Vector            Matrix
Integer    int      IntegerVector     IntegerMatrix
Numeric    double   NumericVector     NumericMatrix
Logical    bool     LogicalVector     LogicalMatrix
Character  String   CharacterVector   CharacterMatrix

TABLE 18.1 Data Types in Rcpp

When you look through the remainder of the code, you will notice that it is very similar to the original example in Listing 18.1. We create our output vector, x, and our vector of uniform samples, s, and then loop over the samples using an if/else structure, which works in the same way as the if/else structure in R (although it is different from the ifelse function we have used in this hour). The main difference from the R version is the structure of the for loop.

Loops in C++

In C++ we define a for loop in a different manner. First of all, we create an object and give it a starting value. Notice that in the example in Listing 18.5, line 14, this is initialized to 0. This is because we are going to index a vector, and the counting starts at 0 in C++. This is very important to remember when working with C++. The next component of the for loop is the condition that will cause the loop to stop. In this case, we are looping while the object i is less than the length of the final vector. Note the “less than” here. Because we start counting at 0, the final element will be len-1. The final component is the increment for the loop. Note the syntax here of ++i. In C++, this is special notation for adding one to the value of the object. So in this example, we are adding one to the value of i on each iteration.

Returning from Functions

To return from a function in R, we can optionally use the function return. In C++, this is not the case; we must use the keyword return. We must also ensure that what we return is of the same type that we stated the function would return. In this case we specified, on line 5 of Listing 18.5, that we would return an IntegerVector, so this is what we must return. Here, we are returning x, which we declared to be an IntegerVector on line 8.

Using R Functions in C++

You might have noticed that in the function in Listing 18.5 we used the function runif. This is because Rcpp provides to C++ many functions that you are familiar with in R, including the distribution functions. In fact, thanks to the way the distribution functions are implemented, they make use of the same random number generation as R, meaning that you can still test your functions by comparing them to an R implementation.
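
Because the random number streams match, we would expect a sketch like the following, which uses f3 from Listing 18.3 and the sampleInC function sourced earlier, to show the two implementations agreeing (allowing for sampleInC returning integers rather than doubles):

> set.seed(1)
> rVersion <- f3(10)
> set.seed(1)
> cppVersion <- sampleInC(10)
> all.equal(rVersion, as.numeric(cppVersion))   # we would expect TRUE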

Beyond the distribution functions, Rcpp also provides vectorized versions of the standard arithmetic operators (+, -, *, /, and so on) and many mathematical functions such as sin and cos, along with round, abs, ceiling, and floor.

In addition to the statistical distributions, there are also implementations of summary functions, such as mean, sd, var, sum, and diff. This is not an exhaustive list, and it is worth checking the vignette for Rcpp Sugar (vignette("Rcpp-sugar")) to see other functions that are available.

The advantage of all of this is that we can reimplement our R functions using Rcpp with relatively little effort and gain a considerable amount of speed. Obviously, to get the most from C++ you will need to learn more of the language itself, but as a means of quickly getting the benefit of these speed gains, this is a great start.


Tip: Learning More

In this hour, we have only touched on the basics of C++, specifically for working with Rcpp. There are many available resources, but a good starting point is the user documentation provided with Rcpp. For a list of all the vignettes available in this package, you can use vignette(package = "Rcpp").


Summary

In this hour, we looked at many of the methods you can use to not only make your code more efficient but also more professional. The more you use R, the more you will find that you implement many of these approaches—in particular, vectorization—without thinking about them as being a way to speed up your code. You will also find more and more functions that help you write more efficient code. We also briefly introduced the Rcpp package, which can be beneficial when other approaches we have suggested are not possible or simply make no difference. One of the key points to remember when you are adapting your code is to ensure that you test whether it is still performing in the same way. Although this can simply be an informal test, you will see in Hour 20 that you can, and should, make use of test frameworks to continuously verify that you are not adversely changing your code.

Q&A

Q. I don’t mind waiting for my code to finish running. Do I need to do any of this?

A. If you are happy with the speed of your code, you don’t need to make any changes; however, many of these points are what will make your code more professional and suitable for wider production usage. It is advisable that you take all of these points into consideration when writing R code (many you may be doing already), and eventually they will become a natural part of your R code.

Workshop

The workshop contains quiz questions and exercises to help you solidify your understanding of the material covered. Try to answer all questions before looking at the “Answers” section that follows.

Quiz

1. Before you jump into changing your code, what should you do and what function can you use to help you do it?

2. Why should you initialize when writing for loops?

3. Why are vectorized functions, such as pmin, slower than working directly with a vector?

4. Do you need to handle memory usage in R?

5. What are the main differences between R and C++?

Answers

1. Before making any changes, you should first profile your code to determine where the slowest components are. You can do this in R using the Rprof function, with summaryRprof then generating a series of summary statistics to show where your code spends most of its time.

2. Initializing objects when you are writing for loops means that R will not continuously make copies of the objects you are adding values to. This is more efficient because you are simply writing over a value.

3. Vectorized functions are typically slower because they contain several function arguments and a series of error checks on the arguments. This is to ensure that the function is run in the way intended, and if incorrect arguments are passed, a more informative error message is returned. For code that you will reuse regularly and particularly share with others, this error checking is vital and shouldn’t be removed to make small speed gains.

4. No, this is done automatically when a temporary object is no longer being used. The main reason to manage memory would be to remove large objects that you no longer need but that you previously created.

5. There are four points that you should keep in mind:

A. You must declare the type of all objects.

B. All expressions end with a semicolon.

C. Loops are defined in a different way.

D. Indexes start counting from 0!

Activities

1. Write a function that takes a vector of input and, using a loop, iterates around all of the values, calculating the sum up to that value (that is, the cumulative sum) so that when you pass the vector of values 1 to 10, you get the following return value:

[1]  1  3  6 10 15 21 28 36 45 55

2. Use microbenchmark to determine the median time it takes to run your function.

3. Use any of the initialization and vectorization techniques to improve the speed of your function, using microbenchmark to check that you are making the code more efficient.

4. Can you find a function in R that will do this for you? Compare the speed of that function to your most efficient version.

5. Have a go at writing this function in C++ using Rcpp. If you are finding the cumulative sum a little tricky, start out with just taking the sum of all the values in the vector.
