R holds all of its objects in virtual memory. For most of us, this design decision has led to a zippy interactive experience, but for analysts working with large datasets, it can lead to slow program execution and memory-related errors.
Memory limits depend primarily on the R build (32- versus 64-bit) and, for 32-bit Windows, on the OS version involved. Error messages beginning `cannot allocate vector of size` typically indicate a failure to obtain sufficient contiguous memory, while error messages beginning `cannot allocate vector of length` indicate that an address limit has been exceeded. When working with large datasets, use a 64-bit build if at all possible. For all builds, the number of elements in a vector is limited to 2,147,483,647 (see `?Memory` for more information).
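Both the element limit and an object's memory footprint can be checked from within R itself, using only base functions:

```r
.Machine$integer.max       # 2147483647 -- the per-vector element limit noted above
object.size(numeric(1e6))  # ~8 MB: each double-precision value occupies 8 bytes
```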
There are three issues to consider when working with large datasets: (a) efficient programming to speed execution, (b) storing data externally to limit memory issues, and (c) using specialized statistical routines designed to efficiently analyze massive amounts of data. We will briefly consider each.
A number of programming practices can improve performance when working with large datasets: vectorize calculations rather than looping over individual elements, preallocate objects to their final size rather than growing them inside loops, and read in only the variables you actually need.
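For example, replacing an element-by-element loop with a vectorized function often speeds things up by orders of magnitude. A minimal illustration using base R:

```r
x <- rnorm(1e7)

# Loop version: visits each element in interpreted R code
system.time({
  total <- 0
  for (i in seq_along(x)) total <- total + x[i]
})

# Vectorized version: the summation runs in compiled code
system.time(sum(x))
```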
With large datasets, increasing code efficiency will only get you so far. When bumping up against memory limits, you can also store your data externally and use specialized analysis routines.
There are several packages available for storing data outside of R's main memory. The general strategy is to store the data in an external database or in binary flat files on disk and then access portions of it as needed. Several useful packages are described in table G.1, and a short example follows the table.
Table G.1 Packages for storing data outside of R's main memory

| Package | Description |
|---|---|
| ff | Provides data structures that are stored on disk but behave as if they were in RAM. |
| bigmemory | Supports the creation, storage, access, and manipulation of massive matrices. Matrices are allocated to shared memory and memory-mapped files. |
| filehash | Implements a simple key-value database where character string keys are associated with data values stored on disk. |
| ncdf, ncdf4 | Provide an interface to Unidata netCDF data files. |
| RODBC, RMySQL, ROracle, RPostgreSQL, RSQLite | Each provides access to external relational database management systems. |
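As one illustration, the bigmemory package can create a matrix backed by a file on disk, so that only the portions you index are pulled into RAM. A minimal sketch (the file names here are illustrative):

```r
library(bigmemory)

# Create a 1,000,000 x 10 double matrix stored on disk rather than in RAM
x <- filebacked.big.matrix(nrow = 1e6, ncol = 10, type = "double",
                           backingfile = "data.bin",
                           descriptorfile = "data.desc")

x[1, ] <- rnorm(10)  # indexed like an ordinary R matrix
mean(x[, 1])         # only the requested column is read into memory
```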
The packages above help overcome R's memory limits on data storage. However, specialized routines are also needed to analyze large datasets in a reasonable length of time. For example, the biglm package fits linear and generalized linear models to datasets too large to fit in memory, processing the data in chunks and updating the model incrementally.
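A minimal sketch of this chunked approach (the chunk data frames and variable names here are hypothetical; in practice each chunk would be read from a file or database):

```r
library(biglm)

chunk1 <- data.frame(y = rnorm(1000), x1 = rnorm(1000), x2 = rnorm(1000))
chunk2 <- data.frame(y = rnorm(1000), x1 = rnorm(1000), x2 = rnorm(1000))

fit <- biglm(y ~ x1 + x2, data = chunk1)  # fit the model on the first chunk
fit <- update(fit, chunk2)                # fold in each subsequent chunk
summary(fit)                              # coefficients based on all rows seen
```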
Working with datasets in the gigabyte to terabyte range can be challenging in any language. For more information on the methods available within R, see the CRAN Task View: High-Performance and Parallel Computing with R (cran.r-project.org/web/views/).
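On the parallel side, base R ships with the parallel package, which can spread independent computations across cores. A small sketch:

```r
library(parallel)

cl <- makeCluster(max(1, detectCores() - 1))  # leave one core free
res <- parLapply(cl, 1:8, function(i) mean(rnorm(1e6)))
stopCluster(cl)
```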