Understanding the Parquet format

Apache Parquet is a columnar data storage format designed specifically for big data storage and processing. It is based on the record shredding and assembly algorithm described in the Google Dremel paper. In Parquet, the data in a single column is stored contiguously. This columnar layout gives Parquet some unique benefits. For example, suppose you have a table with 100 columns but mostly access only 10 of them. In a row-based format, you would have to load all 100 columns, because the granularity is at the row level; in Parquet, you only need to load the 10 columns you care about. Another benefit is that, since all of the data in a given column is of the same data type (by definition), compression is much more efficient.
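To make the column-pruning benefit concrete, here is a minimal sketch using the pyarrow library (the library choice and the file name are assumptions for illustration; the text does not prescribe any particular tool):

import pyarrow as pa
import pyarrow.parquet as pq

# Build a small example table; imagine it having 100 columns instead of 3.
table = pa.table({
    "user_id": [1, 2, 3],
    "country": ["US", "DE", "IN"],
    "score": [0.9, 0.4, 0.7],
})

# Each column is stored contiguously, so same-typed values compress well.
pq.write_table(table, "users.parquet", compression="snappy")

# Read back only the columns we need; the others are never loaded from disk.
subset = pq.read_table("users.parquet", columns=["user_id", "score"])
print(subset.column_names)  # ['user_id', 'score']

Because the reader asks for only two columns, only those column chunks are fetched, which is exactly the saving described above for the 100-column table.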

While we are discussing Parquet and its merits, it is worth looking at the economic reason behind Parquet's success. There are two factors that every practitioner wants to drive down toward zero: latency and cost. In short, we all want fast query speeds at the lowest possible cost. General-purpose stores, such as Amazon S3 and HDFS, are about as cheap as storage gets. So the question becomes: how do you bring these storage systems up to the level of performance seen in traditional database systems? This is where Parquet comes to the rescue. Parquet is very popular, but it is not the only shining star when it comes to ad hoc query processing, especially on a public cloud. The Facebook-born Presto engine running against data on S3 is gaining a lot of traction and is worth paying attention to. In fact, AWS released a service called Athena that builds on this combination.
