| Stable release | 1.9.0 / 19 October 2016 |
|---|---|
| Development status | Active |
| Operating system | Cross-platform |
| Type | Column-oriented data storage format |
| License | Apache License 2.0 |
| Website | parquet.apache.org |
Apache Parquet is a free and open-source, column-oriented data store of the Apache Hadoop ecosystem. It is similar to the other columnar storage file formats available in Hadoop, namely RCFile and Optimized RCFile (ORC), and it is compatible with most of the data processing frameworks in the Hadoop environment. It provides efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk.
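The core idea behind a columnar store can be illustrated with a minimal pure-Python sketch (the table, column names, and JSON serialization here are illustrative assumptions, not Parquet's actual binary format): each column is written as its own contiguous block, so a query over one column decodes only that block and skips the rest.

```python
import json

# Hypothetical three-row table; column names are illustrative only.
table = {
    "user": ["ann", "bo", "cy"],
    "score": [10, 20, 30],
}

# Write: one independent byte block per column, analogous to the
# per-column chunks a columnar format lays out on disk.
blocks = {name: json.dumps(values).encode() for name, values in table.items()}

# Read: a query that only needs 'score' decodes only that block;
# the 'user' bytes are never deserialized.
scores = json.loads(blocks["score"])
print(sum(scores))  # 60
```

This column pruning is one reason columnar formats perform well on analytical queries that touch a few columns of a wide table.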
The open-source project to build Apache Parquet began as a joint effort between Twitter and Cloudera. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF) project.
Apache Parquet is implemented using the record-shredding and assembly algorithm, which accommodates the complex data structures that can be used to store the data. In Parquet, the values in each column are physically stored in contiguous memory locations, similar to the data layout of RCFile. Due to this columnar storage, Apache Parquet provides the following benefits:

  * Column-wise compression is efficient and saves storage space.
  * Compression techniques and encoding schemes can be chosen per column, matched to the data type of that column.
  * Queries that fetch specific column values need not read the entire row's data, which improves performance.
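The record-shredding step can be sketched in simplified form (the record shapes and field names below are illustrative assumptions; real Parquet also tracks definition levels for missing or optional values, which this sketch omits): nested records are decomposed into flat per-column value streams, with a repetition level marking where each new record's values begin.

```python
# Illustrative nested records; 'name' is a scalar field and
# 'phones' is a repeated (list) field.
records = [
    {"name": "alice", "phones": ["555-1", "555-2"]},
    {"name": "bob", "phones": []},
]

def shred(records):
    """Decompose nested records into flat per-column streams."""
    name_col = []   # values for the scalar 'name' column
    phone_col = []  # (repetition_level, value) pairs for 'phones'
    for rec in records:
        name_col.append(rec["name"])
        for i, phone in enumerate(rec.get("phones", [])):
            # Repetition level 0: first phone of a new record;
            # level 1: continuation within the same record's list.
            phone_col.append((0 if i == 0 else 1, phone))
    return name_col, phone_col

name_col, phone_col = shred(records)
print(name_col)   # ['alice', 'bob']
print(phone_col)  # [(0, '555-1'), (1, '555-2')]
```

The assembly step runs this in reverse, using the levels to reconstruct record boundaries from the flat column streams.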
As of August 2015, Parquet supports the big-data processing frameworks including Apache Hive, Apache Drill, Cloudera Impala, Apache Crunch, Apache Pig, Cascading, and Apache Spark.
In Parquet, compression is performed column by column, enabling different encoding schemes to be used for text and integer data. This strategy also keeps the door open for newer and better encoding schemes to be implemented as they are invented.