The Hive on Spark project was created to enable existing Hive users to run Hive queries directly on the Spark execution engine instead of MapReduce.
Hive on Spark (https://issues.apache.org/jira/browse/HIVE-7292) is supported in the Cloudera distribution. To start using it, set the execution engine to Spark in the Hive CLI or Beeline client:
hive> set hive.execution.engine=spark;

0: jdbc:hive2://localhost:10000/default> set hive.execution.engine=spark;
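Setting the property at the prompt applies only to the current session. To make Spark the default engine for all sessions, the same property can be set in hive-site.xml (the exact location of this file depends on your distribution; in Cloudera-managed clusters it is typically handled through Cloudera Manager rather than edited by hand):

```xml
<!-- hive-site.xml: make Spark the default execution engine for all Hive sessions -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
```

A session-level `set` still overrides this default, so individual users can fall back to MapReduce with `set hive.execution.engine=mr;` if needed.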