By now, you will have noticed that many operations on DataFrames are inspired by SQL operations. Additionally, Spark allows us to register DataFrames as tables and query them with SQL statements directly. We can therefore build a temporary database as part of the program flow.
Let's register readingsDF as a temporary table:
scala> readingsDF.registerTempTable("readings")
This registers a temporary table that can be used in SQL queries. Registering a temporary table relies on the presence of a SQL context. The temporary tables are destroyed when the SQL context is destroyed (when we close the shell, for instance).
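Outside the shell, the same registration can be sketched in a standalone program. This is a sketch only: it assumes an existing SparkContext named sc (created for us automatically in the shell) and a hypothetical Reading case class matching the columns that appear later in this section:

```scala
import org.apache.spark.sql.SQLContext

// Hypothetical schema matching the readings used in this section.
case class Reading(
  patientId: Int, heightCm: Int, weightKg: Int, age: Int, isSmoker: Boolean)

// Assumes an existing SparkContext `sc`.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._ // enables .toDF on RDDs of case classes

val readingsDF = sc.parallelize(Seq(
  Reading(1, 175, 72, 43, false),
  Reading(2, 182, 78, 28, true)
)).toDF

// The table name is the handle used in subsequent SQL statements.
readingsDF.registerTempTable("readings")
```

The registered name, not the Scala variable, is what SQL statements refer to, so readingsDF and "readings" need not match.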
Let's explore what we can do with our temporary tables and the SQL context. We can first get a list of all the tables currently registered with the context:
scala> sqlContext.tables
DataFrame = [tableName: string, isTemporary: boolean]
This returns a DataFrame. In general, all operations on a SQL context that return data return DataFrames:
scala> sqlContext.tables.show
+---------+-----------+
|tableName|isTemporary|
+---------+-----------+
| readings|       true|
+---------+-----------+
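Since sqlContext.tables returns an ordinary DataFrame, we can manipulate it like any other. For instance, a short sketch of restricting the listing to temporary tables, using the column names shown above:

```scala
// Keep only temporary tables; `filter` accepts a SQL expression string.
sqlContext.tables.filter("isTemporary").show

// Alternatively, tableNames returns just the names as an array of strings.
sqlContext.tableNames.foreach(println)
```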
We can query this table by passing SQL statements to the SQL context:
scala> sqlContext.sql("SELECT * FROM readings").show
+---------+--------+--------+---+--------+
|patientId|heightCm|weightKg|age|isSmoker|
+---------+--------+--------+---+--------+
|        1|     175|      72| 43|   false|
|        2|     182|      78| 28|    true|
|        3|     164|      61| 41|   false|
|        4|     161|      62| 43|    true|
+---------+--------+--------+---+--------+
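We are not limited to SELECT *: anything expressible in Spark's SQL dialect works here, including aggregations. As an illustration, a sketch of averaging weight by smoking status over the same table:

```scala
// Group the readings by smoking status and average the weight.
sqlContext.sql("""
  SELECT isSmoker, AVG(weightKg) AS avgWeightKg, COUNT(*) AS n
  FROM readings
  GROUP BY isSmoker
""").show
```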
Any UDFs registered with the sqlContext
are available through the name given to them when they were registered. We can therefore use them in SQL queries:
scala> sqlContext.sql("""
SELECT patientId, likelyMaleUdf(heightCm, weightKg) AS likelyMale
FROM readings
""").show
+---------+----------+
|patientId|likelyMale|
+---------+----------+
|        1|      true|
|        2|      true|
|        3|     false|
|        4|     false|
+---------+----------+
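For completeness, a UDF such as likelyMaleUdf could have been registered along the following lines. The body shown here is a made-up heuristic for illustration, not the actual definition used above:

```scala
// Register a Scala function under a name visible to SQL queries.
// The threshold-based body is an illustrative assumption only.
sqlContext.udf.register("likelyMaleUdf",
  (heightCm: Int, weightKg: Int) => heightCm > 170 && weightKg > 65)
```

Once registered, the name "likelyMaleUdf" can be used anywhere a built-in SQL function could appear.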
You might wonder why one would want to register DataFrames as temporary tables and run SQL queries on those tables, when the same functionality is available directly on DataFrames. The main reason is for interacting with external tools. Spark can run a SQL engine that exposes a JDBC interface, meaning that programs that know how to interact with a SQL database will be able to make use of the temporary tables.
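To give a flavor of that interaction, once Spark's Thrift JDBC server is running, an external client could query our temporary tables through plain JDBC. This is a sketch under several assumptions: the server is listening on the default port 10000, no authentication is configured, and the Hive JDBC driver is on the client's classpath:

```scala
import java.sql.DriverManager

// Assumes the Spark Thrift server is listening on the default port 10000
// and that the Hive JDBC driver is available on the classpath.
val connection = DriverManager.getConnection(
  "jdbc:hive2://localhost:10000/default", "user", "")

val statement = connection.createStatement
val results = statement.executeQuery("SELECT * FROM readings")
while (results.next()) {
  println(s"${results.getInt("patientId")}: ${results.getInt("heightCm")} cm")
}
connection.close()
```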
We don't have the space to cover how to set up a distributed SQL engine in this book, but you can find details in the Spark documentation (http://spark.apache.org/docs/latest/sql-programming-guide.html#distributed-sql-engine).