# Performance Tuning
For some workloads, it is possible to improve performance either by caching data in memory or by turning on some experimental options.
## Caching Data In Memory
Spark SQL can cache tables using an in-memory columnar format by calling `spark.catalog.cacheTable("tableName")` or `dataFrame.cache()`. Then Spark SQL will scan only required columns and will automatically tune compression to minimize memory usage and GC pressure. You can call `spark.catalog.uncacheTable("tableName")` to remove the table from memory.
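For example, a minimal sketch in Scala (the table name `people` and the column `age` are illustrative assumptions, not part of this guide):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("CachingExample").getOrCreate()

// The table name "people" is an illustrative assumption; any registered table works.
spark.catalog.cacheTable("people")

// Subsequent scans read only the required columns from the in-memory columnar data.
spark.table("people").groupBy("age").count().show()

// Remove the table from memory once it is no longer needed.
spark.catalog.uncacheTable("people")
```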
Configuration of in-memory caching can be done using the `setConf` method on `SparkSession` or by running `SET key=value` commands using SQL. A short sketch follows the table below.
Property Name | Default | Meaning |
---|---|---|
`spark.sql.inMemoryColumnarStorage.compressed` | true | When set to true, Spark SQL will automatically select a compression codec for each column based on statistics of the data. |
`spark.sql.inMemoryColumnarStorage.batchSize` | 10000 | Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization and compression, but risk OOMs when caching data. |
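As a sketch, either route can be used to change the batch size above; the value 20000 is an illustrative assumption, not a recommendation, and `spark` is assumed to be an existing `SparkSession`:

```scala
// Programmatically, through the session's runtime configuration:
spark.conf.set("spark.sql.inMemoryColumnarStorage.batchSize", "20000")

// Or equivalently through a SQL SET command:
spark.sql("SET spark.sql.inMemoryColumnarStorage.batchSize=20000")
```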
## Other Configuration Options
The following options can also be used to tune the performance of query execution. It is possible that these options will be deprecated in a future release as more optimizations are performed automatically. A sketch of adjusting two of these options follows the table.
Property Name | Default | Meaning |
---|---|---|
`spark.sql.files.maxPartitionBytes` | 134217728 (128 MB) | The maximum number of bytes to pack into a single partition when reading files. |
`spark.sql.files.openCostInBytes` | 4194304 (4 MB) | The estimated cost to open a file, measured by the number of bytes that could be scanned in the same time. This is used when putting multiple files into a partition. It is better to over-estimate; then the partitions with small files will be faster than partitions with bigger files (which are scheduled first). |
`spark.sql.broadcastTimeout` | 300 | Timeout in seconds for the broadcast wait time in broadcast joins. |
`spark.sql.autoBroadcastJoinThreshold` | 10485760 (10 MB) | Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when performing a join. Setting this value to -1 disables broadcasting. Note that currently statistics are only supported for Hive Metastore tables where the command `ANALYZE TABLE <tableName> COMPUTE STATISTICS noscan` has been run. |
`spark.sql.shuffle.partitions` | 200 | Configures the number of partitions to use when shuffling data for joins or aggregations. |
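For instance, a minimal sketch of adjusting two of these options at runtime (the chosen values are illustrative assumptions, not recommendations):

```scala
// Fewer shuffle partitions can reduce tiny-task overhead for small datasets;
// the default of 200 is often excessive for small inputs.
spark.conf.set("spark.sql.shuffle.partitions", "50")

// Setting the broadcast threshold to -1 disables automatic broadcast joins.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
```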
## Join Strategy Hints for SQL Queries
The join strategy hints, namely `BROADCAST`, `MERGE`, `SHUFFLE_HASH` and `SHUFFLE_REPLICATE_NL`, instruct Spark to use the hinted strategy on each specified relation when joining them with another relation. For example, when the `BROADCAST` hint is used on table 't1', broadcast join (either broadcast hash join or broadcast nested loop join depending on whether there is any equi-join key) with 't1' as the build side will be prioritized by Spark even if the size of table 't1' suggested by the statistics is above the configuration `spark.sql.autoBroadcastJoinThreshold`.
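For example, the hint can be attached either through the Dataset API or in SQL; the table names `src` and `records` and the join key `key` are illustrative assumptions:

```scala
// Dataset API: request that the "records" side of the join be broadcast.
spark.table("src")
  .join(spark.table("records").hint("broadcast"), "key")
  .show()

// Equivalent SQL form using a hint comment.
spark.sql("SELECT /*+ BROADCAST(r) */ * FROM records r JOIN src s ON r.key = s.key").show()
```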
When different join strategy hints are specified on both sides of a join, Spark prioritizes the `BROADCAST` hint over the `MERGE` hint over the `SHUFFLE_HASH` hint over the `SHUFFLE_REPLICATE_NL` hint. When both sides are specified with the `BROADCAST` hint or the `SHUFFLE_HASH` hint, Spark will pick the build side based on the join type and the sizes of the relations.
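As a sketch of that precedence (table and column names are illustrative assumptions), the `BROADCAST` hint on `t1` would win over the `MERGE` hint on `t2` here:

```scala
// BROADCAST takes precedence over MERGE when both sides carry hints.
spark.sql(
  "SELECT /*+ BROADCAST(t1), MERGE(t2) */ * FROM t1 JOIN t2 ON t1.key = t2.key"
).show()
```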
Note that there is no guarantee that Spark will choose the join strategy specified in the hint, since a specific strategy may not support all join types.