is LT, LE, GE, GT, and EQ
The PARQUET_READ_STATISTICS option provides a workaround when dealing with files that have corrupt Parquet
statistics and unknown errors.
In the query runtime profile output for each Impalad instance, the NumStatsFilteredRowGroups field in the SCAN
node section shows the number of row groups that were skipped based on Parquet statistics.
The supported values for the query option are:
• true (1): Read statistics from Parquet files and use them in query processing.
• false (0): Do not use Parquet read statistics.
• Any other values are treated as false.
Type: Boolean
Default: true
Added in: CDH 5.12.0 / Impala 2.9.0
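For example, if you suspect that corrupt statistics in a set of Parquet files are causing query errors, you might disable
the option for the current session while investigating, then restore the default; the sequence below is only an illustration:
-- Work around suspected corrupt Parquet statistics for this session.
set PARQUET_READ_STATISTICS=false;
-- Restore the default behavior once the problem files are fixed.
set PARQUET_READ_STATISTICS=true;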
PREFETCH_MODE Query Option (CDH 5.8 or higher only)
Determines whether the prefetching optimization is applied during join query processing.
Type: numeric (0, 1) or corresponding mnemonic strings (NONE, HT_BUCKET).
Default: 1 (equivalent to HT_BUCKET)
Added in: CDH 5.8.0 / Impala 2.6.0
Usage notes:
The default mode is 1, which means that hash table buckets are prefetched during join query processing.
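For example, to check whether prefetching affects a particular join, you might temporarily disable it for the current
session and compare the query profiles; the statements below are illustrative only:
set PREFETCH_MODE=NONE;
-- ... run the join query being investigated ...
set PREFETCH_MODE=HT_BUCKET;  -- restore the default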
Related information:
Joins in Impala SELECT Statements on page 296, Performance Considerations for Join Queries on page 568.
QUERY_TIMEOUT_S Query Option (CDH 5.2 or higher only)
Sets the idle query timeout value for the session, in seconds. Queries that sit idle for longer than the timeout value
are automatically cancelled. If the system administrator specified the --idle_query_timeout startup option,
QUERY_TIMEOUT_S must be smaller than or equal to the --idle_query_timeout value.
Note:
The timeout clock for queries and sessions only starts ticking when the query or session is idle.
For queries, this means the query has results ready but is waiting for a client to fetch the data. A query
can run for an arbitrary time without triggering a timeout, because the query is computing results
rather than sitting idle waiting for the results to be fetched. The timeout period is intended to prevent
unclosed queries from consuming resources and taking up slots in the admission count of running
queries, potentially preventing other queries from starting.
For sessions, this means that no query has been submitted for some period of time.
Syntax:
SET QUERY_TIMEOUT_S=seconds;
Type: numeric
Default: 0 (no timeout if --idle_query_timeout not in effect; otherwise, use --idle_query_timeout value)
Added in: CDH 5.2.0 / Impala 2.0.0
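For example, to cancel queries in the current session after they have been left idle for 10 minutes (an illustrative value,
not a recommendation):
set QUERY_TIMEOUT_S=600;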
Related information:
Setting Timeout Periods for Daemons, Queries, and Sessions on page 70
REPLICA_PREFERENCE Query Option (CDH 5.9 or higher only)
The REPLICA_PREFERENCE query option lets you distribute the work more evenly if hotspots and bottlenecks persist.
It causes the access cost of all replicas of a data block to be considered equal to or worse than the configured value.
This allows Impala to schedule reads to suboptimal replicas (for example, local replicas when cached replicas are
available) in order to distribute the work across more executor nodes.
Allowed values are: CACHE_LOCAL (0), DISK_LOCAL (2), REMOTE (4)
Type: Enum
Default: CACHE_LOCAL (0)
Added in: CDH 5.9.0 / Impala 2.7.0
Usage Notes:
By default, Impala selects the best replica it can find in terms of access cost. The preferred order is cached, local, and
remote. With REPLICA_PREFERENCE, the preference of all replicas is capped at the selected value. For example,
when REPLICA_PREFERENCE is set to DISK_LOCAL, cached and local replicas are treated with equal preference.
When set to REMOTE, all three types of replicas (cached, local, and remote) are treated with equal preference.
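For example, if HDFS caching concentrates reads on a small number of hosts and those hosts become hotspots, you
might relax the preference for the current session as shown below; the setting is illustrative:
-- Treat cached and local replicas equally so scans can be spread across more hosts.
set REPLICA_PREFERENCE=DISK_LOCAL;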
Related information:
Using HDFS Caching with Impala (CDH 5.3 or higher only) on page 593, SCHEDULE_RANDOM_REPLICA Query Option
(CDH 5.7 or higher only) on page 360
REQUEST_POOL Query Option
The pool or queue name that queries should be submitted to. Only applies when you enable the Impala admission
control feature. Specifies the name of the pool used by requests from Impala to the resource manager.
Type: STRING
Default: empty (use the user-to-pool mapping defined by an impalad startup option in the Impala configuration file)
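For example, to route queries from the current session to a specific admission control pool (the pool name below is
hypothetical; substitute a pool defined in your configuration):
set REQUEST_POOL=reporting_pool;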
Related information:
Admission Control and Query Queuing on page 549
RESOURCE_TRACE_RATIO Query Option (CDH 6.2 / Impala 3.2 or higher only)
The RESOURCE_TRACE_RATIO query option specifies the ratio of queries where the CPU usage info will be included
in the profiles. Collecting CPU usage and sending it around adds a slight overhead during query execution. This query
option lets you control whether to collect additional information to diagnose the resource usage.
For example, setting RESOURCE_TRACE_RATIO=1 adds a trace of the CPU usage to the profile of each query.
Setting RESOURCE_TRACE_RATIO=0.5 means that a randomly selected half of all queries will have that information
collected by the coordinator and included in the profiles.
Setting RESOURCE_TRACE_RATIO=0 means that CPU usage will not be tracked and included in the profiles.
Values from 0 to 1 are allowed.
Type: Number
Default: 0
Added in: CDH 6.2
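For example, to collect CPU usage information for roughly one quarter of the queries in the current session (an
illustrative ratio):
set RESOURCE_TRACE_RATIO=0.25;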
RUNTIME_BLOOM_FILTER_SIZE Query Option (CDH 5.7 or higher only)
Size (in bytes) of Bloom filter data structure used by the runtime filtering feature.
Important:
In CDH 5.8 / Impala 2.6 and higher, this query option only applies as a fallback, when statistics are not
available. By default, Impala estimates the optimal size of the Bloom filter structure regardless of the
setting for this option. (This is a change from the original behavior in CDH 5.7 / Impala 2.5.)
In CDH 5.8 / Impala 2.6 and higher, when the value of this query option is used for query planning, it
is constrained by the minimum and maximum sizes specified by the RUNTIME_FILTER_MIN_SIZE
and RUNTIME_FILTER_MAX_SIZE query options. The filter size is adjusted upward or downward if
necessary to fit within the minimum/maximum range.
Type: integer
Default: 1048576 (1 MB)
Maximum: 16 MB
Added in: CDH 5.7.0 / Impala 2.5.0
Usage notes:
This setting affects optimizations for large and complex queries, such as dynamic partition pruning for partitioned
tables, and join optimization for queries that join large tables. Larger filters are more effective at handling higher
cardinality input sets, but consume more memory per filter.
If your query filters on high-cardinality columns (for example, millions of different values) and you do not get the
expected speedup from the runtime filtering mechanism, consider doing some benchmarks with a higher value for
RUNTIME_BLOOM_FILTER_SIZE. The extra memory devoted to the Bloom filter data structures can help make the
filtering more accurate.
Because the runtime filtering feature applies mainly to resource-intensive and long-running queries, only adjust this
query option when tuning long-running queries involving some combination of large partitioned tables and joins
involving large tables.
Because the effectiveness of this setting depends so much on query characteristics and data distribution, you typically
only use it for specific queries that need some extra tuning, and the ideal value depends on the query. Consider setting
this query option immediately before the expensive query and unsetting it immediately afterward.
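For example, an experiment with a larger fallback filter for a single expensive query might look like the following;
the 8 MB value is purely illustrative and should be chosen through benchmarking:
-- Larger fallback Bloom filter for one expensive join (illustrative value).
set RUNTIME_BLOOM_FILTER_SIZE=8388608;
-- ... run the expensive query here ...
-- Restore the default afterward.
set RUNTIME_BLOOM_FILTER_SIZE=1048576;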
Kudu considerations:
This query option affects only Bloom filters, not the min/max filters that are applied to Kudu tables. Therefore, it does
not affect the performance of queries against Kudu tables.
Related information:
Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on page 588, RUNTIME_FILTER_MODE Query Option (CDH
5.7 or higher only) on page 358, RUNTIME_FILTER_MIN_SIZE Query Option (CDH 5.8 or higher only) on page 357,
RUNTIME_FILTER_MAX_SIZE Query Option (CDH 5.8 or higher only) on page 357
RUNTIME_FILTER_MAX_SIZE Query Option (CDH 5.8 or higher only)
The RUNTIME_FILTER_MAX_SIZE query option adjusts the settings for the runtime filtering feature. This option
defines the maximum size for a filter, no matter what the estimates produced by the planner are. This value also
overrides any higher number specified for the RUNTIME_BLOOM_FILTER_SIZE query option. Filter sizes are rounded
up to the nearest power of two.
Type: integer
Default: 0 (meaning use the value from the corresponding impalad startup option)
Added in: CDH 5.8.0 / Impala 2.6.0
Usage notes:
Because the runtime filtering feature applies mainly to resource-intensive and long-running queries, only adjust this
query option when tuning long-running queries involving some combination of large partitioned tables and joins
involving large tables.
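For example, to cap each filter at 4 MB for the current session (an illustrative value; remember that filter sizes are
rounded up to the nearest power of two):
set RUNTIME_FILTER_MAX_SIZE=4194304;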
Kudu considerations:
This query option affects only Bloom filters, not the min/max filters that are applied to Kudu tables. Therefore, it does
not affect the performance of queries against Kudu tables.
Related information:
Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on page 588, RUNTIME_FILTER_MODE Query Option (CDH
5.7 or higher only) on page 358, RUNTIME_FILTER_MIN_SIZE Query Option (CDH 5.8 or higher only) on page 357,
RUNTIME_BLOOM_FILTER_SIZE Query Option (CDH 5.7 or higher only) on page 356
RUNTIME_FILTER_MIN_SIZE Query Option (CDH 5.8 or higher only)
The RUNTIME_FILTER_MIN_SIZE query option adjusts the settings for the runtime filtering feature. This option
defines the minimum size for a filter, no matter what the estimates produced by the planner are. This value also
overrides any lower number specified for the RUNTIME_BLOOM_FILTER_SIZE query option. Filter sizes are rounded
up to the nearest power of two.
Type: integer
Default: 0 (meaning use the value from the corresponding impalad startup option)
Added in: CDH 5.8.0 / Impala 2.6.0
Usage notes:
Because the runtime filtering feature applies mainly to resource-intensive and long-running queries, only adjust this
query option when tuning long-running queries involving some combination of large partitioned tables and joins
involving large tables.
Kudu considerations:
This query option affects only Bloom filters, not the min/max filters that are applied to Kudu tables. Therefore, it does
not affect the performance of queries against Kudu tables.
Related information:
Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on page 588, RUNTIME_FILTER_MODE Query Option (CDH
5.7 or higher only) on page 358, RUNTIME_FILTER_MAX_SIZE Query Option (CDH 5.8 or higher only) on page 357,
RUNTIME_BLOOM_FILTER_SIZE Query Option (CDH 5.7 or higher only) on page 356
RUNTIME_FILTER_MODE Query Option (CDH 5.7 or higher only)
The RUNTIME_FILTER_MODE query option adjusts the settings for the runtime filtering feature. It turns this feature
on and off, and controls how extensively the filters are transmitted between hosts.
Type: numeric (0, 1, 2) or corresponding mnemonic strings (OFF, LOCAL, GLOBAL).
Default: 2 (equivalent to GLOBAL); formerly 1 (equivalent to LOCAL) in CDH 5.7 / Impala 2.5
Added in: CDH 5.7.0 / Impala 2.5.0
Usage notes:
In CDH 5.8 / Impala 2.6 and higher, the default is GLOBAL. This setting is recommended for a wide variety of workloads,
to provide best performance with “out of the box” settings.
The lowest setting of LOCAL does a similar level of optimization (such as partition pruning) as in earlier Impala releases.
This setting was the default in CDH 5.7 / Impala 2.5, to allow for a period of post-upgrade testing for existing workloads.
This setting is suitable for workloads with non-performance-critical queries, or if the coordinator node is under heavy
CPU or memory pressure.
You might change the setting to OFF if your workload contains many queries involving partitioned tables or joins that
do not experience a performance increase from the runtime filters feature. If the overhead of producing the runtime
filters outweighs the performance benefit for queries, you can turn the feature off entirely.
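For example, while investigating runtime filter overhead you might fall back to less aggressive settings for the current
session; the statements below are illustrative:
set RUNTIME_FILTER_MODE=LOCAL;
-- Or disable the feature entirely:
set RUNTIME_FILTER_MODE=OFF;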
Related information:
Partitioning for Impala Tables on page 625 for details about runtime filtering. DISABLE_ROW_RUNTIME_FILTERING
Query Option (CDH 5.7 or higher only) on page 328, RUNTIME_BLOOM_FILTER_SIZE Query Option (CDH 5.7 or higher
only) on page 356, RUNTIME_FILTER_WAIT_TIME_MS Query Option (CDH 5.7 or higher only) on page 358, and
MAX_NUM_RUNTIME_FILTERS Query Option (CDH 5.7 or higher only) on page 339 for tuning options for runtime
filtering.
RUNTIME_FILTER_WAIT_TIME_MS Query Option (CDH 5.7 or higher only)
The RUNTIME_FILTER_WAIT_TIME_MS query option adjusts the settings for the runtime filtering feature. It specifies
a time in milliseconds that each scan node waits for runtime filters to be produced by other plan fragments.
Type: integer
Default: 0 (meaning use the value from the corresponding impalad startup option)
Added in: CDH 5.7.0 / Impala 2.5.0
Usage notes:
Because the runtime filtering feature applies mainly to resource-intensive and long-running queries, only adjust this
query option when tuning long-running queries involving some combination of large partitioned tables and joins
involving large tables.
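For example, if profiles show that scans start before useful filters arrive, you might let scan nodes wait up to 10
seconds in the current session (an illustrative value):
set RUNTIME_FILTER_WAIT_TIME_MS=10000;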
Related information:
Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on page 588, RUNTIME_FILTER_MODE Query Option (CDH
5.7 or higher only) on page 358
S3_SKIP_INSERT_STAGING Query Option (CDH 5.8 or higher only)
Speeds up INSERT operations on tables or partitions residing on the Amazon S3 filesystem. The tradeoff is the possibility
of inconsistent data left behind if an error occurs partway through the operation.
By default, Impala write operations to S3 tables and partitions involve a two-stage process. Impala writes intermediate
files to S3, then (because S3 does not provide a “rename” operation) those intermediate files are copied to their final
location, making the process more expensive than on a filesystem that supports renaming or moving files. This query
option makes Impala skip the intermediate files, and instead write the new data directly to the final destination.
Usage notes:
Important:
If a host that is participating in the INSERT operation fails partway through the query, you might be
left with a table or partition that contains some but not all of the expected data files. Therefore, this
option is most appropriate for a development or test environment where you have the ability to
reconstruct the table if a problem during INSERT leaves the data in an inconsistent state.
The timing of file deletion during an INSERT OVERWRITE operation makes it impractical to write new files to S3 and
delete the old files in a single operation. Therefore, this query option only affects regular INSERT statements that add
to the existing data in a table, not INSERT OVERWRITE statements. Use TRUNCATE TABLE if you need to remove all
contents from an S3 table before performing a fast INSERT with this option enabled.
Performance improvements with this option enabled can be substantial. The speed increase might be more noticeable
for non-partitioned tables than for partitioned tables.
Type: Boolean; recognized values are 1 and 0, or true and false; any other value interpreted as false
Default: true (shown as 1 in output of SET statement)
Added in: CDH 5.8.0 / Impala 2.6.0
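For example, a bulk load into an S3-backed table in a development environment might look like the following; the
table names are hypothetical, and TRUNCATE TABLE is used first because INSERT OVERWRITE is not affected by
this option:
set S3_SKIP_INSERT_STAGING=true;
-- Remove any existing contents, then append new data directly to the final location.
truncate table s3_sales_staging;
insert into s3_sales_staging select * from local_sales_raw;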
Related information:
Using Impala with the Amazon S3 Filesystem on page 692
SCAN_BYTES_LIMIT Query Option (CDH 6.1 or higher only)
The SCAN_BYTES_LIMIT query option sets a limit on the bytes scanned by HDFS and HBase SCAN operations. If a
query is still executing when the query’s coordinator detects that it has exceeded the limit, the query is terminated
with an error. The option is intended to prevent runaway queries that scan more data than is intended.
For example, an Impala administrator could set a default value of SCAN_BYTES_LIMIT=100GB for a resource pool to
automatically kill queries that scan more than 100 GB of data (see Impala Admission Control and Query Queuing for
information about default query options). If a user accidentally omits a partition filter in a WHERE clause and runs a
large query that scans a lot of data, the query will be automatically terminated after it scans more data than the
SCAN_BYTES_LIMIT.
You can override the default value per-query or per-session, in the same way as other query options, if you do not
want the default SCAN_BYTES_LIMIT value to apply to a specific query or session.
Note:
• Only data actually read from the underlying storage layer is counted towards the limit. For example, Impala’s
Parquet scanner employs several techniques to skip over data in a file that is not relevant to a
specific query, so often only a fraction of the file size is counted towards SCAN_BYTES_LIMIT.
• As of Impala 3.1, bytes scanned by Kudu tablet servers are not counted towards the limit.
Because the checks are done periodically, the query may scan over the limit at times.
Syntax: SET SCAN_BYTES_LIMIT=bytes;
Type: numeric
Units:
• A numeric argument represents memory size in bytes.
• Specify a suffix of m or mb for megabytes.
• Specify a suffix of g or gb for gigabytes.
• If you specify a suffix in an unrecognized format, subsequent queries fail with an error.
Default: 0 (no limit)
Added in: CDH 6.1
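For example, to guard an ad hoc session against accidentally scanning an entire large table (the 50 GB threshold is
illustrative):
set SCAN_BYTES_LIMIT=50g;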
SCHEDULE_RANDOM_REPLICA Query Option (CDH 5.7 or higher only)
The SCHEDULE_RANDOM_REPLICA query option fine-tunes the scheduling algorithm for deciding which host processes
each HDFS data block or Kudu tablet to reduce the chance of CPU hotspots.
By default, Impala estimates how much work each host has done for the query, and selects the host that has the lowest
workload. This algorithm is intended to reduce CPU hotspots arising when the same host is selected to process multiple
data blocks / tablets. Use the SCHEDULE_RANDOM_REPLICA query option if hotspots still arise for some combinations
of queries and data layout.
The SCHEDULE_RANDOM_REPLICA query option only applies to tables and partitions that are not enabled for HDFS
caching.
Type: Boolean; recognized values are 1 and 0, or true and false; any other value interpreted as false
Default: false
Added in: CDH 5.7.0 / Impala 2.5.0
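For example, if query profiles show the same host repeatedly chosen for the busiest scan ranges, you might enable
random replica selection for the current session as an experiment:
set SCHEDULE_RANDOM_REPLICA=true;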
Related information:
Using HDFS Caching with Impala (CDH 5.3 or higher only) on page 593, Avoiding CPU Hotspots for HDFS Cached Data
on page 612 , REPLICA_PREFERENCE Query Option (CDH 5.9 or higher only) on page 355
SCRATCH_LIMIT Query Option
Specifies the maximum amount of disk storage, in bytes, that any Impala query can consume on any host using the
“spill to disk” mechanism that handles queries that exceed the memory limit.
Syntax:
Specify the size in bytes, or with a trailing m or g character to indicate megabytes or gigabytes. For example:
-- 128 megabytes.
set SCRATCH_LIMIT=134217728;
-- 512 megabytes.
set SCRATCH_LIMIT=512m;
-- 1 gigabyte.
set SCRATCH_LIMIT=1g;
Usage notes:
A value of zero turns off the spill to disk feature for queries in the current session, causing them to fail immediately if
they exceed the memory limit.
The amount of memory used per host for a query is limited by the MEM_LIMIT query option.
The more DataNodes in the cluster, the less memory is used on each host, and therefore also less scratch space is
required for queries that exceed the memory limit.
Type: numeric, with optional unit specifier
Default: -1 (amount of spill space is unlimited)
Related information:
SQL Operations that Spill to Disk on page 607, MEM_LIMIT Query Option on page 343
SHUFFLE_DISTINCT_EXPRS Query Option
The SHUFFLE_DISTINCT_EXPRS query option controls the shuffling behavior when a query has both grouping and
distinct expressions. Impala can optionally include the distinct expressions in the hash exchange to spread the data
among more nodes. However, this plan requires one more hash exchange phase.
It is recommended that you turn off this option if the NDVs of the grouping expressions are high.
Type: Boolean; recognized values are 1 and 0, or true and false; any other value interpreted as false
Default: false
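For example, you might compare the plans produced with and without the extra hash exchange for a query that
combines grouping and a DISTINCT aggregate; the table and column names below are hypothetical:
set SHUFFLE_DISTINCT_EXPRS=true;
explain select region, count(distinct customer_id) from sales group by region;
set SHUFFLE_DISTINCT_EXPRS=false;
explain select region, count(distinct customer_id) from sales group by region;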
SUPPORT_START_OVER Query Option
Leave this setting at its default value. It is a read-only setting, tested by some client applications such as Hue.
If you accidentally change it through impala-shell, subsequent queries encounter errors until you undo the change
by issuing UNSET support_start_over.
Type: Boolean; recognized values are 1 and 0, or true and false; any other value interpreted as false
Default: false
SYNC_DDL Query Option
When enabled, causes any DDL operation such as CREATE TABLE or ALTER TABLE to return only when the changes
have been propagated to all other Impala nodes in the cluster by the Impala catalog service. That way, if you issue a
subsequent CONNECT statement in impala-shell to connect to a different node in the cluster, you can be sure that the
other node will already recognize any added or changed tables. (The catalog service broadcasts the DDL changes to
all nodes automatically, but without this option there could be a period of inconsistency if you quickly switched to
another node, such as by issuing a subsequent query through a load-balancing proxy.)
Although INSERT is classified as a DML statement, when the SYNC_DDL option is enabled, INSERT statements also
delay their completion until all the underlying data and metadata changes are propagated to all Impala nodes. Internally,
Impala inserts have similarities with DDL statements in traditional database systems, because they create metadata
needed to track HDFS block locations for new files and they potentially add new partitions to partitioned tables.
Note: Because this option can introduce a delay after each write operation, if you are running a
sequence of CREATE DATABASE, CREATE TABLE, ALTER TABLE, INSERT, and similar statements
within a setup script, to minimize the overall delay you can enable the SYNC_DDL query option only
near the end, before the final DDL statement.
Type: Boolean; recognized values are 1 and 0, or true and false; any other value interpreted as false
Default: false (shown as 0 in output of SET statement)
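For example, near the end of a setup script you might enable the option only for the final statement, so that the earlier
statements do not each wait for metadata propagation; the object names are hypothetical:
create table staging_events (id bigint, payload string);
-- ... more DDL and INSERT statements run without SYNC_DDL ...
set SYNC_DDL=true;
alter table staging_events add columns (event_time timestamp);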
Related information:
DDL Statements on page 203
THREAD_RESERVATION_AGGREGATE_LIMIT Query Option (CDH 6.1 or higher only)
The THREAD_RESERVATION_AGGREGATE_LIMIT query option limits the number of reserved threads for a query
across all nodes on which it is executing. The option is intended to prevent execution of complex queries that can
consume excessive CPU or operating system resources on a cluster. Queries that have more threads than this threshold
are rejected by Impala’s admission controller before they start executing.
For example, an Impala administrator could set a default value of THREAD_RESERVATION_AGGREGATE_LIMIT=2000
for a resource pool on a 100-node cluster where they expect only relatively simple queries, with fewer than 20 threads
per node, to run. This setting rejects queries that require more than 2000 reserved threads across all nodes, for example,
a query with 21 fragments running on all 100 nodes of the cluster.
You can override the default value per-query or per-session, in the same way as other query options, if you do not
want the default THREAD_RESERVATION_AGGREGATE_LIMIT value to apply to a specific query or session.
Syntax: SET THREAD_RESERVATION_AGGREGATE_LIMIT=number;
Type: numeric
Default: 0 (no limit)
Added in: CDH 6.1
THREAD_RESERVATION_LIMIT Query Option (CDH 6.1 or higher only)
The THREAD_RESERVATION_LIMIT query option limits the number of reserved threads for a query on each node.
The option is intended to prevent execution of complex queries that can consume excessive CPU or operating system
resources on a single node. Queries that have more threads per node than this threshold are rejected by Impala’s
admission controller before they start executing. You can see the number of reserved threads for a query in its explain
plan in the “Per-Host Resource Reservation” line.
For example, an Impala administrator could set a default value of THREAD_RESERVATION_LIMIT=100 for a resource
pool where they expect only relatively simple queries to run. This will reject queries that require more than 100 reserved
threads on a node, for example, queries with more than 100 fragments.
You can override the default value per-query or per-session, in the same way as other query options, if you do not
want the default THREAD_RESERVATION_LIMIT value to apply to a specific query or session.
Note: The number of reserved threads on a node may be lower than the maximum value in the
explain plan if not all fragments of that query are scheduled on every node.
Syntax: SET THREAD_RESERVATION_LIMIT=number;
Type: numeric
Default: 3000
Added in: CDH 6.1
TIMEZONE Query Option (CDH 6.1 / Impala 3.1 or higher only)
The TIMEZONE query option defines the timezone used for conversions between UTC and the local time. If not set,
Impala uses the system time zone where the Coordinator Impalad runs. As query options are not sent to the Coordinator
immediately, the timezones are validated only when the query runs.
Impala takes the timezone into consideration in the following cases:
• When calling the NOW() function
• When converting between Unix time and timestamp if the use_local_tz_for_unix_timestamp_conversions
flag is TRUE
• When reading Parquet timestamps written by Hive if the convert_legacy_hive_parquet_utc_timestamps
flag is TRUE
Syntax:
SET TIMEZONE=time zone
time zone can be a canonical code or a time zone name defined in the IANA Time Zone Database. The value is case-sensitive.
Leading/trailing quotes (') and double quotes (") are stripped.
If time zone is an empty string, the time zone for the query is set to the default time zone of the Impalad Coordinator.
If time zone is NULL or a space character, Impala returns an error when the query is executed.
Type: String
Default: The system time zone where the Coordinator Impalad runs
Examples:
SET TIMEZONE=UTC;
SET TIMEZONE="Europe/Budapest";
Added in: CDH 6.1
TOPN_BYTES_LIMIT Query Option (CDH 6.1 / Impala 3.1 or higher only)
The TOPN_BYTES_LIMIT query option places a limit on the estimated number of bytes that Impala can process for
top-N queries.
Top-N queries are the queries that include both ORDER BY and LIMIT clauses. Top-N queries do not spill to disk, so
they have to keep all rows they process in memory, and those queries can cause out-of-memory issues when running
with a large limit and an offset. If the Impala planner estimates that a top-N operator will process more bytes than the
TOPN_BYTES_LIMIT value, it replaces the top-N operator with the sort operator. Switching to the sort operator
allows Impala to spill to disk, thus requiring less memory than top-N, but potentially with performance penalties.
The option has no effect when set to 0 or -1.
Syntax:
SET TOPN_BYTES_LIMIT=limit
Type: Number
Default: 536870912 (512 MB)
Added in: CDH 6.1
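For example, to make queries with very large LIMIT and OFFSET clauses in the current session switch to the spillable
sort operator at a lower threshold (the 256 MB value is illustrative):
set TOPN_BYTES_LIMIT=268435456;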
SHOW Statement
The SHOW statement is a flexible way to get information about different types of Impala objects.
Syntax:
SHOW DATABASES [[LIKE] 'pattern']
SHOW SCHEMAS [[LIKE] 'pattern'] - an alias for SHOW DATABASES
SHOW TABLES [IN database_name] [[LIKE] 'pattern']
SHOW [AGGREGATE | ANALYTIC] FUNCTIONS [IN database_name] [[LIKE] 'pattern']
SHOW CREATE TABLE [database_name].table_name
SHOW CREATE VIEW [database_name].view_name
SHOW TABLE STATS [database_name.]table_name
SHOW COLUMN STATS [database_name.]table_name
SHOW [RANGE] PARTITIONS [database_name.]table_name
SHOW FILES IN [database_name.]table_name [PARTITION (key_col_expression [,
key_col_expression])]
SHOW ROLES
SHOW CURRENT ROLES
SHOW ROLE GRANT GROUP group_name
SHOW GRANT ROLE role_name
SHOW GRANT USER user_name
SHOW GRANT USER user_name ON SERVER
SHOW GRANT USER user_name ON DATABASE database_name
SHOW GRANT USER user_name ON TABLE table_name
SHOW GRANT USER user_name ON URI uri
Issue a SHOW object_type statement to see the appropriate objects in the current database, or SHOW object_type
IN database_name to see objects in a specific database.
The optional pattern argument is a quoted string literal, using Unix-style * wildcards and allowing | for alternation.
The preceding LIKE keyword is also optional. All object names are stored in lowercase, so use all lowercase letters in
the pattern string. For example:
show databases 'a*';
show databases like 'a*';
show tables in some_db like '*fact*';
use some_db;
show tables '*dim*|*fact*';
Cancellation: Cannot be cancelled.
SHOW FILES Statement
The SHOW FILES statement displays the files that constitute a specified table, or a partition within a partitioned table.
This syntax is available in CDH 5.4 / Impala 2.2 and higher only. The output includes the names of the files, the size of
each file, and the applicable partition for a partitioned table. The size includes a suffix of B for bytes, MB for megabytes,
and GB for gigabytes.
In CDH 5.10 / Impala 2.8 and higher, you can use general expressions with operators such as <, IN, LIKE, and BETWEEN
in the PARTITION clause, instead of only equality operators. For example:
show files in sample_table partition (j < 5);
show files in sample_table partition (k = 3, l between 1 and 10);
show files in sample_table partition (month like 'J%');
Note: This statement applies to tables and partitions stored on HDFS, or in the Amazon Simple Storage
Service (S3). It does not apply to views. It does not apply to tables mapped onto HBase or Kudu,
because those data management systems do not use the same file-based storage layout.
Usage notes:
You can use this statement to verify the results of your ETL process: that is, that the expected files are present, with
the expected sizes. You can examine the file information to detect conditions such as empty files, missing files, or
inefficient layouts due to a large number of small files. When you use INSERT statements to copy from one table to
another, you can see how the file layout changes due to file format conversions, compaction of small input files into
large data blocks, and multiple output files from parallel queries and partitioned inserts.
The output from this statement does not include files that Impala considers to be hidden or invisible, such as those
whose names start with a dot or an underscore, or that end with the suffixes .copying or .tmp.
The information for partitioned tables complements the output of the SHOW PARTITIONS statement, which summarizes
information about each partition. SHOW PARTITIONS produces some output for each partition, while SHOW FILES
does not produce any output for empty partitions because they do not include any data files.
HDFS permissions:
The user ID that the impalad daemon runs under, typically the impala user, must have read permission for all the
table files, read and execute permission for all the directories that make up the table, and execute permission for the
database directory and all its parent directories.
Examples:
The following example shows a SHOW FILES statement for an unpartitioned table using text format:
[localhost:21000] > create table unpart_text (x bigint, s string);
[localhost:21000] > insert into unpart_text (x, s) select id, name
> from oreilly.sample_data limit 20e6;
[localhost:21000] > show files in unpart_text;
+---------------------------------------------------------------------+----------+-----------+
| path | size | partition |
+---------------------------------------------------------------------+----------+-----------+
| hdfs://impala_data_dir/d.db/unpart_text/35665776ef85cfaf_1012432410_data.0. | 448.31MB | |
+---------------------------------------------------------------------+----------+-----------+
[localhost:21000] > insert into unpart_text (x, s) select id, name from oreilly.sample_data limit 100e6;
[localhost:21000] > show files in unpart_text;
+-----------------------------------------------------------------------------+----------+-----------+
| path | size | partition |
+-----------------------------------------------------------------------------+----------+-----------+
| hdfs://impala_data_dir/d.db/unpart_text/35665776ef85cfaf_1012432410_data.0. | 448.31MB | |
| hdfs://impala_data_dir/d.db/unpart_text/ac3dba252a8952b8_1663177415_data.0. | 2.19GB | |
+-----------------------------------------------------------------------------+----------+-----------+
This example illustrates how, after issuing some INSERT ... VALUES statements, the table now contains some tiny
files of just a few bytes. Such small files could cause inefficient processing of parallel queries that are expecting
multi-megabyte input files. The example shows how you might compact the small files by doing an INSERT ...
SELECT into a different table, possibly converting the data to Parquet in the process:
[localhost:21000] > insert into unpart_text values (10,'hello'), (20, 'world');
[localhost:21000] > insert into unpart_text values (-1,'foo'), (-1000, 'bar');
[localhost:21000] > show files in unpart_text;
+-----------------------------------------------------------------------------+----------+
| path | size |
+-----------------------------------------------------------------------------+----------+
| hdfs://impala_data_dir/d.db/unpart_text/4f11b8bdf8b6aa92_238145083_data.0. | 18B |
| hdfs://impala_data_dir/d.db/unpart_text/35665776ef85cfaf_1012432410_data.0. | 448.31MB |
| hdfs://impala_data_dir/d.db/unpart_text/ac3dba252a8952b8_1663177415_data.0. | 2.19GB |
| hdfs://impala_data_dir/d.db/unpart_text/cfb8252452445682_1868457216_data.0. | 17B |
+-----------------------------------------------------------------------------+----------+
[localhost:21000] > create table unpart_parq stored as parquet as select * from unpart_text;
+---------------------------+
| summary |
+---------------------------+
| Inserted 120000002 row(s) |
+---------------------------+
[localhost:21000] > show files in unpart_parq;
+---------------------------------------------------------------------------------+----------+
| path | size |
+---------------------------------------------------------------------------------+----------+
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630184_549959007_data.0.parq | 255.36MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630184_549959007_data.1.parq | 178.52MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630185_549959007_data.0.parq | 255.37MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630185_549959007_data.1.parq | 57.71MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630186_2141167244_data.0.parq | 255.40MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630186_2141167244_data.1.parq | 175.52MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630187_1006832086_data.0.parq | 255.40MB |
| hdfs://impala_data_dir/d.db/unpart_parq/60798d96ba630187_1006832086_data.1.parq | 214.61MB |
+---------------------------------------------------------------------------------+----------+
The following example shows a SHOW FILES statement for a partitioned text table with data in two different partitions,
and two empty partitions. The partitions with no data are not represented in the SHOW FILES output.
[localhost:21000] > create table part_text (x bigint, y int, s string)
> partitioned by (year bigint, month bigint, day bigint);
[localhost:21000] > insert overwrite part_text (x, y, s) partition (year=2014,month=1,day=1)
> select id, val, name from oreilly.normalized_parquet
> where id between 1 and 1000000;
[localhost:21000] > insert overwrite part_text (x, y, s) partition (year=2014,month=1,day=2)
> select id, val, name from oreilly.normalized_parquet
> where id between 1000001 and 2000000;
[localhost:21000] > alter table part_text add partition (year=2014,month=1,day=3);
[localhost:21000] > alter table part_text add partition (year=2014,month=1,day=4);
[localhost:21000] > show partitions part_text;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+--------+-------------------+
| year | month | day | #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats |
+-------+-------+-----+-------+--------+---------+--------------+-------------------+--------+-------------------+
| 2014 | 1 | 1 | -1 | 4 | 25.16MB | NOT CACHED | NOT CACHED | TEXT | false |
| 2014 | 1 | 2 | -1 | 4 | 26.22MB | NOT CACHED | NOT CACHED | TEXT | false |
| 2014 | 1 | 3 | -1 | 0 | 0B | NOT CACHED | NOT CACHED | TEXT | false |
| 2014 | 1 | 4 | -1 | 0 | 0B | NOT CACHED | NOT CACHED | TEXT | false |
| Total | | | -1 | 8 | 51.38MB | 0B | | | |
+-------+-------+-----+-------+--------+---------+--------------+-------------------+--------+-------------------+
[localhost:21000] > show files in part_text;
+------------------------------------------------------------------------------------------------+--------+-------------------------+
| path | size | partition |
+------------------------------------------------------------------------------------------------+--------+-------------------------+
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=1/80732d9dc80689f_1418645991_data.0. | 5.77MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=1/80732d9dc8068a0_1418645991_data.0. | 6.25MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=1/80732d9dc8068a1_147082319_data.0. | 7.16MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=1/80732d9dc8068a2_2111411753_data.0. | 5.98MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=2/21a828cf494b5bbb_501271652_data.0. | 6.42MB | year=2014/month=1/day=2 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=2/21a828cf494b5bbc_501271652_data.0. | 6.62MB | year=2014/month=1/day=2 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=2/21a828cf494b5bbd_1393490200_data.0. | 6.98MB | year=2014/month=1/day=2 |
| hdfs://impala_data_dir/d.db/part_text/year=2014/month=1/day=2/21a828cf494b5bbe_1393490200_data.0. | 6.20MB | year=2014/month=1/day=2 |
+------------------------------------------------------------------------------------------------+--------+-------------------------+
The following example shows a SHOW FILES statement for a partitioned Parquet table. The number and sizes of files
are different from the equivalent partitioned text table used in the previous example, because INSERT operations for
Parquet tables are parallelized differently than for text tables. (Also, the amount of data is so small that it can be written
to Parquet without involving all the hosts in this 4-node cluster.)
[localhost:21000] > create table part_parq (x bigint, y int, s string)
> partitioned by (year bigint, month bigint, day bigint) stored as parquet;
[localhost:21000] > insert into part_parq partition (year,month,day) select x, y, s, year, month, day from partitioned_text;
[localhost:21000] > show partitions part_parq;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+-------------------+
| year | month | day | #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats |
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+-------------------+
| 2014 | 1 | 1 | -1 | 3 | 17.89MB | NOT CACHED | NOT CACHED | PARQUET | false |
| 2014 | 1 | 2 | -1 | 3 | 17.89MB | NOT CACHED | NOT CACHED | PARQUET | false |
| Total | | | -1 | 6 | 35.79MB | 0B | | | |
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+-------------------+
[localhost:21000] > show files in part_parq;
+--------------------------------------------------------------------------------------+--------+-------------------------+
| path | size | partition |
+--------------------------------------------------------------------------------------+--------+-------------------------+
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=1/1134113650_data.0.parq | 4.49MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=1/617567880_data.0.parq | 5.14MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=1/2099499416_data.0.parq | 8.27MB | year=2014/month=1/day=1 |
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=2/945567189_data.0.parq | 8.80MB | year=2014/month=1/day=2 |
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=2/2145850112_data.0.parq | 4.80MB | year=2014/month=1/day=2 |
| hdfs://impala_data_dir/d.db/part_parq/year=2014/month=1/day=2/665613448_data.0.parq | 4.29MB | year=2014/month=1/day=2 |
+--------------------------------------------------------------------------------------+--------+-------------------------+
The following example shows output from the SHOW FILES statement for a table where the data files are stored in
Amazon S3:
[localhost:21000] > show files in s3_testing.sample_data_s3;
+-----------------------------------------------------------------------+---------+
| path | size |
+-----------------------------------------------------------------------+---------+
| s3a://impala-demo/sample_data/e065453cba1988a6_1733868553_data.0.parq | 24.84MB |
+-----------------------------------------------------------------------+---------+
SHOW ROLES Statement
The SHOW ROLES statement displays roles. This syntax is available in CDH 5.2 / Impala 2.0 and later only, when you
are using the Sentry authorization framework along with the Sentry service, as described in Using Impala with the
Sentry Service (CDH 5.1 or higher only) on page 91.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Kudu considerations:
Examples:
Depending on the roles set up within your organization by the CREATE ROLE statement, the output might look
something like this:
show roles;
+-----------+
| role_name |
+-----------+
| analyst |
| role1 |
| sales |
| superuser |
| test_role |
+-----------+
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Related information:
Enabling Sentry Authorization for Impala on page 87
SHOW CURRENT ROLES
The SHOW CURRENT ROLES statement displays roles assigned to the current user. This syntax is available in CDH 5.2
/ Impala 2.0 and later only, when you are using the Sentry authorization framework along with the Sentry service, as
described in Using Impala with the Sentry Service (CDH 5.1 or higher only) on page 91.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Kudu considerations:
Examples:
Depending on the roles set up within your organization by the CREATE ROLE statement, the output might look
something like this:
show current roles;
+-----------+
| role_name |
+-----------+
| role1 |
| superuser |
+-----------+
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Related information:
Enabling Sentry Authorization for Impala on page 87
SHOW ROLE GRANT GROUP Statement
The SHOW ROLE GRANT GROUP statement lists all the roles assigned to the specified group. This statement is only
allowed for Sentry administrative users and other users that are part of the specified group. This syntax is available
in CDH 5.2 / Impala 2.0 and later only, when you are using the Sentry authorization framework along with the Sentry
service, as described in Using Impala with the Sentry Service (CDH 5.1 or higher only) on page 91.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Kudu considerations:
Related information:
Enabling Sentry Authorization for Impala on page 87
SHOW GRANT ROLE Statement
The SHOW GRANT ROLE statement lists all the grants for the given role name. This statement is only allowed for Sentry
administrative users and other users that have been granted the specified role. This syntax is available in CDH 5.2 /
Impala 2.0 and later only, when you are using the Sentry authorization framework along with the Sentry service, as
described in Using Impala with the Sentry Service (CDH 5.1 or higher only) on page 91.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Kudu considerations:
Related information:
Enabling Sentry Authorization for Impala on page 87
SHOW GRANT USER Statement
The SHOW GRANT USER statement shows the list of privileges for a given user. This statement is only allowed for Sentry
administrative users. However, the current user can run SHOW GRANT USER for themselves.
This syntax is available in CDH 6.1 / Impala 3.1 and later only, when you are using the Sentry authorization framework
along with the Sentry service, as described in Using Impala with the Sentry Service (CDH 5.1 or higher only) on page
91.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
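For example, to review the privileges granted to a particular user, either overall or on a specific database (the user
and database names below are hypothetical):
show grant user etl_user;
show grant user etl_user on database sales_db;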
Related information:
Enabling Sentry Authorization for Impala on page 87
SHOW DATABASES
The SHOW DATABASES statement is often the first one you issue when connecting to an instance for the first time.
You typically issue SHOW DATABASES to see the names you can specify in a USE db_name statement, then after
switching to a database you issue SHOW TABLES to see the names you can specify in SELECT and INSERT statements.
In CDH 5.7 / Impala 2.5 and higher, the output includes a second column showing any associated comment for each
database.
The output of SHOW DATABASES includes the special _impala_builtins database, which lets you view definitions
of built-in functions, as described under SHOW FUNCTIONS.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Examples:
This example shows how you might locate a particular table on an unfamiliar system. The DEFAULT database is the
one you initially connect to; a database with that name is present on every system. You can issue SHOW TABLES IN
db_name without going into a database, or SHOW TABLES once you are inside a particular database.
[localhost:21000] > show databases;
+------------------+----------------------------------------------+
| name | comment |
+------------------+----------------------------------------------+
| _impala_builtins | System database for Impala builtin functions |
| default | Default Hive database |
| file_formats | |
+------------------+----------------------------------------------+
Returned 3 row(s) in 0.02s
[localhost:21000] > show tables in file_formats;
+--------------------+
| name |
+--------------------+
| parquet_table |
| rcfile_table |
| sequencefile_table |
| textfile_table |
+--------------------+
Returned 4 row(s) in 0.01s
[localhost:21000] > use file_formats;
[localhost:21000] > show tables like '*parq*';
+--------------------+
| name |
+--------------------+
| parquet_table |
+--------------------+
Returned 1 row(s) in 0.01s
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Related information:
Overview of Impala Databases on page 193, CREATE DATABASE Statement on page 226, DROP DATABASE Statement on
page 262, USE Statement on page 385 SHOW TABLES Statement on page 369, SHOW FUNCTIONS Statement on page 379
SHOW TABLES Statement
Displays the names of tables. By default, lists tables in the current database, or with the IN clause, in a specified
database. By default, lists all tables, or with the LIKE clause, only those whose name match a pattern with * wildcards.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
The user ID that the impalad daemon runs under, typically the impala user, must have read and execute permissions
for all directories that are part of the table. (A table could span multiple different HDFS directories if it is partitioned.
The directories could be widely scattered because a partition can reside in an arbitrary HDFS directory based on its
LOCATION attribute.)
Examples:
The following examples demonstrate the SHOW TABLES statement. If the database contains no tables, the result set
is empty. If the database does contain tables, SHOW TABLES IN db_name lists all the table names. SHOW TABLES
with no qualifiers lists all the table names in the current database.
create database empty_db;
show tables in empty_db;
Fetched 0 row(s) in 0.11s
create database full_db;
create table full_db.t1 (x int);
create table full_db.t2 like full_db.t1;
show tables in full_db;
+------+
| name |
+------+
| t1 |
| t2 |
+------+
use full_db;
show tables;
+------+
| name |
+------+
| t1 |
| t2 |
+------+
This example demonstrates how SHOW TABLES LIKE 'wildcard_pattern' lists table names that match a pattern,
or multiple alternative patterns. The ability to do wildcard matches for table names makes it helpful to establish naming
conventions for tables to conveniently locate a group of related tables.
create table fact_tbl (x int);
create table dim_tbl_1 (s string);
create table dim_tbl_2 (s string);
/* Asterisk is the wildcard character. Only 2 out of the 3 just-created tables are
returned. */
show tables like 'dim*';
+-----------+
| name |
+-----------+
| dim_tbl_1 |
| dim_tbl_2 |
+-----------+
/* We are already in the FULL_DB database, but just to be sure we can specify the database
name also. */
show tables in full_db like 'dim*';
+-----------+
| name |
+-----------+
| dim_tbl_1 |
| dim_tbl_2 |
+-----------+
/* The pipe character separates multiple wildcard patterns. */
show tables like '*dim*|t*';
+-----------+
| name |
+-----------+
| dim_tbl_1 |
| dim_tbl_2 |
| t1 |
| t2 |
+-----------+
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Related information:
Overview of Impala Tables on page 196, CREATE TABLE Statement on page 234, ALTER TABLE Statement on page 205,
DROP TABLE Statement on page 268, DESCRIBE Statement on page 251, SHOW CREATE TABLE Statement on page 370,
SHOW TABLE STATS Statement on page 373, SHOW DATABASES on page 368, SHOW FUNCTIONS Statement on page
379
SHOW CREATE TABLE Statement
As a schema changes over time, you might run a CREATE TABLE statement followed by several ALTER TABLE
statements. To capture the cumulative effect of all those statements, SHOW CREATE TABLE displays a CREATE TABLE
statement that would reproduce the current structure of a table. You can use this output in scripts that set up or clone
a group of tables, rather than trying to reproduce the original sequence of CREATE TABLE and ALTER TABLE
statements. When creating variations on the original table, or cloning the original table on a different system, you
might need to edit the SHOW CREATE TABLE output to change things such as the database name, LOCATION field,
and so on that might be different on the destination system.
If you specify a view name in the SHOW CREATE TABLE statement, it returns a CREATE VIEW statement with column
names and the original SQL statement to reproduce the view. You need the VIEW_METADATA privilege on the view
and SELECT privilege on all underlying views and tables to successfully run the SHOW CREATE VIEW statement for
a view. SHOW CREATE VIEW is available as an alias for SHOW CREATE TABLE.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Kudu considerations:
For Kudu tables:
• The column specifications include attributes such as NULL, NOT NULL, ENCODING, and COMPRESSION. If you do
not specify those attributes in the original CREATE TABLE statement, the SHOW CREATE TABLE output displays
the defaults that were used.
• The specifications of any RANGE clauses are not displayed in full. To see the definition of the range clauses for a
Kudu table, use the SHOW RANGE PARTITIONS statement.
• The TBLPROPERTIES output reflects the Kudu master address and the internal Kudu name associated with the
Impala table.
show CREATE TABLE numeric_grades_default_letter;
+------------------------------------------------------------------------------------------------+
| result |
+------------------------------------------------------------------------------------------------+
| CREATE TABLE user.numeric_grades_default_letter ( |
| score TINYINT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| letter_grade STRING NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION DEFAULT '-', |
| student STRING NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| PRIMARY KEY (score) |
| ) |
| PARTITION BY RANGE (score) (...) |
| STORED AS KUDU |
| TBLPROPERTIES ('kudu.master_addresses'='vd0342.example.com:7051') |
+------------------------------------------------------------------------------------------------+
show range partitions numeric_grades_default_letter;
+--------------------+
| RANGE (score) |
+--------------------+
| 0 <= VALUES < 50 |
| 50 <= VALUES < 65 |
| 65 <= VALUES < 80 |
| 80 <= VALUES < 100 |
+--------------------+
Examples:
The following example shows how various clauses from the CREATE TABLE statement are represented in the output
of SHOW CREATE TABLE.
create table show_create_table_demo (id int comment "Unique ID", y double, s string)
partitioned by (year smallint)
stored as parquet;
show create table show_create_table_demo;
+----------------------------------------------------------------------------------------+
| result |
+----------------------------------------------------------------------------------------+
| CREATE TABLE scratch.show_create_table_demo ( |
| id INT COMMENT 'Unique ID', |
| y DOUBLE, |
| s STRING |
| ) |
| PARTITIONED BY ( |
| year SMALLINT |
| ) |
| STORED AS PARQUET |
| LOCATION 'hdfs://127.0.0.1:8020/user/hive/warehouse/scratch.db/show_create_table_demo' |
| TBLPROPERTIES ('transient_lastDdlTime'='1418152582') |
+----------------------------------------------------------------------------------------+
The following example shows how, after a sequence of ALTER TABLE statements, the output from SHOW CREATE
TABLE represents the current state of the table. This output could be used to create a matching table rather than
executing the original CREATE TABLE and sequence of ALTER TABLE statements.
alter table show_create_table_demo drop column s;
alter table show_create_table_demo set fileformat textfile;
show create table show_create_table_demo;
+----------------------------------------------------------------------------------------+
| result |
+----------------------------------------------------------------------------------------+
| CREATE TABLE scratch.show_create_table_demo ( |
| id INT COMMENT 'Unique ID', |
| y DOUBLE |
| ) |
| PARTITIONED BY ( |
| year SMALLINT |
| ) |
| STORED AS TEXTFILE |
| LOCATION 'hdfs://127.0.0.1:8020/user/hive/warehouse/demo.db/show_create_table_demo' |
| TBLPROPERTIES ('transient_lastDdlTime'='1418152638') |
+----------------------------------------------------------------------------------------+
Related information:
CREATE TABLE Statement on page 234, DESCRIBE Statement on page 251, SHOW TABLES Statement on page 369
SHOW CREATE VIEW Statement
The SHOW CREATE VIEW statement returns a CREATE VIEW statement with column names and the original SQL statement
to reproduce the view. You need the VIEW_METADATA privilege on the view and SELECT privilege on all underlying
views and tables to successfully run the SHOW CREATE VIEW statement for a view.
SHOW CREATE VIEW is an alias for SHOW CREATE TABLE.
SHOW TABLE STATS Statement
The SHOW TABLE STATS and SHOW COLUMN STATS variants are important for tuning performance and diagnosing
performance issues, especially with the largest tables and the most complex join queries.
Any values that are not available (because the COMPUTE STATS statement has not been run yet) are displayed as -1.
SHOW TABLE STATS provides some general information about the table, such as the number of files, overall size of
the data, whether some or all of the data is in the HDFS cache, and the file format, that is useful whether or not you
have run the COMPUTE STATS statement. A -1 in the #Rows output column indicates that the COMPUTE STATS
statement has never been run for this table. If the table is partitioned, SHOW TABLE STATS provides this information
for each partition. (It produces the same output as the SHOW PARTITIONS statement in this case.)
The output of SHOW COLUMN STATS is primarily only useful after the COMPUTE STATS statement has been run on
the table. A -1 in the #Distinct Values output column indicates that the COMPUTE STATS statement has never
been run for this table. Currently, Impala always leaves the #Nulls column as -1, even after COMPUTE STATS has
been run.
These SHOW statements work on actual tables only, not on views.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Kudu considerations:
Because Kudu tables do not have characteristics derived from HDFS, such as number of files, file format, and HDFS
cache status, the output of SHOW TABLE STATS reflects different characteristics that apply to Kudu tables. If the Kudu
table is created with the clause PARTITIONS 20, then the result set of SHOW TABLE STATS consists of 20 rows, each
representing one of the numbered partitions. For example:
show table stats kudu_table;
+--------+-----------+----------+-----------------------+------------+
| # Rows | Start Key | Stop Key | Leader Replica | # Replicas |
+--------+-----------+----------+-----------------------+------------+
| -1 | | 00000001 | host.example.com:7050 | 3 |
| -1 | 00000001 | 00000002 | host.example.com:7050 | 3 |
| -1 | 00000002 | 00000003 | host.example.com:7050 | 3 |
| -1 | 00000003 | 00000004 | host.example.com:7050 | 3 |
| -1 | 00000004 | 00000005 | host.example.com:7050 | 3 |
...
Impala does not compute the number of rows for each partition for Kudu tables. Therefore, you do not need to re-run
COMPUTE STATS when you see -1 in the # Rows column of the output from SHOW TABLE STATS. That column always
shows -1 for all Kudu tables.
Examples:
The following examples show how the SHOW TABLE STATS statement displays physical information about a table
and the associated data files:
show table stats store_sales;
+-------+--------+----------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+--------+----------+--------------+--------+-------------------+
| -1 | 1 | 370.45MB | NOT CACHED | TEXT | false |
+-------+--------+----------+--------------+--------+-------------------+
show table stats customer;
+-------+--------+---------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+--------+---------+--------------+--------+-------------------+
| -1 | 1 | 12.60MB | NOT CACHED | TEXT | false |
+-------+--------+---------+--------------+--------+-------------------+
The following example shows how, after a COMPUTE STATS or COMPUTE INCREMENTAL STATS statement, the #Rows
field is now filled in. Because the STORE_SALES table in this example is not partitioned, the COMPUTE INCREMENTAL
STATS statement produces regular stats rather than incremental stats, therefore the Incremental stats field
remains false.
compute stats customer;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 18 column(s). |
+------------------------------------------+
show table stats customer;
+--------+--------+---------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+--------+--------+---------+--------------+--------+-------------------+
| 100000 | 1 | 12.60MB | NOT CACHED | TEXT | false |
+--------+--------+---------+--------------+--------+-------------------+
compute incremental stats store_sales;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 23 column(s). |
+------------------------------------------+
show table stats store_sales;
+---------+--------+----------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+---------+--------+----------+--------------+--------+-------------------+
| 2880404 | 1 | 370.45MB | NOT CACHED | TEXT | false |
+---------+--------+----------+--------------+--------+-------------------+
HDFS permissions:
The user ID that the impalad daemon runs under, typically the impala user, must have read and execute permissions
for all directories that are part of the table. (A table could span multiple different HDFS directories if it is partitioned.
The directories could be widely scattered because a partition can reside in an arbitrary HDFS directory based on its
LOCATION attribute.) The Impala user must also have execute permission for the database directory, and any parent
directories of the database directory in HDFS.
Related information:
COMPUTE STATS Statement on page 219, SHOW COLUMN STATS Statement on page 375
See Table and Column Statistics on page 575 for usage information and examples.
SHOW COLUMN STATS Statement
The SHOW TABLE STATS and SHOW COLUMN STATS variants are important for tuning performance and diagnosing
performance issues, especially with the largest tables and the most complex join queries.
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Kudu considerations:
The output for SHOW COLUMN STATS includes the relevant information for Kudu tables. The information for column
statistics that originates in the underlying Kudu storage layer is also represented in the metastore database that Impala
uses.
Examples:
The following examples show the output of the SHOW COLUMN STATS statement for some tables, before the COMPUTE
STATS statement is run. Impala deduces some information, such as maximum and average size for fixed-length columns,
and leaves any unknown values as -1.
show column stats customer;
+------------------------+--------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+------------------------+--------+------------------+--------+----------+----------+
| c_customer_sk | INT | -1 | -1 | 4 | 4 |
| c_customer_id | STRING | -1 | -1 | -1 | -1 |
| c_current_cdemo_sk | INT | -1 | -1 | 4 | 4 |
| c_current_hdemo_sk | INT | -1 | -1 | 4 | 4 |
| c_current_addr_sk | INT | -1 | -1 | 4 | 4 |
| c_first_shipto_date_sk | INT | -1 | -1 | 4 | 4 |
| c_first_sales_date_sk | INT | -1 | -1 | 4 | 4 |
| c_salutation | STRING | -1 | -1 | -1 | -1 |
| c_first_name | STRING | -1 | -1 | -1 | -1 |
| c_last_name | STRING | -1 | -1 | -1 | -1 |
| c_preferred_cust_flag | STRING | -1 | -1 | -1 | -1 |
| c_birth_day | INT | -1 | -1 | 4 | 4 |
| c_birth_month | INT | -1 | -1 | 4 | 4 |
| c_birth_year | INT | -1 | -1 | 4 | 4 |
| c_birth_country | STRING | -1 | -1 | -1 | -1 |
| c_login | STRING | -1 | -1 | -1 | -1 |
| c_email_address | STRING | -1 | -1 | -1 | -1 |
| c_last_review_date | STRING | -1 | -1 | -1 | -1 |
+------------------------+--------+------------------+--------+----------+----------+
show column stats store_sales;
+-----------------------+-------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-----------------------+-------+------------------+--------+----------+----------+
| ss_sold_date_sk | INT | -1 | -1 | 4 | 4 |
| ss_sold_time_sk | INT | -1 | -1 | 4 | 4 |
| ss_item_sk | INT | -1 | -1 | 4 | 4 |
| ss_customer_sk | INT | -1 | -1 | 4 | 4 |
| ss_cdemo_sk | INT | -1 | -1 | 4 | 4 |
| ss_hdemo_sk | INT | -1 | -1 | 4 | 4 |
| ss_addr_sk | INT | -1 | -1 | 4 | 4 |
| ss_store_sk | INT | -1 | -1 | 4 | 4 |
| ss_promo_sk | INT | -1 | -1 | 4 | 4 |
| ss_ticket_number | INT | -1 | -1 | 4 | 4 |
| ss_quantity | INT | -1 | -1 | 4 | 4 |
| ss_wholesale_cost | FLOAT | -1 | -1 | 4 | 4 |
| ss_list_price | FLOAT | -1 | -1 | 4 | 4 |
| ss_sales_price | FLOAT | -1 | -1 | 4 | 4 |
| ss_ext_discount_amt | FLOAT | -1 | -1 | 4 | 4 |
| ss_ext_sales_price | FLOAT | -1 | -1 | 4 | 4 |
| ss_ext_wholesale_cost | FLOAT | -1 | -1 | 4 | 4 |
| ss_ext_list_price | FLOAT | -1 | -1 | 4 | 4 |
| ss_ext_tax | FLOAT | -1 | -1 | 4 | 4 |
| ss_coupon_amt | FLOAT | -1 | -1 | 4 | 4 |
| ss_net_paid | FLOAT | -1 | -1 | 4 | 4 |
| ss_net_paid_inc_tax | FLOAT | -1 | -1 | 4 | 4 |
| ss_net_profit | FLOAT | -1 | -1 | 4 | 4 |
+-----------------------+-------+------------------+--------+----------+----------+
The following examples show the output of the SHOW COLUMN STATS statement for some tables, after the COMPUTE
STATS statement is run. Now most of the -1 values are changed to reflect the actual table data. The #Nulls column
remains -1 because Impala does not use the number of NULL values to influence query planning.
compute stats customer;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 18 column(s). |
+------------------------------------------+
compute stats store_sales;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 23 column(s). |
+------------------------------------------+
show column stats customer;
+------------------------+--------+------------------+--------+----------+--------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+------------------------+--------+------------------+--------+----------+--------+
| c_customer_sk | INT | 139017 | -1 | 4 | 4 |
| c_customer_id | STRING | 111904 | -1 | 16 | 16 |
| c_current_cdemo_sk | INT | 95837 | -1 | 4 | 4 |
| c_current_hdemo_sk | INT | 8097 | -1 | 4 | 4 |
| c_current_addr_sk | INT | 57334 | -1 | 4 | 4 |
| c_first_shipto_date_sk | INT | 4374 | -1 | 4 | 4 |
| c_first_sales_date_sk | INT | 4409 | -1 | 4 | 4 |
| c_salutation | STRING | 7 | -1 | 4 | 3.1308 |
| c_first_name | STRING | 3887 | -1 | 11 | 5.6356 |
| c_last_name | STRING | 4739 | -1 | 13 | 5.9106 |
| c_preferred_cust_flag | STRING | 3 | -1 | 1 | 0.9656 |
| c_birth_day | INT | 31 | -1 | 4 | 4 |
| c_birth_month | INT | 12 | -1 | 4 | 4 |
| c_birth_year | INT | 71 | -1 | 4 | 4 |
| c_birth_country | STRING | 205 | -1 | 20 | 8.4001 |
| c_login | STRING | 1 | -1 | 0 | 0 |
| c_email_address | STRING | 94492 | -1 | 46 | 26.485 |
| c_last_review_date | STRING | 349 | -1 | 7 | 6.7561 |
+------------------------+--------+------------------+--------+----------+--------+
show column stats store_sales;
+-----------------------+-------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-----------------------+-------+------------------+--------+----------+----------+
| ss_sold_date_sk | INT | 4395 | -1 | 4 | 4 |
| ss_sold_time_sk | INT | 63617 | -1 | 4 | 4 |
| ss_item_sk | INT | 19463 | -1 | 4 | 4 |
| ss_customer_sk | INT | 122720 | -1 | 4 | 4 |
| ss_cdemo_sk | INT | 242982 | -1 | 4 | 4 |
| ss_hdemo_sk | INT | 8097 | -1 | 4 | 4 |
| ss_addr_sk | INT | 70770 | -1 | 4 | 4 |
| ss_store_sk | INT | 6 | -1 | 4 | 4 |
| ss_promo_sk | INT | 355 | -1 | 4 | 4 |
| ss_ticket_number | INT | 304098 | -1 | 4 | 4 |
| ss_quantity | INT | 105 | -1 | 4 | 4 |
| ss_wholesale_cost | FLOAT | 9600 | -1 | 4 | 4 |
| ss_list_price | FLOAT | 22191 | -1 | 4 | 4 |
| ss_sales_price | FLOAT | 20693 | -1 | 4 | 4 |
| ss_ext_discount_amt | FLOAT | 228141 | -1 | 4 | 4 |
| ss_ext_sales_price | FLOAT | 433550 | -1 | 4 | 4 |
| ss_ext_wholesale_cost | FLOAT | 406291 | -1 | 4 | 4 |
| ss_ext_list_price | FLOAT | 574871 | -1 | 4 | 4 |
| ss_ext_tax | FLOAT | 91806 | -1 | 4 | 4 |
| ss_coupon_amt | FLOAT | 228141 | -1 | 4 | 4 |
| ss_net_paid | FLOAT | 493107 | -1 | 4 | 4 |
| ss_net_paid_inc_tax | FLOAT | 653523 | -1 | 4 | 4 |
| ss_net_profit | FLOAT | 611934 | -1 | 4 | 4 |
+-----------------------+-------+------------------+--------+----------+----------+
HDFS permissions:
The user ID that the impalad daemon runs under, typically the impala user, must have read and execute permissions
for all directories that are part of the table. (A table could span multiple different HDFS directories if it is partitioned.
The directories could be widely scattered because a partition can reside in an arbitrary HDFS directory based on its
LOCATION attribute.) The Impala user must also have execute permission for the database directory, and any parent
directories of the database directory in HDFS.
Related information:
COMPUTE STATS Statement on page 219, SHOW TABLE STATS Statement on page 373
See Table and Column Statistics on page 575 for usage information and examples.
SHOW PARTITIONS Statement
SHOW PARTITIONS displays information about each partition for a partitioned table. (The output is the same as the
SHOW TABLE STATS statement, but SHOW PARTITIONS only works on a partitioned table.) Because it displays table
statistics for all partitions, the output is more informative if you have run the COMPUTE STATS statement after creating
all the partitions. See COMPUTE STATS Statement on page 219 for details. For example, on a CENSUS table partitioned
on the YEAR column:
Because Kudu tables are all considered to be partitioned, the SHOW PARTITIONS statement works for any Kudu table.
The default output is the same as for SHOW TABLE STATS, with the same Kudu-specific columns in the result set:
show table stats kudu_table;
+--------+-----------+----------+-----------------------+------------+
| # Rows | Start Key | Stop Key | Leader Replica | # Replicas |
+--------+-----------+----------+-----------------------+------------+
| -1 | | 00000001 | host.example.com:7050 | 3 |
| -1 | 00000001 | 00000002 | host.example.com:7050 | 3 |
| -1 | 00000002 | 00000003 | host.example.com:7050 | 3 |
| -1 | 00000003 | 00000004 | host.example.com:7050 | 3 |
| -1 | 00000004 | 00000005 | host.example.com:7050 | 3 |
...
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
Kudu considerations:
The optional RANGE clause only applies to Kudu tables. It displays only the partitions defined by the RANGE clause of
CREATE TABLE or ALTER TABLE.
Although you can specify < or <= comparison operators when defining range partitions for Kudu tables, Kudu rewrites
them if necessary to represent each range as low_bound <= VALUES < high_bound. This rewriting might involve
incrementing one of the boundary values or appending a \0 for string values, so that the partition covers the same
range as originally specified.
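For illustration, the following sketch (the table name and bound values are hypothetical) declares a range with an
inclusive upper bound and shows how it would be reported after the rewrite:
CREATE TABLE range_demo (id BIGINT, name STRING, PRIMARY KEY (id))
  PARTITION BY RANGE (id) (PARTITION 0 <= VALUES <= 99)
  STORED AS KUDU;
SHOW RANGE PARTITIONS range_demo;
-- The inclusive range 0..99 is displayed as: 0 <= VALUES < 100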
Examples:
The following example shows the output for a Parquet, text, or other HDFS-backed table partitioned on the YEAR
column:
[localhost:21000] > show partitions census;
+-------+-------+--------+------+---------+
| year | #Rows | #Files | Size | Format |
+-------+-------+--------+------+---------+
| 2000 | -1 | 0 | 0B | TEXT |
| 2004 | -1 | 0 | 0B | TEXT |
| 2008 | -1 | 0 | 0B | TEXT |
| 2010 | -1 | 0 | 0B | TEXT |
| 2011 | 4 | 1 | 22B | TEXT |
| 2012 | 4 | 1 | 22B | TEXT |
| 2013 | 1 | 1 | 231B | PARQUET |
| Total | 9 | 3 | 275B | |
+-------+-------+--------+------+---------+
The following example shows the output for a Kudu table using the hash partitioning mechanism. The number of rows
in the result set corresponds to the values used in the PARTITIONS N clause of CREATE TABLE.
show partitions million_rows_hash;
+--------+-----------+----------+-----------------------+------------+
| # Rows | Start Key | Stop Key | Leader Replica | # Replicas |
+--------+-----------+----------+-----------------------+------------+
| -1 | | 00000001 | n236.example.com:7050 | 3 |
| -1 | 00000001 | 00000002 | n236.example.com:7050 | 3 |
| -1 | 00000002 | 00000003 | n336.example.com:7050 | 3 |
| -1 | 00000003 | 00000004 | n238.example.com:7050 | 3 |
| -1 | 00000004 | 00000005 | n338.example.com:7050 | 3 |
....
| -1 | 0000002E | 0000002F | n240.example.com:7050 | 3 |
| -1 | 0000002F | 00000030 | n336.example.com:7050 | 3 |
| -1 | 00000030 | 00000031 | n240.example.com:7050 | 3 |
| -1 | 00000031 | | n334.example.com:7050 | 3 |
+--------+-----------+----------+-----------------------+------------+
Fetched 50 row(s) in 0.05s
The following example shows the output for a Kudu table using the range partitioning mechanism:
show range partitions million_rows_range;
+-----------------------+
| RANGE (id) |
+-----------------------+
| VALUES < "A" |
| "A" <= VALUES < "[" |
| "a" <= VALUES < "{" |
| "{" <= VALUES < "~\0" |
+-----------------------+
HDFS permissions:
The user ID that the impalad daemon runs under, typically the impala user, must have read and execute permissions
for all directories that are part of the table. (A table could span multiple different HDFS directories if it is partitioned.
The directories could be widely scattered because a partition can reside in an arbitrary HDFS directory based on its
LOCATION attribute.) The Impala user must also have execute permission for the database directory, and any parent
directories of the database directory in HDFS.
Related information:
See Table and Column Statistics on page 575 for usage information and examples.
SHOW TABLE STATS Statement on page 373, Partitioning for Impala Tables on page 625
SHOW FUNCTIONS Statement
By default, SHOW FUNCTIONS displays user-defined functions (UDFs) and SHOW AGGREGATE FUNCTIONS displays
user-defined aggregate functions (UDAFs) associated with a particular database. The output from SHOW FUNCTIONS
includes the argument signature of each function. You specify this argument signature as part of the DROP FUNCTION
statement. You might have several UDFs with the same name, each accepting different argument data types.
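For example (a minimal sketch; the database name and pattern are hypothetical):
SHOW FUNCTIONS;                           -- UDFs in the current database
SHOW AGGREGATE FUNCTIONS IN analytics;    -- UDAFs in a specific database
SHOW FUNCTIONS IN analytics LIKE 'my_*';  -- filter function names by a pattern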
Usage notes:
In CDH 5.7 / Impala 2.5 and higher, the SHOW FUNCTIONS output includes a new column, labelled is persistent.
This property is true for Impala built-in functions, C++ UDFs, and Java UDFs created using the new CREATE FUNCTION
syntax with no signature. It is false for Java UDFs created using the old CREATE FUNCTION syntax that includes the
types for the arguments and return value. Any functions with false shown for this property must be created again
by the CREATE FUNCTION statement each time the Impala catalog server is restarted. See CREATE FUNCTION for
information on switching to the new syntax, so that Java UDFs are preserved across restarts. Java UDFs that are persisted
this way are also easier to share across Impala and Hive.
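For example (a hedged sketch; the JAR path and Java class names are hypothetical), the signatureless form creates a
persistent Java UDF, while the older form with an explicit signature creates one that shows is persistent = false
and must be re-created after a catalog restart:
CREATE FUNCTION my_udfs LOCATION '/user/impala/udfs/my-udfs.jar' SYMBOL='com.example.MyUdfs';
CREATE FUNCTION my_lower(STRING) RETURNS STRING
  LOCATION '/user/impala/udfs/my-udfs.jar' SYMBOL='com.example.MyLower';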
Security considerations:
When authorization is enabled, the output of the SHOW statement only shows those objects for which you have the
privilege to view. If you believe an object exists but you cannot see it in the SHOW output, check with the system
administrator if you need to be granted a new privilege for that object. See Enabling Sentry Authorization for Impala
on page 87 for how to set up authorization and add privileges for specific objects.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Examples:
To display Impala built-in functions, specify the special database name _impala_builtins:
show functions in _impala_builtins;
+--------------+-------------------------------------------------+-------------+---------------+
| return type | signature | binary type | is persistent |
+--------------+-------------------------------------------------+-------------+---------------+
| BIGINT | abs(BIGINT) | BUILTIN | true |
| DECIMAL(*,*) | abs(DECIMAL(*,*)) | BUILTIN | true |
| DOUBLE | abs(DOUBLE) | BUILTIN | true |
| FLOAT | abs(FLOAT) | BUILTIN | true |
+--------------+-------------------------------------------------+-------------+---------------+
...
show functions in _impala_builtins like '*week*';
+-------------+------------------------------+-------------+---------------+
| return type | signature | binary type | is persistent |
+-------------+------------------------------+-------------+---------------+
| INT | dayofweek(TIMESTAMP) | BUILTIN | true |
| INT | weekofyear(TIMESTAMP) | BUILTIN | true |
| TIMESTAMP | weeks_add(TIMESTAMP, BIGINT) | BUILTIN | true |
| TIMESTAMP | weeks_add(TIMESTAMP, INT) | BUILTIN | true |
| TIMESTAMP | weeks_sub(TIMESTAMP, BIGINT) | BUILTIN | true |
| TIMESTAMP | weeks_sub(TIMESTAMP, INT) | BUILTIN | true |
+-------------+------------------------------+-------------+---------------+
Related information:
Overview of Impala Functions on page 194, Impala Built-In Functions on page 391, User-Defined Functions (UDFs) on
page 525, SHOW DATABASES on page 368, SHOW TABLES Statement on page 369
SHUTDOWN Statement
The SHUTDOWN statement performs a graceful shutdown of Impala Daemon. The Impala daemon will notify other
Impala daemons that it is shutting down, wait for a grace period, then shut itself down once no more queries or
fragments are executing on that daemon. The --shutdown_grace_period_s flag determines the duration of the
grace period in seconds.
Syntax:
:SHUTDOWN()
:SHUTDOWN([host_name[:port_number]])
:SHUTDOWN(deadline)
:SHUTDOWN([host_name[:port_number],] deadline)
Usage notes:
All arguments are optional for SHUTDOWN.
• host_name (Type: STRING)
  Default: The current impalad host to whom the SHUTDOWN statement is submitted.
  Description: Address of the impalad to be shut down.
• port_number (Type: INT)
  Default:
  – In Impala 3.1 / CDH 6.1, the current impalad's port used for the thrift based communication with other impalads (by default, 22000).
  – In Impala 3.2 and higher, the current impalad's port used for the KRPC based communication with other impalads (by default, 27000).
  Description: Specifies the port by which the impalad can be contacted.
  – In Impala 3.1 / CDH 6.1, use the same impalad port used for the thrift based inter-Impala communication.
  – In Impala 3.2 and higher, use the same impalad port used for the KRPC based inter-Impala communication.
• deadline (Type: INT)
  Default: The value of the --shutdown_deadline_s flag, which defaults to 1 hour.
  Description: deadline must be a non-negative number, specified in seconds. The value 0 for deadline specifies an immediate shutdown.
Take the following points into consideration when running the SHUTDOWN statement:
• A client can shut down the coordinator impalad that it is connected to via :SHUTDOWN().
• A client can remotely shut down any impalad via :SHUTDOWN('hostname').
• The shutdown time limit can be overridden to force a quicker or slower shutdown by specifying a deadline. The
default deadline is determined by the --shutdown_deadline_s flag, which defaults to 1 hour.
• Executors can be shut down without disrupting running queries. Short-running queries will finish, and long-running
queries will continue until the deadline is reached.
• If queries are submitted to a coordinator after shutdown of that coordinator has started, they will fail.
• Long running queries or other issues, such as stuck fragments, will slow down but not prevent eventual shutdown.
Security considerations:
The ALL privilege is required on the server.
Cancellation: Cannot be cancelled.
Examples:
:SHUTDOWN(); -- Shut down the current impalad with the default deadline.
:SHUTDOWN('hostname'); -- Shut down impalad running on hostname with the default deadline.
:SHUTDOWN(\"hostname:1234\"); -- Shut down impalad running on host at port 1234 with the default deadline.
:SHUTDOWN(10); -- Shut down the current impalad after 10 seconds.
:SHUTDOWN('hostname', 10); -- Shut down impalad running on hostname when all queries running on hostname finish, or after 10 seconds.
:SHUTDOWN('hostname:11', 10 * 60); -- Shut down impalad running on hostname at port 11 when all queries running on hostname finish, or after 600 seconds.
:SHUTDOWN(0); -- Perform an immediate shutdown of the current impalad.
Added in: CDH 6.1 / Impala 3.1
TRUNCATE TABLE Statement (CDH 5.5 or higher only)
Removes the data from an Impala table while leaving the table itself.
Syntax:
TRUNCATE [TABLE] [IF EXISTS] [db_name.]table_name
Statement type: DDL
Usage notes:
Often used to empty tables that are used during ETL cycles, after the data has been copied to another table for the
next stage of processing. This statement is a low-overhead alternative to dropping and recreating the table, or using
INSERT OVERWRITE to replace the data during the next ETL cycle.
This statement removes all the data and associated data files in the table. It can remove data files from internal tables,
external tables, partitioned tables, and tables mapped to HBase or the Amazon Simple Storage Service (S3). The data
removal applies to the entire table, including all partitions of a partitioned table.
Any statistics produced by the COMPUTE STATS statement are reset when the data is removed.
Make sure that you are in the correct database before truncating a table, either by issuing a USE statement first or by
using a fully qualified name db_name.table_name.
The optional TABLE keyword does not affect the behavior of the statement.
The optional IF EXISTS clause makes the statement succeed whether or not the table exists. If the table does exist,
it is truncated; if it does not exist, the statement has no effect. This capability is useful in standardized setup scripts
that might be run both before and after some of the tables exist. This clause is available in CDH 5.7 / Impala 2.5
and higher.
For other tips about managing and reclaiming Impala disk space, see Managing Disk Space for Impala Data on page
77.
Amazon S3 considerations:
Although Impala cannot write new data to a table stored in the Amazon S3 filesystem, the TRUNCATE TABLE statement
can remove data files from S3. See Using Impala with the Amazon S3 Filesystem on page 692 for details about working
with S3 tables.
Cancellation: Cannot be cancelled.
HDFS permissions:
The user ID that the impalad daemon runs under, typically the impala user, must have write permission for all the
files and directories that make up the table.
Kudu considerations:
Currently, the TRUNCATE TABLE statement cannot be used with Kudu tables.
Examples:
The following example shows a table containing some data and with table and column statistics. After the TRUNCATE
TABLE statement, the data is removed and the statistics are reset.
CREATE TABLE truncate_demo (x INT);
INSERT INTO truncate_demo VALUES (1), (2), (4), (8);
SELECT COUNT(*) FROM truncate_demo;
+----------+
| count(*) |
+----------+
| 4 |
+----------+
COMPUTE STATS truncate_demo;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 1 column(s). |
+-----------------------------------------+
SHOW TABLE STATS truncate_demo;
+-------+--------+------+--------------+-------------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats |
+-------+--------+------+--------------+-------------------+--------+-------------------+
| 4 | 1 | 8B | NOT CACHED | NOT CACHED | TEXT | false |
+-------+--------+------+--------------+-------------------+--------+-------------------+
SHOW COLUMN STATS truncate_demo;
+--------+------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+------+------------------+--------+----------+----------+
| x | INT | 4 | -1 | 4 | 4 |
+--------+------+------------------+--------+----------+----------+
-- After this statement, the data and the table/column stats will be gone.
TRUNCATE TABLE truncate_demo;
SELECT COUNT(*) FROM truncate_demo;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
SHOW TABLE STATS truncate_demo;
+-------+--------+------+--------------+-------------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental stats |
+-------+--------+------+--------------+-------------------+--------+-------------------+
| -1 | 0 | 0B | NOT CACHED | NOT CACHED | TEXT | false |
+-------+--------+------+--------------+-------------------+--------+-------------------+
SHOW COLUMN STATS truncate_demo;
+--------+------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+------+------------------+--------+----------+----------+
| x | INT | -1 | -1 | 4 | 4 |
+--------+------+------------------+--------+----------+----------+
The following example shows how the IF EXISTS clause allows the TRUNCATE TABLE statement to be run without
error whether or not the table exists:
CREATE TABLE staging_table1 (x INT, s STRING);
Fetched 0 row(s) in 0.33s
SHOW TABLES LIKE 'staging*';
+----------------+
| name |
+----------------+
| staging_table1 |
+----------------+
Fetched 1 row(s) in 0.25s
-- Our ETL process involves removing all data from several staging tables
-- even though some might be already dropped, or not created yet.
TRUNCATE TABLE IF EXISTS staging_table1;
Fetched 0 row(s) in 5.04s
TRUNCATE TABLE IF EXISTS staging_table2;
Fetched 0 row(s) in 0.25s
TRUNCATE TABLE IF EXISTS staging_table3;
Fetched 0 row(s) in 0.25s
Related information:
Overview of Impala Tables on page 196, ALTER TABLE Statement on page 205, CREATE TABLE Statement on page 234,
Partitioning for Impala Tables on page 625, Internal Tables on page 197, External Tables on page 197
UPDATE Statement (CDH 5.10 or higher only)
Updates an arbitrary number of rows in a Kudu table. This statement only works for Impala tables that use the Kudu
storage engine.
Syntax:
UPDATE [database_name.]table_name SET col = val [, col = val ... ]
[ FROM joined_table_refs ]
[ WHERE where_conditions ]
Usage notes:
None of the columns that make up the primary key can be updated by the SET clause.
The conditions in the WHERE clause are the same ones allowed for the SELECT statement. See SELECT Statement on
page 295 for details.
If the WHERE clause is omitted, all rows in the table are updated.
The conditions in the WHERE clause can refer to any combination of primary key columns or other columns. Referring
to primary key columns in the WHERE clause is more efficient than referring to non-primary key columns.
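For example (a sketch, assuming c1 is a primary key column and c4 is a non-key column of a hypothetical kudu_table):
UPDATE kudu_table SET c3 = 'done' WHERE c1 = 100;  -- locates the rows by primary key
UPDATE kudu_table SET c3 = 'done' WHERE c4 > 10;   -- must evaluate a non-key column for every row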
Because Kudu currently does not enforce strong consistency during concurrent DML operations, be aware that the
results after this statement finishes might be different than you intuitively expect:
• If some rows cannot be updated because their primary key values are not found (for example, because those rows
were deleted by a concurrent DELETE operation), the statement succeeds but returns a warning.
• An UPDATE statement might also overlap with INSERT, UPDATE, or UPSERT statements running concurrently on
the same table. After the statement finishes, there might be more or fewer matching rows than expected in the
table because it is undefined whether the UPDATE applies to rows that are inserted or updated while the UPDATE
is in progress.
The number of affected rows is reported in an impala-shell message and in the query profile.
The optional FROM clause lets you restrict the updates to only the rows in the specified table that are part of the result
set for a join query. The join clauses can include non-Kudu tables, but the table whose rows are updated must
be a Kudu table.
Statement type: DML
Important: After adding or replacing data in a table used in performance-critical queries, issue a
COMPUTE STATS statement to make sure all statistics are up-to-date. Consider updating statistics for
a table after any INSERT, LOAD DATA, or CREATE TABLE AS SELECT statement in Impala, or after
loading data through Hive and doing a REFRESH table_name in Impala. This technique is especially
important for tables that are very large, used in join queries, or both.
Examples:
The following examples show how to perform a simple update on a table, with or without a WHERE clause:
-- Set all rows to the same value for column c3.
-- In this case, c1 and c2 are primary key columns
-- and so cannot be updated.
UPDATE kudu_table SET c3 = 'not applicable';
-- Update only the rows that match the condition.
UPDATE kudu_table SET c3 = NULL WHERE c1 > 100 AND c3 IS NULL;
-- Does not update any rows, because the WHERE condition is always false.
UPDATE kudu_table SET c3 = 'impossible' WHERE 1 = 0;
-- Change the values of multiple columns in a single UPDATE statement.
UPDATE kudu_table SET c3 = upper(c3), c4 = FALSE, c5 = 0 WHERE c6 = TRUE;
The following examples show how to perform an update using the FROM keyword with a join clause:
-- Uppercase a column value, only for rows that have
-- an ID that matches the value from another table.
UPDATE kudu_table SET c3 = upper(c3)
FROM kudu_table JOIN non_kudu_table
ON kudu_table.id = non_kudu_table.id;
-- Same effect as previous statement.
-- Assign table aliases in FROM clause, then refer to
-- short names elsewhere in the statement.
UPDATE t1 SET c3 = upper(c3)
FROM kudu_table t1 JOIN non_kudu_table t2
ON t1.id = t2.id;
-- Same effect as previous statements, but more efficient.
-- Use WHERE clause to skip updating values that are
-- already uppercase.
UPDATE t1 SET c3 = upper(c3)
FROM kudu_table t1 JOIN non_kudu_table t2
ON t1.id = t2.id
WHERE c3 != upper(c3);
Related information:
Using Impala to Query Kudu Tables on page 670, INSERT Statement on page 277, DELETE Statement (CDH 5.10 or higher
only) on page 249, UPSERT Statement (CDH 5.10 or higher only) on page 384
UPSERT Statement (CDH 5.10 or higher only)
Acts as a combination of the INSERT and UPDATE statements. For each row processed by the UPSERT statement:
• If another row already exists with the same set of primary key values, the other columns are updated to match
the values from the row being “UPSERTed”.
• If there is not any row with the same set of primary key values, the row is created, the same as if the INSERT
statement was used.
This statement only works for Impala tables that use the Kudu storage engine.
Syntax:
UPSERT [hint_clause] INTO [TABLE] [db_name.]table_name
[(column_list)]
{
[hint_clause] select_statement
| VALUES (value [, value ...]) [, (value [, value ...]) ...]
}
hint_clause ::= [SHUFFLE] | [NOSHUFFLE]
(Note: the square brackets are part of the syntax.)
The select_statement clause can use the full syntax, such as WHERE and join clauses, as SELECT Statement on page 295.
Statement type: DML
Usage notes:
If you specify a column list, any omitted columns in the inserted or updated rows are set to their default value (if the
column has one) or NULL (if the column does not have a default value). Therefore, if a column is not nullable and has
no default value, it must be included in the column list for any UPSERT statement. Because all primary key columns
meet these conditions, all the primary key columns must be specified in every UPSERT statement.
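For example (a sketch, reusing the hypothetical kudu_table from the Examples section below, where pk is the primary
key), the omitted columns are set to their defaults or NULL for newly created rows:
UPSERT INTO kudu_table (pk, c1) VALUES (0, 'hello again');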
Because Kudu tables can efficiently handle small incremental changes, the VALUES clause is more practical to use with
Kudu tables than with HDFS-based tables.
Important: After adding or replacing data in a table used in performance-critical queries, issue a
COMPUTE STATS statement to make sure all statistics are up-to-date. Consider updating statistics for
a table after any INSERT, LOAD DATA, or CREATE TABLE AS SELECT statement in Impala, or after
loading data through Hive and doing a REFRESH table_name in Impala. This technique is especially
important for tables that are very large, used in join queries, or both.
Examples:
UPSERT INTO kudu_table (pk, c1, c2, c3) VALUES (0, 'hello', 50, true), (1, 'world', -1,
false);
UPSERT INTO production_table SELECT * FROM staging_table;
UPSERT INTO production_table SELECT * FROM staging_table WHERE c1 IS NOT NULL AND c2 >
0;
Related information:
Using Impala to Query Kudu Tables on page 670, INSERT Statement on page 277, UPDATE Statement (CDH 5.10 or higher
only) on page 383
USE Statement
Switches the current session to a specified database. The current database is where any CREATE TABLE, INSERT,
SELECT, or other statements act when you specify a table or other object name, without prefixing it with a database
name. The new current database applies for the duration of the session or until another USE statement is executed.
Syntax:
USE db_name
By default, when you connect to an Impala instance, you begin in a database named default.
Usage notes:
Switching the default database is convenient in the following situations:
• To avoid qualifying each reference to a table with the database name. For example, SELECT * FROM t1 JOIN
t2 rather than SELECT * FROM db.t1 JOIN db.t2.
• To do a sequence of operations all within the same database, such as creating a table, inserting data, and querying
the table.
To start the impala-shell interpreter and automatically issue a USE statement for a particular database, specify the
option -d db_name for the impala-shell command. The -d option is useful to run SQL scripts, such as setup or
test scripts, against multiple databases without hardcoding a USE statement into the SQL source.
Examples:
See CREATE DATABASE Statement on page 226 for examples covering CREATE DATABASE, USE, and DROP DATABASE.
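A minimal sketch (the database and table names are hypothetical):
USE sales_db;
SELECT COUNT(*) FROM orders;  -- resolves to sales_db.orders
USE default;                  -- switch back to the default database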
Cancellation: Cannot be cancelled.
HDFS permissions: This statement does not touch any HDFS files or directories, therefore no HDFS permissions are
required.
Related information:
CREATE DATABASE Statement on page 226, DROP DATABASE Statement on page 262, SHOW DATABASES on page 368
VALUES Statement
In addition to being part of the INSERT statement, the VALUES clause can be used as stand-alone statement or with
the SELECT statement to construct a data set without creating a table. For example, the following statement returns
a data set of 2 rows and 3 columns.
VALUES ('r1_c1', 'r1_c2', 'r1_c3')
, ('r2_c1', 'r2_c2', 'r2_c3');
Syntax:
VALUES (row)[, (row), ...];
SELECT select_list FROM (VALUES (row)[, (row), ...]) AS alias;
row ::= column [[AS alias], column [AS alias], ...]
• The VALUES keyword is followed by a comma separated list of one or more rows.
• row is a comma-separated list of one or more columns.
• Each row must have the same number of columns.
• column can be a constant, a variable, or an expression.
• The corresponding columns must have compatible data types in all rows. See the third query in the Examples
section below.
• By default, the first row is used to name columns. But using the AS keyword, you can optionally give the column
an alias.
• If used in the SELECT statement, the AS keyword with an alias is required.
• select_list is the columns to be selected for the result set.
Examples:
> SELECT * FROM (VALUES(4,5,6),(7,8,9)) AS t;
+---+---+---+
| 4 | 5 | 6 |
+---+---+---+
| 4 | 5 | 6 |
| 7 | 8 | 9 |
+---+---+---+
> SELECT * FROM (VALUES(1 AS c1, true AS c2, 'abc' AS c3),(100,false,'xyz')) AS t;
+-----+-------+-----+
| c1 | c2 | c3 |
+-----+-------+-----+
| 1 | true | abc |
| 100 | false | xyz |
+-----+-------+-----+
> VALUES (CAST('2019-01-01' AS TIMESTAMP)), ('2019-02-02');
+---------------------------------+
| cast('2019-01-01' as timestamp) |
+---------------------------------+
| 2019-01-01 00:00:00 |
| 2019-02-02 00:00:00 |
+---------------------------------+
Related information:
SELECT Statement on page 295
Optimizer Hints in Impala
The Impala SQL dialect supports query hints, for fine-tuning the inner workings of queries. Specify hints as a temporary
workaround for expensive queries, where missing statistics or other factors cause inefficient performance.
Hints are most often used for the most resource-intensive kinds of Impala queries:
• Join queries involving large tables, where intermediate result sets are transmitted across the network to evaluate
the join conditions.
• Inserting into partitioned Parquet tables, where many memory buffers could be allocated on each host to hold
intermediate results for each partition.
Syntax:
In CDH 5.2 / Impala 2.0 and higher, you can specify the hints inside comments that use either the /* */ or -- notation.
Specify a + symbol immediately before the hint name. Recently added hints are only available using the /* */ and
-- notation. For clarity, the /* */ and -- styles are used in the syntax and examples throughout this section. With
the /* */ or -- notation for hints, specify a + symbol immediately before the first hint name. Multiple hints can be
specified separated by commas, for example /* +clustered,shuffle */
SELECT STRAIGHT_JOIN select_list FROM
join_left_hand_table
JOIN /* +BROADCAST|SHUFFLE */
join_right_hand_table
remainder_of_query;
SELECT select_list FROM
join_left_hand_table
JOIN -- +BROADCAST|SHUFFLE
join_right_hand_table
remainder_of_query;
INSERT insert_clauses
/* +SHUFFLE|NOSHUFFLE */
SELECT remainder_of_query;
INSERT insert_clauses
-- +SHUFFLE|NOSHUFFLE
SELECT remainder_of_query;
INSERT /* +SHUFFLE|NOSHUFFLE */
insert_clauses
SELECT remainder_of_query;
INSERT -- +SHUFFLE|NOSHUFFLE
insert_clauses
SELECT remainder_of_query;
UPSERT /* +SHUFFLE|NOSHUFFLE */
upsert_clauses
SELECT remainder_of_query;
UPSERT -- +SHUFFLE|NOSHUFFLE
upsert_clauses
SELECT remainder_of_query;
SELECT select_list FROM
table_ref
/* +{SCHEDULE_CACHE_LOCAL | SCHEDULE_DISK_LOCAL | SCHEDULE_REMOTE}
[,RANDOM_REPLICA] */
remainder_of_query;
INSERT insert_clauses
-- +CLUSTERED
SELECT remainder_of_query;
INSERT insert_clauses
/* +CLUSTERED */
SELECT remainder_of_query;
INSERT -- +CLUSTERED
insert_clauses
SELECT remainder_of_query;
INSERT /* +CLUSTERED */
insert_clauses
SELECT remainder_of_query;
UPSERT -- +CLUSTERED
upsert_clauses
SELECT remainder_of_query;
UPSERT /* +CLUSTERED */
upsert_clauses
SELECT remainder_of_query;
CREATE /* +SHUFFLE|NOSHUFFLE */
table_clauses
AS SELECT remainder_of_query;
CREATE -- +SHUFFLE|NOSHUFFLE
table_clauses
AS SELECT remainder_of_query;
CREATE /* +CLUSTERED|NOCLUSTERED */
table_clauses
AS SELECT remainder_of_query;
CREATE -- +CLUSTERED|NOCLUSTERED
table_clauses
AS SELECT remainder_of_query;
The square bracket style hints are supported for backward compatibility, but the syntax is deprecated and will be
removed in a future release. For that reason, any newly added hints are not available with the square bracket syntax.
SELECT STRAIGHT_JOIN select_list FROM
join_left_hand_table
JOIN [{ /* +BROADCAST */ | /* +SHUFFLE */ }]
join_right_hand_table
remainder_of_query;
INSERT insert_clauses
[{ /* +SHUFFLE */ | /* +NOSHUFFLE */ }]
[/* +CLUSTERED */]
SELECT remainder_of_query;
UPSERT [{ /* +SHUFFLE */ | /* +NOSHUFFLE */ }]
[/* +CLUSTERED */]
upsert_clauses
SELECT remainder_of_query;
Usage notes:
With both forms of hint syntax, include the STRAIGHT_JOIN keyword immediately after the SELECT and any DISTINCT
or ALL keywords to prevent Impala from reordering the tables in a way that makes the join-related hints ineffective.
The STRAIGHT_JOIN hint affects the join order of table references in the query block containing the hint. It does not
affect the join order of nested queries, such as views, inline views, or WHERE-clause subqueries. To use this hint for
performance tuning of complex queries, apply the hint to all query blocks that need a fixed join order.
To reduce the need to use hints, run the COMPUTE STATS statement against all tables involved in joins, or used as the
source tables for INSERT ... SELECT operations where the destination is a partitioned Parquet table. Do this
operation after loading data or making substantial changes to the data within each table. Having up-to-date statistics
helps Impala choose more efficient query plans without the need for hinting. See Table and Column Statistics on page
575 for details and examples.
To see which join strategy is used for a particular query, examine the EXPLAIN output for that query. See Using the
EXPLAIN Plan for Performance Tuning on page 602 for details and examples.
Hints for join queries:
The /* +BROADCAST */ and /* +SHUFFLE */ hints control the execution strategy for join queries. Specify one of
the following constructs immediately after the JOIN keyword in a query:
• /* +SHUFFLE */ makes that join operation use the “partitioned” technique, which divides up corresponding
rows from both tables using a hashing algorithm, sending subsets of the rows to other nodes for processing. (The
keyword SHUFFLE is used to indicate a “partitioned join”, because that type of join is not related to “partitioned
tables”.) Since the alternative “broadcast” join mechanism is the default when table and index statistics are
unavailable, you might use this hint for queries where broadcast joins are unsuitable; typically, partitioned joins
are more efficient for joins between large tables of similar size.
• /* +BROADCAST */ makes that join operation use the “broadcast” technique that sends the entire contents of
the right-hand table to all nodes involved in processing the join. This is the default mode of operation when table
and index statistics are unavailable, so you would typically only need it if stale metadata caused Impala to mistakenly
choose a partitioned join operation. Typically, broadcast joins are more efficient in cases where one table is much
smaller than the other. (Put the smaller table on the right side of the JOIN operator.)
Hints for INSERT ... SELECT and CREATE TABLE AS SELECT (CTAS):
When inserting into partitioned tables, such as using the Parquet file format, you can include a hint in the INSERT or
CREATE TABLE AS SELECT(CTAS) statements to fine-tune the overall performance of the operation and its resource
usage.
You would only use hints if an INSERT or CTAS into a partitioned table was failing due to capacity limits, or if such an
operation was succeeding but with less-than-optimal performance.
• /* +SHUFFLE */ and /* +NOSHUFFLE */ Hints
– /* +SHUFFLE */ adds an exchange node, before writing the data, which re-partitions the result of the
SELECT based on the partitioning columns of the target table. With this hint, only one node writes to a
partition at a time, minimizing the global number of simultaneous writes and the number of memory buffers
holding data for individual partitions. This also reduces fragmentation, resulting in fewer files. Thus it reduces
overall resource usage of the INSERT or CTAS operation and allows some operations to succeed that otherwise
would fail. It does involve some data transfer between the nodes so that the data files for a particular partition
are all written on the same node.
Use /* +SHUFFLE */ in cases where an INSERT or CTAS statement fails or runs inefficiently due to all
nodes attempting to write data for all partitions.
If the table is unpartitioned or every partitioning expression is constant, then /* +SHUFFLE */ will cause
every write to happen on the coordinator node.
– /* +NOSHUFFLE */ does not add exchange node before inserting to partitioned tables and disables
re-partitioning. So the selected execution plan might be faster overall, but might also produce a larger number
of small data files or exceed capacity limits, causing the INSERT or CTAS operation to fail.
Impala automatically uses the /* +SHUFFLE */ method if any partition key column in the source table,
mentioned in the SELECT clause, does not have column statistics. In this case, use the /* +NOSHUFFLE */
hint if you want to override this default behavior.
– If column statistics are available for all partition key columns in the source table mentioned in the INSERT
... SELECT or CTAS query, Impala chooses whether to use the /* +SHUFFLE */ or /* +NOSHUFFLE */
technique based on the estimated number of distinct values in those columns and the number of nodes
involved in the operation. In this case, you might need the /* +SHUFFLE */ or the /* +NOSHUFFLE */
hint to override the execution plan selected by Impala.
• /* +CLUSTERED */ and /* +NOCLUSTERED */ Hints
– /* +CLUSTERED */ sorts data by the partition columns before inserting to ensure that only one partition
is written at a time per node. Use this hint to reduce the number of files kept open and the number of buffers
kept in memory simultaneously. This technique is primarily useful for inserts into Parquet tables, where the
large block size requires substantial memory to buffer data for multiple output files at once. This hint is
available in CDH 5.10 / Impala 2.8 or higher.
Starting in CDH 6.0 / Impala 3.0, /* +CLUSTERED */ is the default behavior for HDFS tables.
– /* +NOCLUSTERED */ does not sort by primary key before insert. This hint is available in CDH 5.10 / Impala
2.8 or higher.
Use this hint when inserting to Kudu tables.
In the versions lower than CDH 6.0 / Impala 3.0, /* +NOCLUSTERED */ is the default in HDFS tables.
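For example (a sketch; the partitioned Parquet table sales_by_day and the source table raw_sales are hypothetical),
hints placed between the insert clauses and the SELECT can be combined with commas:
INSERT INTO sales_by_day PARTITION (day)
  /* +SHUFFLE,CLUSTERED */
  SELECT c1, c2, day FROM raw_sales;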
Kudu consideration:
Starting from CDH 5.12 / Impala 2.9, the INSERT or UPSERT operations into Kudu tables automatically add an exchange
and a sort node to the plan that partitions and sorts the rows according to the partitioning/primary key scheme of the
target table (unless the number of rows to be inserted is small enough to trigger single node execution). Since Kudu
partitions and sorts rows on write, pre-partitioning and sorting takes some of the load off of Kudu and helps large
INSERT operations to complete without timing out. However, this default behavior may slow down the end-to-end
performance of the INSERT or UPSERT operations. Starting from CDH 5.13 / Impala 2.10, you can use the /*
+NOCLUSTERED */ and /* +NOSHUFFLE */ hints together to disable partitioning and sorting before the rows are
sent to Kudu. Additionally, since sorting may consume a large amount of memory, consider setting the MEM_LIMIT
query option for those queries.
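For example (a sketch; the Kudu table kudu_events and the staging table staging_events are hypothetical):
UPSERT /* +NOCLUSTERED,NOSHUFFLE */ INTO kudu_events
  SELECT * FROM staging_events;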
Hints for scheduling of scan ranges (HDFS data blocks or Kudu tablets):
The hints /* +SCHEDULE_CACHE_LOCAL */, /* +SCHEDULE_DISK_LOCAL */, and /* +SCHEDULE_REMOTE */
have the same effect as specifying the REPLICA_PREFERENCE query option with the respective option settings of
CACHE_LOCAL, DISK_LOCAL, or REMOTE.
Specifying the replica preference as a query hint always overrides the query option setting.
The hint /* +RANDOM_REPLICA */ is the same as enabling the SCHEDULE_RANDOM_REPLICA query option.
You can use these hints in combination by separating them with commas, for example, /*
+SCHEDULE_CACHE_LOCAL,RANDOM_REPLICA */. See REPLICA_PREFERENCE Query Option (CDH 5.9 or higher only)
on page 355 and SCHEDULE_RANDOM_REPLICA Query Option (CDH 5.7 or higher only) on page 360 for information
about how these settings influence the way Impala processes HDFS data blocks or Kudu tablets.
Specifying either the SCHEDULE_RANDOM_REPLICA query option or the corresponding RANDOM_REPLICA query hint
enables the random tie-breaking behavior when processing data blocks during the query.
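For example (a sketch; the table t1 is hypothetical), the scheduling hints follow the table reference:
SELECT c1, COUNT(*) FROM t1
  /* +SCHEDULE_CACHE_LOCAL,RANDOM_REPLICA */
  GROUP BY c1;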
Suggestions versus directives:
In early Impala releases, hints were always obeyed and so acted more like directives. Once Impala gained join order
optimizations, sometimes join queries were automatically reordered in a way that made a hint irrelevant. Therefore,
the hints act more like suggestions in Impala 1.2.2 and higher.
To force Impala to follow the hinted execution mechanism for a join query, include the STRAIGHT_JOIN keyword in
the SELECT statement. See Overriding Join Reordering with STRAIGHT_JOIN on page 569 for details. When you use
this technique, Impala does not reorder the joined tables at all, so you must be careful to arrange the join order to put
the largest table (or subquery result set) first, then the smallest, second smallest, third smallest, and so on. This ordering
lets Impala do the most I/O-intensive parts of the query using local reads on the DataNodes, and then reduce the size
of the intermediate result set as much as possible as each subsequent table or subquery result set is joined.
Restrictions:
Queries that include subqueries in the WHERE clause can be rewritten internally as join queries. Currently, you cannot
apply hints to the joins produced by these types of queries.
Because hints can prevent queries from taking advantage of new metadata or improvements in query planning, use
them only when required to work around performance issues, and be prepared to remove them when they are no
longer required, such as after a new Impala release or bug fix.
In particular, the /* +BROADCAST */ and /* +SHUFFLE */ hints are expected to be needed much less frequently
in Impala 1.2.2 and higher, because the join order optimization feature in combination with the COMPUTE STATS
statement now automatically chooses the join order and join mechanism without the need to rewrite the query and add
hints. See Performance Considerations for Join Queries on page 568 for details.
Compatibility:
The hints embedded within -- comments are compatible with Hive queries. The hints embedded within /* */
comments or [ ] square brackets are either not recognized by Hive or not compatible with it. For example, Hive raises an error
for Impala hints within /* */ comments because it does not recognize the Impala hint names.
Considerations for views:
If you use a hint in the query that defines a view, the hint is preserved when you query the view. Impala internally
rewrites all hints in views to use the -- comment notation, so that Hive can query such views without errors due to
unrecognized hint names.
Examples:
For example, this query joins a large customer table with a small lookup table of less than 100 rows. The right-hand
table can be broadcast efficiently to all nodes involved in the join. Thus, you would use the /* +broadcast */ hint
to force a broadcast join strategy:
select straight_join customer.address, state_lookup.state_name
from customer join /* +broadcast */ state_lookup
on customer.state_id = state_lookup.state_id;
This query joins two large tables of unpredictable size. You might benchmark the query with both kinds of hints and
find that it is more efficient to transmit portions of each table to other nodes for processing. Thus, you would use the
/* +shuffle */ hint to force a partitioned join strategy:
select straight_join weather.wind_velocity, geospatial.altitude
from weather join /* +shuffle */ geospatial
on weather.lat = geospatial.lat and weather.long = geospatial.long;
For joins involving three or more tables, the hint applies to the tables on either side of that specific JOIN keyword.
The STRAIGHT_JOIN keyword ensures that joins are processed in a predictable order from left to right. For example,
this query joins t1 and t2 using a partitioned join, then joins that result set to t3 using a broadcast join:
select straight_join t1.name, t2.id, t3.price
from t1 join /* +shuffle */ t2 join /* +broadcast */ t3
on t1.id = t2.id and t2.id = t3.id;
Related information:
For more background information about join queries, see Joins in Impala SELECT Statements on page 296. For performance
considerations, see Performance Considerations for Join Queries on page 568.
Impala Built-In Functions
Impala supports several categories of built-in functions. These functions let you perform mathematical calculations,
string manipulation, date calculations, and other kinds of data transformations directly in SQL statements.
The categories of built-in functions supported by Impala are:
• Impala Mathematical Functions on page 397
• Impala Type Conversion Functions on page 423
• Impala Date and Time Functions on page 424
• Impala Conditional Functions on page 457
• Impala String Functions on page 462
• Impala Aggregate Functions on page 479.
• Impala Analytic Functions on page 506
• Impala Bit Functions on page 414
• Impala Miscellaneous Functions on page 477
The following is a complete list of built-in functions supported in Impala:
ABS
ACOS
ADD_MONTHS
ADDDATE
APPX_MEDIAN
ASCII
ASIN
ATAN
ATAN2
AVG
AVG - Analytic Function
BASE64DECODE
BASE64ENCODE
BITAND
BIN
BITNOT
BITOR
BITXOR
BTRIM
CASE
CASE WHEN
CAST
CEIL, CEILING, DCEIL
CHAR_LENGTH
CHR
COALESCE
CONCAT
CONCAT_WS
CONV
COS
COSH
COT
COUNT
COUNT - Analytic Function
COUNTSET
CUME_DIST
CURRENT_DATABASE
CURRENT_TIMESTAMP
DATE_ADD
DATE_PART
DATE_SUB
DATE_TRUNC
DATEDIFF
DAY
DAYNAME
DAYOFWEEK
DAYOFYEAR
DAYS_ADD
DAYS_SUB
DECODE
DEGREES
DENSE_RANK
E
EFFECTIVE_USER
EXP
EXTRACT
FACTORIAL
FIND_IN_SET
FIRST_VALUE
FLOOR, DFLOOR
FMOD
FNV_HASH
GET_JSON_OBJECT
FROM_UNIXTIME
FROM_TIMESTAMP
FROM_UTC_TIMESTAMP
GETBIT
GREATEST
GROUP_CONCAT
GROUP_CONCAT - Analytic Function
HEX
HOUR
HOURS_ADD
HOURS_SUB
IF
IFNULL
INITCAP
INSTR
INT_MONTHS_BETWEEN
IS_INF
IS_NAN
ISFALSE
ISNOTFALSE
ISNOTTRUE
ISNULL
ISTRUE
LAG
LAST_VALUE
LEAD
LEAST
LEFT
LENGTH
LN
LOCATE
LOG
LOG10
LOG2
LOWER, LCASE
LPAD
LTRIM
MAX
MAX - Analytic Function
MAX_INT, MAX_TINYINT, MAX_SMALLINT, MAX_BIGINT
MICROSECONDS_ADD
MICROSECONDS_SUB
MILLISECOND
MILLISECONDS_ADD
MILLISECONDS_SUB
MIN
MIN - Analytic Function
MIN_INT, MIN_TINYINT, MIN_SMALLINT, MIN_BIGINT
MINUTE
MINUTES_ADD
MINUTES_SUB
MOD
MONTH
MONTHNAME
MONTHS_ADD
MONTHS_BETWEEN
MONTHS_SUB
MURMUR_HASH
NANOSECONDS_ADD
NANOSECONDS_SUB
NDV
NEGATIVE
NEXT_DAY
NONNULLVALUE
NOW
NTILE
NULLIF
NULLIFZERO
NULLVALUE
NVL
NVL2
OVER Clause
PARSE_URL
PERCENT_RANK
PI
PID
PMOD
POSITIVE
POW, POWER, DPOW, FPOW
PRECISION
QUARTER
QUOTIENT
RADIANS
RAND, RANDOM
RANK
REGEXP_ESCAPE
REGEXP_EXTRACT
REGEXP_LIKE
REGEXP_REPLACE
REPEAT
REPLACE
REVERSE
RIGHT
ROTATELEFT
ROTATERIGHT
ROUND, DROUND
ROW_NUMBER
RPAD
RTRIM
SCALE
SECOND
SECONDS_ADD
SECONDS_SUB
SETBIT
SHIFTLEFT
SHIFTRIGHT
SIGN
SIN
SINH
SLEEP
SPACE
SPLIT_PART
SQRT
STDDEV, STDDEV_SAMP, STDDEV_POP
STRLEFT
STRRIGHT
SUBDATE
SUBSTR, SUBSTRING
SUM
SUM - Analytic Function
TAN
TANH
TIMEOFDAY
TIMESTAMP_CMP
TO_DATE
TO_TIMESTAMP
TO_UTC_TIMESTAMP
TRANSLATE
TRIM
TRUNC
TRUNCATE, DTRUNC, TRUNC
TYPEOF
UNHEX
UNIX_TIMESTAMP
UPPER, UCASE
USER
UTC_TIMESTAMP
UUID
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP
VERSION
WEEKOFYEAR
WEEKS_ADD
WEEKS_SUB
WIDTH_BUCKET
YEAR
YEARS_ADD
YEARS_SUB
ZEROIFNULL
Impala Mathematical Functions
Mathematical functions, or arithmetic functions, perform numeric calculations that are typically more complex than
basic addition, subtraction, multiplication, and division. For example, these functions include trigonometric, logarithmic,
and base conversion operations.
Note: In Impala, exponentiation uses the POW() function rather than an exponentiation operator
such as **.
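For example, to raise 2 to the 10th power:
select pow(2, 10);   /* returns 1024 */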
Related information:
The mathematical functions operate mainly on these data types: INT Data Type on page 117, BIGINT Data Type on page
105, SMALLINT Data Type on page 122, TINYINT Data Type on page 136, DOUBLE Data Type on page 114, FLOAT Data
Type on page 116, and DECIMAL Data Type (CDH 6.0 / Impala 3.0 or higher only) on page 109. For the operators that
perform the standard operations such as addition, subtraction, multiplication, and division, see Arithmetic Operators
on page 171.
Functions that perform bitwise operations are explained in Impala Bit Functions on page 414.
Function reference:
Impala supports the following mathematical functions:
• ABS
• ACOS
• ASIN
• ATAN
• ATAN2
• BIN
• CEIL, CEILING, DCEIL
• CONV
• COS
• COSH
• COT
• DEGREES
• E
• EXP
• FACTORIAL
• FLOOR, DFLOOR
• FMOD
• FNV_HASH
• GREATEST
• HEX
• IS_INF
• IS_NAN
• LEAST
• LN
• LOG
• LOG10
• LOG2
• MAX_INT, MAX_TINYINT, MAX_SMALLINT, MAX_BIGINT
• MIN_INT, MIN_TINYINT, MIN_SMALLINT, MIN_BIGINT
• MOD
• MURMUR_HASH
• NEGATIVE
• PI
• PMOD
• POSITIVE
• POW, POWER, DPOW, FPOW
• PRECISION
• QUOTIENT
• RADIANS
• RAND, RANDOM
• ROUND, DROUND
• SCALE
• SIGN
• SIN
• SINH
• SQRT
• TAN
• TANH
• TRUNCATE, DTRUNC, TRUNC
• UNHEX
• WIDTH_BUCKET
ABS(numeric_type a)
Purpose: Returns the absolute value of the argument.
Return type: Same as the input value
Usage notes: Use this function to ensure all return values are positive. This is different than the positive()
function, which returns its argument unchanged (even if the argument was negative).
ACOS(DOUBLE a)
Purpose: Returns the arccosine of the argument.
Return type: DOUBLE
ASIN(DOUBLE a)
Purpose: Returns the arcsine of the argument.
Return type: DOUBLE
ATAN(DOUBLE a)
Purpose: Returns the arctangent of the argument.
Return type: DOUBLE
ATAN2(DOUBLE a, DOUBLE b)
Purpose: Returns the arctangent of the two arguments, with the signs of the arguments used to determine the
quadrant of the result.
Return type: DOUBLE
BIN(BIGINT a)
Purpose: Returns the binary representation of an integer value, that is, a string of 0 and 1 digits.
Return type: STRING
CEIL(DOUBLE a), CEIL(DECIMAL(p,s) a), CEILING(DOUBLE a), CEILING(DECIMAL(p,s) a), DCEIL(DOUBLE a),
DCEIL(DECIMAL(p,s) a)
Purpose: Returns the smallest integer that is greater than or equal to the argument.
Return type: Same as the input type
CONV(BIGINT n, INT from_base, INT to_base), CONV(STRING s, INT from_base, INT to_base)
Purpose: Returns a string representation of the first argument converted from from_base to to_base. The first
argument can be specified as a number or a string. For example, conv(100, 2, 10) and conv('100', 2, 10)
both return '4'.
Return type: STRING
Usage notes:
If to_base is negative, the first argument is treated as signed; otherwise, it is treated as unsigned. For example:
• conv(-17, 10, -2) returns '-10001', which is -17 in base 2.
• conv(-17, 10, 10) returns '18446744073709551599'. -17 is interpreted as an unsigned value, 2^64 - 17, and
then that value is returned in base 10.
The function returns NULL when the following illegal arguments are specified:
• Any argument is NULL.
• from_base or to_base is below -36 or above 36.
• from_base or to_base is -1, 0, or 1.
• The first argument represents a positive number and from_base is a negative number.
If the first argument represents a negative number and from_base is a negative number, the function returns 0.
If the first argument represents a number larger than the maximum bigint, the function returns:
• The string representation of -1 in to_base if to_base is negative.
• The string representation of '18446744073709551615' (2^64 - 1) in to_base if to_base is positive.
If the first argument does not represent a valid number in from_base, for example 3 in base 2 or '1a23' in base 10, the
digits in the first argument are evaluated from left to right and used as long as they are valid digits in from_base. The
first invalid digit and the digits to its right are ignored.
For example:
• conv(445, 5, 10) is converted to conv(44, 5, 10) and returns '24'.
• conv('1a23', 10, 16) is converted to conv('1', 10, 16) and returns '1'.
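The rules above can be checked directly; the expected results are shown as comments:
select conv(100, 2, 10);    /* returns '4' */
select conv(-17, 10, -2);   /* returns '-10001' */
select conv(445, 5, 10);    /* evaluated as conv(44, 5, 10); returns '24' */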
COS(DOUBLE a)
Purpose: Returns the cosine of the argument.
Return type: DOUBLE
COSH(DOUBLE a)
Purpose: Returns the hyperbolic cosine of the argument.
Return type: DOUBLE
COT(DOUBLE a)
Purpose: Returns the cotangent of the argument.
Return type: DOUBLE
Added in: CDH 5.5.0 / Impala 2.3.0
DEGREES(DOUBLE a)
Purpose: Converts argument value from radians to degrees.
Return type: DOUBLE
E()
Purpose: Returns the mathematical constant e.
Return type: DOUBLE
EXP(DOUBLE a), DEXP(DOUBLE a)
Purpose: Returns the mathematical constant e raised to the power of the argument.
Return type: DOUBLE
FACTORIAL(integer_type a)
Purpose: Computes the factorial of an integer value. It works with any integer type.
Usage notes: You can use either the factorial() function or the ! operator. The factorial of 0 is 1. Likewise, the
factorial() function returns 1 for any negative value. The maximum positive value for the input argument is 20;
a value of 21 or greater overflows the range for a BIGINT and causes an error.
Return type: BIGINT
Added in: CDH 5.5.0 / Impala 2.3.0
select factorial(5);
+--------------+
| factorial(5) |
+--------------+
| 120 |
+--------------+
select 5!;
+-----+
| 5! |
+-----+
| 120 |
+-----+
FLOOR(DOUBLE a), FLOOR(DECIMAL(p,s) a), DFLOOR(DOUBLE a), DFLOOR(DECIMAL(p,s) a)
Purpose: Returns the largest integer that is less than or equal to the argument.
Return type: Same as the input type
FMOD(DOUBLE a, DOUBLE b), FMOD(FLOAT a, FLOAT b)
Purpose: Returns the modulus of a floating-point number.
Return type: FLOAT or DOUBLE, depending on type of arguments
Added in: Impala 1.1.1
Usage notes:
Because this function operates on DOUBLE or FLOAT values, it is subject to potential rounding errors for values that
cannot be represented precisely. Prefer to use whole numbers, or values that you know can be represented precisely
by the DOUBLE or FLOAT types.
Examples:
The following examples show equivalent operations with the fmod() function and the % arithmetic operator, for
values not subject to any rounding error.
select fmod(10,3);
+-------------+
| fmod(10, 3) |
+-------------+
| 1 |
+-------------+
select fmod(5.5,2);
+--------------+
| fmod(5.5, 2) |
+--------------+
| 1.5 |
+--------------+
select 10 % 3;
+--------+
| 10 % 3 |
+--------+
| 1 |
+--------+
select 5.5 % 2;
+---------+
| 5.5 % 2 |
+---------+
| 1.5 |
+---------+
The following examples show operations with the fmod() function for values that cannot be represented precisely
by the DOUBLE or FLOAT types, and thus are subject to rounding error. fmod(9.9,3.0) returns a value slightly
different than the expected 0.9 because of rounding. fmod(9.9,3.3) returns a value quite different from the
expected value of 0 because of rounding error during intermediate calculations.
select fmod(9.9,3.0);
+--------------------+
| fmod(9.9, 3.0) |
+--------------------+
| 0.8999996185302734 |
+--------------------+
select fmod(9.9,3.3);
+-------------------+
| fmod(9.9, 3.3) |
+-------------------+
| 3.299999713897705 |
+-------------------+
FNV_HASH(type v)
Purpose: Returns a consistent 64-bit value derived from the input argument, for convenience of implementing
hashing logic in an application.
Return type: BIGINT
Usage notes:
You might use the return value in an application where you perform load balancing, bucketing, or some other
technique to divide processing or storage.
Because the result can be any 64-bit value, to restrict the value to a particular range, you can use an expression
that includes the ABS() function and the % (modulo) operator. For example, to produce a hash value in the range
0-9, you could use the expression ABS(FNV_HASH(x)) % 10.
This function implements the same algorithm that Impala uses internally for hashing, on systems where the CRC32
instructions are not available.
This function implements the Fowler–Noll–Vo hash function, in particular the FNV-1a variation. This is not a perfect
hash function: some combinations of values could produce the same result value. It is not suitable for cryptographic
use.
Similar input values of different types could produce different hash values, for example the same numeric value
represented as SMALLINT or BIGINT, FLOAT or DOUBLE, or DECIMAL(5,2) or DECIMAL(20,5).
Examples:
[localhost:21000] > create table h (x int, s string);
[localhost:21000] > insert into h values (0, 'hello'), (1,'world'),
(1234567890,'antidisestablishmentarianism');
[localhost:21000] > select x, fnv_hash(x) from h;
+------------+----------------------+
| x | fnv_hash(x) |
+------------+----------------------+
| 0 | -2611523532599129963 |
| 1 | 4307505193096137732 |
| 1234567890 | 3614724209955230832 |
+------------+----------------------+
[localhost:21000] > select s, fnv_hash(s) from h;
+------------------------------+---------------------+
| s | fnv_hash(s) |
+------------------------------+---------------------+
| hello | 6414202926103426347 |
| world | 6535280128821139475 |
| antidisestablishmentarianism | -209330013948433970 |
+------------------------------+---------------------+
[localhost:21000] > select s, abs(fnv_hash(s)) % 10 from h;
+------------------------------+-------------------------+
| s | abs(fnv_hash(s)) % 10.0 |
+------------------------------+-------------------------+
| hello | 8 |
| world | 6 |
| antidisestablishmentarianism | 4 |
+------------------------------+-------------------------+
For short argument values, the high-order bits of the result have relatively low entropy:
[localhost:21000] > create table b (x boolean);
[localhost:21000] > insert into b values (true), (true), (false), (false);
[localhost:21000] > select x, fnv_hash(x) from b;
+-------+---------------------+
| x | fnv_hash(x) |
+-------+---------------------+
| true | 2062020650953872396 |
| true | 2062020650953872396 |
| false | 2062021750465500607 |
| false | 2062021750465500607 |
+-------+---------------------+
Added in: Impala 1.2.2
GREATEST(BIGINT a[, BIGINT b ...]), GREATEST(DOUBLE a[, DOUBLE b ...]), GREATEST(DECIMAL(p,s) a[, DECIMAL(p,s)
b ...]), GREATEST(STRING a[, STRING b ...]), GREATEST(TIMESTAMP a[, TIMESTAMP b ...])
Purpose: Returns the largest value from a list of expressions.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
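Examples:
The following calls show typical results:
select greatest(3, 8, 5);             /* returns 8 */
select greatest('apple', 'banana');   /* returns 'banana' */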
HEX(BIGINT a), HEX(STRING a)
Purpose: Returns the hexadecimal representation of an integer value, or of the characters in a string.
Return type: STRING
IS_INF(DOUBLE a)
Purpose: Tests whether a value is equal to the special value “inf”, signifying infinity.
Return type: BOOLEAN
Usage notes:
Infinity and NaN can be specified in text data files as inf and nan respectively, and Impala interprets them as these
special values. They can also be produced by certain arithmetic expressions; for example, 1/0 returns Infinity
and pow(-1, 0.5) returns NaN. Or you can cast the literal values, such as CAST('nan' AS DOUBLE) or
CAST('inf' AS DOUBLE).
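Examples:
The following calls show how to produce and test the special value Infinity:
select is_inf(1/0);                    /* returns true; 1/0 evaluates to Infinity */
select is_inf(cast('inf' as double));  /* returns true */
select is_inf(100);                    /* returns false */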
IS_NAN(DOUBLE a)
Purpose: Tests whether a value is equal to the special value “NaN”, signifying “not a number”.
Return type: BOOLEAN
Usage notes:
Infinity and NaN can be specified in text data files as inf and nan respectively, and Impala interprets them as these
special values. They can also be produced by certain arithmetic expressions; for example, 1/0 returns Infinity
and pow(-1, 0.5) returns NaN. Or you can cast the literal values, such as CAST('nan' AS DOUBLE) or
CAST('inf' AS DOUBLE).
LEAST(BIGINT a[, BIGINT b ...]), LEAST(DOUBLE a[, DOUBLE b ...]), LEAST(DECIMAL(p,s) a[, DECIMAL(p,s) b ...]),
LEAST(STRING a[, STRING b ...]), LEAST(TIMESTAMP a[, TIMESTAMP b ...])
Purpose: Returns the smallest value from a list of expressions.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
LN(DOUBLE a), DLOG1(DOUBLE a)
Purpose: Returns the natural logarithm of the argument.
Return type: DOUBLE
LOG(DOUBLE base, DOUBLE a)
Purpose: Returns the logarithm of the second argument to the specified base.
Return type: DOUBLE
LOG10(DOUBLE a), DLOG10(DOUBLE a)
Purpose: Returns the logarithm of the argument to the base 10.
Return type: DOUBLE
LOG2(DOUBLE a)
Purpose: Returns the logarithm of the argument to the base 2.
Return type: DOUBLE
MAX_INT(), MAX_TINYINT(), MAX_SMALLINT(), MAX_BIGINT()
Purpose: Returns the largest value of the associated integral type.
Return type: The same as the integral type being checked.
Usage notes: Use the corresponding min_ and max_ functions to check if all values in a column are within the
allowed range, before copying data or altering column definitions. If not, switch to the next higher integral type or
to a DECIMAL with sufficient precision.
MIN_INT(), MIN_TINYINT(), MIN_SMALLINT(), MIN_BIGINT()
Purpose: Returns the smallest value of the associated integral type (a negative number).
Return type: The same as the integral type being checked.
Usage notes: Use the corresponding min_ and max_ functions to check if all values in a column are within the
allowed range, before copying data or altering column definitions. If not, switch to the next higher integral type or
to a DECIMAL with sufficient precision.
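Examples:
The following sketch (the table t1 and its INT column x are hypothetical) shows the range constants and how to check whether a column's values would fit into a SMALLINT:
select min_smallint(), max_smallint();   /* returns -32768 and 32767 */
select min(x) >= min_smallint() and max(x) <= max_smallint() as fits_in_smallint
  from t1;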
MOD(numeric_type a, same_type b)
Purpose: Returns the modulus of a number. Equivalent to the % arithmetic operator. Works with any size integer
type, any size floating-point type, and DECIMAL with any precision and scale.
Return type: Same as the input value
Added in: CDH 5.4.0 / Impala 2.2.0
Usage notes:
Because this function works with DECIMAL values, prefer it over fmod() when working with fractional values. It is
not subject to the rounding errors that make fmod() problematic with floating-point numbers.
Query plans show the MOD() function as the % operator.
Examples:
The following examples show how the mod() function works for whole numbers and fractional values, and how
the % operator works the same way. In the case of mod(9.9,3), the type conversion for the second argument
results in the first argument being interpreted as DOUBLE, so to produce an accurate DECIMAL result requires casting
the second argument or writing it as a DECIMAL literal, 3.0.
select mod(10,3);
+-------------+
| mod(10, 3) |
+-------------+
| 1 |
+-------------+
select mod(5.5,2);
+--------------+
| mod(5.5, 2) |
+--------------+
| 1.5 |
+--------------+
select 10 % 3;
+--------+
| 10 % 3 |
+--------+
| 1 |
+--------+
select 5.5 % 2;
+---------+
| 5.5 % 2 |
+---------+
| 1.5 |
+---------+
select mod(9.9,3.3);
+---------------+
| mod(9.9, 3.3) |
+---------------+
| 0.0 |
+---------------+
select mod(9.9,3);
+--------------------+
| mod(9.9, 3) |
+--------------------+
| 0.8999996185302734 |
+--------------------+
select mod(9.9, cast(3 as decimal(2,1)));
+-----------------------------------+
| mod(9.9, cast(3 as decimal(2,1))) |
+-----------------------------------+
| 0.9 |
+-----------------------------------+
select mod(9.9,3.0);
+---------------+
| mod(9.9, 3.0) |
+---------------+
| 0.9 |
+---------------+
MURMUR_HASH(type v)
Purpose: Returns a consistent 64-bit value derived from the input argument, computed using the MurmurHash2
non-cryptographic hash function, for convenience of implementing hashing logic in an application.
Return type: BIGINT
Usage notes:
You might use the return value in an application where you perform load balancing, bucketing, or some other
technique to divide processing or storage. This function provides good performance for all kinds of keys, such as
numbers, ASCII strings, and UTF-8 strings, and can be recommended as a general-purpose hashing function.
Comparing murmur_hash with fnv_hash: murmur_hash is based on the MurmurHash2 algorithm, while fnv_hash
is based on the FNV-1a algorithm. Both show very good randomness and performance compared with other
well-known hash algorithms, but MurmurHash2 shows slightly better randomness and performance than FNV-1a.
See [1][2][3] for details.
Similar input values of different types could produce different hash values, for example the same numeric value
represented as SMALLINT or BIGINT, FLOAT or DOUBLE, or DECIMAL(5,2) or DECIMAL(20,5).
Examples:
[localhost:21000] > create table h (x int, s string);
[localhost:21000] > insert into h values (0, 'hello'), (1,'world'),
(1234567890,'antidisestablishmentarianism');
[localhost:21000] > select x, murmur_hash(x) from h;
+------------+----------------------+
| x | murmur_hash(x) |
+------------+----------------------+
| 0 | 6960269033020761575 |
| 1 | -780611581681153783 |
| 1234567890 | -5754914572385924334 |
+------------+----------------------+
[localhost:21000] > select s, murmur_hash(s) from h;
+------------------------------+----------------------+
| s | murmur_hash(s) |
+------------------------------+----------------------+
| hello | 2191231550387646743 |
| world | 5568329560871645431 |
| antidisestablishmentarianism | -2261804666958489663 |
+------------------------------+----------------------+
For short argument values, the high-order bits of the result have relatively higher entropy than fnv_hash:
[localhost:21000] > create table b (x boolean);
[localhost:21000] > insert into b values (true), (true), (false), (false);
[localhost:21000] > select x, murmur_hash(x) from b;
+-------+----------------------+
| x | murmur_hash(x) |
+-------+----------------------+
| true | -5720937396023583481 |
| true | -5720937396023583481 |
| false | 6351753276682545529 |
| false | 6351753276682545529 |
+-------+----------------------+
Added in: Impala 2.12.0
NEGATIVE(numeric_type a)
Purpose: Returns the argument with the sign reversed; returns a positive value if the argument was already negative.
Return type: Same as the input value
Usage notes: Use -abs(a) instead if you need to ensure all return values are negative.
PI()
Purpose: Returns the constant pi.
Return type: double
PMOD(BIGINT a, BIGINT b), PMOD(DOUBLE a, DOUBLE b)
Purpose: Returns the positive modulus of a number. Primarily for HiveQL compatibility.
Return type: INT or DOUBLE, depending on type of arguments
Examples:
The following examples show how the fmod() function sometimes returns a negative value depending on the sign
of its arguments, and the pmod() function returns the same value as fmod(), but sometimes with the sign flipped.
select fmod(-5,2);
+-------------+
| fmod(-5, 2) |
+-------------+
| -1 |
+-------------+
select pmod(-5,2);
+-------------+
| pmod(-5, 2) |
+-------------+
| 1 |
+-------------+
select fmod(-5,-2);
+--------------+
| fmod(-5, -2) |
+--------------+
| -1 |
+--------------+
select pmod(-5,-2);
+--------------+
| pmod(-5, -2) |
+--------------+
| -1 |
+--------------+
select fmod(5,-2);
+-------------+
| fmod(5, -2) |
+-------------+
| 1 |
+-------------+
select pmod(5,-2);
+-------------+
| pmod(5, -2) |
+-------------+
| -1 |
+-------------+
POSITIVE(numeric_type a)
Purpose: Returns the original argument unchanged (even if the argument is negative).
Return type: Same as the input value
Usage notes: Use abs() instead if you need to ensure all return values are positive.
POW(DOUBLE a, double p), POWER(DOUBLE a, DOUBLE p), DPOW(DOUBLE a, DOUBLE p), FPOW(DOUBLE a, DOUBLE
p)
Purpose: Returns the first argument raised to the power of the second argument.
Return type: DOUBLE
PRECISION(numeric_expression)
Purpose: Computes the precision (number of decimal digits) needed to represent the type of the argument expression
as a DECIMAL value.
Usage notes:
Typically used in combination with the scale() function, to determine the appropriate
DECIMAL(precision,scale) type to declare in a CREATE TABLE statement or CAST() function.
Return type: INT
Examples:
The following examples demonstrate how to check the precision and scale of numeric literals or other numeric
expressions. Impala represents numeric literals in the smallest appropriate type. 5 is a TINYINT value, which ranges
from -128 to 127, therefore 3 decimal digits are needed to represent the entire range, and because it is an integer
value there are no fractional digits. 1.333 is interpreted as a DECIMAL value, with 4 digits total and 3 digits after
the decimal point.
[localhost:21000] > select precision(5), scale(5);
+--------------+----------+
| precision(5) | scale(5) |
+--------------+----------+
| 3 | 0 |
+--------------+----------+
[localhost:21000] > select precision(1.333), scale(1.333);
+------------------+--------------+
| precision(1.333) | scale(1.333) |
+------------------+--------------+
| 4 | 3 |
+------------------+--------------+
[localhost:21000] > with t1 as
( select cast(12.34 as decimal(20,2)) x union select cast(1 as decimal(8,6)) x )
select precision(x), scale(x) from t1 limit 1;
+--------------+----------+
| precision(x) | scale(x) |
+--------------+----------+
| 24 | 6 |
+--------------+----------+
QUOTIENT(BIGINT numerator, BIGINT denominator), QUOTIENT(DOUBLE numerator, DOUBLE denominator)
Purpose: Returns the first argument divided by the second argument, discarding any fractional part. Avoids promoting
integer arguments to DOUBLE as happens with the / SQL operator. Also includes an overload that accepts DOUBLE
arguments, discards the fractional part of each argument value before dividing, and again returns BIGINT. With
integer arguments, this function works the same as the DIV operator.
Return type: BIGINT
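Examples:
The following calls contrast QUOTIENT() with the / operator:
select quotient(10, 3);      /* returns 3 */
select 10 / 3;                /* returns 3.33333...; the / operator promotes to DOUBLE */
select quotient(10.9, 3.9);   /* fractional parts discarded first, so 10 / 3 = 3 */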
RADIANS(DOUBLE a)
Purpose: Converts argument value from degrees to radians.
Return type: DOUBLE
RAND(), RAND(BIGINT seed), RANDOM(), RANDOM(BIGINT seed)
Purpose: Returns a random value between 0 and 1. After rand() is called with a seed argument, it produces a
consistent random sequence based on the seed value.
Return type: DOUBLE
Usage notes: Currently, the random sequence is reset after each query, and multiple calls to rand() within the
same query return the same value each time. To get a number sequence that is different for each query,
pass a unique seed value to each call to rand(). For example, select rand(unix_timestamp()) from ...
Examples:
The following examples show how rand() can produce sequences of varying predictability, so that you can reproduce
query results involving random values or generate unique sequences of random values for each query. When
rand() is called with no argument, it generates the same sequence of values each time, regardless of the ordering
of the result set. When rand() is called with a constant integer, it generates a different sequence of values, but
still always the same sequence for the same seed value. If you pass in a seed value that changes, such as the return
value of the expression unix_timestamp(now()), each query will use a different sequence of random values,
potentially more useful in probability calculations although more difficult to reproduce at a later time. Therefore,
the final two examples with an unpredictable seed value also include the seed in the result set, to make it possible
to reproduce the same random sequence later.
select x, rand() from three_rows;
+---+-----------------------+
| x | rand() |
+---+-----------------------+
| 1 | 0.0004714746030380365 |
| 2 | 0.5895895192351144 |
| 3 | 0.4431900859080209 |
+---+-----------------------+
select x, rand() from three_rows order by x desc;
+---+-----------------------+
| x | rand() |
+---+-----------------------+
| 3 | 0.0004714746030380365 |
| 2 | 0.5895895192351144 |
| 1 | 0.4431900859080209 |
+---+-----------------------+
select x, rand(1234) from three_rows order by x;
+---+----------------------+
| x | rand(1234) |
+---+----------------------+
| 1 | 0.7377511392057646 |
| 2 | 0.009428468537250751 |
| 3 | 0.208117277924026 |
+---+----------------------+
select x, rand(1234) from three_rows order by x desc;
+---+----------------------+
| x | rand(1234) |
+---+----------------------+
| 3 | 0.7377511392057646 |
| 2 | 0.009428468537250751 |
| 1 | 0.208117277924026 |
+---+----------------------+
select x, unix_timestamp(now()), rand(unix_timestamp(now()))
from three_rows order by x;
+---+-----------------------+-----------------------------+
| x | unix_timestamp(now()) | rand(unix_timestamp(now())) |
+---+-----------------------+-----------------------------+
| 1 | 1440777752 | 0.002051228658320023 |
| 2 | 1440777752 | 0.5098743483004506 |
| 3 | 1440777752 | 0.9517714925817081 |
+---+-----------------------+-----------------------------+
select x, unix_timestamp(now()), rand(unix_timestamp(now()))
from three_rows order by x desc;
+---+-----------------------+-----------------------------+
| x | unix_timestamp(now()) | rand(unix_timestamp(now())) |
+---+-----------------------+-----------------------------+
| 3 | 1440777761 | 0.9985985015512437 |
| 2 | 1440777761 | 0.3251255333074953 |
| 1 | 1440777761 | 0.02422675025846192 |
+---+-----------------------+-----------------------------+
ROUND(DOUBLE a), ROUND(DOUBLE a, INT d), ROUND(DECIMAL a, int_type d), DROUND(DOUBLE a),
DROUND(DOUBLE a, INT d), DROUND(DECIMAL(p,s) a, int_type d)
Purpose: Rounds a floating-point value. By default (with a single argument), rounds to the nearest integer. Values
ending in .5 are rounded up for positive numbers, down for negative numbers (that is, away from zero). The optional
second argument specifies how many digits to leave after the decimal point; values greater than zero produce a
floating-point return value rounded to the requested number of digits to the right of the decimal point.
Return type: Same as the input type
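Examples:
The following calls show the default rounding away from zero and rounding to a specified number of digits:
select round(2.5), round(-2.5);   /* returns 3 and -3 */
select round(3.14159, 3);         /* returns 3.142 */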
SCALE(numeric_expression)
Purpose: Computes the scale (number of decimal digits to the right of the decimal point) needed to represent the
type of the argument expression as a DECIMAL value.
Usage notes:
Typically used in combination with the precision() function, to determine the appropriate
DECIMAL(precision,scale) type to declare in a CREATE TABLE statement or CAST() function.
Return type: int
Examples:
The following examples demonstrate how to check the precision and scale of numeric literals or other numeric
expressions. Impala represents numeric literals in the smallest appropriate type. 5 is a TINYINT value, which ranges
from -128 to 127, therefore 3 decimal digits are needed to represent the entire range, and because it is an integer
value there are no fractional digits. 1.333 is interpreted as a DECIMAL value, with 4 digits total and 3 digits after
the decimal point.
[localhost:21000] > select precision(5), scale(5);
+--------------+----------+
| precision(5) | scale(5) |
+--------------+----------+
| 3 | 0 |
+--------------+----------+
[localhost:21000] > select precision(1.333), scale(1.333);
+------------------+--------------+
| precision(1.333) | scale(1.333) |
+------------------+--------------+
| 4 | 3 |
+------------------+--------------+
[localhost:21000] > with t1 as
( select cast(12.34 as decimal(20,2)) x union select cast(1 as decimal(8,6)) x )
select precision(x), scale(x) from t1 limit 1;
+--------------+----------+
| precision(x) | scale(x) |
+--------------+----------+
| 24 | 6 |
+--------------+----------+
SIGN(DOUBLE a)
Purpose: Returns -1, 0, or 1 to indicate the signedness of the argument value.
Return type: INT
SIN(DOUBLE a)
Purpose: Returns the sine of the argument.
Return type: DOUBLE
SINH(DOUBLE a)
Purpose: Returns the hyperbolic sine of the argument.
Return type: DOUBLE
SQRT(DOUBLE a), DSQRT(DOUBLE a)
Purpose: Returns the square root of the argument.
Return type: DOUBLE
TAN(DOUBLE a)
Purpose: Returns the tangent of the argument.
Return type: DOUBLE
TANH(DOUBLE a)
Purpose: Returns the hyperbolic tangent of the argument.
Return type: DOUBLE
TRUNCATE(DOUBLE_or_DECIMAL a[, digits_to_leave]), DTRUNC(DOUBLE_or_DECIMAL a[, digits_to_leave]),
TRUNC(DOUBLE_or_DECIMAL a[, digits_to_leave])
Purpose: Removes some or all fractional digits from a numeric value.
Arguments: With a single floating-point argument, removes all fractional digits, leaving an integer value. The optional
second argument specifies the number of fractional digits to include in the return value, and only applies when the
argument type is DECIMAL. A second argument of 0 truncates to a whole integer value. A second argument of
negative N sets N digits to 0 on the left side of the decimal point.
Scale argument: The scale argument applies only when truncating DECIMAL values. It is an integer specifying how
many significant digits to leave to the right of the decimal point. A scale argument of 0 truncates to a whole integer
value. A scale argument of negative N sets N digits to 0 on the left side of the decimal point.
TRUNCATE(), DTRUNC(), and TRUNC() are aliases for the same function.
Return type: Same as the input type
Added in: The TRUNC() alias was added in CDH 5.13 / Impala 2.10.
Usage notes:
You can also pass a DOUBLE argument, or a DECIMAL argument with an optional scale, to the DTRUNC() or TRUNCATE()
functions. Using the TRUNC() function for numeric values is common in other industry-standard database systems,
so you might find such TRUNC() calls in code that you are porting to Impala.
The TRUNC() function also has a signature that applies to TIMESTAMP values. See Impala Date and Time Functions
for details.
Examples:
The following examples demonstrate the TRUNCATE() and DTRUNC() signatures for this function:
select truncate(3.45);
+----------------+
| truncate(3.45) |
+----------------+
| 3 |
+----------------+
select truncate(-3.45);
+-----------------+
| truncate(-3.45) |
+-----------------+
| -3 |
+-----------------+
select truncate(3.456,1);
+--------------------+
| truncate(3.456, 1) |
+--------------------+
| 3.4 |
+--------------------+
select dtrunc(3.456,1);
+------------------+
| dtrunc(3.456, 1) |
+------------------+
| 3.4 |
+------------------+
select truncate(3.456,2);
+--------------------+
| truncate(3.456, 2) |
+--------------------+
| 3.45 |
+--------------------+
select truncate(3.456,7);
+--------------------+
| truncate(3.456, 7) |
+--------------------+
| 3.4560000 |
+--------------------+
The following examples demonstrate using TRUNC() with DECIMAL or DOUBLE values, and with an optional scale
argument for DECIMAL values. (The behavior is the same for the TRUNCATE() and DTRUNC() aliases also.)
create table t1 (d decimal(20,7));
-- By default, no digits to the right of the decimal point.
insert into t1 values (1.1), (2.22), (3.333), (4.4444), (5.55555);
select trunc(d) from t1 order by d;
+----------+
| trunc(d) |
+----------+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
+----------+
-- 1 digit to the right of the decimal point.
select trunc(d,1) from t1 order by d;
+-------------+
| trunc(d, 1) |
+-------------+
| 1.1 |
| 2.2 |
| 3.3 |
| 4.4 |
| 5.5 |
+-------------+
-- 2 digits to the right of the decimal point,
-- including trailing zeroes if needed.
select trunc(d,2) from t1 order by d;
+-------------+
| trunc(d, 2) |
+-------------+
| 1.10 |
| 2.22 |
| 3.33 |
| 4.44 |
| 5.55 |
+-------------+
insert into t1 values (9999.9999), (8888.8888);
-- Negative scale truncates digits to the left
-- of the decimal point.
select trunc(d,-2) from t1 where d > 100 order by d;
+--------------+
| trunc(d, -2) |
+--------------+
| 8800 |
| 9900 |
+--------------+
-- The scale of the result is adjusted to match the
-- scale argument.
select trunc(d,2),
precision(trunc(d,2)) as p,
scale(trunc(d,2)) as s
from t1 order by d;
+-------------+----+---+
| trunc(d, 2) | p | s |
+-------------+----+---+
| 1.10 | 15 | 2 |
| 2.22 | 15 | 2 |
| 3.33 | 15 | 2 |
| 4.44 | 15 | 2 |
| 5.55 | 15 | 2 |
| 8888.88 | 15 | 2 |
| 9999.99 | 15 | 2 |
+-------------+----+---+
create table dbl (d double);
insert into dbl values
(1.1), (2.22), (3.333), (4.4444), (5.55555),
(8888.8888), (9999.9999);
-- With double values, there is no optional scale argument.
select trunc(d) from dbl order by d;
+----------+
| trunc(d) |
+----------+
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
| 8888 |
| 9999 |
+----------+
UNHEX(STRING a)
Purpose: Returns a string of characters with ASCII values corresponding to pairs of hexadecimal digits in the argument.
Return type: STRING
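Examples:
The following calls show how UNHEX() turns pairs of hexadecimal digits back into characters, reversing HEX():
select unhex('4D');            /* returns 'M'; 0x4D is the ASCII code for M */
select unhex(hex('Impala'));   /* returns 'Impala' */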
WIDTH_BUCKET(DECIMAL expr, DECIMAL min_value, DECIMAL max_value, INT num_buckets)
Purpose: Returns the bucket number in which the expr value would fall in the histogram where its range between
min_value and max_value is divided into num_buckets buckets of identical sizes.
The function returns:
• NULL if any argument is NULL.
• 0 if expr < min_value.
• num_buckets + 1 if expr >= max_value.
• If none of the above, the bucket number where expr falls.
Arguments: The following rules apply to the arguments.
• min_value is the minimum value of the histogram range.
• max_value is the maximum value of the histogram range.
• num_buckets must be greater than 0.
• min_value must be less than max_value.
Usage notes:
Each bucket contains values equal to or greater than the base value of that bucket and less than the base value of
the next bucket. For example, with width_bucket(8, 1, 10, 3), the bucket ranges are actually the 0th
"underflow bucket" with the range (-infinity to 0.999...), (1 to 3.999...), (4, to 6.999...), (7 to 9.999...), and the
"overflow bucket" with the range (10 to infinity).
Return type: BIGINT
Added in: CDH 6.1.
Examples:
The function call below creates 3 buckets across the range 1 to 20, each with a width of approximately 6.333, and
returns 2 because the value 8 falls into bucket #2:
WIDTH_BUCKET(8, 1, 20, 3)
The statement below returns a list of accounts with their energy spending and the spending bracket each account
falls into, from 0 through 11. Bucket 0 (the underflow bucket) is assigned to accounts whose energy spending is
less than $50. Bucket 11 (the overflow bucket) is assigned to accounts whose energy spending is greater than or
equal to $1000.
SELECT account, invoice_amount, WIDTH_BUCKET(invoice_amount,50,1000,10)
FROM invoices_june2018
ORDER BY 3;
Impala Bit Functions
Bit manipulation functions perform bitwise operations involved in scientific processing or computer science algorithms.
For example, these functions include setting, clearing, or testing bits within an integer value, or changing the positions
of bits with or without wraparound.
If a function takes two integer arguments that are required to be of the same type, the smaller argument is promoted
to the type of the larger one if required. For example, BITAND(1,4096) treats both arguments as SMALLINT, because
1 can be represented as a TINYINT but 4096 requires a SMALLINT.
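For example, the following call relies on that promotion; the result is 0 because the single 1 bit in each argument is in a different position:
select bitand(1, 4096);   /* 1 and 4096 treated as SMALLINT; returns 0 */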
Remember that all Impala integer values are signed. Therefore, when dealing with binary values where the most
significant bit is 1, the specified or returned values might be negative when represented in base 10.
Whenever any argument is NULL, whether it is the input value, the bit position, or the number of shift or rotate
positions, the return value from any of these functions is also NULL.
Related information:
The bit functions operate on all the integral data types: INT Data Type on page 117, BIGINT Data Type on page 105,
SMALLINT Data Type on page 122, and TINYINT Data Type on page 136.
Function reference:
Impala supports the following bit functions:
• BITAND
• BITNOT
• BITOR
• BITXOR
• COUNTSET
• GETBIT
• ROTATELEFT
• ROTATERIGHT
• SETBIT
• SHIFTLEFT
• SHIFTRIGHT
BITAND(integer_type a, same_type b)
Purpose: Returns an integer value representing the bits that are set to 1 in both of the arguments. If the arguments
are of different sizes, the smaller is promoted to the type of the larger.
Usage notes: The BITAND() function is equivalent to the & binary operator.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show the results of ANDing integer values. 255 contains all 1 bits in its lowermost 8 bits.
32767 contains all 1 bits in its lowermost 15 bits. You can use the bin() function to check the binary representation
of any integer value, although the result is always represented as a 64-bit value. If necessary, the smaller argument
is promoted to the type of the larger one.
select bitand(255, 32767); /* 0000000011111111 & 0111111111111111 */
+--------------------+
| bitand(255, 32767) |
+--------------------+
| 255 |
+--------------------+
select bitand(32767, 1); /* 0111111111111111 & 0000000000000001 */
+------------------+
| bitand(32767, 1) |
+------------------+
| 1 |
+------------------+
select bitand(32, 16); /* 00100000 & 00010000 */
+----------------+
| bitand(32, 16) |
+----------------+
| 0 |
+----------------+
select bitand(12,5); /* 00001100 & 00000101 */
+---------------+
| bitand(12, 5) |
+---------------+
| 4 |
+---------------+
select bitand(-1,15); /* 11111111 & 00001111 */
+----------------+
| bitand(-1, 15) |
+----------------+
| 15 |
+----------------+
BITNOT(integer_type a)
Purpose: Inverts all the bits of the input argument.
Usage notes: The BITNOT() function is equivalent to the ~ unary operator.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
These examples illustrate what happens when you flip all the bits of an integer value. The sign always changes, and
the absolute values of the input and the result differ by one (BITNOT(x) is equivalent to -x - 1).
select bitnot(127); /* 01111111 -> 10000000 */
+-------------+
| bitnot(127) |
+-------------+
| -128 |
+-------------+
select bitnot(16); /* 00010000 -> 11101111 */
+------------+
| bitnot(16) |
+------------+
| -17 |
+------------+
select bitnot(0); /* 00000000 -> 11111111 */
+-----------+
| bitnot(0) |
+-----------+
| -1 |
+-----------+
select bitnot(-128); /* 10000000 -> 01111111 */
+--------------+
| bitnot(-128) |
+--------------+
| 127 |
+--------------+
BITOR(integer_type a, same_type b)
Purpose: Returns an integer value representing the bits that are set to 1 in either of the arguments. If the arguments
are of different sizes, the smaller is promoted to the type of the larger.
Usage notes: The BITOR() function is equivalent to the | binary operator.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show the results of ORing integer values.
select bitor(1,4); /* 00000001 | 00000100 */
+-------------+
| bitor(1, 4) |
+-------------+
| 5 |
+-------------+
select bitor(16,48); /* 00010000 | 00110000 */
+---------------+
| bitor(16, 48) |
+---------------+
| 48 |
+---------------+
select bitor(0,7); /* 00000000 | 00000111 */
+-------------+
| bitor(0, 7) |
+-------------+
| 7 |
+-------------+
BITXOR(integer_type a, same_type b)
Purpose: Returns an integer value representing the bits that are set to 1 in one but not both of the arguments. If
the arguments are of different sizes, the smaller is promoted to the type of the larger.
Usage notes: The BITXOR() function is equivalent to the ^ binary operator.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show the results of XORing integer values. XORing a non-zero value with zero returns the
non-zero value. XORing two identical values returns zero, because all the 1 bits from the first argument are also 1
bits in the second argument. XORing different non-zero values turns off some bits and leaves others turned on,
based on whether the same bit is set in both arguments.
select bitxor(0,15); /* 00000000 ^ 00001111 */
+---------------+
| bitxor(0, 15) |
+---------------+
| 15 |
+---------------+
select bitxor(7,7); /* 00000111 ^ 00000111 */
+--------------+
| bitxor(7, 7) |
+--------------+
| 0 |
+--------------+
select bitxor(8,4); /* 00001000 ^ 00000100 */
+--------------+
| bitxor(8, 4) |
+--------------+
| 12 |
+--------------+
select bitxor(3,7); /* 00000011 ^ 00000111 */
+--------------+
| bitxor(3, 7) |
+--------------+
| 4 |
+--------------+
COUNTSET(integer_type a [, INT zero_or_one])
Purpose: By default, returns the number of 1 bits in the specified integer value. If the optional second argument is
set to zero, it returns the number of 0 bits instead.
Usage notes:
In discussions of information theory, this operation is referred to as the “population count” or “popcount”.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show how to count the number of 1 bits in an integer value.
select countset(1); /* 00000001 */
+-------------+
| countset(1) |
+-------------+
| 1 |
+-------------+
select countset(3); /* 00000011 */
+-------------+
| countset(3) |
+-------------+
| 2 |
+-------------+
select countset(16); /* 00010000 */
+--------------+
| countset(16) |
+--------------+
| 1 |
+--------------+
select countset(17); /* 00010001 */
+--------------+
| countset(17) |
+--------------+
| 2 |
+--------------+
select countset(7,1); /* 00000111 = 3 1 bits; the function counts 1 bits by default */
+----------------+
| countset(7, 1) |
+----------------+
| 3 |
+----------------+
select countset(7,0); /* 00000111 = 5 0 bits; second argument can only be 0 or 1 */
+----------------+
| countset(7, 0) |
+----------------+
| 5 |
+----------------+
GETBIT(integer_type a, INT position)
Purpose: Returns a 0 or 1 representing the bit at a specified position. The positions are numbered right to left,
starting at zero. The position argument cannot be negative.
Usage notes:
When you use a literal input value, it is treated as an 8-bit, 16-bit, and so on value, the smallest type that is
appropriate. The type of the input value limits the range of the positions. Cast the input value to the appropriate
type if you need to ensure it is treated as a 64-bit, 32-bit, and so on value.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show how to test a specific bit within an integer value.
select getbit(1,0); /* 00000001 */
+--------------+
| getbit(1, 0) |
+--------------+
| 1 |
+--------------+
select getbit(16,1); /* 00010000 */
+---------------+
| getbit(16, 1) |
+---------------+
| 0 |
+---------------+
select getbit(16,4); /* 00010000 */
+---------------+
| getbit(16, 4) |
+---------------+
| 1 |
+---------------+
select getbit(16,5); /* 00010000 */
+---------------+
| getbit(16, 5) |
+---------------+
| 0 |
+---------------+
select getbit(-1,3); /* 11111111 */
+---------------+
| getbit(-1, 3) |
+---------------+
| 1 |
+---------------+
select getbit(-1,25); /* 11111111 */
ERROR: Invalid bit position: 25
select getbit(cast(-1 as int),25); /* 11111111111111111111111111111111 */
+-----------------------------+
| getbit(cast(-1 as int), 25) |
+-----------------------------+
| 1 |
+-----------------------------+
ROTATELEFT(integer_type a, INT positions)
Purpose: Rotates an integer value left by a specified number of bits. As the most significant bit is taken out of the
original value, if it is a 1 bit, it is “rotated” back to the least significant bit. Therefore, the final value has the same
number of 1 bits as the original value, just in different positions. In computer science terms, this operation is a
“circular shift”.
Usage notes:
Specifying a second argument of zero leaves the original value unchanged. Rotating a -1 value by any number of
positions still returns -1, because the original value has all 1 bits and all the 1 bits are preserved during rotation.
Similarly, rotating a 0 value by any number of positions still returns 0. Rotating a value by the same number of bits
as in the value returns the same value. Because this is a circular operation, the number of positions is not limited
to the number of bits in the input value. For example, rotating an 8-bit value by 1, 9, 17, and so on positions returns
an identical result in each case.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
select rotateleft(1,4); /* 00000001 -> 00010000 */
+------------------+
| rotateleft(1, 4) |
+------------------+
| 16 |
+------------------+
select rotateleft(-1,155); /* 11111111 -> 11111111 */
+---------------------+
| rotateleft(-1, 155) |
+---------------------+
| -1 |
+---------------------+
select rotateleft(-128,1); /* 10000000 -> 00000001 */
+---------------------+
| rotateleft(-128, 1) |
+---------------------+
| 1 |
+---------------------+
select rotateleft(-127,3); /* 10000001 -> 00001100 */
+---------------------+
| rotateleft(-127, 3) |
+---------------------+
| 12 |
+---------------------+
ROTATERIGHT(integer_type a, INT positions)
Purpose: Rotates an integer value right by a specified number of bits. As the least significant bit is taken out of the
original value, if it is a 1 bit, it is “rotated” back to the most significant bit. Therefore, the final value has the same
number of 1 bits as the original value, just in different positions. In computer science terms, this operation is a
“circular shift”.
Usage notes:
Specifying a second argument of zero leaves the original value unchanged. Rotating a -1 value by any number of
positions still returns -1, because the original value has all 1 bits and all the 1 bits are preserved during rotation.
Similarly, rotating a 0 value by any number of positions still returns 0. Rotating a value by the same number of bits
as in the value returns the same value. Because this is a circular operation, the number of positions is not limited
to the number of bits in the input value. For example, rotating an 8-bit value by 1, 9, 17, and so on positions returns
an identical result in each case.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
select rotateright(16,4); /* 00010000 -> 00000001 */
+--------------------+
| rotateright(16, 4) |
+--------------------+
| 1 |
+--------------------+
select rotateright(-1,155); /* 11111111 -> 11111111 */
+----------------------+
| rotateright(-1, 155) |
+----------------------+
| -1 |
+----------------------+
select rotateright(-128,1); /* 10000000 -> 01000000 */
+----------------------+
| rotateright(-128, 1) |
+----------------------+
| 64 |
+----------------------+
select rotateright(-127,3); /* 10000001 -> 00110000 */
+----------------------+
| rotateright(-127, 3) |
+----------------------+
| 48 |
+----------------------+
SETBIT(integer_type a, INT position [, INT zero_or_one])
Purpose: By default, changes a bit at a specified position to a 1, if it is not already. If the optional third argument
is set to zero, the specified bit is set to 0 instead.
Usage notes:
If the bit at the specified position was already 1 (by default) or 0 (with a third argument of zero), the return value
is the same as the first argument. The positions are numbered right to left, starting at zero. (Therefore, the return
value could be different from the first argument even if the position argument is zero.) The position argument
cannot be negative.
When you use a literal input value, it is treated as an 8-bit, 16-bit, and so on value, the smallest type that is
appropriate. The type of the input value limits the range of the positions. Cast the input value to the appropriate
type if you need to ensure it is treated as a 64-bit, 32-bit, and so on value.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
select setbit(0,0); /* 00000000 -> 00000001 */
+--------------+
| setbit(0, 0) |
+--------------+
| 1 |
+--------------+
select setbit(0,3); /* 00000000 -> 00001000 */
+--------------+
| setbit(0, 3) |
+--------------+
| 8 |
+--------------+
select setbit(7,3); /* 00000111 -> 00001111 */
+--------------+
| setbit(7, 3) |
+--------------+
| 15 |
+--------------+
select setbit(15,3); /* 00001111 -> 00001111 */
+---------------+
| setbit(15, 3) |
+---------------+
| 15 |
+---------------+
select setbit(0,32); /* By default, 0 is a TINYINT with only 8 bits. */
ERROR: Invalid bit position: 32
select setbit(cast(0 as bigint),32); /* For BIGINT, the position can be 0..63. */
+-------------------------------+
| setbit(cast(0 as bigint), 32) |
+-------------------------------+
| 4294967296 |
+-------------------------------+
select setbit(7,3,1); /* 00000111 -> 00001111; setting to 1 is the default */
+-----------------+
| setbit(7, 3, 1) |
+-----------------+
| 15 |
+-----------------+
select setbit(7,2,0); /* 00000111 -> 00000011; third argument of 0 clears instead of
sets */
+-----------------+
| setbit(7, 2, 0) |
+-----------------+
| 3 |
+-----------------+
SHIFTLEFT(integer_type a, INT positions)
Purpose: Shifts an integer value left by a specified number of bits. As the most significant bit is taken out of the
original value, it is discarded and the least significant bit becomes 0. In computer science terms, this operation is a
“logical shift”.
Usage notes:
The final value has either the same number of 1 bits as the original value, or fewer. Shifting an 8-bit value by 8
positions, a 16-bit value by 16 positions, and so on produces a result of zero.
Specifying a second argument of zero leaves the original value unchanged. Shifting any value by 0 returns the original
value. Shifting any value by 1 is the same as multiplying it by 2, as long as the value is small enough; larger values
eventually become negative when shifted, as the sign bit is set. Starting with the value 1 and shifting it left by N
positions gives the same result as 2 to the Nth power, or pow(2,N).
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
select shiftleft(1,0); /* 00000001 -> 00000001 */
+-----------------+
| shiftleft(1, 0) |
+-----------------+
| 1 |
+-----------------+
select shiftleft(1,3); /* 00000001 -> 00001000 */
+-----------------+
| shiftleft(1, 3) |
+-----------------+
| 8 |
+-----------------+
select shiftleft(8,2); /* 00001000 -> 00100000 */
+-----------------+
| shiftleft(8, 2) |
+-----------------+
| 32 |
+-----------------+
select shiftleft(127,1); /* 01111111 -> 11111110 */
+-------------------+
| shiftleft(127, 1) |
+-------------------+
| -2 |
+-------------------+
select shiftleft(127,5); /* 01111111 -> 11100000 */
+-------------------+
| shiftleft(127, 5) |
+-------------------+
| -32 |
+-------------------+
select shiftleft(-1,4); /* 11111111 -> 11110000 */
+------------------+
| shiftleft(-1, 4) |
+------------------+
| -16 |
+------------------+
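The pow(2,N) equivalence mentioned in the usage notes can be checked directly. The following query is an illustrative
sketch rather than output captured from a live session; note that POW() returns a DOUBLE, so its column may display
with a fractional part.
select shiftleft(1,4) as shifted, pow(2,4) as power_of_2; /* both expressions represent 16 */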
SHIFTRIGHT(integer_type a, INT positions)
Purpose: Shifts an integer value right by a specified number of bits. As the least significant bit is taken out of the
original value, it is discarded and the most significant bit becomes 0. In computer science terms, this operation is
a “logical shift”.
Usage notes:
The final value has either the same number of 1 bits as the original value, or fewer. Shifting an 8-bit value
by 8 positions, a 16-bit value by 16 positions, and so on produces a result of zero.
Specifying a second argument of zero leaves the original value unchanged. Shifting any value by 0 returns the original
value. Shifting any positive value right by 1 is the same as dividing it by 2. Negative values become positive when
shifted right.
Return type: Same as the input value
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
select shiftright(16,0); /* 00010000 -> 00010000 */
+-------------------+
| shiftright(16, 0) |
+-------------------+
| 16 |
+-------------------+
select shiftright(16,4); /* 00010000 -> 00000001 */
+-------------------+
| shiftright(16, 4) |
+-------------------+
| 1 |
+-------------------+
select shiftright(16,5); /* 00010000 -> 00000000 */
+-------------------+
| shiftright(16, 5) |
+-------------------+
| 0 |
+-------------------+
select shiftright(-1,1); /* 11111111 -> 01111111 */
+-------------------+
| shiftright(-1, 1) |
+-------------------+
| 127 |
+-------------------+
select shiftright(-1,5); /* 11111111 -> 00000111 */
+-------------------+
| shiftright(-1, 5) |
+-------------------+
| 7 |
+-------------------+
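The division-by-2 behavior mentioned in the usage notes can be checked with a query such as the following. This is
an illustrative sketch rather than output captured from a live session.
select shiftright(20,1) as halved, shiftright(20,2) as quartered; /* 10 and 5, the same as dividing 20 by 2 and by 4 */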
Impala Type Conversion Functions
Conversion functions are usually used in combination with other functions, to explicitly pass the expected data types.
Impala has strict rules regarding data types for function parameters. For example, Impala does not automatically
convert a DOUBLE value to FLOAT, a BIGINT value to INT, or other conversion where precision could be lost or overflow
could occur. Also, for reporting or dealing with loosely defined schemas in big data contexts, you might frequently
need to convert values to or from the STRING type.
Note: Although in CDH 5.5 / Impala 2.3, the SHOW FUNCTIONS output for database
_IMPALA_BUILTINS contains some function signatures matching the pattern castto*, these functions
are not intended for public use and are expected to be hidden in future.
Function reference:
Impala supports the following type conversion functions:
• CAST
• TYPEOF
CAST(expr AS type)
Purpose: Converts the value of an expression to any other type. If the expression value is of a type that cannot be
converted to the target type, the result is NULL.
Usage notes: Use CAST when passing a column value or literal to a function that expects a parameter with a different
type. Frequently used in SQL operations such as CREATE TABLE AS SELECT and INSERT ... VALUES to ensure
that values from various sources are of the appropriate type for the destination columns. Where practical, do a
one-time CAST() operation during the ingestion process to make each column into the appropriate type, rather
than using many CAST() operations in each query; doing type conversions for each row during each query can be
expensive for tables with millions or billions of rows.
The way this function deals with time zones when converting to or from TIMESTAMP values is affected by the
--use_local_tz_for_unix_timestamp_conversions startup flag for the impalad daemon. See TIMESTAMP
Data Type on page 130 for details about how Impala handles time zone considerations for the TIMESTAMP data
type.
Examples:
SELECT CONCAT('Here are the first ',10,' results.'); -- Fails
SELECT CONCAT('Here are the first ',CAST(10 AS STRING),' results.'); -- Succeeds
The following example starts with a text table where every column has a type of STRING, which might be how you
ingest data of unknown schema until you can verify the cleanliness of the underlying values. Then it uses CAST() to
create a new Parquet table with the same data, but using specific numeric data types for the columns with numeric
data. Using numeric types of appropriate sizes can result in substantial space savings on disk and in memory, and
performance improvements in queries, over using strings or larger-than-necessary numeric types.
CREATE TABLE t1 (name STRING, x STRING, y STRING, z STRING);
CREATE TABLE t2 STORED AS PARQUET
AS SELECT
name,
CAST(x AS BIGINT) x,
CAST(y AS TIMESTAMP) y,
CAST(z AS SMALLINT) z
FROM t1;
Related information:
For details of casts from each kind of data type, see the description of the appropriate type: TINYINT Data Type on
page 136, SMALLINT Data Type on page 122, INT Data Type on page 117, BIGINT Data Type on page 105, FLOAT Data
Type on page 116, DOUBLE Data Type on page 114, DECIMAL Data Type (CDH 6.0 / Impala 3.0 or higher only) on page
109, STRING Data Type on page 123, CHAR Data Type (CDH 5.2 or higher only) on page 107, VARCHAR Data Type (CDH
5.2 or higher only) on page 137, TIMESTAMP Data Type on page 130, BOOLEAN Data Type on page 106
TYPEOF(type value)
Purpose: Returns the name of the data type corresponding to an expression. For types with extra attributes, such
as length for CHAR and VARCHAR, or precision and scale for DECIMAL, includes the full specification of the type.
Return type: STRING
Usage notes: Typically used in interactive exploration of a schema, or in application code that programmatically
generates schema definitions such as CREATE TABLE statements, for example, to get the type of an expression
such as col1 / col2 or CONCAT(col1, col2, col3). This function is especially useful for arithmetic expressions
involving DECIMAL types because the precision and scale of the result can be different from those of the operands.
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
SELECT TYPEOF(2), TYPEOF(2+2);
+-----------+---------------+
| typeof(2) | typeof(2 + 2) |
+-----------+---------------+
| TINYINT | SMALLINT |
+-----------+---------------+
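To explore the DECIMAL behavior mentioned in the usage notes, you might run a query such as the following sketch. It
is illustrative only; the exact precision and scale reported depend on Impala's DECIMAL arithmetic rules for the
operand types involved.
select typeof(cast(100 as decimal(5,2)) / 3) as division_result_type,
  typeof(cast(100 as decimal(5,2)) + cast(1 as decimal(5,2))) as addition_result_type;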
Impala Date and Time Functions
The underlying Impala data type for date and time data is TIMESTAMP, which has both a date and a time portion.
Functions that extract a single field, such as hour() or minute(), typically return an integer value. Functions that
format the date portion, such as to_date(), typically return a string value, while functions that adjust a date, such as date_add(), return a TIMESTAMP value.
You can also adjust a TIMESTAMP value by adding or subtracting an INTERVAL expression. See TIMESTAMP Data Type
on page 130 for details. INTERVAL expressions are also allowed as the second argument for the date_add() and
date_sub() functions, rather than integers.
Some of these functions are affected by the setting of the --use_local_tz_for_unix_timestamp_conversions
startup flag for the impalad daemon. This setting is off by default, meaning that functions such as from_unixtime()
and unix_timestamp() consider the input values to always represent the UTC time zone. This setting also applies
when you CAST() a BIGINT value to TIMESTAMP, or a TIMESTAMP value to BIGINT. When this setting is enabled,
these functions and operations convert to and from values representing the local time zone. See TIMESTAMP Data
Type on page 130 for details about how Impala handles time zone considerations for the TIMESTAMP data type.
Function reference:
Impala supports the following date and time functions:
• ADD_MONTHS
• ADDDATE
• CURRENT_TIMESTAMP
• DATE_ADD
• DATE_PART
• DATE_SUB
• DATE_TRUNC
• DATEDIFF
• DAY
• DAYNAME
• DAYOFWEEK
• DAYOFYEAR
• DAYS_ADD
• DAYS_SUB
• EXTRACT
• FROM_TIMESTAMP
• FROM_UNIXTIME
• FROM_UTC_TIMESTAMP
• HOUR
• HOURS_ADD
• HOURS_SUB
• INT_MONTHS_BETWEEN
• MICROSECONDS_ADD
• MICROSECONDS_SUB
• MILLISECOND
• MILLISECONDS_ADD
• MILLISECONDS_SUB
• MINUTE
• MINUTES_ADD
• MINUTES_SUB
• MONTH
• MONTHNAME
• MONTHS_ADD
• MONTHS_BETWEEN
• MONTHS_SUB
• NANOSECONDS_ADD
• NANOSECONDS_SUB
• NEXT_DAY
• NOW
• QUARTER
• SECOND
• SECONDS_ADD
• SECONDS_SUB
• SUBDATE
• TIMEOFDAY
• TIMESTAMP_CMP
• TO_DATE
• TO_TIMESTAMP
• TO_UTC_TIMESTAMP
• TRUNC
• UNIX_TIMESTAMP
• UTC_TIMESTAMP
• WEEKOFYEAR
• WEEKS_ADD
• WEEKS_SUB
• YEAR
• YEARS_ADD
• YEARS_SUB
ADD_MONTHS(TIMESTAMP date, INT months), ADD_MONTHS(TIMESTAMP date, BIGINT months)
Purpose: Returns the specified date and time plus some number of months.
Return type: TIMESTAMP
Usage notes:
Same as MONTHS_ADD(). Available in Impala 1.4 and higher. For compatibility when porting code with vendor
extensions.
Examples:
The following examples demonstrate adding months to construct the same day of the month in a different month;
how if the current day of the month does not exist in the target month, the last day of that month is substituted;
and how a negative argument produces a return value from a previous month.
select now(), add_months(now(), 2);
+-------------------------------+-------------------------------+
| now() | add_months(now(), 2) |
+-------------------------------+-------------------------------+
| 2016-05-31 10:47:00.429109000 | 2016-07-31 10:47:00.429109000 |
+-------------------------------+-------------------------------+
select now(), add_months(now(), 1);
+-------------------------------+-------------------------------+
| now() | add_months(now(), 1) |
+-------------------------------+-------------------------------+
| 2016-05-31 10:47:14.540226000 | 2016-06-30 10:47:14.540226000 |
+-------------------------------+-------------------------------+
select now(), add_months(now(), -1);
+-------------------------------+-------------------------------+
| now() | add_months(now(), -1) |
+-------------------------------+-------------------------------+
| 2016-05-31 10:47:31.732298000 | 2016-04-30 10:47:31.732298000 |
+-------------------------------+-------------------------------+
ADDDATE(TIMESTAMP startdate, INT days), ADDDATE(TIMESTAMP startdate, BIGINT days)
Purpose: Adds a specified number of days to a TIMESTAMP value. Similar to DATE_ADD(), but starts with an actual
TIMESTAMP value instead of a string that is converted to a TIMESTAMP.
Return type: TIMESTAMP
Examples:
The following examples show how to add a number of days to a TIMESTAMP. The number of days can also be
negative, which gives the same effect as the subdate() function.
select now() as right_now, adddate(now(), 30) as now_plus_30;
+-------------------------------+-------------------------------+
| right_now | now_plus_30 |
+-------------------------------+-------------------------------+
| 2016-05-20 10:23:08.640111000 | 2016-06-19 10:23:08.640111000 |
+-------------------------------+-------------------------------+
select now() as right_now, adddate(now(), -15) as now_minus_15;
+-------------------------------+-------------------------------+
| right_now | now_minus_15 |
+-------------------------------+-------------------------------+
| 2016-05-20 10:23:38.214064000 | 2016-05-05 10:23:38.214064000 |
+-------------------------------+-------------------------------+
CURRENT_TIMESTAMP()
Purpose: Alias for the NOW() function.
Return type: TIMESTAMP
Examples:
select now(), current_timestamp();
+-------------------------------+-------------------------------+
| now() | current_timestamp() |
+-------------------------------+-------------------------------+
| 2016-05-19 16:10:14.237849000 | 2016-05-19 16:10:14.237849000 |
+-------------------------------+-------------------------------+
select current_timestamp() as right_now,
current_timestamp() + interval 3 hours as in_three_hours;
+-------------------------------+-------------------------------+
| right_now | in_three_hours |
+-------------------------------+-------------------------------+
| 2016-05-19 16:13:20.017117000 | 2016-05-19 19:13:20.017117000 |
+-------------------------------+-------------------------------+
DATE_ADD(TIMESTAMP startdate, INT days), DATE_ADD(TIMESTAMP startdate, interval_expression)
Purpose: Adds a specified number of days to a TIMESTAMP value. With an INTERVAL expression as the second
argument, you can calculate a delta value using other units such as weeks, years, hours, seconds, and so on; see
TIMESTAMP Data Type on page 130 for details.
Return type: TIMESTAMP
Examples:
The following example shows the simplest usage, of adding a specified number of days to a TIMESTAMP value:
select now() as right_now, date_add(now(), 7) as next_week;
+-------------------------------+-------------------------------+
| right_now | next_week |
+-------------------------------+-------------------------------+
| 2016-05-20 11:03:48.687055000 | 2016-05-27 11:03:48.687055000 |
+-------------------------------+-------------------------------+
The following examples show the shorthand notation of an INTERVAL expression, instead of specifying the precise
number of days. The INTERVAL notation also lets you work with units smaller than a single day.
select now() as right_now, date_add(now(), interval 3 weeks) as in_3_weeks;
+-------------------------------+-------------------------------+
| right_now | in_3_weeks |
+-------------------------------+-------------------------------+
| 2016-05-20 11:05:39.173331000 | 2016-06-10 11:05:39.173331000 |
+-------------------------------+-------------------------------+
select now() as right_now, date_add(now(), interval 6 hours) as in_6_hours;
+-------------------------------+-------------------------------+
| right_now | in_6_hours |
+-------------------------------+-------------------------------+
| 2016-05-20 11:13:51.492536000 | 2016-05-20 17:13:51.492536000 |
+-------------------------------+-------------------------------+
Like all date/time functions that deal with months, date_add() handles nonexistent dates past the end of a month
by setting the date to the last day of the month. The following example shows how the nonexistent date April 31st
is normalized to April 30th:
select date_add(cast('2016-01-31' as timestamp), interval 3 months) as 'april_31st';
+---------------------+
| april_31st |
+---------------------+
| 2016-04-30 00:00:00 |
+---------------------+
DATE_PART(STRING a, TIMESTAMP timestamp)
Purpose: Similar to EXTRACT(), with the argument order reversed. Supports the same date and time units as
EXTRACT(). For compatibility with SQL code containing vendor extensions.
Return type: bigint
Examples:
select date_part('year',now()) as current_year;
+--------------+
| current_year |
+--------------+
| 2016 |
+--------------+
select date_part('hour',now()) as hour_of_day;
+-------------+
| hour_of_day |
+-------------+
| 11 |
+-------------+
DATE_SUB(TIMESTAMP startdate, INT days), DATE_SUB(TIMESTAMP startdate, interval_expression)
Purpose: Subtracts a specified number of days from a TIMESTAMP value. With an INTERVAL expression as the
second argument, you can calculate a delta value using other units such as weeks, years, hours, seconds, and so
on; see TIMESTAMP Data Type on page 130 for details.
Return type: TIMESTAMP
Examples:
The following example shows the simplest usage, of subtracting a specified number of days from a TIMESTAMP
value:
select now() as right_now, date_sub(now(), 7) as last_week;
+-------------------------------+-------------------------------+
| right_now | last_week |
+-------------------------------+-------------------------------+
| 2016-05-20 11:21:30.491011000 | 2016-05-13 11:21:30.491011000 |
+-------------------------------+-------------------------------+
The following examples show the shorthand notation of an INTERVAL expression, instead of specifying the precise
number of days. The INTERVAL notation also lets you work with units smaller than a single day.
select now() as right_now, date_sub(now(), interval 3 weeks) as 3_weeks_ago;
+-------------------------------+-------------------------------+
| right_now | 3_weeks_ago |
+-------------------------------+-------------------------------+
| 2016-05-20 11:23:05.176953000 | 2016-04-29 11:23:05.176953000 |
+-------------------------------+-------------------------------+
select now() as right_now, date_sub(now(), interval 6 hours) as 6_hours_ago;
+-------------------------------+-------------------------------+
| right_now | 6_hours_ago |
+-------------------------------+-------------------------------+
| 2016-05-20 11:23:35.439631000 | 2016-05-20 05:23:35.439631000 |
+-------------------------------+-------------------------------+
Like all date/time functions that deal with months, DATE_SUB() handles nonexistent dates past the end of a month
by setting the date to the last day of the month. The following example shows how the nonexistent date April 31st
is normalized to April 30th:
select date_sub(cast('2016-05-31' as timestamp), interval 1 months) as 'april_31st';
+---------------------+
| april_31st |
+---------------------+
| 2016-04-30 00:00:00 |
+---------------------+
DATE_TRUNC(STRING unit, TIMESTAMP timestamp)
Purpose: Truncates a TIMESTAMP value to the specified precision.
Unit argument: The unit argument value for truncating TIMESTAMP values is not case-sensitive. This argument
string can be one of:
• microseconds
• milliseconds
• second
• minute
• hour
• day
• week
• month
• year
• decade
• century
• millennium
For example, calling date_trunc('hour',ts) truncates ts to the beginning of the corresponding hour, with all
minutes, seconds, milliseconds, and so on set to zero. Calling date_trunc('milliseconds',ts) truncates ts
to the beginning of the corresponding millisecond, with all microseconds and nanoseconds set to zero.
Note: The sub-second units are specified in plural form. All units representing one second or more
are specified in singular form.
Added in: CDH 5.14.0 / Impala 2.11.0
Usage notes:
Although this function is similar to calling TRUNC() with a TIMESTAMP argument, the order of arguments and the
recognized units are different between TRUNC() and DATE_TRUNC(). Therefore, these functions are not
interchangeable.
This function is typically used in GROUP BY queries to aggregate results from the same hour, day, week, month,
quarter, and so on. You can also use this function in an INSERT ... SELECT into a partitioned table to divide
TIMESTAMP values into the correct partition.
Because the return value is a TIMESTAMP, if you cast the result of DATE_TRUNC() to STRING, you will often see
zeroed-out portions such as 00:00:00 in the time field. If you only need the individual units such as hour, day,
month, or year, use the EXTRACT() function instead. If you need the individual units from a truncated TIMESTAMP
value, run the DATE_TRUNC() function on the original value, then run EXTRACT() on the result.
Return type: TIMESTAMP
Examples:
The following examples show how to call DATE_TRUNC() with different unit values:
select now(), date_trunc('second', now());
+-------------------------------+-----------------------------------+
| now() | date_trunc('second', now()) |
+-------------------------------+-----------------------------------+
| 2017-12-05 13:58:04.565403000 | 2017-12-05 13:58:04 |
+-------------------------------+-----------------------------------+
select now(), date_trunc('hour', now());
+-------------------------------+---------------------------+
| now() | date_trunc('hour', now()) |
+-------------------------------+---------------------------+
| 2017-12-05 13:59:01.884459000 | 2017-12-05 13:00:00 |
+-------------------------------+---------------------------+
select now(), date_trunc('millennium', now());
+-------------------------------+---------------------------------+
| now() | date_trunc('millennium', now()) |
+-------------------------------+---------------------------------+
| 2017-12-05 14:00:30.296812000 | 2000-01-01 00:00:00 |
+-------------------------------+---------------------------------+
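The GROUP BY pattern described in the usage notes might look like the following sketch. The table and column names
(web_logs, ts) are hypothetical placeholders rather than objects defined elsewhere in this guide.
select date_trunc('hour', ts) as log_hour, count(*) as events
  from web_logs
  group by date_trunc('hour', ts)
  order by log_hour;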
DATEDIFF(TIMESTAMP enddate, TIMESTAMP startdate)
Purpose: Returns the number of days between two TIMESTAMP values.
Return type: INT
Usage notes:
If the first argument represents a later date than the second argument, the return value is positive. If both arguments
represent the same date, the return value is zero. The time portions of the TIMESTAMP values are irrelevant. For
example, 11:59 PM on one day and 12:01 AM on the next day represent a datediff() of -1 because the date/time
values represent different days, even though the TIMESTAMP values differ by only 2 minutes.
Examples:
The following example shows how comparing a “late” value with an “earlier” value produces a positive number. In
this case, the result is (365 * 5) + 1, because one of the intervening years is a leap year.
select now() as right_now, datediff(now() + interval 5 years, now()) as in_5_years;
+-------------------------------+------------+
| right_now | in_5_years |
+-------------------------------+------------+
| 2016-05-20 13:43:55.873826000 | 1826 |
+-------------------------------+------------+
The following examples show how the return value represents the number of days between the associated dates,
regardless of the time portion of each TIMESTAMP. For example, different times on the same day produce a
DATEDIFF() of 0, regardless of which one is earlier or later. But if the arguments represent different dates,
DATEDIFF() returns a non-zero integer value, regardless of the time portions of the dates.
select now() as right_now, datediff(now(), now() + interval 4 hours) as in_4_hours;
+-------------------------------+------------+
| right_now | in_4_hours |
+-------------------------------+------------+
| 2016-05-20 13:42:05.302747000 | 0 |
+-------------------------------+------------+
select now() as right_now, datediff(now(), now() - interval 4 hours) as 4_hours_ago;
+-------------------------------+-------------+
| right_now | 4_hours_ago |
+-------------------------------+-------------+
| 2016-05-20 13:42:21.134958000 | 0 |
+-------------------------------+-------------+
select now() as right_now, datediff(now(), now() + interval 12 hours) as in_12_hours;
+-------------------------------+-------------+
| right_now | in_12_hours |
+-------------------------------+-------------+
| 2016-05-20 13:42:44.765873000 | -1 |
+-------------------------------+-------------+
select now() as right_now, datediff(now(), now() - interval 18 hours) as 18_hours_ago;
+-------------------------------+--------------+
| right_now | 18_hours_ago |
+-------------------------------+--------------+
| 2016-05-20 13:54:38.829827000 | 1 |
+-------------------------------+--------------+
DAY(TIMESTAMP date), DAYOFMONTH(TIMESTAMP date)
Purpose: Returns the day field from the date portion of a TIMESTAMP. The value represents the day of the month,
and is therefore in the range 1-31, or less for months without 31 days.
Return type: INT
Examples:
The following examples show how the day value corresponds to the day of the month, resetting back to 1 at the
start of each month.
select now(), day(now());
+-------------------------------+------------+
| now() | day(now()) |
+-------------------------------+------------+
| 2016-05-20 15:01:51.042185000 | 20 |
+-------------------------------+------------+
select now() + interval 11 days, day(now() + interval 11 days);
+-------------------------------+-------------------------------+
| now() + interval 11 days | day(now() + interval 11 days) |
+-------------------------------+-------------------------------+
| 2016-05-31 15:05:56.843139000 | 31 |
+-------------------------------+-------------------------------+
select now() + interval 12 days, day(now() + interval 12 days);
+-------------------------------+-------------------------------+
| now() + interval 12 days | day(now() + interval 12 days) |
+-------------------------------+-------------------------------+
| 2016-06-01 15:06:05.074236000 | 1 |
+-------------------------------+-------------------------------+
The following examples show how the day value is NULL for nonexistent dates or misformatted date strings.
-- 2016 is a leap year, so it has a Feb. 29.
select day('2016-02-29');
+-------------------+
| day('2016-02-29') |
+-------------------+
| 29 |
+-------------------+
-- 2015 is not a leap year, so Feb. 29 is nonexistent.
select day('2015-02-29');
+-------------------+
| day('2015-02-29') |
+-------------------+
| NULL |
+-------------------+
-- A string that does not match the expected YYYY-MM-DD format
-- produces an invalid TIMESTAMP, causing day() to return NULL.
select day('2016-02-028');
+--------------------+
| day('2016-02-028') |
+--------------------+
| NULL |
+--------------------+
DAYNAME(TIMESTAMP date)
Purpose: Returns the day field from a TIMESTAMP value, converted to the string corresponding to that day name.
The range of return values is 'Sunday' to 'Saturday'. Used in report-generating queries, as an alternative to
calling DAYOFWEEK() and turning that numeric return value into a string using a CASE expression.
Return type: STRING
Examples:
The following examples show the day name associated with TIMESTAMP values representing different days.
select now() as right_now,
dayofweek(now()) as todays_day_of_week,
dayname(now()) as todays_day_name;
+-------------------------------+--------------------+-----------------+
| right_now | todays_day_of_week | todays_day_name |
+-------------------------------+--------------------+-----------------+
| 2016-05-31 10:57:03.953670000 | 3 | Tuesday |
+-------------------------------+--------------------+-----------------+
select now() + interval 1 day as tomorrow,
dayname(now() + interval 1 day) as tomorrows_day_name;
+-------------------------------+--------------------+
| tomorrow | tomorrows_day_name |
+-------------------------------+--------------------+
| 2016-06-01 10:58:53.945761000 | Wednesday |
+-------------------------------+--------------------+
DAYOFWEEK(TIMESTAMP date)
Purpose: Returns the day field from the date portion of a TIMESTAMP, corresponding to the day of the week. The
range of return values is 1 (Sunday) to 7 (Saturday).
Return type: INT
Examples:
select now() as right_now,
dayofweek(now()) as todays_day_of_week,
dayname(now()) as todays_day_name;
+-------------------------------+--------------------+-----------------+
| right_now | todays_day_of_week | todays_day_name |
+-------------------------------+--------------------+-----------------+
| 2016-05-31 10:57:03.953670000 | 3 | Tuesday |
+-------------------------------+--------------------+-----------------+
DAYOFYEAR(TIMESTAMP date)
Purpose: Returns the day field from a TIMESTAMP value, corresponding to the day of the year. The range of return
values is 1 (January 1) to 366 (December 31 of a leap year).
Return type: INT
Examples:
The following examples show return values from the dayofyear() function. The same date in different years
returns a different day number for all dates after February 28, because 2016 is a leap year while 2015 is not a leap
year.
select now() as right_now,
dayofyear(now()) as today_day_of_year;
+-------------------------------+-------------------+
| right_now | today_day_of_year |
+-------------------------------+-------------------+
| 2016-05-31 11:05:48.314932000 | 152 |
+-------------------------------+-------------------+
select now() - interval 1 year as last_year,
dayofyear(now() - interval 1 year) as year_ago_day_of_year;
+-------------------------------+----------------------+
| last_year | year_ago_day_of_year |
+-------------------------------+----------------------+
| 2015-05-31 11:07:03.733689000 | 151 |
+-------------------------------+----------------------+
DAYS_ADD(TIMESTAMP startdate, INT days), DAYS_ADD(TIMESTAMP startdate, BIGINT days)
Purpose: Adds a specified number of days to a TIMESTAMP value. Similar to DATE_ADD(), but starts with an actual
TIMESTAMP value instead of a string that is converted to a TIMESTAMP.
Return type: TIMESTAMP
Examples:
select now() as right_now, days_add(now(), 31) as 31_days_later;
+-------------------------------+-------------------------------+
| right_now | 31_days_later |
+-------------------------------+-------------------------------+
| 2016-05-31 11:12:32.216764000 | 2016-07-01 11:12:32.216764000 |
+-------------------------------+-------------------------------+
DAYS_SUB(TIMESTAMP startdate, INT days), DAYS_SUB(TIMESTAMP startdate, BIGINT days)
Purpose: Subtracts a specified number of days from a TIMESTAMP value. Similar to DATE_SUB(), but starts with
an actual TIMESTAMP value instead of a string that is converted to a TIMESTAMP.
Return type: TIMESTAMP
Examples:
select now() as right_now, days_sub(now(), 31) as 31_days_ago;
+-------------------------------+-------------------------------+
| right_now | 31_days_ago |
+-------------------------------+-------------------------------+
| 2016-05-31 11:13:42.163905000 | 2016-04-30 11:13:42.163905000 |
+-------------------------------+-------------------------------+
EXTRACT(TIMESTAMP timestamp, STRING unit), EXTRACT(unit FROM TIMESTAMP ts)
Purpose: Returns one of the numeric date or time fields from a TIMESTAMP value.
Unit argument: The unit string can be one of epoch, year, quarter, month, day, hour, minute, second, or
millisecond. This argument value is case-insensitive.
If you specify millisecond for the unit argument, the function returns the seconds component and the milliseconds
component. For example, EXTRACT(CAST('2006-05-12 18:27:28.123456789' AS TIMESTAMP),
'MILLISECOND') will return 28123.
In Impala 2.0 and higher, you can use special syntax rather than a regular function call, for compatibility with code
that uses the SQL-99 format with the FROM keyword. With this style, the unit names are identifiers rather than
STRING literals. For example, the following calls are both equivalent:
EXTRACT(year FROM NOW());
EXTRACT(NOW(), 'year');
Usage notes:
Typically used in GROUP BY queries to arrange results by hour, day, month, and so on. You can also use this function
in an INSERT ... SELECT statement to insert into a partitioned table to split up TIMESTAMP values into individual
parts, if the partitioned table has separate partition key columns representing year, month, day, and so on. If you
need to divide by more complex units of time, such as by week or by quarter, use the TRUNC() function instead.
Return type: BIGINT
Examples:
SELECT NOW() AS right_now,
EXTRACT(day FROM NOW()) AS this_day,
EXTRACT(hour FROM NOW()) AS this_hour;
+-------------------------------+----------+-----------+
| right_now | this_day | this_hour |
+-------------------------------+----------+-----------+
| 2016-05-31 11:19:24.025303000 | 31 | 11 |
+-------------------------------+----------+-----------+
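The partitioned INSERT ... SELECT pattern described in the usage notes might look like the following sketch. The
table names (events_part, events_raw) and columns are hypothetical placeholders used only for illustration; the
partition key columns must appear last in the select list.
insert into events_part partition (year, month, day)
  select event_id,
    extract(year from ts) as year,
    extract(month from ts) as month,
    extract(day from ts) as day
  from events_raw;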
FROM_TIMESTAMP(TIMESTAMP datetime, STRING pattern), FROM_TIMESTAMP(STRING datetime, STRING pattern)
Purpose: Converts a TIMESTAMP value into a string representing the same value.
Return type: STRING
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes:
The FROM_TIMESTAMP() function provides a flexible way to convert TIMESTAMP values into arbitrary string formats
for reporting purposes.
Because Impala implicitly converts string values into TIMESTAMP, you can pass date/time values represented as
strings (in the standard yyyy-MM-dd HH:mm:ss.SSS format) to this function. The result is a string using different
separator characters, order of fields, spelled-out month names, or other variation of the date/time string
representation.
The allowed tokens for the pattern string are the same as for the FROM_UNIXTIME() function.
Examples:
The following examples show different ways to format a TIMESTAMP value as a string:
-- Reformat a TIMESTAMP value.
SELECT FROM_TIMESTAMP(NOW(), 'yyyy/MM/dd');
+-------------------------------------+
| from_timestamp(now(), 'yyyy/mm/dd') |
+-------------------------------------+
| 2018/10/09 |
+-------------------------------------+
-- Alternative format for reporting purposes.
SELECT FROM_TIMESTAMP('1984-09-25 16:45:30.125', 'MMM dd, yyyy HH:mm:ss.SSS');
+------------------------------------------------------------------------+
| from_timestamp('1984-09-25 16:45:30.125', 'mmm dd, yyyy hh:mm:ss.sss') |
+------------------------------------------------------------------------+
| Sep 25, 1984 16:45:30.125 |
+------------------------------------------------------------------------+
FROM_UNIXTIME(BIGINT unixtime[, STRING format])
Purpose: Converts the number of seconds from the Unix epoch to the specified time into a string in the local time
zone.
Return type: STRING
In Impala 2.2.0 and higher, built-in functions that accept or return integers representing TIMESTAMP values use the
BIGINT type for parameters and return values, rather than INT. This change lets the date and time functions avoid
an overflow error that would otherwise occur on January 19th, 2038 (known as the “Year 2038 problem” or “Y2K38
problem”). This change affects the FROM_UNIXTIME() and UNIX_TIMESTAMP() functions. You might need to
change application code that interacts with these functions, change the types of columns that store the return
values, or add CAST() calls to SQL statements that call these functions.
Usage notes:
The format string accepts the variations allowed for the TIMESTAMP data type: date plus time, date by itself, time
by itself, and optional fractional seconds for the time. See TIMESTAMP Data Type on page 130 for details.
Currently, the format string is case-sensitive, especially to distinguish m for minutes and M for months. In Impala 1.3
and later, you can switch the order of elements, use alternative separator characters, and use a different number
of placeholders for each unit. Adding more instances of y, d, H, and so on produces output strings zero-padded to
the requested number of characters. The exception is M for months, where M produces a non-padded value such
as 3, MM produces a zero-padded value such as 03, MMM produces an abbreviated month name such as Mar, and
sequences of 4 or more M are not allowed. A date string including all fields could be 'yyyy-MM-dd
HH:mm:ss.SSSSSS', 'dd/MM/yyyy HH:mm:ss.SSSSSS', 'MMM dd, yyyy HH.mm.ss (SSSSSS)' or other
combinations of placeholders and separator characters.
The way this function deals with time zones when converting to or from TIMESTAMP values is affected by the
--use_local_tz_for_unix_timestamp_conversions startup flag for the impalad daemon. See TIMESTAMP
Data Type on page 130 for details about how Impala handles time zone considerations for the TIMESTAMP data
type.
Note:
The more flexible format strings allowed with the built-in functions do not change the rules about
using CAST() to convert from a string to a TIMESTAMP value. Strings being converted through
CAST() must still have the elements in the specified order and use the specified delimiter characters,
as described in TIMESTAMP Data Type on page 130.
Examples:
SELECT FROM_UNIXTIME(1392394861,'yyyy-MM-dd HH:mm:ss.SSSS');
+-------------------------------------------------------+
| from_unixtime(1392394861, 'yyyy-mm-dd hh:mm:ss.ssss') |
+-------------------------------------------------------+
| 2014-02-14 16:21:01.0000 |
+-------------------------------------------------------+
SELECT FROM_UNIXTIME(1392394861,'HH:mm:ss.SSSS');
+--------------------------------------------+
| from_unixtime(1392394861, 'hh:mm:ss.ssss') |
+--------------------------------------------+
| 16:21:01.0000 |
+--------------------------------------------+
UNIX_TIMESTAMP() and FROM_UNIXTIME() are often used in combination to convert a TIMESTAMP value into
a particular string format. For example:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(NOW() + interval 3 days),
'yyyy/MM/dd HH:mm') AS yyyy_mm_dd_hh_mm;
+------------------+
| yyyy_mm_dd_hh_mm |
+------------------+
| 2016/06/03 11:38 |
+------------------+
FROM_UTC_TIMESTAMP(TIMESTAMP timestamp, STRING timezone)
Purpose: Converts a specified UTC timestamp value into the appropriate value for a specified time zone.
Return type: TIMESTAMP
Usage notes: Often used to translate UTC time zone data stored in a table back to the local date and time for
reporting. The opposite of the TO_UTC_TIMESTAMP() function.
To determine the time zone of the server you are connected to, in CDH 5.5 / Impala 2.3 and higher you can call the
timeofday() function, which includes the time zone specifier in its return value. Remember that with cloud
computing, the server you interact with might be in a different time zone than you are, or different sessions might
connect to servers in different time zones, or a cluster might include servers in more than one time zone.
Examples:
See discussion of time zones in TIMESTAMP Data Type on page 130 for information about using this function for
conversions between the local time zone and UTC.
The following example shows how when TIMESTAMP values representing the UTC time zone are stored in a table,
a query can display the equivalent local date and time for a different time zone.
with t1 as (select cast('2016-06-02 16:25:36.116143000' as timestamp) as utc_datetime)
select utc_datetime as 'Date/time in Greenwich UK',
from_utc_timestamp(utc_datetime, 'PDT')
as 'Equivalent in California USA'
from t1;
+-------------------------------+-------------------------------+
| date/time in greenwich uk | equivalent in california usa |
+-------------------------------+-------------------------------+
| 2016-06-02 16:25:36.116143000 | 2016-06-02 09:25:36.116143000 |
+-------------------------------+-------------------------------+
The following example shows that for a date and time when daylight savings is in effect (PDT), the UTC time is 7
hours ahead of the local California time; while when daylight savings is not in effect (PST), the UTC time is 8 hours
ahead of the local California time.
select now() as local_datetime,
to_utc_timestamp(now(), 'PDT') as utc_datetime;
+-------------------------------+-------------------------------+
| local_datetime | utc_datetime |
+-------------------------------+-------------------------------+
| 2016-05-31 11:50:02.316883000 | 2016-05-31 18:50:02.316883000 |
+-------------------------------+-------------------------------+
select '2016-01-05' as local_datetime,
to_utc_timestamp('2016-01-05', 'PST') as utc_datetime;
+----------------+---------------------+
| local_datetime | utc_datetime |
+----------------+---------------------+
| 2016-01-05 | 2016-01-05 08:00:00 |
+----------------+---------------------+
HOUR(TIMESTAMP date)
Purpose: Returns the hour field from a TIMESTAMP field.
Return type: INT
Examples:
select now() as right_now, hour(now()) as current_hour;
+-------------------------------+--------------+
| right_now | current_hour |
+-------------------------------+--------------+
| 2016-06-01 14:14:12.472846000 | 14 |
+-------------------------------+--------------+
select now() + interval 12 hours as 12_hours_from_now,
hour(now() + interval 12 hours) as hour_in_12_hours;
+-------------------------------+-------------------+
| 12_hours_from_now | hour_in_12_hours |
+-------------------------------+-------------------+
| 2016-06-02 02:15:32.454750000 | 2 |
+-------------------------------+-------------------+
HOURS_ADD(TIMESTAMP date, INT hours), HOURS_ADD(TIMESTAMP date, BIGINT hours)
Purpose: Returns the specified date and time plus some number of hours.
Return type: TIMESTAMP
Examples:
select now() as right_now,
hours_add(now(), 12) as in_12_hours;
+-------------------------------+-------------------------------+
| right_now | in_12_hours |
+-------------------------------+-------------------------------+
| 2016-06-01 14:19:48.948107000 | 2016-06-02 02:19:48.948107000 |
+-------------------------------+-------------------------------+
HOURS_SUB(TIMESTAMP date, INT hours), HOURS_SUB(TIMESTAMP date, BIGINT hours)
Purpose: Returns the specified date and time minus some number of hours.
Return type: TIMESTAMP
Examples:
select now() as right_now,
hours_sub(now(), 18) as 18_hours_ago;
+-------------------------------+-------------------------------+
| right_now | 18_hours_ago |
+-------------------------------+-------------------------------+
| 2016-06-01 14:23:13.868150000 | 2016-05-31 20:23:13.868150000 |
+-------------------------------+-------------------------------+
INT_MONTHS_BETWEEN(TIMESTAMP newer, TIMESTAMP older)
Purpose: Returns the number of months between the date portions of two TIMESTAMP values, as an INT representing
only the full months that passed.
Return type: INT
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes:
Typically used in business contexts, for example to determine whether a specified number of months have passed
or whether some end-of-month deadline was reached.
The method of determining the number of elapsed months includes some special handling of months with different
numbers of days that creates edge cases for dates between the 28th and 31st days of certain months. See
MONTHS_BETWEEN() for details. The INT_MONTHS_BETWEEN() result is essentially the FLOOR() of the
MONTHS_BETWEEN() result.
If either value is NULL, which could happen for example when converting a nonexistent date string such as
'2015-02-29' to a TIMESTAMP, the result is also NULL.
If the first argument represents an earlier time than the second argument, the result is negative.
Examples:
/* Less than a full month = 0. */
select int_months_between('2015-02-28', '2015-01-29');
+------------------------------------------------+
| int_months_between('2015-02-28', '2015-01-29') |
+------------------------------------------------+
| 0 |
+------------------------------------------------+
/* Last day of month to last day of next month = 1. */
select int_months_between('2015-02-28', '2015-01-31');
+------------------------------------------------+
| int_months_between('2015-02-28', '2015-01-31') |
+------------------------------------------------+
| 1 |
+------------------------------------------------+
/* Slightly less than 2 months = 1. */
select int_months_between('2015-03-28', '2015-01-31');
+------------------------------------------------+
| int_months_between('2015-03-28', '2015-01-31') |
+------------------------------------------------+
| 1 |
+------------------------------------------------+
/* 2 full months (identical days of the month) = 2. */
select int_months_between('2015-03-31', '2015-01-31');
+------------------------------------------------+
| int_months_between('2015-03-31', '2015-01-31') |
+------------------------------------------------+
| 2 |
+------------------------------------------------+
/* Last day of month to last day of month-after-next = 2. */
select int_months_between('2015-03-31', '2015-01-30');
+------------------------------------------------+
| int_months_between('2015-03-31', '2015-01-30') |
+------------------------------------------------+
| 2 |
+------------------------------------------------+
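To see the FLOOR() relationship with MONTHS_BETWEEN() described in the usage notes, you can compare the two functions
on the same arguments. The values in the comment below match the examples shown here and under MONTHS_BETWEEN().
select months_between('2015-02-28', '2015-01-29') as fractional,
  int_months_between('2015-02-28', '2015-01-29') as whole_months;
/* fractional is approximately 0.97; whole_months is 0. */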
LAST_DAY(TIMESTAMP t)
Purpose: Returns a TIMESTAMP corresponding to the beginning of the last calendar day in the same month as the
TIMESTAMP argument.
Return type: TIMESTAMP
Added in: CDH 5.12.0 / Impala 2.9.0
Usage notes:
If the input argument does not represent a valid Impala TIMESTAMP including both date and time portions, the
function returns NULL. For example, if the input argument is a string that cannot be implicitly cast to TIMESTAMP,
does not include a date portion, or is out of the allowed range for Impala TIMESTAMP values, the function returns
NULL.
Examples:
The following example shows how to examine the current date, and dates around the end of the month, as
TIMESTAMP values with any time portion removed:
select
now() as right_now
, trunc(now(),'dd') as today
, last_day(now()) as last_day_of_month
, last_day(now()) + interval 1 day as first_of_next_month;
+-------------------------------+---------------------+---------------------+---------------------+
| right_now | today | last_day_of_month | first_of_next_month |
+-------------------------------+---------------------+---------------------+---------------------+
| 2017-08-15 15:07:58.823812000 | 2017-08-15 00:00:00 | 2017-08-31 00:00:00 | 2017-09-01 00:00:00 |
+-------------------------------+---------------------+---------------------+---------------------+
The following example shows how to examine the current date and dates around the end of the month as integers
representing the day of the month:
select
now() as right_now
, dayofmonth(now()) as day
, extract(day from now()) as also_day
, dayofmonth(last_day(now())) as last_day
, extract(day from last_day(now())) as also_last_day;
+-------------------------------+-----+----------+----------+---------------+
| right_now | day | also_day | last_day | also_last_day |
+-------------------------------+-----+----------+----------+---------------+
| 2017-08-15 15:07:59.417755000 | 15 | 15 | 31 | 31 |
+-------------------------------+-----+----------+----------+---------------+
MICROSECONDS_ADD(TIMESTAMP date, INT microseconds), MICROSECONDS_ADD(TIMESTAMP date, BIGINT
microseconds)
Purpose: Returns the specified date and time plus some number of microseconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
microseconds_add(now(), 500000) as half_a_second_from_now;
+-------------------------------+-------------------------------+
| right_now | half_a_second_from_now |
+-------------------------------+-------------------------------+
| 2016-06-01 14:25:11.455051000 | 2016-06-01 14:25:11.955051000 |
+-------------------------------+-------------------------------+
MICROSECONDS_SUB(TIMESTAMP date, INT microseconds), MICROSECONDS_SUB(TIMESTAMP date, BIGINT
microseconds)
Purpose: Returns the specified date and time minus some number of microseconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
microseconds_sub(now(), 500000) as half_a_second_ago;
+-------------------------------+-------------------------------+
| right_now | half_a_second_ago |
+-------------------------------+-------------------------------+
| 2016-06-01 14:26:16.509990000 | 2016-06-01 14:26:16.009990000 |
+-------------------------------+-------------------------------+
MILLISECOND(TIMESTAMP t)
Purpose: Returns the millisecond portion of a TIMESTAMP value.
Return type: INT
Added in: CDH 5.7.0 / Impala 2.5.0
Usage notes:
The millisecond value is truncated, not rounded, if the TIMESTAMP value contains more than 3 significant digits to
the right of the decimal point.
Examples:
In the following example, 252.4 milliseconds are truncated to 252:
select now(), millisecond(now());
+-------------------------------+--------------------+
| now() | millisecond(now()) |
+-------------------------------+--------------------+
| 2016-03-14 22:30:25.252400000 | 252 |
+-------------------------------+--------------------+
In the following example, 761.767 milliseconds are truncated to 761:
select now(), millisecond(now());
+-------------------------------+--------------------+
| now() | millisecond(now()) |
+-------------------------------+--------------------+
| 2016-03-14 22:30:58.761767000 | 761 |
+-------------------------------+--------------------+
MILLISECONDS_ADD(TIMESTAMP date, INT milliseconds), MILLISECONDS_ADD(TIMESTAMP date, BIGINT milliseconds)
Purpose: Returns the specified date and time plus some number of milliseconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
milliseconds_add(now(), 1500) as 1_point_5_seconds_from_now;
+-------------------------------+-------------------------------+
| right_now | 1_point_5_seconds_from_now |
+-------------------------------+-------------------------------+
| 2016-06-01 14:30:30.067366000 | 2016-06-01 14:30:31.567366000 |
+-------------------------------+-------------------------------+
MILLISECONDS_SUB(TIMESTAMP date, INT milliseconds), MILLISECONDS_SUB(TIMESTAMP date, BIGINT milliseconds)
Purpose: Returns the specified date and time minus some number of milliseconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
milliseconds_sub(now(), 1500) as 1_point_5_seconds_ago;
+-------------------------------+-------------------------------+
| right_now | 1_point_5_seconds_ago |
+-------------------------------+-------------------------------+
| 2016-06-01 14:30:53.467140000 | 2016-06-01 14:30:51.967140000 |
+-------------------------------+-------------------------------+
MINUTE(TIMESTAMP date)
Purpose: Returns the minute field from a TIMESTAMP value.
Return type: INT
Examples:
select now() as right_now, minute(now()) as current_minute;
+-------------------------------+----------------+
| right_now | current_minute |
+-------------------------------+----------------+
| 2016-06-01 14:34:08.051702000 | 34 |
+-------------------------------+----------------+
MINUTES_ADD(TIMESTAMP date, INT minutes), MINUTES_ADD(TIMESTAMP date, BIGINT minutes)
Purpose: Returns the specified date and time plus some number of minutes.
Return type: TIMESTAMP
Examples:
select now() as right_now, minutes_add(now(), 90) as 90_minutes_from_now;
+-------------------------------+-------------------------------+
| right_now | 90_minutes_from_now |
+-------------------------------+-------------------------------+
| 2016-06-01 14:36:04.887095000 | 2016-06-01 16:06:04.887095000 |
+-------------------------------+-------------------------------+
MINUTES_SUB(TIMESTAMP date, INT minutes), MINUTES_SUB(TIMESTAMP date, BIGINT minutes)
Purpose: Returns the specified date and time minus some number of minutes.
Return type: TIMESTAMP
Examples:
select now() as right_now, minutes_sub(now(), 90) as 90_minutes_ago;
+-------------------------------+-------------------------------+
| right_now | 90_minutes_ago |
+-------------------------------+-------------------------------+
| 2016-06-01 14:36:32.643061000 | 2016-06-01 13:06:32.643061000 |
+-------------------------------+-------------------------------+
MONTH(TIMESTAMP date)
Purpose: Returns the month field, represented as an integer, from the date portion of a TIMESTAMP.
Return type: INT
Examples:
select now() as right_now, month(now()) as current_month;
+-------------------------------+---------------+
| right_now | current_month |
+-------------------------------+---------------+
| 2016-06-01 14:43:37.141542000 | 6 |
+-------------------------------+---------------+
MONTHNAME(TIMESTAMP date)
Purpose: Returns the month field from a TIMESTAMP value, converted to the string corresponding to that month
name.
Return type: STRING
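Examples:
The following query is an illustrative sketch rather than output captured from a live session; the returned string is
the English month name for the date portion of the argument.
select monthname(cast('2016-06-01' as timestamp)) as month_name; /* expected to return 'June' */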
MONTHS_ADD(TIMESTAMP date, INT months), MONTHS_ADD(TIMESTAMP date, BIGINT months)
Purpose: Returns the specified date and time plus some number of months.
Return type: TIMESTAMP
Examples:
The following example shows the effects of adding some number of months to a TIMESTAMP value, using both the
months_add() function and its add_months() alias. These examples use trunc() to strip off the time portion
and leave just the date.
with t1 as (select trunc(now(), 'dd') as today)
select today, months_add(today,1) as next_month from t1;
+---------------------+---------------------+
| today | next_month |
+---------------------+---------------------+
| 2016-05-19 00:00:00 | 2016-06-19 00:00:00 |
+---------------------+---------------------+
with t1 as (select trunc(now(), 'dd') as today)
select today, add_months(today,1) as next_month from t1;
+---------------------+---------------------+
| today | next_month |
+---------------------+---------------------+
| 2016-05-19 00:00:00 | 2016-06-19 00:00:00 |
+---------------------+---------------------+
The following examples show how if months_add() would return a nonexistent date, due to different months
having different numbers of days, the function returns a TIMESTAMP from the last day of the relevant month. For
example, adding one month to January 31 produces a date of February 29th in the year 2016 (a leap year), and
February 28th in the year 2015 (a non-leap year).
with t1 as (select cast('2016-01-31' as timestamp) as jan_31)
select jan_31, months_add(jan_31,1) as feb_31 from t1;
+---------------------+---------------------+
| jan_31 | feb_31 |
+---------------------+---------------------+
| 2016-01-31 00:00:00 | 2016-02-29 00:00:00 |
+---------------------+---------------------+
with t1 as (select cast('2015-01-31' as timestamp) as jan_31)
select jan_31, months_add(jan_31,1) as feb_31 from t1;
+---------------------+---------------------+
| jan_31 | feb_31 |
+---------------------+---------------------+
| 2015-01-31 00:00:00 | 2015-02-28 00:00:00 |
+---------------------+---------------------+
MONTHS_BETWEEN(TIMESTAMP newer, TIMESTAMP older)
Purpose: Returns the number of months between the date portions of two TIMESTAMP values. Can include a
fractional part representing extra days in addition to the full months between the dates. The fractional component
is computed by dividing the difference in days by 31 (regardless of the month).
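For example, a remainder of 30 days beyond the whole months contributes 30 / 31 (approximately 0.9677) to the return
value, which is why the fractional results in the examples below are all multiples of 1/31.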
Return type: DOUBLE
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes:
Typically used in business contexts, for example to determine whether a specified number of months have passed
or whether some end-of-month deadline was reached.
If the only consideration is the number of full months and any fractional value is not significant, use
int_months_between() instead.
The method of determining the number of elapsed months includes some special handling of months with different
numbers of days that creates edge cases for dates between the 28th and 31st days of certain months.
If either value is NULL, which could happen for example when converting a nonexistent date string such as
'2015-02-29' to a TIMESTAMP, the result is also NULL.
If the first argument represents an earlier time than the second argument, the result is negative.
Examples:
The following examples show how dates that are on the same day of the month are considered to be exactly N
months apart, even if the months have different numbers of days.
select months_between('2015-02-28', '2015-01-28');
+--------------------------------------------+
| months_between('2015-02-28', '2015-01-28') |
+--------------------------------------------+
| 1 |
+--------------------------------------------+
select months_between(now(), now() + interval 1 month);
+-------------------------------------------------+
| months_between(now(), now() + interval 1 month) |
+-------------------------------------------------+
| -1 |
+-------------------------------------------------+
select months_between(now() + interval 1 year, now());
+------------------------------------------------+
| months_between(now() + interval 1 year, now()) |
+------------------------------------------------+
| 12 |
+------------------------------------------------+
The following examples show how dates that are on the last day of the month are considered to be exactly N months
apart, even if the months have different numbers of days. For example, from January 28th to February 28th is
exactly one month because the day of the month is identical; January 31st to February 28th is exactly one month
because in both cases it is the last day of the month; but January 29th or 30th to February 28th is considered a
fractional month.
select months_between('2015-02-28', '2015-01-31');
+--------------------------------------------+
| months_between('2015-02-28', '2015-01-31') |
+--------------------------------------------+
| 1 |
+--------------------------------------------+
select months_between('2015-02-28', '2015-01-29');
+--------------------------------------------+
| months_between('2015-02-28', '2015-01-29') |
+--------------------------------------------+
| 0.967741935483871 |
+--------------------------------------------+
select months_between('2015-02-28', '2015-01-30');
+--------------------------------------------+
| months_between('2015-02-28', '2015-01-30') |
+--------------------------------------------+
| 0.935483870967742 |
+--------------------------------------------+
The following examples show how dates that are not a precise number of months apart result in a fractional return
value.
select months_between('2015-03-01', '2015-01-28');
+--------------------------------------------+
| months_between('2015-03-01', '2015-01-28') |
+--------------------------------------------+
| 1.129032258064516 |
+--------------------------------------------+
select months_between('2015-03-01', '2015-02-28');
+--------------------------------------------+
| months_between('2015-03-01', '2015-02-28') |
+--------------------------------------------+
| 0.1290322580645161 |
+--------------------------------------------+
select months_between('2015-06-02', '2015-05-29');
+--------------------------------------------+
| months_between('2015-06-02', '2015-05-29') |
+--------------------------------------------+
| 0.1290322580645161 |
+--------------------------------------------+
select months_between('2015-03-01', '2015-01-25');
+--------------------------------------------+
| months_between('2015-03-01', '2015-01-25') |
+--------------------------------------------+
| 1.225806451612903 |
+--------------------------------------------+
select months_between('2015-03-01', '2015-02-25');
+--------------------------------------------+
| months_between('2015-03-01', '2015-02-25') |
+--------------------------------------------+
| 0.2258064516129032 |
+--------------------------------------------+
select months_between('2015-02-28', '2015-02-01');
+--------------------------------------------+
| months_between('2015-02-28', '2015-02-01') |
+--------------------------------------------+
| 0.8709677419354839 |
+--------------------------------------------+
select months_between('2015-03-28', '2015-03-01');
+--------------------------------------------+
| months_between('2015-03-28', '2015-03-01') |
+--------------------------------------------+
| 0.8709677419354839 |
+--------------------------------------------+
The following examples show how the time portion of the TIMESTAMP values are irrelevant for calculating the
month interval. Even the fractional part of the result only depends on the number of full days between the argument
values, regardless of the time portion.
select months_between('2015-05-28 23:00:00', '2015-04-28 11:45:00');
+--------------------------------------------------------------+
| months_between('2015-05-28 23:00:00', '2015-04-28 11:45:00') |
+--------------------------------------------------------------+
| 1 |
+--------------------------------------------------------------+
select months_between('2015-03-28', '2015-03-01');
+--------------------------------------------+
| months_between('2015-03-28', '2015-03-01') |
+--------------------------------------------+
| 0.8709677419354839 |
+--------------------------------------------+
select months_between('2015-03-28 23:00:00', '2015-03-01 11:45:00');
+--------------------------------------------------------------+
| months_between('2015-03-28 23:00:00', '2015-03-01 11:45:00') |
+--------------------------------------------------------------+
| 0.8709677419354839 |
+--------------------------------------------------------------+
MONTHS_SUB(TIMESTAMP date, INT months), MONTHS_SUB(TIMESTAMP date, BIGINT months)
Purpose: Returns the specified date and time minus some number of months.
Return type: TIMESTAMP
Examples:
with t1 as (select trunc(now(), 'dd') as today)
select today, months_sub(today,1) as last_month from t1;
+---------------------+---------------------+
| today | last_month |
+---------------------+---------------------+
| 2016-06-01 00:00:00 | 2016-05-01 00:00:00 |
+---------------------+---------------------+
NANOSECONDS_ADD(TIMESTAMP date, INT nanoseconds), NANOSECONDS_ADD(TIMESTAMP date, BIGINT
nanoseconds)
Purpose: Returns the specified date and time plus some number of nanoseconds.
Return type: TIMESTAMP
Kudu considerations:
The nanosecond portion of an Impala TIMESTAMP value is rounded to the nearest microsecond when that value is
stored in a Kudu table.
Examples:
select now() as right_now, nanoseconds_add(now(), 1) as 1_nanosecond_later;
+-------------------------------+-------------------------------+
| right_now | 1_nanosecond_later |
+-------------------------------+-------------------------------+
| 2016-06-01 15:42:00.361026000 | 2016-06-01 15:42:00.361026001 |
+-------------------------------+-------------------------------+
-- 1 billion nanoseconds = 1 second.
select now() as right_now, nanoseconds_add(now(), 1e9) as 1_second_later;
+-------------------------------+-------------------------------+
| right_now | 1_second_later |
+-------------------------------+-------------------------------+
| 2016-06-01 15:42:52.926706000 | 2016-06-01 15:42:53.926706000 |
+-------------------------------+-------------------------------+
NANOSECONDS_SUB(TIMESTAMP date, INT nanoseconds), NANOSECONDS_SUB(TIMESTAMP date, BIGINT
nanoseconds)
Purpose: Returns the specified date and time minus some number of nanoseconds.
Return type: TIMESTAMP
Kudu considerations:
The nanosecond portion of an Impala TIMESTAMP value is rounded to the nearest microsecond when that value is
stored in a Kudu table.
select now() as right_now, nanoseconds_sub(now(), 1) as 1_nanosecond_earlier;
+-------------------------------+-------------------------------+
| right_now | 1_nanosecond_earlier |
+-------------------------------+-------------------------------+
| 2016-06-01 15:44:14.355837000 | 2016-06-01 15:44:14.355836999 |
+-------------------------------+-------------------------------+
-- 1 billion nanoseconds = 1 second.
select now() as right_now, nanoseconds_sub(now(), 1e9) as 1_second_earlier;
+-------------------------------+-------------------------------+
| right_now | 1_second_earlier |
+-------------------------------+-------------------------------+
| 2016-06-01 15:44:54.474929000 | 2016-06-01 15:44:53.474929000 |
+-------------------------------+-------------------------------+
NEXT_DAY(TIMESTAMP date, STRING weekday)
Purpose: Returns the date of the weekday that follows the specified date.
Return type: TIMESTAMP
Usage notes:
The weekday parameter is case-insensitive. The following values are accepted for weekday: "Sunday"/"Sun",
"Monday"/"Mon", "Tuesday"/"Tue", "Wednesday"/"Wed", "Thursday"/"Thu", "Friday"/"Fri",
"Saturday"/"Sat"
Calling the function with the current date and weekday returns the date that is one week later.
Examples:
select next_day('2013-12-25','Saturday');
-- Returns '2013-12-28 00:00:00', the first Saturday after December 25, 2013.
select next_day(to_timestamp('08-1987-21', 'mm-yyyy-dd'), 'Friday');
-- Returns '1987-08-28 00:00:00', the first Friday after August 21, 1987.
select next_day(now(), 'Thu');
-- Executed on 2018-07-12, the function returns '2018-07-19 00:00:00', one week
-- after the current date.
NOW()
Purpose: Returns the current date and time (in the local time zone) as a TIMESTAMP value.
Return type: TIMESTAMP
Usage notes:
To find a date/time value in the future or the past relative to the current date and time, add or subtract an INTERVAL
expression to the return value of now(). See TIMESTAMP Data Type on page 130 for examples.
To produce a TIMESTAMP representing the current date and time that can be shared or stored without interoperability
problems due to time zone differences, use the to_utc_timestamp() function and specify the time zone of the
server. When TIMESTAMP data is stored in UTC form, any application that queries those values can convert them
to the appropriate local time zone by calling the inverse function, from_utc_timestamp().
To determine the time zone of the server you are connected to, in CDH 5.5 / Impala 2.3 and higher you can call the
timeofday() function, which includes the time zone specifier in its return value. Remember that with cloud
computing, the server you interact with might be in a different time zone than you are, or different sessions might
connect to servers in different time zones, or a cluster might include servers in more than one time zone.
Any references to the now() function are evaluated at the start of a query. All calls to now() within the same query
return the same value, and the value does not depend on how long the query takes.
Examples:
select now() as 'Current time in California USA',
to_utc_timestamp(now(), 'PDT') as 'Current time in Greenwich UK';
+--------------------------------+-------------------------------+
| current time in california usa | current time in greenwich uk |
+--------------------------------+-------------------------------+
| 2016-06-01 15:52:08.980072000 | 2016-06-01 22:52:08.980072000 |
+--------------------------------+-------------------------------+
select now() as right_now,
now() + interval 1 day as tomorrow,
now() + interval 1 week - interval 3 hours as almost_a_week_from_now;
+-------------------------------+-------------------------------+-------------------------------+
| right_now | tomorrow | almost_a_week_from_now |
+-------------------------------+-------------------------------+-------------------------------+
| 2016-06-01 15:55:39.671690000 | 2016-06-02 15:55:39.671690000 | 2016-06-08 12:55:39.671690000 |
+-------------------------------+-------------------------------+-------------------------------+
QUARTER(TIMESTAMP date)
Purpose: Returns the quarter in the input TIMESTAMP expression as an integer value, 1, 2, 3, or 4, where 1 represents
January 1 through March 31.
Return type: INT
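Examples:
The following query is a minimal sketch showing the expected quarter value for literal dates in each calendar quarter:
select quarter('2016-01-15') as q1, quarter('2016-06-01') as q2,
quarter('2016-08-31') as q3, quarter('2016-12-31') as q4;
-- Expected result: q1=1, q2=2, q3=3, q4=4.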
SECOND(TIMESTAMP date)
Purpose: Returns the second field from a TIMESTAMP value.
Return type: INT
Examples:
select now() as right_now,
second(now()) as seconds_in_current_minute;
+-------------------------------+---------------------------+
| right_now | seconds_in_current_minute |
+-------------------------------+---------------------------+
| 2016-06-01 16:03:57.006603000 | 57 |
+-------------------------------+---------------------------+
SECONDS_ADD(TIMESTAMP date, INT seconds), SECONDS_ADD(TIMESTAMP date, BIGINT seconds)
Purpose: Returns the specified date and time plus some number of seconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
seconds_add(now(), 10) as 10_seconds_from_now;
+-------------------------------+-------------------------------+
| right_now | 10_seconds_from_now |
+-------------------------------+-------------------------------+
| 2016-06-01 16:05:21.573935000 | 2016-06-01 16:05:31.573935000 |
+-------------------------------+-------------------------------+
SECONDS_SUB(TIMESTAMP date, INT seconds), SECONDS_SUB(TIMESTAMP date, BIGINT seconds)
Purpose: Returns the specified date and time minus some number of seconds.
Return type: TIMESTAMP
Examples:
select now() as right_now,
seconds_sub(now(), 10) as 10_seconds_ago;
+-------------------------------+-------------------------------+
| right_now | 10_seconds_ago |
+-------------------------------+-------------------------------+
| 2016-06-01 16:06:03.467931000 | 2016-06-01 16:05:53.467931000 |
+-------------------------------+-------------------------------+
SUBDATE(TIMESTAMP startdate, INT days), SUBDATE(TIMESTAMP startdate, BIGINT days)
Purpose: Subtracts a specified number of days from a TIMESTAMP value. Similar to DATE_SUB(), but starts with
an actual TIMESTAMP value instead of a string that is converted to a TIMESTAMP.
Return type: TIMESTAMP
Examples:
The following examples show how to subtract a number of days from a TIMESTAMP. The number of days can also
be negative, which gives the same effect as the ADDDATE() function.
select now() as right_now, subdate(now(), 30) as now_minus_30;
+-------------------------------+-------------------------------+
| right_now | now_minus_30 |
+-------------------------------+-------------------------------+
| 2016-05-20 11:00:15.084991000 | 2016-04-20 11:00:15.084991000 |
+-------------------------------+-------------------------------+
select now() as right_now, subdate(now(), -15) as now_plus_15;
+-------------------------------+-------------------------------+
| right_now | now_plus_15 |
+-------------------------------+-------------------------------+
| 2016-05-20 11:00:44.766091000 | 2016-06-04 11:00:44.766091000 |
+-------------------------------+-------------------------------+
TIMEOFDAY()
Purpose: Returns a string representation of the current date and time, according to the time of the local system,
including any time zone designation.
Return type: STRING
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes: The result value represents similar information to the now() function, only as a STRING type and
with somewhat different formatting. For example, the day of the week and the time zone identifier are included.
This function is intended primarily for compatibility with SQL code from other systems that also have a timeofday()
function. Prefer to use now() if practical for any new Impala code.
Examples:
The following examples show the format of the TIMEOFDAY() return value, illustrate how that value is represented
as a STRING that you can manipulate with string processing functions, and how the format compares with the
return value from the NOW() function.
/* Date and time fields in a STRING return value. */
select timeofday();
+------------------------------+
| timeofday() |
+------------------------------+
| Tue Sep 01 15:13:18 2015 PDT |
+------------------------------+
/* The return value can be processed by other string functions. */
select upper(timeofday());
+------------------------------+
| upper(timeofday()) |
+------------------------------+
| TUE SEP 01 15:13:38 2015 PDT |
+------------------------------+
/* The TIMEOFDAY() result is formatted differently than NOW(). NOW() returns a TIMESTAMP.
*/
select now(), timeofday();
+-------------------------------+------------------------------+
| now() | timeofday() |
+-------------------------------+------------------------------+
| 2015-09-01 15:15:25.930021000 | Tue Sep 01 15:15:25 2015 PDT |
+-------------------------------+------------------------------+
/* You can strip out the time zone field to use in calls to from_utc_timestamp(). */
select regexp_replace(timeofday(), '.* ([A-Z]+)$', '\\1') as current_timezone;
+------------------+
| current_timezone |
+------------------+
| PDT |
+------------------+
TIMESTAMP_CMP(TIMESTAMP t1, TIMESTAMP t2)
Purpose: Tests if one TIMESTAMP value is newer than, older than, or identical to another TIMESTAMP value.
Return type: INT (either -1, 0, 1, or NULL)
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes:
A comparison function for TIMESTAMP values that only tests whether the date and time increases, decreases, or
stays the same. Similar to the SIGN() function for numeric values.
Examples:
The following examples show all the possible return values for timestamp_cmp(). If the first argument represents
a later point in time than the second argument, the result is 1. The amount of the difference is irrelevant, only the
fact that one argument is greater than or less than the other. If the first argument represents an earlier point in
time than the second argument, the result is -1. If the first and second arguments represent identical points in time,
the result is 0. If either argument is NULL, the result is NULL.
/* First argument 'later' than second argument. */
select timestamp_cmp(now() + interval 70 minutes, now())
as now_vs_in_70_minutes;
+----------------------+
| now_vs_in_70_minutes |
+----------------------+
| 1 |
+----------------------+
select timestamp_cmp(now() +
interval 3 days +
interval 5 hours, now())
as now_vs_days_from_now;
+----------------------+
| now_vs_days_from_now |
+----------------------+
| 1 |
+----------------------+
/* First argument 'earlier' than second argument. */
select timestamp_cmp(now(), now() + interval 2 hours)
as now_vs_2_hours_from_now;
+-------------------------+
| now_vs_2_hours_from_now |
+-------------------------+
| -1 |
+-------------------------+
/* Both arguments represent the same point in time. */
select timestamp_cmp(now(), now())
as identical_timestamps;
+----------------------+
| identical_timestamps |
+----------------------+
| 0 |
+----------------------+
select timestamp_cmp
(
now() + interval 1 hour,
now() + interval 60 minutes
) as equivalent_date_times;
+-----------------------+
| equivalent_date_times |
+-----------------------+
| 0 |
+-----------------------+
/* Either argument NULL. */
select timestamp_cmp(now(), null)
as now_vs_null;
+-------------+
| now_vs_null |
+-------------+
| NULL |
+-------------+
TO_DATE(TIMESTAMP timestamp)
Purpose: Returns a string representation of the date field from a timestamp value.
Return type: STRING
Examples:
select now() as right_now,
concat('The date today is ',to_date(now()),'.') as date_announcement;
+-------------------------------+-------------------------------+
| right_now | date_announcement |
+-------------------------------+-------------------------------+
| 2016-06-01 16:30:36.890325000 | The date today is 2016-06-01. |
+-------------------------------+-------------------------------+
TO_TIMESTAMP(BIGINT unixtime), TO_TIMESTAMP(STRING date, STRING pattern)
Purpose: Converts an integer or string representing a date/time value into the corresponding TIMESTAMP value.
Return type: TIMESTAMP
Added in: CDH 5.5.0 / Impala 2.3.0
Usage notes:
An integer argument represents the number of seconds past the epoch (midnight on January 1, 1970). It is the
converse of the unix_timestamp() function, which produces a BIGINT representing the number of seconds past
the epoch.
A string argument, plus another string argument representing the pattern, turns an arbitrary string representation
of a date and time into a true TIMESTAMP value. The ability to parse many kinds of date and time formats allows
you to deal with temporal data from diverse sources, and if desired to convert to efficient TIMESTAMP values during
your ETL process. Using TIMESTAMP directly in queries and expressions lets you perform date and time calculations
without the overhead of extra function calls and conversions each time you reference the applicable columns.
Examples:
The following examples demonstrate how to convert an arbitrary string representation to TIMESTAMP based on a
pattern string:
select to_timestamp('Sep 25, 1984', 'MMM dd, yyyy');
+----------------------------------------------+
| to_timestamp('sep 25, 1984', 'mmm dd, yyyy') |
+----------------------------------------------+
| 1984-09-25 00:00:00 |
+----------------------------------------------+
select to_timestamp('1984/09/25', 'yyyy/MM/dd');
+------------------------------------------+
| to_timestamp('1984/09/25', 'yyyy/mm/dd') |
+------------------------------------------+
| 1984-09-25 00:00:00 |
+------------------------------------------+
The following examples show how to convert a BIGINT representing seconds past epoch into a TIMESTAMP value:
-- One day past the epoch.
select to_timestamp(24 * 60 * 60);
+----------------------------+
| to_timestamp(24 * 60 * 60) |
+----------------------------+
| 1970-01-02 00:00:00 |
+----------------------------+
-- 60 seconds in the past.
select now() as 'current date/time',
unix_timestamp(now()) 'now in seconds',
to_timestamp(unix_timestamp(now()) - 60) as '60 seconds ago';
+-------------------------------+----------------+---------------------+
| current date/time | now in seconds | 60 seconds ago |
+-------------------------------+----------------+---------------------+
| 2017-10-01 22:03:46.885624000 | 1506895426 | 2017-10-01 22:02:46 |
+-------------------------------+----------------+---------------------+
TO_UTC_TIMESTAMP(TIMESTAMP, STRING timezone)
Purpose: Converts a specified timestamp value in a specified time zone into the corresponding value for the UTC
time zone.
Return type: TIMESTAMP
Usage notes:
Often used in combination with the now() function, to translate local date and time values to the UTC time zone
for consistent representation on disk. The opposite of the from_utc_timestamp() function.
See discussion of time zones in TIMESTAMP Data Type on page 130 for information about using this function for
conversions between the local time zone and UTC.
Examples:
The simplest use of this function is to turn a local date/time value to one with the standardized UTC time zone.
Because the time zone specifier is not saved as part of the Impala TIMESTAMP value, all applications that refer to
such data must agree in advance which time zone the values represent. If different parts of the ETL cycle, or different
instances of the application, occur in different time zones, the ideal reference point is to convert all TIMESTAMP
values to UTC for storage.
select now() as 'Current time in California USA',
to_utc_timestamp(now(), 'PDT') as 'Current time in Greenwich UK';
+--------------------------------+-------------------------------+
| current time in california usa | current time in greenwich uk |
+--------------------------------+-------------------------------+
| 2016-06-01 15:52:08.980072000 | 2016-06-01 22:52:08.980072000 |
+--------------------------------+-------------------------------+
Once a value is converted to the UTC time zone by to_utc_timestamp(), it can be converted back to the local
time zone with from_utc_timestamp(). You can combine these functions using different time zone identifiers
to convert a TIMESTAMP between any two time zones. This example starts with a TIMESTAMP value representing
Pacific Daylight Time, converts it to UTC, and converts it to the equivalent value in Eastern Daylight Time.
select now() as 'Current time in California USA',
from_utc_timestamp
(
to_utc_timestamp(now(), 'PDT'),
'EDT'
) as 'Current time in New York, USA';
+--------------------------------+-------------------------------+
| current time in california usa | current time in new york, usa |
+--------------------------------+-------------------------------+
| 2016-06-01 18:14:12.743658000 | 2016-06-01 21:14:12.743658000 |
+--------------------------------+-------------------------------+
TRUNC(TIMESTAMP timestamp, STRING unit)
Purpose: Strips off fields from a TIMESTAMP value.
Unit argument: The unit argument value for truncating TIMESTAMP values is case-sensitive. This argument string
can be one of:
• SYYYY, YYYY, YEAR, SYEAR, YYY, YY, Y: Year.
• Q: Quarter.
• MONTH, MON, MM, RM: Month.
• WW, W: Same day of the week as the first day of the month.
• DDD, DD, J: Day.
• DAY, DY, D: Starting day of the week. (Not necessarily the current day.)
• HH, HH12, HH24: Hour. A TIMESTAMP value truncated to the hour is always represented in 24-hour notation,
even for the HH12 argument string.
• MI: Minute.
Added in: The ability to truncate numeric values is new starting in CDH 5.13 / Impala 2.10.
Usage notes:
The TIMESTAMP form is typically used in GROUP BY queries to aggregate results from the same hour, day, week,
month, quarter, and so on. You can also use this function in an INSERT ... SELECT into a partitioned table to
divide TIMESTAMP values into the correct partition.
Because the return value is a TIMESTAMP, if you cast the result of TRUNC() to STRING, you will often see zeroed-out
portions such as 00:00:00 in the time field. If you only need the individual units such as hour, day, month, or year,
use the EXTRACT() function instead. If you need the individual units from a truncated TIMESTAMP value, run the
TRUNC() function on the original value, then run EXTRACT() on the result.
The trunc() function also has a signature that applies to DOUBLE or DECIMAL values. truncate(), trunc(), and
dtrunc() are all aliased to the same function. See truncate() under Impala Mathematical Functions on page
397 for details.
Return type: TIMESTAMP
Examples:
The following example shows how the argument 'Q' returns a TIMESTAMP representing the beginning of the
appropriate calendar quarter. This return value is the same for input values that could be separated by weeks or
months. If you stored the trunc() result in a partition key column, the table would have four partitions per year.
select now() as right_now, trunc(now(), 'Q') as current_quarter;
+-------------------------------+---------------------+
| right_now | current_quarter |
+-------------------------------+---------------------+
| 2016-06-01 18:32:02.097202000 | 2016-04-01 00:00:00 |
+-------------------------------+---------------------+
select now() + interval 2 weeks as 2_weeks_from_now,
trunc(now() + interval 2 weeks, 'Q') as still_current_quarter;
+-------------------------------+-----------------------+
| 2_weeks_from_now | still_current_quarter |
+-------------------------------+-----------------------+
| 2016-06-15 18:36:19.584257000 | 2016-04-01 00:00:00 |
+-------------------------------+-----------------------+
UNIX_TIMESTAMP(), UNIX_TIMESTAMP(STRING datetime), UNIX_TIMESTAMP(STRING datetime, STRING format),
UNIX_TIMESTAMP(TIMESTAMP datetime)
Purpose: Returns a Unix time, which is the number of seconds elapsed since '1970-01-01 00:00:00' UTC. If called
with no argument, the current date and time is converted to its Unix time. If called with arguments, the date and
time represented by the first TIMESTAMP or STRING argument is converted to its Unix time.
Return type: BIGINT
Usage notes:
See FROM_UNIXTIME() for details about the patterns you can use in the format string to represent the position
of year, month, day, and so on in the date string. In Impala 1.3 and higher, you have more flexibility to switch the
positions of elements and use different separator characters.
In CDH 5.4.3 and higher, you can include a trailing uppercase Z qualifier to indicate “Zulu” time, a synonym for UTC.
In CDH 5.5 / Impala 2.3 and higher, you can include a timezone offset specified as minutes and hours, provided you
also specify the details in the format string argument. The offset is specified in the format string as a plus or minus
sign followed by hh:mm, hhmm, or hh. The hh must be lowercase, to distinguish it from the HH that represents hours
in the actual time value. Currently, only numeric timezone offsets are allowed, not symbolic names.
In Impala 2.2.0 and higher, built-in functions that accept or return integers representing TIMESTAMP values use the
BIGINT type for parameters and return values, rather than INT. This change lets the date and time functions avoid
an overflow error that would otherwise occur on January 19th, 2038 (known as the “Year 2038 problem” or “Y2K38
problem”). This change affects the FROM_UNIXTIME() and UNIX_TIMESTAMP() functions. You might need to
change application code that interacts with these functions, change the types of columns that store the return
values, or add CAST() calls to SQL statements that call these functions.
UNIX_TIMESTAMP() and FROM_UNIXTIME() are often used in combination to convert a TIMESTAMP value into
a particular string format. For example:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(NOW() + interval 3 days),
'yyyy/MM/dd HH:mm') AS yyyy_mm_dd_hh_mm;
+------------------+
| yyyy_mm_dd_hh_mm |
+------------------+
| 2016/06/03 11:38 |
+------------------+
The way this function deals with time zones when converting to or from TIMESTAMP values is affected by the
--use_local_tz_for_unix_timestamp_conversions startup flag for the impalad daemon. See TIMESTAMP
Data Type on page 130 for details about how Impala handles time zone considerations for the TIMESTAMP data
type.
Examples:
The following examples show different ways of turning the same date and time into an integer value. A format
string that Impala recognizes by default is interpreted as a UTC date and time. The trailing Z is a confirmation that
the timezone is UTC. If the date and time string is formatted differently, a second argument specifies the position
and units for each of the date and time values.
The final two examples show how to specify a timezone offset of Pacific Daylight Saving Time, which is 7 hours
earlier than UTC. You can use the numeric offset -07:00 and the equivalent suffix of -hh:mm in the format string,
or specify the mnemonic name for the time zone in a call to to_utc_timestamp(). This particular date and time
expressed in PDT translates to a different number than the same date and time expressed in UTC.
-- 3 ways of expressing the same date/time in UTC and converting to an integer.
select unix_timestamp('2015-05-15 12:00:00');
+---------------------------------------+
| unix_timestamp('2015-05-15 12:00:00') |
+---------------------------------------+
| 1431691200 |
+---------------------------------------+
select unix_timestamp('2015-05-15 12:00:00Z');
+----------------------------------------+
| unix_timestamp('2015-05-15 12:00:00z') |
+----------------------------------------+
| 1431691200 |
+----------------------------------------+
select unix_timestamp
(
'May 15, 2015 12:00:00',
'MMM dd, yyyy HH:mm:ss'
) as may_15_month_day_year;
+-----------------------+
| may_15_month_day_year |
+-----------------------+
| 1431691200 |
+-----------------------+
-- 2 ways of expressing the same date and time but in a different timezone.
-- The resulting integer is different from the previous examples.
select unix_timestamp
(
'2015-05-15 12:00:00-07:00',
'yyyy-MM-dd HH:mm:ss-hh:mm'
) as may_15_year_month_day;
+-----------------------+
| may_15_year_month_day |
+-----------------------+
| 1431716400 |
+-----------------------+
select unix_timestamp
(to_utc_timestamp(
'2015-05-15 12:00:00',
'PDT')
) as may_15_pdt;
+------------+
| may_15_pdt |
+------------+
| 1431716400 |
+------------+
UTC_TIMESTAMP()
Purpose: Returns a TIMESTAMP corresponding to the current date and time in the UTC time zone.
Return type: TIMESTAMP
Added in: CDH 5.13
Usage notes:
Similar to the now() or current_timestamp() functions, but does not use the local time zone as those functions
do. Use utc_timestamp() to record TIMESTAMP values that are interoperable with servers around the world, in
arbitrary time zones, without the need for additional conversion functions to standardize the time zone of each
value representing a date/time.
For working with date/time values represented as integer values, you can convert back and forth between TIMESTAMP
and BIGINT with the unix_micros_to_utc_timestamp() and utc_to_unix_micros() functions. The integer
values represent the number of microseconds since the Unix epoch (midnight on January 1, 1970).
Examples:
The following example shows how NOW() and CURRENT_TIMESTAMP() represent the current date/time in the
local time zone (in this case, UTC-7), while UTC_TIMESTAMP() represents the same date/time in the standardized
UTC time zone:
select now(), utc_timestamp();
+-------------------------------+-------------------------------+
| now() | utc_timestamp() |
+-------------------------------+-------------------------------+
| 2017-10-01 23:33:58.919688000 | 2017-10-02 06:33:58.919688000 |
+-------------------------------+-------------------------------+
select current_timestamp(), utc_timestamp();
+-------------------------------+-------------------------------+
| current_timestamp() | utc_timestamp() |
+-------------------------------+-------------------------------+
| 2017-10-01 23:34:07.400642000 | 2017-10-02 06:34:07.400642000 |
+-------------------------------+-------------------------------+
WEEK(TIMESTAMP date), WEEKOFYEAR(TIMESTAMP date)
Purpose: Returns the corresponding week (1-53) from the date portion of a TIMESTAMP.
Return type: INT
Examples:
select now() as right_now, weekofyear(now()) as this_week;
+-------------------------------+-----------+
| right_now | this_week |
+-------------------------------+-----------+
| 2016-06-01 22:40:06.763771000 | 22 |
+-------------------------------+-----------+
select now() + interval 2 weeks as in_2_weeks,
weekofyear(now() + interval 2 weeks) as week_after_next;
+-------------------------------+-----------------+
| in_2_weeks | week_after_next |
+-------------------------------+-----------------+
| 2016-06-15 22:41:22.098823000 | 24 |
+-------------------------------+-----------------+
WEEKS_ADD(TIMESTAMP date, INT weeks), WEEKS_ADD(TIMESTAMP date, BIGINT weeks)
Purpose: Returns the specified date and time plus some number of weeks.
Return type: TIMESTAMP
Examples:
select now() as right_now, weeks_add(now(), 2) as week_after_next;
+-------------------------------+-------------------------------+
| right_now | week_after_next |
+-------------------------------+-------------------------------+
| 2016-06-01 22:43:20.973834000 | 2016-06-15 22:43:20.973834000 |
+-------------------------------+-------------------------------+
WEEKS_SUB(TIMESTAMP date, INT weeks), WEEKS_SUB(TIMESTAMP date, BIGINT weeks)
Purpose: Returns the specified date and time minus some number of weeks.
Return type: TIMESTAMP
Examples:
select now() as right_now, weeks_sub(now(), 2) as week_before_last;
+-------------------------------+-------------------------------+
| right_now | week_before_last |
+-------------------------------+-------------------------------+
| 2016-06-01 22:44:21.291913000 | 2016-05-18 22:44:21.291913000 |
+-------------------------------+-------------------------------+
YEAR(TIMESTAMP date)
Purpose: Returns the year field from the date portion of a TIMESTAMP.
Return type: INT
Examples:
select now() as right_now, year(now()) as this_year;
+-------------------------------+-----------+
| right_now | this_year |
+-------------------------------+-----------+
| 2016-06-01 22:46:23.647925000 | 2016 |
+-------------------------------+-----------+
YEARS_ADD(TIMESTAMP date, INT years), YEARS_ADD(TIMESTAMP date, BIGINT years)
Purpose: Returns the specified date and time plus some number of years.
Return type: TIMESTAMP
Examples:
select now() as right_now, years_add(now(), 1) as next_year;
+-------------------------------+-------------------------------+
| right_now | next_year |
+-------------------------------+-------------------------------+
| 2016-06-01 22:47:45.556851000 | 2017-06-01 22:47:45.556851000 |
+-------------------------------+-------------------------------+
The following example shows how, if the equivalent date does not exist in the year of the result because of a leap
year, the date is changed to the last day of the appropriate month.
-- Spoiler alert: there is no Feb. 29, 2017
select cast('2016-02-29' as timestamp) as feb_29_2016,
years_add('2016-02-29', 1) as feb_29_2017;
+---------------------+---------------------+
| feb_29_2016 | feb_29_2017 |
+---------------------+---------------------+
| 2016-02-29 00:00:00 | 2017-02-28 00:00:00 |
+---------------------+---------------------+
YEARS_SUB(TIMESTAMP date, INT years), YEARS_SUB(TIMESTAMP date, BIGINT years)
Purpose: Returns the specified date and time minus some number of years.
Return type: TIMESTAMP
Examples:
select now() as right_now, years_sub(now(), 1) as last_year;
+-------------------------------+-------------------------------+
| right_now | last_year |
+-------------------------------+-------------------------------+
| 2016-06-01 22:48:11.851780000 | 2015-06-01 22:48:11.851780000 |
+-------------------------------+-------------------------------+
The following example shows how, if the equivalent date does not exist in the year of the result because of a leap
year, the date is changed to the last day of the appropriate month.
-- Spoiler alert: there is no Feb. 29, 2015
select cast('2016-02-29' as timestamp) as feb_29_2016,
years_sub('2016-02-29', 1) as feb_29_2015;
+---------------------+---------------------+
| feb_29_2016 | feb_29_2015 |
+---------------------+---------------------+
| 2016-02-29 00:00:00 | 2015-02-28 00:00:00 |
+---------------------+---------------------+
Impala Conditional Functions
Impala supports the following conditional functions for testing equality, comparison operators, and nullity:
• CASE
• CASE2
• COALESCE
• DECODE
• IF
• IFNULL
• ISFALSE
• ISNOTFALSE
• ISNOTTRUE
• ISNULL
• ISTRUE
• NONNULLVALUE
• NULLIF
• NULLIFZERO
• NULLVALUE
• NVL
• NVL2
• ZEROIFNULL
CASE a WHEN b THEN c [WHEN d THEN e]... [ELSE f] END
Purpose: Compares an expression to one or more possible values, and returns a corresponding result when a match
is found.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
Usage notes:
In this form of the CASE expression, the initial value a being evaluated for each row is typically a column reference,
or an expression involving a column. This form can only compare against a set of specified values, not ranges,
multi-value comparisons such as BETWEEN or IN, regular expressions, or NULL.
Examples:
Although this example is split across multiple lines, you can put any or all parts of a CASE expression on a single
line, with no punctuation or other separators between the WHEN, ELSE, and END clauses.
select case x
when 1 then 'one'
when 2 then 'two'
when 0 then 'zero'
else 'out of range'
end
from t1;
CASE WHEN a THEN b [WHEN c THEN d]... [ELSE e] END
Purpose: Tests whether any of a sequence of expressions is TRUE, and returns a corresponding result for the first
TRUE expression.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
Usage notes:
CASE expressions without an initial test value have more flexibility. For example, they can test different columns
in different WHEN clauses, or use comparison operators such as BETWEEN, IN and IS NULL rather than comparing
against discrete values.
CASE expressions are often the foundation of long queries that summarize and format results for easy-to-read
reports. For example, you might use a CASE function call to turn values from a numeric column into category strings
corresponding to integer values, or labels such as “Small”, “Medium” and “Large” based on ranges. Then subsequent
parts of the query might aggregate based on the transformed values, such as how many values are classified as
small, medium, or large. You can also use CASE to signal problems with out-of-bounds values, NULL values, and so
on.
By using operators such as OR, IN, REGEXP, and so on in CASE expressions, you can build extensive tests and
transformations into a single query. Therefore, applications that construct SQL statements often rely heavily on
CASE calls in the generated SQL code.
Because this flexible form of the CASE expressions allows you to perform many comparisons and call multiple
functions when evaluating each row, be careful applying elaborate CASE expressions to queries that process large
amounts of data. For example, when practical, evaluate and transform values through CASE after applying operations
such as aggregations that reduce the size of the result set; transform numbers to strings after performing joins with
the original numeric values.
Examples:
Although this example is split across multiple lines, you can put any or all parts of a CASE expression on a single
line, with no punctuation or other separators between the WHEN, ELSE, and END clauses.
select case
when dayname(now()) in ('Saturday','Sunday') then 'result undefined on weekends'
when x > y then 'x greater than y'
when x = y then 'x and y are equal'
when x is null or y is null then 'one of the columns is null'
else null
end
from t1;
COALESCE(type v1, type v2, ...)
Purpose: Returns the first specified argument that is not NULL, or NULL if all arguments are NULL.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
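Examples:
The following queries are a minimal sketch of the expected behavior with literal arguments:
select coalesce(null, 'a', 'b');
-- Expected to return 'a', the first non-NULL argument.
select coalesce(null, null, 10);
-- Expected to return 10.
select coalesce(cast(null as int), cast(null as int));
-- Expected to return NULL because every argument is NULL.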
DECODE(type expression, type search1, type result1 [, type search2, type result2 ...] [, type default] )
Purpose: Compares the first argument, expression, to the search expressions using the IS NOT DISTINCT
operator, and returns:
• The corresponding result when a match is found.
• The result corresponding to the first match, if more than one search expression matches.
• The default expression if none of the search expressions matches the first argument expression.
• NULL if the final default expression is omitted and none of the search expressions matches the first argument.
Return type: Same as the first argument with the following exceptions:
• Integer values are promoted to BIGINT.
• Floating-point values are promoted to DOUBLE.
• Use CAST() when inserting into a smaller numeric column.
Usage notes:
• Can be used as shorthand for a CASE expression.
• The first argument, expression, and the search expressions must be of the same type or convertible types.
• The result expression can be a different type, but all result expressions must be of the same type.
• Returns a successful match if the first argument is NULL and a search expression is also NULL.
• NULL can be used as a search expression.
Examples:
The following example translates numeric day values into weekday names, such as 1 to Monday, 2 to Tuesday, etc.
SELECT event, DECODE(day_of_week, 1, "Monday", 2, "Tuesday", 3, "Wednesday",
4, "Thursday", 5, "Friday", 6, "Saturday", 7, "Sunday", "Unknown day")
FROM calendar;
IF(BOOLEAN condition, type ifTrue, type ifFalseOrNull)
Purpose: Tests an expression and returns a corresponding result depending on whether the result is TRUE, FALSE,
or NULL.
Return type: Same as the ifTrue argument value
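Examples:
The following queries are a minimal sketch of the expected behavior:
select if(1 > 0, 'yes', 'no');
-- Expected to return 'yes' because the condition is TRUE.
select if(1 < 0, 'yes', 'no');
-- Expected to return 'no' because the condition is FALSE.
select if(cast(null as boolean), 'yes', 'no');
-- Expected to return 'no' because a NULL condition selects the ifFalseOrNull argument.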
IFNULL(type a, type ifNull)
Purpose: Alias for the ISNULL() function, with the same behavior. To simplify porting SQL with vendor extensions
to Impala.
Added in: Impala 1.3.0
ISFALSE(BOOLEAN expression)
Purpose: Returns TRUE if the expression is FALSE. Returns FALSE if the expression is TRUE or NULL.
Same as the IS FALSE operator.
Similar to ISNOTTRUE(), except it returns the opposite value for a NULL argument.
Return type: BOOLEAN
Added in: CDH 5.4.0 / Impala 2.2.0
Usage notes:
In CDH 5.14 / Impala 2.11 and higher, you can use the operators IS [NOT] TRUE and IS [NOT] FALSE as
equivalents for the built-in functions ISTRUE(), ISNOTTRUE(), ISFALSE(), and ISNOTFALSE().
ISNOTFALSE(BOOLEAN expression)
Purpose: Tests if a Boolean expression is not FALSE (that is, either TRUE or NULL). Returns TRUE if so. If the argument
is NULL, returns TRUE.
Same as the IS NOT FALSE operator.
Similar to ISTRUE(), except it returns the opposite value for a NULL argument.
Return type: BOOLEAN
Usage notes: Primarily for compatibility with code containing industry extensions to SQL.
Added in: CDH 5.4.0 / Impala 2.2.0
Usage notes:
In CDH 5.14 / Impala 2.11 and higher, you can use the operators IS [NOT] TRUE and IS [NOT] FALSE as
equivalents for the built-in functions ISTRUE(), ISNOTTRUE(), ISFALSE(), and ISNOTFALSE().
ISNOTTRUE(BOOLEAN expression)
Purpose: Tests if a Boolean expression is not TRUE (that is, either FALSE or NULL). Returns TRUE if so. If the argument
is NULL, returns TRUE.
Same as the IS NOT TRUE operator.
Similar to ISFALSE(), except it returns the opposite value for a NULL argument.
Return type: BOOLEAN
Added in: CDH 5.4.0 / Impala 2.2.0
Usage notes:
In CDH 5.14 / Impala 2.11 and higher, you can use the operators IS [NOT] TRUE and IS [NOT] FALSE as
equivalents for the built-in functions ISTRUE(), ISNOTTRUE(), ISFALSE(), and ISNOTFALSE().
ISNULL(type a, type ifNull)
Purpose: Tests if an expression is NULL, and returns the expression result value if not. If the first argument is NULL,
returns the second argument.
Compatibility notes: Equivalent to the NVL() function from Oracle Database or IFNULL() from MySQL. The NVL()
and IFNULL() functions are also available in Impala.
Return type: Same as the first argument value
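Examples:
The following queries are a minimal sketch of the expected behavior:
select isnull(null, 'fallback');
-- Expected to return 'fallback' because the first argument is NULL.
select isnull('original', 'fallback');
-- Expected to return 'original' because the first argument is not NULL.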
ISTRUE(BOOLEAN expression)
Purpose: Returns TRUE if the expression is TRUE. Returns FALSE if the expression is FALSE or NULL.
Same as the IS TRUE operator.
Similar to ISNOTFALSE(), except it returns the opposite value for a NULL argument.
Return type: BOOLEAN
Usage notes: Primarily for compatibility with code containing industry extensions to SQL.
Added in: CDH 5.4.0 / Impala 2.2.0
Usage notes:
In CDH 5.14 / Impala 2.11 and higher, you can use the operators IS [NOT] TRUE and IS [NOT] FALSE as
equivalents for the built-in functions ISTRUE(), ISNOTTRUE(), ISFALSE(), and ISNOTFALSE().
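Examples:
The following query is a minimal sketch showing how the four Boolean test functions are expected to treat a NULL
argument:
select istrue(cast(null as boolean)) as is_true,
isfalse(cast(null as boolean)) as is_false,
isnottrue(cast(null as boolean)) as is_not_true,
isnotfalse(cast(null as boolean)) as is_not_false;
-- Expected result: false, false, true, true.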
NONNULLVALUE(type expression)
Purpose: Tests whether an expression (of any type) is NULL. Returns FALSE if the expression is NULL, and TRUE otherwise. The converse of nullvalue().
Return type: BOOLEAN
Usage notes: Primarily for compatibility with code containing industry extensions to SQL.
Added in: CDH 5.4.0 / Impala 2.2.0
NULLIF(expr1, expr2)
Purpose: Returns NULL if the two specified arguments are equal. If the specified arguments are not equal, returns
the value of expr1. The data types of the expressions must be compatible, according to the conversion rules from
Data Types on page 101. You cannot use an expression that evaluates to NULL for expr1; that way, you can distinguish
a return value of NULL from an argument value of NULL, which would never match expr2.
Usage notes: This function is effectively shorthand for a CASE expression of the form:
CASE
WHEN expr1 = expr2 THEN NULL
ELSE expr1
END
It is commonly used in division expressions, to produce a NULL result instead of a divide-by-zero error when the
divisor is equal to zero:
select 1.0 / nullif(c1,0) as reciprocal from t1;
You might also use it for compatibility with other database systems that support the same NULLIF() function.
Return type: same as the initial argument value, except that integer values are promoted to BIGINT and floating-point
values are promoted to DOUBLE; use CAST() when inserting into a smaller numeric column
Added in: Impala 1.3.0
NULLIFZERO(numeric_expr)
Purpose: Returns NULL if the numeric expression evaluates to 0, otherwise returns the result of the expression.
Usage notes: Used to avoid error conditions such as divide-by-zero in numeric calculations. Serves as shorthand
for a more elaborate CASE expression, to simplify porting SQL with vendor extensions to Impala.
Return type: Same type as the input argument
Added in: Impala 1.3.0
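Examples:
The following queries are a minimal sketch of the expected behavior:
select nullifzero(0);
-- Expected to return NULL.
select nullifzero(5);
-- Expected to return 5.
select 100 / nullifzero(0);
-- Expected to return NULL rather than causing a divide-by-zero error.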
NULLVALUE(expression)
Purpose: Tests whether an expression (of any type) is NULL. Returns TRUE if the expression is NULL, and FALSE otherwise. The converse of nonnullvalue().
Return type: BOOLEAN
Usage notes: Primarily for compatibility with code containing industry extensions to SQL.
Added in: CDH 5.4.0 / Impala 2.2.0
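Examples:
The following queries are a minimal sketch showing NULLVALUE() together with its converse NONNULLVALUE():
select nullvalue(cast(null as string)) as is_null,
nonnullvalue(cast(null as string)) as is_not_null;
-- Expected result: true, false.
select nullvalue('abc') as is_null, nonnullvalue('abc') as is_not_null;
-- Expected result: false, true.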
NVL(type a, type ifNull)
Purpose: Alias for the ISNULL() function. Returns the first argument if the first argument is not NULL. Returns the
second argument if the first argument is NULL.
Equivalent to the NVL() function in Oracle Database or IFNULL() in MySQL.
Return type: Same as the first argument value
Added in: Impala 1.1
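Examples:
The following queries are a minimal sketch of the expected behavior:
select nvl(null, 'default');
-- Expected to return 'default'.
select nvl('value', 'default');
-- Expected to return 'value'.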
NVL2(type a, type ifNotNull, type ifNull)
Purpose: Returns the second argument, ifNotNull, if the first argument is not NULL. Returns the third argument,
ifNull, if the first argument is NULL.
Equivalent to the NVL2() function in Oracle Database.
Return type: Same as the first argument value
Added in: CDH 5.12.0 / Impala 2.9.0
Examples:
SELECT NVL2(NULL, 999, 0); -- Returns 0
SELECT NVL2('ABC', 'Is Not Null', 'Is Null'); -- Returns 'Is Not Null'
ZEROIFNULL(numeric_expr)
Purpose: Returns 0 if the numeric expression evaluates to NULL, otherwise returns the result of the expression.
Usage notes: Used to avoid unexpected results due to unexpected propagation of NULL values in numeric calculations.
Serves as shorthand for a more elaborate CASE expression, to simplify porting SQL with vendor extensions to Impala.
Return type: Same type as the input argument
Added in: Impala 1.3.0
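Examples:
The following queries are a minimal sketch of the expected behavior:
select zeroifnull(cast(null as int));
-- Expected to return 0.
select zeroifnull(42);
-- Expected to return 42.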
Impala String Functions
String functions are classified as those primarily accepting or returning STRING, VARCHAR, or CHAR data types, for
example to measure the length of a string or concatenate two strings together.
• All the functions that accept STRING arguments also accept the VARCHAR and CHAR types introduced in Impala
2.0.
• Whenever VARCHAR or CHAR values are passed to a function that returns a string value, the return type is normalized
to STRING. For example, a call to concat() with a mix of STRING, VARCHAR, and CHAR arguments produces a
STRING result, as the sketch after this list illustrates.
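As a minimal sketch of that normalization (using the TYPEOF() built-in available in CDH 5.5 / Impala 2.3 and higher),
the following query is expected to report STRING as the result type of a concat() call with mixed argument types:
select typeof(concat(cast('Im' as char(2)), cast('pa' as varchar(2)), 'la')) as result_type;
-- Expected result: STRING.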
Related information:
The string functions operate mainly on these data types: STRING Data Type on page 123, VARCHAR Data Type (CDH 5.2
or higher only) on page 137, and CHAR Data Type (CDH 5.2 or higher only) on page 107.
Function reference:
Impala supports the following string functions:
• ASCII
• BASE64DECODE
• BASE64ENCODE
• BTRIM
• CHAR_LENGTH
• CHR
• CONCAT
• CONCAT_WS
• FIND_IN_SET
• GROUP_CONCAT
• INITCAP
• INSTR
• LEFT
• LENGTH
• LEVENSHTEIN, LE_DST
• LOCATE
• LOWER, LCASE
• LPAD
• LTRIM
• PARSE_URL
• REGEXP_ESCAPE
• REGEXP_EXTRACT
• REGEXP_LIKE
• REGEXP_REPLACE
• REPEAT
• REPLACE
• REVERSE
• RIGHT
• RPAD
• RTRIM
• SPACE
• SPLIT_PART
• STRLEFT
• STRRIGHT
• SUBSTR, SUBSTRING
• TRANSLATE
• TRIM
• UPPER, UCASE
ASCII(STRING str)
Purpose: Returns the numeric ASCII code of the first character of the argument.
Return type: INT
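Examples:
The following queries are a minimal sketch of the expected behavior:
select ascii('x');
-- Expected to return 120, the ASCII code of lowercase 'x'.
select ascii('Impala');
-- Expected to return 73, the code of the first character 'I'.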
BASE64DECODE(STRING str)
Purpose: Decodes the argument, a base64-encoded string, back into the original string value.
Return type: STRING
Usage notes:
For general information about Base64 encoding, see Base64 article on Wikipedia.
The functions BASE64ENCODE() and BASE64DECODE() are typically used in combination, to store in an Impala
table string data that is problematic to store or transmit. For example, you could use these functions to store string
data that uses an encoding other than UTF-8, or to transform the values in contexts that require ASCII values, such
as for partition key columns. Keep in mind that base64-encoded values produce different results for string functions
such as LENGTH(), MAX(), and MIN() than when those functions are called with the unencoded string values.
The set of characters that can be generated as output from BASE64ENCODE(), or specified in the argument string
to BASE64DECODE(), are the ASCII uppercase and lowercase letters (A-Z, a-z), digits (0-9), and the punctuation
characters +, /, and =.
All return values produced by BASE64ENCODE() are a multiple of 4 bytes in length. All argument values supplied
to BASE64DECODE() must also be a multiple of 4 bytes in length. If a base64-encoded value would otherwise have
a different length, it can be padded with trailing = characters to reach a length that is a multiple of 4 bytes.
If the argument string to BASE64DECODE() does not represent a valid base64-encoded value, subject to the
constraints of the Impala implementation such as the allowed character set, the function returns NULL.
Examples:
The following examples show how to use BASE64ENCODE() and BASE64DECODE() together to store and retrieve
string values:
-- An arbitrary string can be encoded in base 64.
-- The length of the output is a multiple of 4 bytes,
-- padded with trailing = characters if necessary.
select base64encode('hello world') as encoded,
length(base64encode('hello world')) as length;
+------------------+--------+
| encoded | length |
+------------------+--------+
| aGVsbG8gd29ybGQ= | 16 |
+------------------+--------+
-- Passing an encoded value to base64decode() produces
-- the original value.
select base64decode('aGVsbG8gd29ybGQ=') as decoded;
+-------------+
| decoded |
+-------------+
| hello world |
+-------------+
These examples demonstrate incorrect encoded values that produce NULL return values when decoded:
-- The input value to base64decode() must be a multiple of 4 bytes.
-- In this case, leaving off the trailing = padding character
-- produces a NULL return value.
select base64decode('aGVsbG8gd29ybGQ') as decoded;
+---------+
| decoded |
+---------+
| NULL |
+---------+
WARNINGS: UDF WARNING: Invalid base64 string; input length is 15,
which is not a multiple of 4.
-- The input to base64decode() can only contain certain characters.
-- The $ character in this case causes a NULL return value.
select base64decode('abc$');
+----------------------+
| base64decode('abc$') |
+----------------------+
| NULL |
+----------------------+
WARNINGS: UDF WARNING: Could not base64 decode input in space 4; actual output length
0
These examples demonstrate “round-tripping” of an original string to an encoded string, and back again. This
technique is applicable if the original source is in an unknown encoding, or if some intermediate processing stage
might cause national characters to be misrepresented:
select 'circumflex accents: â, ê, î, ô, û' as original,
base64encode('circumflex accents: â, ê, î, ô, û') as encoded;
+-----------------------------------+------------------------------------------------------+
| original | encoded |
+-----------------------------------+------------------------------------------------------+
| circumflex accents: â, ê, î, ô, û | Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= |
+-----------------------------------+------------------------------------------------------+
select base64encode('circumflex accents: â, ê, î, ô, û') as encoded,
base64decode(base64encode('circumflex accents: â, ê, î, ô, û')) as decoded;
+------------------------------------------------------+-----------------------------------+
| encoded | decoded |
+------------------------------------------------------+-----------------------------------+
| Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= | circumflex accents: â, ê, î, ô, û |
+------------------------------------------------------+-----------------------------------+
BASE64ENCODE(STRING str)
Purpose: Encodes the argument string into a base64-encoded string.
Return type: STRING
Usage notes:
For general information about Base64 encoding, see Base64 article on Wikipedia.
The functions BASE64ENCODE() and BASE64DECODE() are typically used in combination, to store in an Impala
table string data that is problematic to store or transmit. For example, you could use these functions to store string
data that uses an encoding other than UTF-8, or to transform the values in contexts that require ASCII values, such
as for partition key columns. Keep in mind that base64-encoded values produce different results for string functions
such as LENGTH(), MAX(), and MIN() than when those functions are called with the unencoded string values.
The set of characters that can be generated as output from BASE64ENCODE(), or specified in the argument string
to BASE64DECODE(), are the ASCII uppercase and lowercase letters (A-Z, a-z), digits (0-9), and the punctuation
characters +, /, and =.
All return values produced by BASE64ENCODE() are a multiple of 4 bytes in length. All argument values supplied
to BASE64DECODE() must also be a multiple of 4 bytes in length. If a base64-encoded value would otherwise have
a different length, it can be padded with trailing = characters to reach a length that is a multiple of 4 bytes.
Examples:
The following examples show how to use BASE64ENCODE() and BASE64DECODE() together to store and retrieve
string values:
-- An arbitrary string can be encoded in base 64.
-- The length of the output is a multiple of 4 bytes,
-- padded with trailing = characters if necessary.
select base64encode('hello world') as encoded,
length(base64encode('hello world')) as length;
+------------------+--------+
| encoded | length |
+------------------+--------+
| aGVsbG8gd29ybGQ= | 16 |
+------------------+--------+
-- Passing an encoded value to base64decode() produces
-- the original value.
select base64decode('aGVsbG8gd29ybGQ=') as decoded;
+-------------+
| decoded |
+-------------+
| hello world |
+-------------+
These examples demonstrate incorrect encoded values that produce NULL return values when decoded:
-- The input value to base64decode() must be a multiple of 4 bytes.
-- In this case, leaving off the trailing = padding character
-- produces a NULL return value.
select base64decode('aGVsbG8gd29ybGQ') as decoded;
+---------+
| decoded |
+---------+
| NULL |
+---------+
WARNINGS: UDF WARNING: Invalid base64 string; input length is 15,
which is not a multiple of 4.
-- The input to base64decode() can only contain certain characters.
-- The $ character in this case causes a NULL return value.
select base64decode('abc$');
+----------------------+
| base64decode('abc$') |
+----------------------+
| NULL |
+----------------------+
WARNINGS: UDF WARNING: Could not base64 decode input in space 4; actual output length
0
These examples demonstrate “round-tripping” of an original string to an encoded string, and back again. This
technique is applicable if the original source is in an unknown encoding, or if some intermediate processing stage
might cause national characters to be misrepresented:
select 'circumflex accents: â, ê, î, ô, û' as original,
base64encode('circumflex accents: â, ê, î, ô, û') as encoded;
+-----------------------------------+------------------------------------------------------+
| original | encoded |
+-----------------------------------+------------------------------------------------------+
| circumflex accents: â, ê, î, ô, û | Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= |
+-----------------------------------+------------------------------------------------------+
select base64encode('circumflex accents: â, ê, î, ô, û') as encoded,
base64decode(base64encode('circumflex accents: â, ê, î, ô, û')) as decoded;
+------------------------------------------------------+-----------------------------------+
| encoded | decoded |
+------------------------------------------------------+-----------------------------------+
| Y2lyY3VtZmxleCBhY2NlbnRzOiDDoiwgw6osIMOuLCDDtCwgw7s= | circumflex accents: â, ê, î, ô, û |
+------------------------------------------------------+-----------------------------------+
BTRIM(STRING a), BTRIM(STRING a, STRING chars_to_trim)
Purpose: Removes all instances of one or more characters from the start and end of a STRING value. By default,
removes only spaces. If a non-NULL optional second argument is specified, the function removes all occurrences
of characters in that second argument from the beginning and end of the string.
Return type: STRING
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
The following examples show the default btrim() behavior, and what changes when you specify the optional
second argument. All the examples bracket the output value with [ ] so that you can see any leading or trailing
spaces in the btrim() result. By default, the function removes any number of both leading and trailing spaces.
When the second argument is specified, any number of occurrences of any character in the second argument are
removed from the start and end of the input string; in this case, spaces are not removed (unless they are part of
the second argument) and any instances of the characters are not removed if they do not come right at the beginning
or end of the string.
-- Remove multiple spaces before and one space after.
select concat('[',btrim(' hello '),']');
+---------------------------------------+
| concat('[', btrim(' hello '), ']') |
+---------------------------------------+
| [hello] |
+---------------------------------------+
-- Remove any instances of x or y or z at beginning or end. Leave spaces alone.
select concat('[',btrim('xy hello zyzzxx','xyz'),']');
+------------------------------------------------------+
| concat('[', btrim('xy hello zyzzxx', 'xyz'), ']') |
+------------------------------------------------------+
| [ hello ] |
+------------------------------------------------------+
-- Remove any instances of x or y or z at beginning or end.
-- Leave x, y, z alone in the middle of the string.
select concat('[',btrim('xyhelxyzlozyzzxx','xyz'),']');
+----------------------------------------------------+
| concat('[', btrim('xyhelxyzlozyzzxx', 'xyz'), ']') |
+----------------------------------------------------+
| [helxyzlo] |
+----------------------------------------------------+
CHAR_LENGTH(STRING a), CHARACTER_LENGTH(STRING a)
Purpose: Returns the length in characters of the argument string. Aliases for the length() function.
Return type: INT
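Examples:
The following brief sketch shows typical usage; the expected results appear in comments rather than as captured
impala-shell output:
-- Both forms are equivalent to length().
select char_length('hello world');       -- returns 11
select character_length('hello world');  -- returns 11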
CHR(INT character_code)
Purpose: Returns a character specified by a decimal code point value. The interpretation and display of the resulting
character depends on your system locale. Because consistent processing of Impala string values is only guaranteed
for values within the ASCII range, only use this function for values corresponding to ASCII characters. In particular,
parameter values greater than 255 return an empty string.
Return type: STRING
Usage notes: Can be used as the inverse of the ascii() function, which converts a character to its numeric ASCII
code.
Added in: CDH 5.5.0 / Impala 2.3.0
Examples:
SELECT chr(65);
+---------+
| chr(65) |
+---------+
| A |
+---------+
SELECT chr(97);
+---------+
| chr(97) |
+---------+
| a |
+---------+
CONCAT(STRING a, STRING b...)
Purpose: Returns a single string representing all the argument values joined together.
Return type: STRING
Usage notes: concat() and concat_ws() are appropriate for concatenating the values of multiple columns within
the same row, while group_concat() joins together values from different rows.
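The following brief sketch illustrates typical usage; the expected result appears in a comment rather than as
captured impala-shell output:
-- All argument values are joined in order, with no separator.
select concat('abc', 'def', 'ghi');   -- returns 'abcdefghi'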
CONCAT_WS(STRING sep, STRING a, STRING b...)
Purpose: Returns a single string representing the second and following argument values joined together, delimited
by a specified separator.
Return type: STRING
Usage notes: concat() and concat_ws() are appropriate for concatenating the values of multiple columns within
the same row, while group_concat() joins together values from different rows.
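The following brief sketch illustrates typical usage; the expected result appears in a comment rather than as
captured impala-shell output:
-- The first argument is the separator, placed between each of the remaining arguments.
select concat_ws('-', '2021', '01', '15');   -- returns '2021-01-15'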
FIND_IN_SET(STRING str, STRING strList)
Purpose: Returns the position (starting from 1) of the first occurrence of a specified string within a comma-separated
string. Returns NULL if either argument is NULL, 0 if the search string is not found, or 0 if the search string contains
a comma.
Return type: INT
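Examples:
The following brief sketch illustrates typical usage; the expected results appear in comments rather than as
captured impala-shell output:
-- 'b' is the second element of the comma-separated list.
select find_in_set('b', 'a,b,c');   -- returns 2
-- A search string that does not appear in the list returns 0.
select find_in_set('z', 'a,b,c');   -- returns 0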
GROUP_CONCAT(STRING s [, STRING sep])
Purpose: Returns a single string representing the argument values concatenated together across the rows of the result
set. If the optional separator string is specified, the separator is added between each pair of concatenated values.
Return type: STRING
Usage notes: concat() and concat_ws() are appropriate for concatenating the values of multiple columns within
the same row, while group_concat() joins together values from different rows.
By default, returns a single string covering the whole result set. To include other columns or values in the result
set, or to produce multiple concatenated strings for subsets of rows, include a GROUP BY clause in the query.
Strictly speaking, group_concat() is an aggregate function, not a scalar function like the others in this list. For
additional details and examples, see GROUP_CONCAT Function on page 489.
INITCAP(STRING str)
Purpose: Returns the input string with the first letter of each word capitalized and all other letters in lowercase.
Return type: STRING
Example:
INITCAP("i gOt mY ChiCkeNs in tHe yard.") returns "I Got My Chickens In The Yard.".
INSTR(STRING str, STRING substr [, BIGINT position [, BIGINT occurrence ] ])
Purpose: Returns the position (starting from 1) of the first occurrence of a substr within a longer string.
Return type: INT
Usage notes:
If the substr is not present in str, the function returns 0.
The optional third and fourth arguments let you find instances of the substr other than the first instance starting
from the left.
• The third argument, position, lets you specify a starting point within the str other than 1.
-- Restricting the search to positions 7..end,
-- the first occurrence of 'b' is at position 9.
select instr('foo bar bletch', 'b', 7);
+---------------------------------+
| instr('foo bar bletch', 'b', 7) |
+---------------------------------+
| 9 |
+---------------------------------+
• If there are no more occurrences after the specified position, the result is 0.
• If position is negative, the search works right-to-left starting that many characters from the right. The return
value still represents the position starting from the left side of str.
-- Scanning right to left, the first occurrence of 'o'
-- is at position 8. (8th character from the left.)
select instr('hello world','o',-1);
+-------------------------------+
| instr('hello world', 'o', -1) |
+-------------------------------+
| 8 |
+-------------------------------+
• The fourth argument, occurrence, lets you specify an occurrence other than the first.
-- 2nd occurrence of 'b' is at position 9.
select instr('foo bar bletch', 'b', 1, 2);
+------------------------------------+
| instr('foo bar bletch', 'b', 1, 2) |
+------------------------------------+
| 9 |
+------------------------------------+
• If occurrence is greater than the number of matching occurrences, the function returns 0.
• occurrence cannot be negative or zero. A non-positive value for this argument causes an error.
• If either of the optional arguments, position or occurrence, is NULL, the function also returns NULL.
LEFT(STRING a, INT num_chars)
See the STRLEFT() function.
LENGTH(STRING a)
Purpose: Returns the length in characters of the argument string.
Return type: INT
LEVENSHTEIN(STRING str1, STRING str2), LE_DST(STRING str1, STRING str2)
Purpose: Returns the Levenshtein distance between two input strings. The Levenshtein distance between two
strings is the minimum number of single-character edits required to transform one string into the other. The function
indicates how different the input strings are.
Return type: INT
Usage notes:
If input strings are equal, the function returns 0.
If either input exceeds 255 characters, the function returns an error.
If either input string is NULL, the function returns NULL.
If the length of one input string is zero, the function returns the length of the other string.
Example:
LEVENSHTEIN('welcome', 'We come') returns 2: the first edit replaces 'w' with 'W', and the second replaces 'l' with
a space character.
LOCATE(STRING substr, STRING str[, INT pos])
Purpose: Returns the position (starting from 1) of the first occurrence of a substring within a longer string, optionally
after a particular position.
Return type: INT
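Examples:
The following brief sketch illustrates typical usage; the expected results appear in comments rather than as
captured impala-shell output:
-- The first 'o' in 'hello world' is at position 5.
select locate('o', 'hello world');      -- returns 5
-- Starting the search at position 6 finds the next occurrence instead.
select locate('o', 'hello world', 6);   -- returns 8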
LOWER(STRING a), LCASE(STRING a)
Purpose: Returns the argument string converted to all-lowercase.
Return type: STRING
Usage notes:
In CDH 5.7 / Impala 2.5 and higher, you can simplify queries that use many UPPER() and LOWER() calls to do
case-insensitive comparisons, by using the ILIKE or IREGEXP operators instead. See ILIKE Operator on page 179
and IREGEXP Operator on page 182 for details.
LPAD(STRING str, INT len, STRING pad)
Purpose: Returns a string of a specified length, based on the first argument string. If the specified string is too short,
it is padded on the left with a repeating sequence of the characters from the pad string. If the specified string is too
long, it is truncated on the right.
Return type: STRING
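Examples:
The following brief sketch illustrates typical usage; the expected results appear in comments rather than as
captured impala-shell output:
-- Pad on the left with a repeating 'xy' sequence up to a total length of 6.
select lpad('abc', 6, 'xy');      -- returns 'xyxabc'
-- A string longer than the requested length is truncated on the right.
select lpad('abcdef', 3, 'xy');   -- returns 'abc'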
LTRIM(STRING a [, STRING chars_to_trim])
Purpose: Returns the argument string with all occurrences of characters specified by the second argument removed
from the left side. Removes spaces if the second argument is not specified.
Return type: STRING
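Examples:
The following brief sketch illustrates typical usage; as in the BTRIM() examples, the output is bracketed with [ ]
so that spaces are visible, and the expected results appear in comments rather than as captured impala-shell output:
-- Only leading spaces are removed; trailing spaces are kept.
select concat('[', ltrim('   hello '), ']');       -- returns '[hello ]'
-- With a second argument, only the listed characters are trimmed from the left.
select concat('[', ltrim('xyxhello', 'xy'), ']');  -- returns '[hello]'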
PARSE_URL(STRING urlString, STRING partToExtract [, STRING keyToExtract])
Purpose: Returns the portion of a URL corresponding to a specified part. The part argument can be 'PROTOCOL',
'HOST', 'PATH', 'REF', 'AUTHORITY', 'FILE', 'USERINFO', or 'QUERY'. Uppercase is required for these
literal values. When requesting the QUERY portion of the URL, you can optionally specify a key to retrieve just the
associated value from the key-value pairs in the query string.
Return type: STRING
Usage notes: This function is important for the traditional Hadoop use case of interpreting web logs. For example,
if the web traffic data features raw URLs not divided into separate table columns, you can count visitors to a particular
page by extracting the 'PATH' or 'FILE' field, or analyze search terms by extracting the corresponding key from
the 'QUERY' field.
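Examples:
The following brief sketch uses a made-up URL to illustrate the most common parts; the expected results appear
in comments rather than as captured impala-shell output:
select parse_url('http://example.com/docs/intro.html?lang=en&id=42', 'HOST');          -- returns 'example.com'
select parse_url('http://example.com/docs/intro.html?lang=en&id=42', 'PATH');          -- returns '/docs/intro.html'
select parse_url('http://example.com/docs/intro.html?lang=en&id=42', 'QUERY', 'id');   -- returns '42'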
REGEXP_ESCAPE(STRING source)
Purpose: The REGEXP_ESCAPE function returns a string in which the characters that are special to the RE2 library
are escaped, so that they are interpreted literally rather than as special characters. The following special characters
are escaped by the function:
.\+*?[^]$(){}=!<>|:-
Return type: STRING
In Impala 2.0 and later, the Impala regular expression syntax conforms to the POSIX Extended Regular Expression
syntax used by the Google RE2 library. For details, see the RE2 documentation. It has most idioms familiar from
regular expressions in Perl, Python, and so on, including .*? for non-greedy matches.
In Impala 2.0 and later, a change in the underlying regular expression library could cause changes in the way regular
expressions are interpreted by this function. Test any queries that use regular expressions and adjust the expression
patterns if necessary.
Because the impala-shell interpreter uses the \ character for escaping, use \\ to represent the regular expression
escape character in any regular expressions that you submit through impala-shell . You might prefer to use the
equivalent character class names, such as [[:digit:]] instead of \d which you would have to escape as \\d.
Examples:
This example shows escaping one of the special characters in RE2.
+------------------------------------------------------+
| regexp_escape('Hello.world') |
+------------------------------------------------------+
| Hello\.world |
+------------------------------------------------------+
This example shows escaping all the special characters in RE2.
+------------------------------------------------------------+
| regexp_escape('a.b\\c+d*e?f[g]h$i(j)k{l}m=n!o<p>q|r:s-t') |
+------------------------------------------------------------+
| a\.b\\c\+d\*e\?f\[g\]h\$i\(j\)k\{l\}m\=n\!o\<p\>q\|r\:s\-t |
+------------------------------------------------------------+
REGEXP_EXTRACT(STRING subject, STRING pattern, INT index)
Purpose: Returns the specified () group from a string based on a regular expression pattern. Group 0 refers to the
entire extracted string, while groups 1, 2, and so on refer to the first, second, and subsequent (...) portions.
Return type: STRING
In Impala 2.0 and later, the Impala regular expression syntax conforms to the POSIX Extended Regular Expression
syntax used by the Google RE2 library. For details, see the RE2 documentation. It has most idioms familiar from
regular expressions in Perl, Python, and so on, including .*? for non-greedy matches.
In Impala 2.0 and later, a change in the underlying regular expression library could cause changes in the way regular
expressions are interpreted by this function. Test any queries that use regular expressions and adjust the expression
patterns if necessary.
Because the impala-shell interpreter uses the \ character for escaping, use \\ to represent the regular expression
escape character in any regular expressions that you submit through impala-shell . You might prefer to use the
equivalent character class names, such as [[:digit:]] instead of \d which you would have to escape as \\d.
Examples:
This example shows how group 0 matches the full pattern string, including the portion outside any () group:
[localhost:21000] > select regexp_extract('abcdef123ghi456jkl','.*?(\\d+)',0);
+------------------------------------------------------+
| regexp_extract('abcdef123ghi456jkl', '.*?(\\d+)', 0) |
+------------------------------------------------------+
| abcdef123ghi456 |
+------------------------------------------------------+
Returned 1 row(s) in 0.11s
This example shows how group 1 matches just the contents inside the first () group in the pattern string:
[localhost:21000] > select regexp_extract('abcdef123ghi456jkl','.*?(\\d+)',1);
+------------------------------------------------------+
| regexp_extract('abcdef123ghi456jkl', '.*?(\\d+)', 1) |
+------------------------------------------------------+
| 456 |
+------------------------------------------------------+
Returned 1 row(s) in 0.11s
Unlike in earlier Impala releases, the regular expression library used in Impala 2.0 and later supports the .*? idiom
for non-greedy matches. This example shows how a pattern string starting with .*? matches the shortest possible
portion of the source string, returning the rightmost set of lowercase letters. A pattern string both starting and
ending with .*? finds two potential matches of equal length, and returns the first one found (the leftmost set of
lowercase letters).
[localhost:21000] > select regexp_extract('AbcdBCdefGHI','.*?([[:lower:]]+)',1);
+--------------------------------------------------------+
| regexp_extract('AbcdBCdefGHI', '.*?([[:lower:]]+)', 1) |
+--------------------------------------------------------+
| def |
+--------------------------------------------------------+
[localhost:21000] > select regexp_extract('AbcdBCdefGHI','.*?([[:lower:]]+).*?',1);
+-----------------------------------------------------------+
| regexp_extract('AbcdBCdefGHI', '.*?([[:lower:]]+).*?', 1) |
+-----------------------------------------------------------+
| bcd |
+-----------------------------------------------------------+
REGEXP_LIKE(STRING source, STRING pattern[, STRING options])
Purpose: Returns true or false to indicate whether the source string contains anywhere inside it the regular
expression given by the pattern. The optional third argument consists of letter flags that change how the match is
performed, such as i for case-insensitive matching.
Syntax:
The flags that you can include in the optional third argument are:
• c: Case-sensitive matching (the default).
• i: Case-insensitive matching. If multiple instances of c and i are included in the third argument, the last such
option takes precedence.
• m: Multi-line matching. The ^ and $ operators match the start or end of any line within the source string, not
the start and end of the entire string.
• n: Newline matching. The . operator can match the newline character. A repetition operator such as .* can
match a portion of the source string that spans multiple lines.
Return type: BOOLEAN
In Impala 2.0 and later, the Impala regular expression syntax conforms to the POSIX Extended Regular Expression
syntax used by the Google RE2 library. For details, see the RE2 documentation. It has most idioms familiar from
regular expressions in Perl, Python, and so on, including .*? for non-greedy matches.
In Impala 2.0 and later, a change in the underlying regular expression library could cause changes in the way regular
expressions are interpreted by this function. Test any queries that use regular expressions and adjust the expression
patterns if necessary.
Because the impala-shell interpreter uses the \ character for escaping, use \\ to represent the regular expression
escape character in any regular expressions that you submit through impala-shell . You might prefer to use the
equivalent character class names, such as [[:digit:]] instead of \d which you would have to escape as \\d.
Examples:
This example shows how REGEXP_LIKE() can test for the existence of various kinds of regular expression patterns
within a source string:
-- Matches because the 'f' appears somewhere in 'foo'.
select regexp_like('foo','f');
+-------------------------+
| regexp_like('foo', 'f') |
+-------------------------+
| true |
+-------------------------+
-- Does not match because the comparison is case-sensitive by default.
select regexp_like('foo','F');
+-------------------------+
| regexp_like('foo', 'F') |
+-------------------------+
| false |
+-------------------------+
-- The 3rd argument can change the matching logic, such as 'i' meaning case-insensitive.
select regexp_like('foo','F','i');
+------------------------------+
| regexp_like('foo', 'F', 'i') |
+------------------------------+
| true |
+------------------------------+
-- The familiar regular expression notations work, such as ^ and $ anchors...
select regexp_like('foo','f$');
+--------------------------+
| regexp_like('foo', 'f$') |
+--------------------------+
| false |
+--------------------------+
select regexp_like('foo','o$');
+--------------------------+
| regexp_like('foo', 'o$') |
+--------------------------+
| true |
+--------------------------+
-- ...and repetition operators such as * and +
select regexp_like('foooooobar','fo+b');
+-----------------------------------+
| regexp_like('foooooobar', 'fo+b') |
+-----------------------------------+
| true |
+-----------------------------------+
select regexp_like('foooooobar','fx*y*o*b');
+---------------------------------------+
| regexp_like('foooooobar', 'fx*y*o*b') |
+---------------------------------------+
| true |
+---------------------------------------+
REGEXP_REPLACE(STRING initial, STRING pattern, STRING replacement)
Purpose: Returns the initial argument with the regular expression pattern replaced by the final argument string.
Return type: STRING
In Impala 2.0 and later, the Impala regular expression syntax conforms to the POSIX Extended Regular Expression
syntax used by the Google RE2 library. For details, see the RE2 documentation. It has most idioms familiar from
regular expressions in Perl, Python, and so on, including .*? for non-greedy matches.
In Impala 2.0 and later, a change in the underlying regular expression library could cause changes in the way regular
expressions are interpreted by this function. Test any queries that use regular expressions and adjust the expression
patterns if necessary.
Because the impala-shell interpreter uses the \ character for escaping, use \\ to represent the regular expression
escape character in any regular expressions that you submit through impala-shell . You might prefer to use the
equivalent character class names, such as [[:digit:]] instead of \d which you would have to escape as \\d.
Examples:
These examples show how you can replace parts of a string matching a pattern with replacement text, which can
include backreferences to any () groups in the pattern string. The backreference numbers start at 1, and any \
characters must be escaped as \\.
Replace a character pattern with new text:
[localhost:21000] > select regexp_replace('aaabbbaaa','b+','xyz');
+------------------------------------------+
| regexp_replace('aaabbbaaa', 'b+', 'xyz') |
+------------------------------------------+
| aaaxyzaaa |
+------------------------------------------+
Returned 1 row(s) in 0.11s
Replace a character pattern with substitution text that includes the original matching text:
[localhost:21000] > select regexp_replace('aaabbbaaa','(b+)','<\\1>');
+----------------------------------------------+
| regexp_replace('aaabbbaaa', '(b+)', '<\\1>') |
+----------------------------------------------+
| aaa<bbb>aaa |
+----------------------------------------------+
Returned 1 row(s) in 0.11s
Remove all characters that are not digits:
[localhost:21000] > select regexp_replace('123-456-789','[^[:digit:]]','');
+---------------------------------------------------+
| regexp_replace('123-456-789', '[^[:digit:]]', '') |
+---------------------------------------------------+
| 123456789 |
+---------------------------------------------------+
Returned 1 row(s) in 0.12s
REPEAT(STRING str, INT n)
Purpose: Returns the argument string repeated a specified number of times.
Return type: STRING
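The following brief sketch illustrates typical usage; the expected result appears in a comment rather than as
captured impala-shell output:
-- The argument string is repeated 3 times.
select repeat('ab', 3);   -- returns 'ababab'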
REPLACE(STRING initial, STRING target, STRING replacement)
Purpose: Returns the initial argument with all occurrences of the target string replaced by the replacement string.
Return type: STRING
Usage notes:
Because this function does not use any regular expression patterns, it is typically faster than REGEXP_REPLACE()
for simple string substitutions.
If any argument is NULL, the return value is NULL.
Matching is case-sensitive.
If the replacement string contains another instance of the target string, the expansion is only performed once,
instead of applying again to the newly constructed string.
Added in: CDH 5.12.0 / Impala 2.9.0
Examples:
-- Replace one string with another.
select replace('hello world','world','earth');
+------------------------------------------+
| replace('hello world', 'world', 'earth') |
+------------------------------------------+
| hello earth |
+------------------------------------------+
-- All occurrences of the target string are replaced.
select replace('hello world','o','0');
+----------------------------------+
| replace('hello world', 'o', '0') |
+----------------------------------+
| hell0 w0rld |
+----------------------------------+
-- If no match is found, the original string is returned unchanged.
select replace('hello world','xyz','abc');
+--------------------------------------+
| replace('hello world', 'xyz', 'abc') |
+--------------------------------------+
| hello world |
+--------------------------------------+
REVERSE(STRING a)
Purpose: Returns the argument string with characters in reversed order.
Return type: STRING
RIGHT(STRING a, INT num_chars)
See the STRRIGHT function.
RPAD(STRING str, INT len, STRING pad)
Purpose: Returns a string of a specified length, based on the first argument string. If the specified string is too short,
it is padded on the right with a repeating sequence of the characters from the pad string. If the specified string is
too long, it is truncated on the right.
Return type: STRING
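The following brief sketch illustrates typical usage; the expected results appear in comments rather than as
captured impala-shell output:
-- Pad on the right with a repeating 'xy' sequence up to a total length of 6.
select rpad('abc', 6, 'xy');      -- returns 'abcxyx'
-- A string longer than the requested length is truncated on the right.
select rpad('abcdef', 3, 'xy');   -- returns 'abc'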
RTRIM(STRING a [, STRING chars_to_trim])
Purpose: Returns the argument string with all occurrences of characters specified by the second argument removed
from the right side. Removes spaces if the second argument is not specified.
Return type: STRING
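The following brief sketch illustrates typical usage; as in the BTRIM() examples, the output is bracketed with [ ]
so that spaces are visible, and the expected results appear in comments rather than as captured impala-shell output:
-- Only trailing spaces are removed; leading spaces are kept.
select concat('[', rtrim('  hello  '), ']');       -- returns '[  hello]'
-- With a second argument, only the listed characters are trimmed from the right.
select concat('[', rtrim('helloxyx', 'xy'), ']');  -- returns '[hello]'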
SPACE(INT n)
Purpose: Returns a concatenated string of the specified number of spaces. Shorthand for repeat(' ',n).
Return type: STRING
SPLIT_PART(STRING source, STRING delimiter, BIGINT n)
Purpose: Returns the nth field within a delimited string. The fields are numbered starting from 1. The delimiter can
consist of multiple characters, not just a single character. All matching of the delimiter is done exactly, not using
any regular expression patterns.
Return type: STRING
In Impala 2.0 and later, the Impala regular expression syntax conforms to the POSIX Extended Regular Expression
syntax used by the Google RE2 library. For details, see the RE2 documentation. It has most idioms familiar from
regular expressions in Perl, Python, and so on, including .*? for non-greedy matches.
In Impala 2.0 and later, a change in the underlying regular expression library could cause changes in the way regular
expressions are interpreted by this function. Test any queries that use regular expressions and adjust the expression
patterns if necessary.
Because the impala-shell interpreter uses the \ character for escaping, use \\ to represent the regular expression
escape character in any regular expressions that you submit through impala-shell . You might prefer to use the
equivalent character class names, such as [[:digit:]] instead of \d which you would have to escape as \\d.
Examples:
These examples show how to retrieve the nth field from a delimited string:
select split_part('x,y,z',',',1);
+-----------------------------+
| split_part('x,y,z', ',', 1) |
+-----------------------------+
| x |
+-----------------------------+
select split_part('x,y,z',',',2);
+-----------------------------+
| split_part('x,y,z', ',', 2) |
+-----------------------------+
| y |
+-----------------------------+
select split_part('x,y,z',',',3);
+-----------------------------+
| split_part('x,y,z', ',', 3) |
+-----------------------------+
| z |
+-----------------------------+
These examples show what happens for out-of-range field positions. Specifying a value less than 1 produces an
error. Specifying a value greater than the number of fields returns a zero-length string (which is not the same as
NULL).
select split_part('x,y,z',',',0);
ERROR: Invalid field position: 0
with t1 as (select split_part('x,y,z',',',4) nonexistent_field)
select
nonexistent_field
, concat('[',nonexistent_field,']')
, length(nonexistent_field)
from t1;
+-------------------+-------------------------------------+---------------------------+
| nonexistent_field | concat('[', nonexistent_field, ']') | length(nonexistent_field) |
+-------------------+-------------------------------------+---------------------------+
| | [] | 0 |
+-------------------+-------------------------------------+---------------------------+
These examples show how the delimiter can be a multi-character value:
select split_part('one***two***three','***',2);
+-------------------------------------------+
| split_part('one***two***three', '***', 2) |
+-------------------------------------------+
| two |
+-------------------------------------------+
select split_part('one\|/two\|/three','\|/',3);
+-------------------------------------------+
| split_part('one\|/two\|/three', '\|/', 3) |
+-------------------------------------------+
| three |
+-------------------------------------------+
STRLEFT(STRING a, INT num_chars)
Purpose: Returns the leftmost characters of the string. Shorthand for a call to substr() with 2 arguments.
Return type: STRING
STRRIGHT(STRING a, INT num_chars)
Purpose: Returns the rightmost characters of the string. Shorthand for a call to substr() with 2 arguments.
Return type: STRING
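The following brief sketch shows STRLEFT() and STRRIGHT() together; the expected results appear in comments rather
than as captured impala-shell output:
select strleft('hello world', 5);    -- returns 'hello'
select strright('hello world', 5);   -- returns 'world'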
SUBSTR(STRING a, INT start [, INT len]), SUBSTRING(STRING a, INT start [, INT len])
Purpose: Returns the portion of the string starting at a specified point, optionally with a specified maximum length.
The characters in the string are indexed starting at 1.
Return type: STRING
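Examples:
The following brief sketch illustrates typical usage; the expected results appear in comments rather than as
captured impala-shell output:
-- Characters are indexed starting at 1.
select substr('hello world', 7);        -- returns 'world'
select substr('hello world', 1, 5);     -- returns 'hello'
select substring('hello world', 7, 3);  -- returns 'wor'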
TRANSLATE(STRING input, STRING from, STRING to)
Purpose: Returns the input string with each character in the from argument replaced with the corresponding
character in the to argument. The characters are matched in the order they appear in from and to.
For example: translate ('hello world','world','earth') returns 'hetta earth'.
Return type: STRING
Usage notes:
If from contains more characters than to, the from characters that are beyond the length of to are removed in
the result.
For example:
translate('abcdedg', 'bcd', '1') returns 'a1eg'.
translate('Unit Number#2', '# ', '_') returns 'UnitNumber_2'.
If from is NULL, the function returns NULL.
If to contains more characters than from, the extra characters in to are ignored.
If from contains duplicate characters, the duplicate character is replaced with the first matching character in to.
For example: translate ('hello','ll','67') returns 'he66o'.
TRIM(STRING a)
Purpose: Returns the input string with both leading and trailing spaces removed. The same as passing the string
through both LTRIM() and RTRIM().
Usage notes: Often used during data cleansing operations during the ETL cycle, if input values might still have
surrounding spaces. For a more general-purpose function that can remove other leading and trailing characters
besides spaces, see BTRIM().
Return type: STRING
UPPER(STRING a), UCASE(STRING a)
Purpose: Returns the argument string converted to all-uppercase.
Return type: STRING
Usage notes:
In CDH 5.7 / Impala 2.5 and higher, you can simplify queries that use many UPPER() and LOWER() calls to do
case-insensitive comparisons, by using the ILIKE or IREGEXP operators instead. See ILIKE Operator on page 179
and IREGEXP Operator on page 182 for details.
Impala Miscellaneous Functions
Impala supports the following utility functions that do not operate on a particular column or data type:
• COORDINATOR
• CURRENT_DATABASE
• EFFECTIVE_USER
• GET_JSON_OBJECT
• LOGGED_IN_USER
• PID
• SLEEP
• USER
• UUID
• VERSION
CURRENT_DATABASE()
Purpose: Returns the database that the session is currently using, either default if no database has been selected,
or whatever database the session switched to through a USE statement or the impalad -d option.
Return type: STRING
EFFECTIVE_USER()
Purpose: Typically returns the same value as USER(), except if delegation is enabled, in which case it returns the
ID of the delegated user.
Return type: STRING
Added in: CDH 5.4.5 / Impala 2.2.5
GET_JSON_OBJECT(STRING json_str, STRING selector)
Purpose: Extracts a JSON object from json_str based on the selector JSON path and returns the string representation
of the extracted JSON object.
The function returns NULL if the input json_str is invalid or if nothing is selected based on the selector JSON path.
The following characters are supported in the selector JSON path:
• $ : Denotes the root object
• . : Denotes the child operator
• [] : Denotes the subscript operator for array
• * : Denotes the wildcard for [] or .
Return type: STRING
Examples:
---- QUERY
SELECT GET_JSON_OBJECT ('{"a":true, "b":false, "c":true}', '$.*');
---- RESULTS
[true,false,true]
---- QUERY
SELECT GET_JSON_OBJECT(t.json, '$.a.b.c') FROM (VALUES (
('{"a": {"b": {"c": 1}}}' AS json),
('{"a": {"b": {"c": 2}}}'),
('{"a": {"b": {"c": 3}}}')
)) t
---- RESULTS
'1'
'2'
'3'
---- QUERY
SELECT GET_JSON_OBJECT(t.json, '$.a'),
GET_JSON_OBJECT(t.json, '$.b'),
GET_JSON_OBJECT(t.json, '$.c')
FROM (VALUES (
('{"a":1, "b":2, "c":3}' AS json),
('{"b":2, "c":3}'),
('{"c":3}')
)) t
---- RESULTS
'1','2','3'
'NULL','2','3'
'NULL','NULL','3'
---- QUERY
SELECT GET_JSON_OBJECT(t.json, '$[1]'),
GET_JSON_OBJECT(t.json, '$[*]')
FROM (VALUES (
('["a", "b", "c"]' AS json),
('["a", "b"]'),
('["a"]')
)) t
---- RESULTS
'b','["a","b","c"]'
'b','["a","b"]'
'NULL','a'
Added in: CDH 6.1
LOGGED_IN_USER()
Purpose: Typically returns the same value as USER(). If delegation is enabled, it returns the ID of the delegated
user.
LOGGED_IN_USER() is an alias of EFFECTIVE_USER().
Return type: STRING
Added in: CDH 6.1
PID()
Purpose: Returns the process ID of the impalad daemon that the session is connected to. You can use it during
low-level debugging, to issue Linux commands that trace, show the arguments, and so on, for the impalad process.
Return type: INT
SLEEP(INT ms)
Purpose: Pauses the query for a specified number of milliseconds. For slowing down queries with small result sets
enough to monitor runtime execution, memory usage, or other factors that otherwise would be difficult to capture
during the brief interval of query execution. When used in the SELECT list, it is called once for each row in the result
set; adjust the number of milliseconds accordingly. For example, a query SELECT *, sleep(5) FROM
table_with_1000_rows would take at least 5 seconds to complete (5 milliseconds * 1000 rows in result set). To
avoid an excessive number of concurrent queries, use this function for troubleshooting on test and development
systems, not for production queries.
Return type: N/A
USER()
Purpose: Returns the username of the Linux user who is connected to the impalad daemon. Typically called a
single time, in a query without any FROM clause, to understand how authorization settings apply in a security context;
once you know the logged-in username, you can check which groups that user belongs to, and from the list of
groups you can check which roles are available to those groups through the authorization policy file.
In Impala 2.0 and later, USER() returns the full Kerberos principal string, such as user@example.com, in a Kerberized
environment.
When delegation is enabled, consider calling the effective_user() function instead.
Return type: STRING
UUID()
Purpose: Returns a universal unique identifier, a 128-bit value encoded as a string with groups of hexadecimal digits
separated by dashes.
Each call to UUID() produces a new arbitrary value.
If you get a UUID for each row of a result set, you can use it as a unique identifier within a table, or even a unique
ID across tables.
Return type: STRING
Added in: CDH 5.7.0 / Impala 2.5.0
Usage notes:
Ascending numeric sequences of type BIGINT are often used as identifiers within a table, and as join keys across
multiple tables. The uuid() value is a convenient alternative that does not require storing or querying the highest
sequence number. For example, you can use it to quickly construct new unique identifiers during a data import job,
or to combine data from different tables without the likelihood of ID collisions.
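Examples:
The following brief sketch illustrates typical usage. Because each call produces a new arbitrary value, the results
shown in comments are placeholders, and t1 is a hypothetical table:
-- Returns a new arbitrary value such as 'b5a07cbc-4c75-4102-a18e-2a86d52b02e2' (placeholder value).
select uuid();
-- Generate a distinct identifier for each row of the hypothetical table t1.
select uuid() as id, c1 from t1;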
VERSION()
Purpose: Returns information such as the precise version number and build date for the impalad daemon that
you are currently connected to. Typically used to confirm that you are connected to the expected level of Impala
to use a particular feature, or to connect to several nodes and confirm they are all running the same level of impalad.
Return type: STRING (with one or more embedded newlines)
COORDINATOR()
Purpose: Returns the name of the host which is running the impalad daemon that is acting as the coordinator
for the current query.
Return type: STRING
Added in: CDH 6.1
Impala Aggregate Functions
Aggregate functions are a special category with different rules. These functions calculate a return value across all the
items in a result set, so they require a FROM clause in the query:
select count(product_id) from product_catalog;
select max(height), avg(height) from census_data where age > 20;
Aggregate functions also ignore NULL values rather than returning a NULL result. For example, if some rows have NULL
for a particular column, those rows are ignored when computing the AVG() for that column. Likewise, specifying
COUNT(col_name) in a query counts only those rows where col_name contains a non-NULL value.
APPX_MEDIAN Function
An aggregate function that returns a value that is approximately the median (midpoint) of values in the set of input
values.
Syntax:
APPX_MEDIAN([DISTINCT | ALL] expression)
This function works with any input type, because the only requirement is that the type supports less-than and
greater-than comparison operators.
Usage notes:
Because the return value represents the estimated midpoint, it might not reflect the precise midpoint value, especially
if the cardinality of the input values is very high. If the cardinality is low (up to approximately 20,000), the result is
more accurate because the sampling considers all or almost all of the different values.
Return type: Same as the input value, except for CHAR and VARCHAR arguments which produce a STRING result
The return value is always the same as one of the input values, not an “in-between” value produced by averaging.
Restrictions:
This function cannot be used in an analytic context. That is, the OVER() clause is not allowed at all with this function.
The APPX_MEDIAN function returns only the first 10 characters for string values (string, varchar, char). Additional
characters are truncated.
Examples:
The following example uses a table of a million random floating-point numbers ranging up to approximately 50,000.
The average is approximately 25,000. Because of the random distribution, we would expect the median to be close to
this same number. Computing the precise median is a more intensive operation than computing the average, because
it requires keeping track of every distinct value and how many times each occurs. The APPX_MEDIAN() function uses
a sampling algorithm to return an approximate result, which in this case is close to the expected value. To make sure
that the value is not substantially out of range due to a skewed distribution, subsequent queries confirm that there
are approximately 500,000 values higher than the APPX_MEDIAN() value, and approximately 500,000 values lower
than the APPX_MEDIAN() value.
[localhost:21000] > select min(x), max(x), avg(x) from million_numbers;
+-------------------+-------------------+-------------------+
| min(x) | max(x) | avg(x) |
+-------------------+-------------------+-------------------+
| 4.725693727250069 | 49994.56852674231 | 24945.38563793553 |
+-------------------+-------------------+-------------------+
[localhost:21000] > select appx_median(x) from million_numbers;
+----------------+
| appx_median(x) |
+----------------+
| 24721.6 |
+----------------+
[localhost:21000] > select count(x) as higher from million_numbers where x > (select
appx_median(x) from million_numbers);
+--------+
| higher |
+--------+
| 502013 |
+--------+
[localhost:21000] > select count(x) as lower from million_numbers where x < (select
appx_median(x) from million_numbers);
+--------+
| lower |
+--------+
| 497987 |
+--------+
The following example computes the approximate median using a subset of the values from the table, and then confirms
that the result is a reasonable estimate for the midpoint.
[localhost:21000] > select appx_median(x) from million_numbers where x between 1000 and
5000;
+-------------------+
| appx_median(x) |
+-------------------+
| 3013.107787358159 |
+-------------------+
[localhost:21000] > select count(x) as higher from million_numbers where x between 1000
and 5000 and x > 3013.107787358159;
+--------+
| higher |
+--------+
| 37692 |
+--------+
[localhost:21000] > select count(x) as lower from million_numbers where x between 1000
and 5000 and x < 3013.107787358159;
+-------+
| lower |
+-------+
| 37089 |
+-------+
AVG Function
An aggregate function that returns the average value from a set of numbers or TIMESTAMP values. Its single argument
can be a numeric column, or the numeric result of a function or expression applied to the column value. Rows with a
NULL value for the specified column are ignored. If the table is empty, or all the values supplied to AVG are NULL, AVG
returns NULL.
Syntax:
AVG([DISTINCT | ALL] expression) [OVER (analytic_clause)]
When the query contains a GROUP BY clause, returns one value for each combination of grouping values.
Return type: DOUBLE for numeric values; TIMESTAMP for TIMESTAMP values
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric values such as SUM() and AVG() are computed using the numeric R_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint, n_name:string, n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Examples:
-- Average all the non-NULL values in a column.
insert overwrite avg_t values (2),(4),(6),(null),(null);
-- The average of the above values is 4: (2+4+6) / 3. The 2 NULL values are ignored.
select avg(x) from avg_t;
-- Average only certain values from the column.
select avg(x) from t1 where month = 'January' and year = '2013';
-- Apply a calculation to the value of the column before averaging.
select avg(x/3) from t1;
-- Apply a function to the value of the column before averaging.
-- Here we are substituting a value of 0 for all NULLs in the column,
-- so that those rows do factor into the return value.
select avg(isnull(x,0)) from t1;
-- Apply some number-returning function to a string column and average the results.
-- If column s contains any NULLs, length(s) also returns NULL and those rows are ignored.
select avg(length(s)) from t1;
-- Can also be used in combination with DISTINCT and/or GROUP BY.
-- Return more than one result.
select month, year, avg(page_visits) from web_stats group by month, year;
-- Filter the input to eliminate duplicates before performing the calculation.
select avg(distinct x) from t1;
-- Filter the output after performing the calculation.
select avg(x) from t1 group by y having avg(x) between 1 and 20;
The following examples show how to use AVG() in an analytic context. They use a table containing integers from 1 to
10. Notice how the AVG() is reported for each input value, as opposed to the GROUP BY clause which condenses the
result set.
select x, property, avg(x) over (partition by property) as avg from int_t where property
in ('odd','even');
+----+----------+-----+
| x | property | avg |
+----+----------+-----+
| 2 | even | 6 |
| 4 | even | 6 |
| 6 | even | 6 |
| 8 | even | 6 |
| 10 | even | 6 |
| 1 | odd | 5 |
| 3 | odd | 5 |
| 5 | odd | 5 |
| 7 | odd | 5 |
| 9 | odd | 5 |
+----+----------+-----+
Adding an ORDER BY clause lets you experiment with results that are cumulative or apply to a moving set of rows (the
“window”). The following examples use AVG() in an analytic context (that is, with an OVER() clause) to produce a
running average of all the even values, then a running average of all the odd values. The basic ORDER BY x clause
implicitly activates a window clause of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is
effectively the same as ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, therefore all of these examples
produce the same results:
select x, property,
avg(x) over (partition by property order by x) as 'cumulative average'
from int_t where property in ('odd','even');
+----+----------+--------------------+
| x | property | cumulative average |
+----+----------+--------------------+
| 2 | even | 2 |
| 4 | even | 3 |
| 6 | even | 4 |
| 8 | even | 5 |
| 10 | even | 6 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+--------------------+
select x, property,
avg(x) over
(
partition by property
order by x
range between unbounded preceding and current row
) as 'cumulative average'
from int_t where property in ('odd','even');
+----+----------+--------------------+
| x | property | cumulative average |
+----+----------+--------------------+
| 2 | even | 2 |
| 4 | even | 3 |
| 6 | even | 4 |
| 8 | even | 5 |
| 10 | even | 6 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+--------------------+
select x, property,
avg(x) over
(
partition by property
order by x
rows between unbounded preceding and current row
) as 'cumulative average'
from int_t where property in ('odd','even');
+----+----------+--------------------+
| x | property | cumulative average |
+----+----------+--------------------+
| 2 | even | 2 |
| 4 | even | 3 |
| 6 | even | 4 |
| 8 | even | 5 |
| 10 | even | 6 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+--------------------+
The following examples show how to construct a moving window, with a running average taking into account 1 row
before and 1 row after the current row, within the same partition (all the even values or all the odd values). Because
of a restriction in the Impala RANGE syntax, this type of moving window is possible with the ROWS BETWEEN clause
but not the RANGE BETWEEN clause:
select x, property,
avg(x) over
(
partition by property
order by x
rows between 1 preceding and 1 following
) as 'moving average'
from int_t where property in ('odd','even');
+----+----------+----------------+
| x | property | moving average |
+----+----------+----------------+
| 2 | even | 3 |
| 4 | even | 4 |
| 6 | even | 6 |
| 8 | even | 8 |
| 10 | even | 9 |
| 1 | odd | 2 |
| 3 | odd | 3 |
| 5 | odd | 5 |
| 7 | odd | 7 |
| 9 | odd | 8 |
+----+----------+----------------+
-- Doesn't work because of syntax restriction on RANGE clause.
select x, property,
avg(x) over
(
partition by property
order by x
range between 1 preceding and 1 following
) as 'moving average'
from int_t where property in ('odd','even');
ERROR: AnalysisException: RANGE is only supported with both the lower and upper bounds
UNBOUNDED or one UNBOUNDED and the other CURRENT ROW.
Restrictions:
Due to the way arithmetic on FLOAT and DOUBLE columns uses high-performance hardware instructions, and distributed
queries can perform these operations in different order for each query, results can vary slightly for aggregate function
calls such as SUM() and AVG() for FLOAT and DOUBLE columns, particularly on large data sets where millions or billions
of values are summed or averaged. For perfect consistency and repeatability, use the DECIMAL data type for such
operations instead of FLOAT or DOUBLE.
Related information:
Impala Analytic Functions on page 506, MAX Function on page 490, MIN Function on page 493
COUNT Function
An aggregate function that returns the number of rows, or the number of non-NULL rows.
Syntax:
COUNT([DISTINCT | ALL] expression) [OVER (analytic_clause)]
Depending on the argument, COUNT() considers rows that meet certain conditions:
• The notation COUNT(*) includes NULL values in the total.
• The notation COUNT(column_name) only considers rows where the column contains a non-NULL value.
• You can also combine COUNT with the DISTINCT operator to eliminate duplicates before counting, and to count
the combinations of values across multiple columns.
When the query contains a GROUP BY clause, returns one value for each combination of grouping values.
Return type: BIGINT
Usage notes:
If you frequently run aggregate functions such as MIN(), MAX(), and COUNT(DISTINCT) on partition key columns,
consider enabling the OPTIMIZE_PARTITION_KEY_SCANS query option, which optimizes such queries. This feature
is available in CDH 5.7 / Impala 2.5 and higher. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (CDH 5.7 or higher
only) on page 348 for the kinds of queries that this option applies to, and slight differences in how partitions are evaluated
when this query option is enabled.
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric values such as SUM() and AVG() are computed using the numeric R_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint, n_name:string, n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Examples:
-- How many rows total are in the table, regardless of NULL values?
select count(*) from t1;
-- How many rows are in the table with non-NULL values for a column?
select count(c1) from t1;
-- Count the rows that meet certain conditions.
-- Again, * includes NULLs, so COUNT(*) might be greater than COUNT(col).
select count(*) from t1 where x > 10;
select count(c1) from t1 where x > 10;
-- Can also be used in combination with DISTINCT and/or GROUP BY.
-- Combine COUNT and DISTINCT to find the number of unique values.
-- Must use column names rather than * with COUNT(DISTINCT ...) syntax.
-- Rows with NULL values are not counted.
select count(distinct c1) from t1;
-- Rows with a NULL value in _either_ column are not counted.
select count(distinct c1, c2) from t1;
-- Return more than one result.
select month, year, count(distinct visitor_id) from web_stats group by month, year;
The following examples show how to use COUNT() in an analytic context. They use a table containing integers from 1
to 10. Notice how the COUNT() is reported for each input value, as opposed to the GROUP BY clause which condenses
the result set.
select x, property, count(x) over (partition by property) as count from int_t where
property in ('odd','even');
+----+----------+-------+
| x | property | count |
+----+----------+-------+
| 2 | even | 5 |
| 4 | even | 5 |
| 6 | even | 5 |
| 8 | even | 5 |
| 10 | even | 5 |
| 1 | odd | 5 |
| 3 | odd | 5 |
| 5 | odd | 5 |
| 7 | odd | 5 |
| 9 | odd | 5 |
+----+----------+-------+
Adding an ORDER BY clause lets you experiment with results that are cumulative or apply to a moving set of rows (the
“window”). The following examples use COUNT() in an analytic context (that is, with an OVER() clause) to produce a
running count of all the even values, then a running count of all the odd values. The basic ORDER BY x clause implicitly
activates a window clause of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is effectively
the same as ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, therefore all of these examples produce
the same results:
select x, property,
count(x) over (partition by property order by x) as 'cumulative count'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative count |
+----+----------+------------------+
| 2 | even | 1 |
| 4 | even | 2 |
| 6 | even | 3 |
| 8 | even | 4 |
| 10 | even | 5 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+------------------+
select x, property,
count(x) over
(
partition by property
order by x
range between unbounded preceding and current row
) as 'cumulative count'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative count |
+----+----------+------------------+
| 2 | even | 1 |
| 4 | even | 2 |
| 6 | even | 3 |
| 8 | even | 4 |
| 10 | even | 5 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+------------------+
select x, property,
count(x) over
(
partition by property
order by x
rows between unbounded preceding and current row
) as 'cumulative count'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative count |
+----+----------+------------------+
| 2 | even | 1 |
| 4 | even | 2 |
| 6 | even | 3 |
| 8 | even | 4 |
| 10 | even | 5 |
| 1 | odd | 1 |
| 3 | odd | 2 |
| 5 | odd | 3 |
| 7 | odd | 4 |
| 9 | odd | 5 |
+----+----------+------------------+
The following examples show how to construct a moving window, with a running count taking into account 1 row
before and 1 row after the current row, within the same partition (all the even values or all the odd values). Therefore,
the count is consistently 3 for rows in the middle of the window, and 2 for rows near the ends of the window, where
there is no preceding or no following row in the partition. Because of a restriction in the Impala RANGE syntax, this
type of moving window is possible with the ROWS BETWEEN clause but not the RANGE BETWEEN clause:
select x, property,
count(x) over
(
partition by property
order by x
rows between 1 preceding and 1 following
) as 'moving total'
from int_t where property in ('odd','even');
+----+----------+--------------+
| x | property | moving total |
+----+----------+--------------+
| 2 | even | 2 |
| 4 | even | 3 |
| 6 | even | 3 |
| 8 | even | 3 |
| 10 | even | 2 |
| 1 | odd | 2 |
| 3 | odd | 3 |
| 5 | odd | 3 |
| 7 | odd | 3 |
| 9 | odd | 2 |
+----+----------+--------------+
-- Doesn't work because of syntax restriction on RANGE clause.
select x, property,
count(x) over
(
partition by property
order by x
range between 1 preceding and 1 following
) as 'moving total'
from int_t where property in ('odd','even');
ERROR: AnalysisException: RANGE is only supported with both the lower and upper bounds
UNBOUNDED or one UNBOUNDED and the other CURRENT ROW.
Related information:
Impala Analytic Functions on page 506
GROUP_CONCAT Function
An aggregate function that returns a single string representing the argument values concatenated together from each
row of the result set. If the optional separator string is specified, the separator is added between each pair of
concatenated values. The default separator is a comma followed by a space.
Syntax:
GROUP_CONCAT([ALL | DISTINCT] expression [, separator])
Usage notes: concat() and concat_ws() are appropriate for concatenating the values of multiple columns within
the same row, while group_concat() joins together values from different rows.
By default, returns a single string covering the whole result set. To include other columns or values in the result set,
or to produce multiple concatenated strings for subsets of rows, include a GROUP BY clause in the query.
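For example, the following minimal sketch (assuming a hypothetical names table with first_name and last_name STRING columns) contrasts the two approaches: concat_ws() builds one string per row from several columns, while group_concat() collapses a column from several rows into one string.
-- One output row per input row: combine columns within the same row.
select concat_ws(' ', first_name, last_name) as full_name from names;
-- One output row per group (here, the whole table): combine values across rows.
select group_concat(first_name) as all_first_names from names;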
Return type: STRING
This function cannot be used in an analytic context. That is, the OVER() clause is not allowed at all with this function.
Currently, Impala returns an error if the result value grows larger than 1 GiB.
Examples:
The following examples illustrate various aspects of the GROUP_CONCAT() function.
You can call the function directly on a STRING column. To use it with a numeric column, cast the value to STRING.
[localhost:21000] > create table t1 (x int, s string);
[localhost:21000] > insert into t1 values (1, "one"), (3, "three"), (2, "two"), (1,
"one");
[localhost:21000] > select group_concat(s) from t1;
+----------------------+
| group_concat(s) |
+----------------------+
| one, three, two, one |
+----------------------+
[localhost:21000] > select group_concat(cast(x as string)) from t1;
+---------------------------------+
| group_concat(cast(x as string)) |
+---------------------------------+
| 1, 3, 2, 1 |
+---------------------------------+
Specify the DISTINCT keyword to eliminate duplicate values from the concatenated result:
[localhost:21000] > select group_concat(distinct s) from t1;
+--------------------------+
| group_concat(distinct s) |
+--------------------------+
| three, two, one |
+--------------------------+
The optional separator lets you format the result in flexible ways. The separator can be an arbitrary string expression,
not just a single character.
[localhost:21000] > select group_concat(s,"|") from t1;
+----------------------+
| group_concat(s, '|') |
+----------------------+
| one|three|two|one |
+----------------------+
[localhost:21000] > select group_concat(s,'---') from t1;
+-------------------------+
| group_concat(s, '---') |
+-------------------------+
| one---three---two---one |
+-------------------------+
The default separator is a comma followed by a space. To get a comma-delimited result without extra spaces, specify
a delimiter character that is only a comma.
[localhost:21000] > select group_concat(s,',') from t1;
+----------------------+
| group_concat(s, ',') |
+----------------------+
| one,three,two,one |
+----------------------+
Including a GROUP BY clause lets you produce a different concatenated result for each group in the result set. In this
example, the only X value that occurs more than once is 1, so that is the only row in the result set where
GROUP_CONCAT() returns a delimited value. For groups containing a single value, GROUP_CONCAT() returns the
original value of its STRING argument.
[localhost:21000] > select x, group_concat(s) from t1 group by x;
+---+-----------------+
| x | group_concat(s) |
+---+-----------------+
| 2 | two |
| 3 | three |
| 1 | one, one |
+---+-----------------+
MAX Function
An aggregate function that returns the maximum value from a set of numbers. Opposite of the MIN function. Its single
argument can be a numeric column, or the numeric result of a function or expression applied to the column value. Rows
with a NULL value for the specified column are ignored. If the table is empty, or all the values supplied to MAX are NULL,
MAX returns NULL.
Syntax:
MAX([DISTINCT | ALL] expression) [OVER (analytic_clause)]
When the query contains a GROUP BY clause, returns one value for each combination of grouping values.
Restrictions: In Impala 2.0 and higher, this function can be used as an analytic function, but with restrictions on any
window clause. For MAX() and MIN(), the window clause is only allowed if the start bound is UNBOUNDED PRECEDING.
Return type: Same as the input value, except for CHAR and VARCHAR arguments which produce a STRING result
Usage notes:
If you frequently run aggregate functions such as MIN(), MAX(), and COUNT(DISTINCT) on partition key columns,
consider enabling the OPTIMIZE_PARTITION_KEY_SCANS query option, which optimizes such queries. This feature
is available in CDH 5.7 / Impala 2.5 and higher. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (CDH 5.7 or higher
only) on page 348 for the kinds of queries that this option applies to, and slight differences in how partitions are evaluated
when this query option is enabled.
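For example, a minimal sketch assuming a hypothetical logs table partitioned by year and month columns:
set optimize_partition_key_scans=1;
-- These aggregates only involve the partition key columns, so they are the
-- kind of statement that can be evaluated from partition metadata when the
-- option is enabled.
select min(year), max(year), ndv(month) from logs;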
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric aggregations such as SUM() and AVG() are computed using the numeric N_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint,n_name:string,n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Examples:
-- Find the largest value for this column in the table.
select max(c1) from t1;
-- Find the largest value for this column from a subset of the table.
select max(c1) from t1 where month = 'January' and year = '2013';
-- Find the largest value from a set of numeric function results.
select max(length(s)) from t1;
-- Can also be used in combination with DISTINCT and/or GROUP BY.
-- Return more than one result.
select month, year, max(purchase_price) from store_stats group by month, year;
-- Filter the input to eliminate duplicates before performing the calculation.
select max(distinct x) from t1;
The following examples show how to use MAX() in an analytic context. They use a table containing integers from 1 to
10. Notice how the MAX() is reported for each input value, as opposed to the GROUP BY clause which condenses the
result set.
select x, property, max(x) over (partition by property) as max from int_t where property
in ('odd','even');
+----+----------+-----+
| x | property | max |
+----+----------+-----+
| 2 | even | 10 |
| 4 | even | 10 |
| 6 | even | 10 |
| 8 | even | 10 |
| 10 | even | 10 |
| 1 | odd | 9 |
| 3 | odd | 9 |
| 5 | odd | 9 |
| 7 | odd | 9 |
| 9 | odd | 9 |
+----+----------+-----+
Adding an ORDER BY clause lets you experiment with results that are cumulative or apply to a moving set of rows (the
“window”). The following examples use MAX() in an analytic context (that is, with an OVER() clause) to display the
largest value of X encountered up to each row in the result set. The examples use two columns in the ORDER BY
clause to produce a sequence of values that rises and falls, to illustrate how the MAX() result only increases or stays
the same throughout each partition within the result set. The basic ORDER BY x clause implicitly activates a window
clause of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is effectively the same as ROWS
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, therefore all of these examples produce the same results:
select x, property,
max(x) over (order by property, x desc) as 'maximum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | maximum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 7 |
| 3 | prime | 7 |
| 2 | prime | 7 |
| 9 | square | 9 |
| 4 | square | 9 |
| 1 | square | 9 |
+---+----------+-----------------------+
select x, property,
max(x) over
(
order by property, x desc
rows between unbounded preceding and current row
) as 'maximum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | maximum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 7 |
| 3 | prime | 7 |
| 2 | prime | 7 |
| 9 | square | 9 |
| 4 | square | 9 |
| 1 | square | 9 |
+---+----------+-----------------------+
select x, property,
max(x) over
(
order by property, x desc
range between unbounded preceding and current row
) as 'maximum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | maximum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 7 |
| 3 | prime | 7 |
| 2 | prime | 7 |
| 9 | square | 9 |
| 4 | square | 9 |
| 1 | square | 9 |
+---+----------+-----------------------+
The following examples show how to construct a moving window, with a running maximum taking into account all
rows before and 1 row after the current row. Because of a restriction in the Impala RANGE syntax, this type of moving
window is possible with the ROWS BETWEEN clause but not the RANGE BETWEEN clause. Because of an extra Impala
restriction on the MAX() and MIN() functions in an analytic context, the lower bound must be UNBOUNDED PRECEDING.
select x, property,
max(x) over
(
order by property, x
rows between unbounded preceding and 1 following
) as 'local maximum'
from int_t where property in ('prime','square');
+---+----------+---------------+
| x | property | local maximum |
+---+----------+---------------+
| 2 | prime | 3 |
| 3 | prime | 5 |
| 5 | prime | 7 |
| 7 | prime | 7 |
| 1 | square | 7 |
| 4 | square | 9 |
| 9 | square | 9 |
+---+----------+---------------+
-- Doesn't work because of syntax restriction on RANGE clause.
select x, property,
max(x) over
(
order by property, x
range between unbounded preceding and 1 following
) as 'local maximum'
from int_t where property in ('prime','square');
ERROR: AnalysisException: RANGE is only supported with both the lower and upper bounds
UNBOUNDED or one UNBOUNDED and the other CURRENT ROW.
Related information:
Impala Analytic Functions on page 506, MIN Function on page 493, AVG Function on page 481
MIN Function
An aggregate function that returns the minimum value from a set of numbers. Opposite of the MAX function. Its single
argument can be a numeric column, or the numeric result of a function or expression applied to the column value. Rows
with a NULL value for the specified column are ignored. If the table is empty, or all the values supplied to MIN are NULL,
MIN returns NULL.
Syntax:
MIN([DISTINCT | ALL] expression) [OVER (analytic_clause)]
When the query contains a GROUP BY clause, returns one value for each combination of grouping values.
Restrictions: In Impala 2.0 and higher, this function can be used as an analytic function, but with restrictions on any
window clause. For MAX() and MIN(), the window clause is only allowed if the start bound is UNBOUNDED PRECEDING.
Return type: Same as the input value, except for CHAR and VARCHAR arguments which produce a STRING result
Usage notes:
If you frequently run aggregate functions such as MIN(), MAX(), and COUNT(DISTINCT) on partition key columns,
consider enabling the OPTIMIZE_PARTITION_KEY_SCANS query option, which optimizes such queries. This feature
is available in CDH 5.7 / Impala 2.5 and higher. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (CDH 5.7 or higher
only) on page 348 for the kinds of queries that this option applies to, and slight differences in how partitions are evaluated
when this query option is enabled.
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric aggregations such as SUM() and AVG() are computed using the numeric N_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint,n_name:string,n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Examples:
-- Find the smallest value for this column in the table.
select min(c1) from t1;
-- Find the smallest value for this column from a subset of the table.
select min(c1) from t1 where month = 'January' and year = '2013';
-- Find the smallest value from a set of numeric function results.
select min(length(s)) from t1;
-- Can also be used in combination with DISTINCT and/or GROUP BY.
-- Return more than one result.
select month, year, min(purchase_price) from store_stats group by month, year;
-- Filter the input to eliminate duplicates before performing the calculation.
select min(distinct x) from t1;
The following examples show how to use MIN() in an analytic context. They use a table containing integers from 1 to
10. Notice how the MIN() is reported for each input value, as opposed to the GROUP BY clause which condenses the
result set.
select x, property, min(x) over (partition by property) as min from int_t where property
in ('odd','even');
+----+----------+-----+
| x | property | min |
+----+----------+-----+
| 2 | even | 2 |
| 4 | even | 2 |
| 6 | even | 2 |
| 8 | even | 2 |
| 10 | even | 2 |
| 1 | odd | 1 |
| 3 | odd | 1 |
| 5 | odd | 1 |
| 7 | odd | 1 |
| 9 | odd | 1 |
+----+----------+-----+
Adding an ORDER BY clause lets you experiment with results that are cumulative or apply to a moving set of rows (the
“window”). The following examples use MIN() in an analytic context (that is, with an OVER() clause) to display the
smallest value of X encountered up to each row in the result set. The examples use two columns in the ORDER BY
clause to produce a sequence of values that rises and falls, to illustrate how the MIN() result only decreases or stays
the same throughout each partition within the result set. The basic ORDER BY x clause implicitly activates a window
clause of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is effectively the same as ROWS
BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, therefore all of these examples produce the same results:
select x, property, min(x) over (order by property, x desc) as 'minimum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | minimum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 5 |
| 3 | prime | 3 |
| 2 | prime | 2 |
| 9 | square | 2 |
| 4 | square | 2 |
| 1 | square | 1 |
+---+----------+-----------------------+
select x, property,
min(x) over
(
order by property, x desc
range between unbounded preceding and current row
) as 'minimum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | minimum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 5 |
| 3 | prime | 3 |
| 2 | prime | 2 |
| 9 | square | 2 |
| 4 | square | 2 |
| 1 | square | 1 |
+---+----------+-----------------------+
select x, property,
min(x) over
(
order by property, x desc
rows between unbounded preceding and current row
) as 'minimum to this point'
from int_t where property in ('prime','square');
+---+----------+-----------------------+
| x | property | minimum to this point |
+---+----------+-----------------------+
| 7 | prime | 7 |
| 5 | prime | 5 |
| 3 | prime | 3 |
| 2 | prime | 2 |
| 9 | square | 2 |
| 4 | square | 2 |
| 1 | square | 1 |
+---+----------+-----------------------+
The following examples show how to construct a moving window, with a running minimum taking into account all rows
before and 1 row after the current row. Because of a restriction in the Impala RANGE syntax, this type of moving window
is possible with the ROWS BETWEEN clause but not the RANGE BETWEEN clause. Because of an extra Impala restriction
on the MAX() and MIN() functions in an analytic context, the lower bound must be UNBOUNDED PRECEDING.
select x, property,
min(x) over
(
order by property, x desc
rows between unbounded preceding and 1 following
) as 'local minimum'
from int_t where property in ('prime','square');
+---+----------+---------------+
| x | property | local minimum |
+---+----------+---------------+
| 7 | prime | 5 |
| 5 | prime | 3 |
| 3 | prime | 2 |
| 2 | prime | 2 |
| 9 | square | 2 |
| 4 | square | 1 |
| 1 | square | 1 |
+---+----------+---------------+
-- Doesn't work because of syntax restriction on RANGE clause.
select x, property,
min(x) over
(
order by property, x desc
range between unbounded preceding and 1 following
) as 'local minimum'
from int_t where property in ('prime','square');
ERROR: AnalysisException: RANGE is only supported with both the lower and upper bounds
UNBOUNDED or one UNBOUNDED and the other CURRENT ROW.
Related information:
Impala Analytic Functions on page 506, MAX Function on page 490, AVG Function on page 481
NDV Function
An aggregate function that returns an approximate value similar to the result of COUNT(DISTINCT col), the “number
of distinct values”. It is much faster than the combination of COUNT and DISTINCT, and uses a constant amount of
memory and thus is less memory-intensive for columns with high cardinality.
Syntax:
NDV([DISTINCT | ALL] expression)
Usage notes:
This is the mechanism used internally by the COMPUTE STATS statement for computing the number of distinct values
in a column.
Because this number is an estimate, it might not reflect the precise number of different values in the column, especially
if the cardinality is very low or very high. If the estimated number is higher than the number of rows in the table, Impala
adjusts the value internally during query planning.
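For example, you can compare the NDV() estimate for a column with the #Distinct Values figure recorded by COMPUTE STATS, shown here as a sketch against the sample_data table used in the examples later in this section:
compute stats sample_data;
-- The #Distinct Values column in this output is produced by the same NDV mechanism.
show column stats sample_data;
select ndv(col1) from sample_data;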
Return type: DOUBLE in Impala 2.0 and higher; STRING in earlier releases
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric aggregations such as SUM() and AVG() are computed using the numeric N_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint,n_name:string,n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Restrictions:
This function cannot be used in an analytic context. That is, the OVER() clause is not allowed at all with this function.
Examples:
The following example queries a billion-row table to illustrate the relative performance of COUNT(DISTINCT) and
NDV(). It shows how COUNT(DISTINCT) gives a precise answer, but is inefficient for large-scale data where an
approximate result is sufficient. The NDV() function gives an approximate result but is much faster.
select count(distinct col1) from sample_data;
+---------------------+
| count(distinct col1)|
+---------------------+
| 100000 |
+---------------------+
Fetched 1 row(s) in 20.13s
select cast(ndv(col1) as bigint) as col1 from sample_data;
+----------+
| col1 |
+----------+
| 139017 |
+----------+
Fetched 1 row(s) in 8.91s
The following example shows how you can code multiple NDV() calls in a single query, to easily learn which columns
have substantially more or fewer distinct values. This technique is faster than running a sequence of queries with
COUNT(DISTINCT) calls.
select cast(ndv(col1) as bigint) as col1, cast(ndv(col2) as bigint) as col2,
cast(ndv(col3) as bigint) as col3, cast(ndv(col4) as bigint) as col4
from sample_data;
+----------+-----------+------------+-----------+
| col1 | col2 | col3 | col4 |
+----------+-----------+------------+-----------+
| 139017 | 282 | 46 | 145636240 |
+----------+-----------+------------+-----------+
Fetched 1 row(s) in 34.97s
select count(distinct col1) from sample_data;
+---------------------+
| count(distinct col1)|
+---------------------+
| 100000 |
+---------------------+
Fetched 1 row(s) in 20.13s
select count(distinct col2) from sample_data;
+----------------------+
| count(distinct col2) |
+----------------------+
| 278 |
+----------------------+
Fetched 1 row(s) in 20.09s
select count(distinct col3) from sample_data;
+-----------------------+
| count(distinct col3) |
+-----------------------+
| 46 |
+-----------------------+
Fetched 1 row(s) in 19.12s
select count(distinct col4) from sample_data;
+----------------------+
| count(distinct col4) |
+----------------------+
| 147135880 |
+----------------------+
Fetched 1 row(s) in 266.95s
STDDEV, STDDEV_SAMP, STDDEV_POP Functions
An aggregate function that returns the standard deviation of a set of numbers.
Syntax:
{ STDDEV | STDDEV_SAMP | STDDEV_POP } ([DISTINCT | ALL] expression)
This function works with any numeric data type.
Return type: DOUBLE in Impala 2.0 and higher; STRING in earlier releases
This function is typically used in mathematical formulas related to probability distributions.
The STDDEV_POP() and STDDEV_SAMP() functions compute the population standard deviation and sample standard
deviation, respectively, of the input values. (STDDEV() is an alias for STDDEV_SAMP().) Both functions evaluate all
input rows matched by the query. The difference is that STDDEV_SAMP() is scaled by 1/(N-1) while STDDEV_POP()
is scaled by 1/N.
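Expressed as formulas (a sketch of the standard statistical definitions, where N is the number of non-NULL input values and \bar{x} is their mean):
STDDEV_POP(x)  = \sqrt{ \tfrac{1}{N}   \sum_{i=1}^{N} (x_i - \bar{x})^2 }
STDDEV_SAMP(x) = \sqrt{ \tfrac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2 }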
If no input rows match the query, the result of any of these functions is NULL. If a single input row matches the query,
the result of any of these functions is "0.0".
Examples:
This example demonstrates how STDDEV() and STDDEV_SAMP() return the same result, while STDDEV_POP() uses
a slightly different calculation to reflect that the input data is considered part of a larger “population”.
[localhost:21000] > select stddev(score) from test_scores;
+---------------+
| stddev(score) |
+---------------+
| 28.5 |
+---------------+
[localhost:21000] > select stddev_samp(score) from test_scores;
+--------------------+
| stddev_samp(score) |
+--------------------+
| 28.5 |
+--------------------+
[localhost:21000] > select stddev_pop(score) from test_scores;
+-------------------+
| stddev_pop(score) |
+-------------------+
| 28.4858 |
+-------------------+
This example demonstrates how you can use CAST to convert the result to a DECIMAL value with a specific precision and
scale, for example when storing it in a table column.
[localhost:21000] > create table score_stats as select cast(stddev(score) as decimal(7,4))
`standard_deviation`, cast(variance(score) as decimal(7,4)) `variance` from test_scores;
+-------------------+
| summary |
+-------------------+
| Inserted 1 row(s) |
+-------------------+
[localhost:21000] > desc score_stats;
+--------------------+--------------+---------+
| name | type | comment |
+--------------------+--------------+---------+
| standard_deviation | decimal(7,4) | |
| variance | decimal(7,4) | |
+--------------------+--------------+---------+
Restrictions:
This function cannot be used in an analytic context. That is, the OVER() clause is not allowed at all with this function.
Related information:
The STDDEV(), STDDEV_POP(), and STDDEV_SAMP() functions compute the standard deviation (square root of the
variance) based on the results of VARIANCE(), VARIANCE_POP(), and VARIANCE_SAMP() respectively. See VARIANCE,
VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP Functions on page 505 for details about the variance property.
SUM Function
An aggregate function that returns the sum of a set of numbers. Its single argument can be a numeric column, or the
numeric result of a function or expression applied to the column value. Rows with a NULL value for the specified column
are ignored. If the table is empty, or all the values supplied to SUM are NULL, SUM returns NULL.
Syntax:
SUM([DISTINCT | ALL] expression) [OVER (analytic_clause)]
When the query contains a GROUP BY clause, returns one value for each combination of grouping values.
Return type: BIGINT for integer arguments, DOUBLE for floating-point arguments
Complex type considerations:
To access a column with a complex type (ARRAY, STRUCT, or MAP) in an aggregation function, you unpack the individual
elements using join notation in the query, and then apply the function to the final scalar item, field, key, or value at
the bottom of any nested type hierarchy in the column. See Complex Types (CDH 5.5 or higher only) on page 139 for
details about using complex types in Impala.
The following example demonstrates calls to several aggregation functions using values from a column containing
nested complex types (an ARRAY of STRUCT items). The array is unpacked inside the query using join notation. The
array elements are referenced using the ITEM pseudocolumn, and the structure fields inside the array elements are
referenced using dot notation. Numeric aggregations such as SUM() and AVG() are computed using the numeric N_NATIONKEY
field, and the general-purpose MAX() and MIN() values are computed from the string N_NAME field.
describe region;
+-------------+-------------------------+---------+
| name | type | comment |
+-------------+-------------------------+---------+
| r_regionkey | smallint | |
| r_name | string | |
| r_comment | string | |
| r_nations | array<struct<n_nationkey:smallint,n_name:string,n_comment:string>> | |
+-------------+-------------------------+---------+
select r_name, r_nations.item.n_nationkey
from region, region.r_nations as r_nations
order by r_name, r_nations.item.n_nationkey;
+-------------+------------------+
| r_name | item.n_nationkey |
+-------------+------------------+
| AFRICA | 0 |
| AFRICA | 5 |
| AFRICA | 14 |
| AFRICA | 15 |
| AFRICA | 16 |
| AMERICA | 1 |
| AMERICA | 2 |
| AMERICA | 3 |
| AMERICA | 17 |
| AMERICA | 24 |
| ASIA | 8 |
| ASIA | 9 |
| ASIA | 12 |
| ASIA | 18 |
| ASIA | 21 |
| EUROPE | 6 |
| EUROPE | 7 |
| EUROPE | 19 |
| EUROPE | 22 |
| EUROPE | 23 |
| MIDDLE EAST | 4 |
| MIDDLE EAST | 10 |
| MIDDLE EAST | 11 |
| MIDDLE EAST | 13 |
| MIDDLE EAST | 20 |
+-------------+------------------+
select
r_name,
count(r_nations.item.n_nationkey) as count,
sum(r_nations.item.n_nationkey) as sum,
avg(r_nations.item.n_nationkey) as avg,
min(r_nations.item.n_name) as minimum,
max(r_nations.item.n_name) as maximum,
ndv(r_nations.item.n_nationkey) as distinct_vals
from
region, region.r_nations as r_nations
group by r_name
order by r_name;
+-------------+-------+-----+------+-----------+----------------+---------------+
| r_name | count | sum | avg | minimum | maximum | distinct_vals |
+-------------+-------+-----+------+-----------+----------------+---------------+
| AFRICA | 5 | 50 | 10 | ALGERIA | MOZAMBIQUE | 5 |
| AMERICA | 5 | 47 | 9.4 | ARGENTINA | UNITED STATES | 5 |
| ASIA | 5 | 68 | 13.6 | CHINA | VIETNAM | 5 |
| EUROPE | 5 | 77 | 15.4 | FRANCE | UNITED KINGDOM | 5 |
| MIDDLE EAST | 5 | 58 | 11.6 | EGYPT | SAUDI ARABIA | 5 |
+-------------+-------+-----+------+-----------+----------------+---------------+
Examples:
The following example shows how to use SUM() to compute the total for all the values in the table, a subset of values,
or the sum for each combination of values in the GROUP BY clause:
-- Total all the values for this column in the table.
select sum(c1) from t1;
-- Find the total for this column from a subset of the table.
select sum(c1) from t1 where month = 'January' and year = '2013';
-- Find the total from a set of numeric function results.
select sum(length(s)) from t1;
-- Often used with functions that return predefined values to compute a score.
select sum(case when grade = 'A' then 1.0 when grade = 'B' then 0.75 else 0 end) as
class_honors from test_scores;
-- Can also be used in combination with DISTINCT and/or GROUP BY.
-- Return more than one result.
select month, year, sum(purchase_price) from store_stats group by month, year;
-- Filter the input to eliminate duplicates before performing the calculation.
select sum(distinct x) from t1;
The following examples show how to use SUM() in an analytic context. They use a table containing integers from 1 to
10. Notice how the SUM() is reported for each input value, as opposed to the GROUP BY clause which condenses the
result set.
select x, property, sum(x) over (partition by property) as sum from int_t where property
in ('odd','even');
+----+----------+-----+
| x | property | sum |
+----+----------+-----+
| 2 | even | 30 |
| 4 | even | 30 |
| 6 | even | 30 |
| 8 | even | 30 |
| 10 | even | 30 |
| 1 | odd | 25 |
| 3 | odd | 25 |
| 5 | odd | 25 |
| 7 | odd | 25 |
| 9 | odd | 25 |
+----+----------+-----+
Adding an ORDER BY clause lets you experiment with results that are cumulative or apply to a moving set of rows (the
“window”). The following examples use SUM() in an analytic context (that is, with an OVER() clause) to produce a
running total of all the even values, then a running total of all the odd values. The basic ORDER BY x clause implicitly
activates a window clause of RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, which is effectively
the same as ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, therefore all of these examples produce
the same results:
select x, property,
sum(x) over (partition by property order by x) as 'cumulative total'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative total |
+----+----------+------------------+
| 2 | even | 2 |
| 4 | even | 6 |
| 6 | even | 12 |
| 8 | even | 20 |
| 10 | even | 30 |
| 1 | odd | 1 |
| 3 | odd | 4 |
| 5 | odd | 9 |
| 7 | odd | 16 |
| 9 | odd | 25 |
+----+----------+------------------+
select x, property,
sum(x) over
(
partition by property
order by x
range between unbounded preceding and current row
) as 'cumulative total'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative total |
+----+----------+------------------+
| 2 | even | 2 |
| 4 | even | 6 |
| 6 | even | 12 |
| 8 | even | 20 |
| 10 | even | 30 |
| 1 | odd | 1 |
| 3 | odd | 4 |
| 5 | odd | 9 |
| 7 | odd | 16 |
| 9 | odd | 25 |
+----+----------+------------------+
select x, property,
sum(x) over
(
partition by property
order by x
rows between unbounded preceding and current row
) as 'cumulative total'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative total |
+----+----------+------------------+
| 2 | even | 2 |
| 4 | even | 6 |
| 6 | even | 12 |
| 8 | even | 20 |
| 10 | even | 30 |
| 1 | odd | 1 |
| 3 | odd | 4 |
| 5 | odd | 9 |
| 7 | odd | 16 |
| 9 | odd | 25 |
+----+----------+------------------+
Changing the direction of the ORDER BY clause causes the intermediate results of the cumulative total to be calculated
in a different order:
select x, property,
sum(x) over (partition by property order by x desc) as 'cumulative total'
from int_t where property in ('odd','even');
+----+----------+------------------+
| x | property | cumulative total |
+----+----------+------------------+
| 10 | even | 10 |
| 8 | even | 18 |
| 6 | even | 24 |
| 4 | even | 28 |
| 2 | even | 30 |
| 9 | odd | 9 |
| 7 | odd | 16 |
| 5 | odd | 21 |
| 3 | odd | 24 |
| 1 | odd | 25 |
+----+----------+------------------+
The following examples show how to construct a moving window, with a running total taking into account 1 row before
and 1 row after the current row, within the same partition (all the even values or all the odd values). Because of a
restriction in the Impala RANGE syntax, this type of moving window is possible with the ROWS BETWEEN clause but not
the RANGE BETWEEN clause:
select x, property,
sum(x) over
(
partition by property
order by x
rows between 1 preceding and 1 following
) as 'moving total'
from int_t where property in ('odd','even');
+----+----------+--------------+
| x | property | moving total |
+----+----------+--------------+
| 2 | even | 6 |
| 4 | even | 12 |
| 6 | even | 18 |
| 8 | even | 24 |
| 10 | even | 18 |
| 1 | odd | 4 |
| 3 | odd | 9 |
| 5 | odd | 15 |
| 7 | odd | 21 |
| 9 | odd | 16 |
+----+----------+--------------+
-- Doesn't work because of syntax restriction on RANGE clause.
select x, property,
sum(x) over
(
partition by property
order by x
range between 1 preceding and 1 following
) as 'moving total'
from int_t where property in ('odd','even');
ERROR: AnalysisException: RANGE is only supported with both the lower and upper bounds
UNBOUNDED or one UNBOUNDED and the other CURRENT ROW.
Restrictions:
Due to the way arithmetic on FLOAT and DOUBLE columns uses high-performance hardware instructions, and distributed
queries can perform these operations in different order for each query, results can vary slightly for aggregate function
calls such as SUM() and AVG() for FLOAT and DOUBLE columns, particularly on large data sets where millions or billions
of values are summed or averaged. For perfect consistency and repeatability, use the DECIMAL data type for such
operations instead of FLOAT or DOUBLE.
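For example, a minimal sketch assuming a hypothetical sales table with a DOUBLE column named amount:
-- Result can vary slightly between runs on very large data sets.
select sum(amount) from sales;
-- Casting to DECIMAL makes the arithmetic exact and repeatable.
select sum(cast(amount as decimal(18,2))) from sales;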
Related information:
Impala Analytic Functions on page 506
VARIANCE, VARIANCE_SAMP, VARIANCE_POP, VAR_SAMP, VAR_POP Functions
An aggregate function that returns the variance of a set of numbers. This is a mathematical property that signifies how
far the values spread out from the mean. The return value can be zero (if the input is a single value, or a set of identical
values), or a positive number otherwise.
Syntax:
{ VARIANCE | VAR[IANCE]_SAMP | VAR[IANCE]_POP } ([DISTINCT | ALL] expression)
This function works with any numeric data type.
Return type: DOUBLE in Impala 2.0 and higher; STRING in earlier releases
This function is typically used in mathematical formulas related to probability distributions.
The VARIANCE_SAMP() and VARIANCE_POP() functions compute the sample variance and population variance,
respectively, of the input values. (VARIANCE() is an alias for VARIANCE_SAMP().) Both functions evaluate all input
rows matched by the query. The difference is that VARIANCE_SAMP() is scaled by 1/(N-1) while VARIANCE_POP() is
scaled by 1/N.
The functions VAR_SAMP() and VAR_POP() are the same as VARIANCE_SAMP() and VARIANCE_POP(), respectively.
These aliases are available in Impala 2.0 and later.
If no input rows match the query, the result of any of these functions is NULL. If a single input row matches the query,
the result of any of these functions is "0.0".
Examples:
This example demonstrates how VARIANCE() and VARIANCE_SAMP() return the same result, while VARIANCE_POP()
uses a slightly different calculation to reflect that the input data is considered part of a larger “population”.
[localhost:21000] > select variance(score) from test_scores;
+-----------------+
| variance(score) |
+-----------------+
| 812.25 |
+-----------------+
[localhost:21000] > select variance_samp(score) from test_scores;
+----------------------+
| variance_samp(score) |
+----------------------+
| 812.25 |
+----------------------+
[localhost:21000] > select variance_pop(score) from test_scores;
+---------------------+
| variance_pop(score) |
+---------------------+
| 811.438 |
+---------------------+
This example demonstrates how you can use CAST to convert the result to a DECIMAL value with a specific precision and
scale, for example to store it in a table or to do further calculations as a fixed-precision numeric value.
[localhost:21000] > create table score_stats as select cast(stddev(score) as decimal(7,4))
`standard_deviation`, cast(variance(score) as decimal(7,4)) `variance` from test_scores;
+-------------------+
| summary |
+-------------------+
| Inserted 1 row(s) |
+-------------------+
[localhost:21000] > desc score_stats;
+--------------------+--------------+---------+
| name | type | comment |
+--------------------+--------------+---------+
| standard_deviation | decimal(7,4) | |
| variance | decimal(7,4) | |
+--------------------+--------------+---------+
Restrictions:
This function cannot be used in an analytic context. That is, the OVER() clause is not allowed at all with this function.
Related information:
The STDDEV(), STDDEV_POP(), and STDDEV_SAMP() functions compute the standard deviation (square root of the
variance) based on the results of VARIANCE(), VARIANCE_POP(), and VARIANCE_SAMP() respectively. See STDDEV,
STDDEV_SAMP, STDDEV_POP Functions on page 499 for details about the standard deviation property.
Impala Analytic Functions
Analytic functions (also known as window functions) are a special category of built-in functions. Like aggregate functions,
they examine the contents of multiple input rows to compute each output value. However, rather than being limited
to one result value per GROUP BY group, they operate on windows where the input rows are ordered and grouped
using flexible conditions expressed through an OVER() clause.
Added in: CDH 5.2.0 / Impala 2.0.0
Some functions, such as LAG() and RANK(), can only be used in this analytic context. Some aggregate functions do
double duty: when you call the aggregation functions such as MAX(), SUM(), AVG(), and so on with an OVER() clause,
they produce an output value for each row, based on computations across other rows in the window.
Although analytic functions often compute the same value you would see from an aggregate function in a GROUP BY
query, the analytic functions produce a value for each row in the result set rather than a single value for each group.
This flexibility lets you include additional columns in the SELECT list, offering more opportunities for organizing and
filtering the result set.
Analytic function calls are only allowed in the SELECT list and in the outermost ORDER BY clause of the query. During
query processing, analytic functions are evaluated after other query stages such as joins, WHERE, and GROUP BY.
The rows that are part of each partition are analyzed by computations across an ordered or unordered set of rows.
For example, COUNT() and SUM() might be applied to all the rows in the partition, in which case the order of analysis
does not matter. The ORDER BY clause might be used inside the OVER() clause to define the ordering that applies
to functions such as LAG() and FIRST_VALUE().
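For example, the following sketch (using the stock_ticker table from the Window Clause examples later in this section) shows an order-sensitive function, where the ORDER BY inside OVER() determines which row LAG() treats as the previous one:
select stock_symbol, closing_date, closing_price,
  lag(closing_price, 1) over (partition by stock_symbol order by closing_date)
    as previous_close
from stock_ticker;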
Analytic functions are frequently used in fields such as finance and science to provide trend, outlier, and bucketed
analysis for large data sets. You might also see the term “window functions” in database literature, referring to the
sequence of rows (the “window”) that the function call applies to, particularly when the OVER clause includes a ROWS
or RANGE keyword.
The following sections describe the analytic query clauses and the pure analytic functions provided by Impala. For
usage information about aggregate functions in an analytic context, see Impala Aggregate Functions on page 479.
OVER Clause
The OVER clause is required for calls to pure analytic functions such as LEAD(), RANK(), and FIRST_VALUE(). When
you include an OVER clause with calls to aggregate functions such as MAX(), COUNT(), or SUM(), they operate as
analytic functions.
Syntax:
function(args) OVER([partition_by_clause] [order_by_clause [window_clause]])
partition_by_clause ::= PARTITION BY expr [, expr ...]
order_by_clause ::= ORDER BY expr [ASC | DESC] [NULLS FIRST | NULLS LAST] [, expr [ASC
| DESC] [NULLS FIRST | NULLS LAST] ...]
window_clause: See Window Clause
PARTITION BY clause:
The PARTITION BY clause acts much like the GROUP BY clause in the outermost block of a query. It divides the rows
into groups containing identical values in one or more columns. These logical groups are known as partitions. Throughout
the discussion of analytic functions, “partitions” refers to the groups produced by the PARTITION BY clause, not to
partitioned tables. However, note the following limitation that applies specifically to analytic function calls involving
partitioned tables.
In queries involving both analytic functions and partitioned tables, partition pruning only occurs for columns named
in the PARTITION BY clause of the analytic function call. For example, if an analytic function query has a clause such
as WHERE year=2016, the way to make the query prune all other YEAR partitions is to include PARTITION BY year
in the analytic function call; for example, OVER (PARTITION BY year,other_columns
other_analytic_clauses).
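For example, a minimal sketch assuming a hypothetical sales table partitioned by a year column; naming year in the PARTITION BY clause of the analytic call allows the WHERE year = 2016 predicate to prune the other YEAR partitions:
select customer_id, amount,
  sum(amount) over (partition by year, customer_id) as yearly_total
from sales
where year = 2016;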
The sequence of results from an analytic function “resets” for each new partition in the result set. That is, the set of
preceding or following rows considered by the analytic function always come from a single partition. Any MAX(),
SUM(), ROW_NUMBER(), and so on apply to each partition independently. Omit the PARTITION BY clause to apply
the analytic operation to all the rows in the table.
ORDER BY clause:
The ORDER BY clause works much like the ORDER BY clause in the outermost block of a query. It defines the order in
which rows are evaluated for the entire input set, or for each group produced by a PARTITION BY clause. You can
order by one or multiple expressions, and for each expression optionally choose ascending or descending order and
whether nulls come first or last in the sort order. Because this ORDER BY clause only defines the order in which rows
are evaluated, if you want the results to be output in a specific order, also include an ORDER BY clause in the outer
block of the query.
When the ORDER BY clause is omitted, the analytic function applies to all items in the group produced by the PARTITION
BY clause. When the ORDER BY clause is included, the analysis can apply to all or a subset of the items in the group,
depending on the optional window clause.
The order in which the rows are analyzed is only defined for those columns specified in ORDER BY clauses.
One difference between the analytic and outer uses of the ORDER BY clause: inside the OVER clause, ORDER BY 1 or
other integer value is interpreted as a constant sort value (effectively a no-op) rather than referring to column 1.
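For example (a sketch reusing the int_t table from the aggregate function examples), the two queries below are not equivalent: the first orders the window by the column x and produces a running count, while in the second ORDER BY 1 is a constant sort value, so every row in the partition is tied and each row sees the full count.
select x, count(x) over (order by x) as running_count from int_t;
select x, count(x) over (order by 1) as full_count from int_t;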
Window clause:
The window clause is only allowed in combination with an ORDER BY clause. If the ORDER BY clause is specified but
the window clause is not, the default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. See
Window Clause on page 508 for full details.
HBase considerations:
Because HBase tables are optimized for single-row lookups rather than full scans, analytic functions using the OVER()
clause are not recommended for HBase tables. Although such queries work, their performance is lower than on
comparable tables using HDFS data files.
Parquet considerations:
Analytic functions are very efficient for Parquet tables. The data that is examined during evaluation of the OVER()
clause comes from a specified set of columns, and the values for each column are arranged sequentially within each
data file.
Text table considerations:
Analytic functions are convenient to use with text tables for exploratory business intelligence. When the volume of
data is substantial, prefer to use Parquet tables for performance-critical analytic queries.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example shows how to synthesize a numeric sequence corresponding to all the rows in a table. The new
table has the same columns as the old one, plus an additional column ID containing the integers 1, 2, 3, and so on,
corresponding to the order of a TIMESTAMP column in the original table.
CREATE TABLE events_with_id AS
SELECT
row_number() OVER (ORDER BY date_and_time) AS id,
c1, c2, c3, c4
FROM events;
The following example shows how to determine the number of rows containing each value for a column. Unlike a
corresponding GROUP BY query, this one can analyze a single column and still return all values (not just the distinct
ones) from the other columns.
SELECT x, y, z,
count(*) OVER (PARTITION BY x) AS how_many_x
FROM t1;
Restrictions:
You cannot directly combine the DISTINCT operator with analytic function calls. You can put the analytic function call
in a WITH clause or an inline view, and apply the DISTINCT operator to its result set.
WITH t1 AS (SELECT x, sum(x) OVER (PARTITION BY x) AS total FROM t1)
SELECT DISTINCT x, total FROM t1;
Window Clause
Certain analytic functions accept an optional window clause, which makes the function analyze only certain rows
“around” the current row rather than all rows in the partition. For example, you can get a moving average by specifying
some number of preceding and following rows, or a running count or running total by specifying all rows up to the
current position. This clause can result in different analytic results for rows within the same partition.
The window clause is supported with the AVG(), COUNT(), FIRST_VALUE(), LAST_VALUE(), and SUM() functions.
For MAX() and MIN(), the window clause is only allowed if the start bound is UNBOUNDED PRECEDING.
Syntax:
ROWS BETWEEN [ { m | UNBOUNDED } PRECEDING | CURRENT ROW] [ AND [CURRENT ROW | { UNBOUNDED
| n } FOLLOWING] ]
RANGE BETWEEN [ {m | UNBOUNDED } PRECEDING | CURRENT ROW] [ AND [CURRENT ROW | { UNBOUNDED
| n } FOLLOWING] ]
ROWS BETWEEN defines the size of the window in terms of the indexes of the rows in the result set. The size of the
window is predictable based on the clauses and the position within the result set.
RANGE BETWEEN does not currently support numeric arguments to define a variable-size sliding window.
Currently, Impala supports only some combinations of arguments to the RANGE clause:
• RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW (the default when ORDER BY is specified and
the window clause is omitted)
• RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING
• RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
When RANGE is used, CURRENT ROW includes not just the current row but all rows that are tied with the current row
based on the ORDER BY expressions.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following examples show financial data for a fictional stock symbol JDR. The closing price moves up and down
each day.
create table stock_ticker (stock_symbol string, closing_price decimal(8,2), closing_date
timestamp);
...load some data...
select * from stock_ticker order by stock_symbol, closing_date;
+--------------+---------------+---------------------+
| stock_symbol | closing_price | closing_date |
+--------------+---------------+---------------------+
| JDR | 12.86 | 2014-10-02 00:00:00 |
| JDR | 12.89 | 2014-10-03 00:00:00 |
| JDR | 12.94 | 2014-10-04 00:00:00 |
| JDR | 12.55 | 2014-10-05 00:00:00 |
| JDR | 14.03 | 2014-10-06 00:00:00 |
| JDR | 14.75 | 2014-10-07 00:00:00 |
| JDR | 13.98 | 2014-10-08 00:00:00 |
+--------------+---------------+---------------------+
The queries use analytic functions with window clauses to compute moving averages of the closing price. For example,
ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING produces an average of the value from a 3-day span, producing
a different value for each row. The first row, which has no preceding row, is averaged only with itself and the row following
it. If the table contained more than one stock symbol, the PARTITION BY clause would limit the window for the
moving average to only consider the prices for a single stock.
select stock_symbol, closing_date, closing_price,
avg(closing_price) over (partition by stock_symbol order by closing_date
rows between 1 preceding and 1 following) as moving_average
from stock_ticker;
+--------------+---------------------+---------------+----------------+
| stock_symbol | closing_date | closing_price | moving_average |
+--------------+---------------------+---------------+----------------+
| JDR | 2014-10-02 00:00:00 | 12.86 | 12.87 |
| JDR | 2014-10-03 00:00:00 | 12.89 | 12.89 |
| JDR | 2014-10-04 00:00:00 | 12.94 | 12.79 |
| JDR | 2014-10-05 00:00:00 | 12.55 | 13.17 |
| JDR | 2014-10-06 00:00:00 | 14.03 | 13.77 |
| JDR | 2014-10-07 00:00:00 | 14.75 | 14.25 |
| JDR | 2014-10-08 00:00:00 | 13.98 | 14.36 |
+--------------+---------------------+---------------+----------------+
The clause ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW produces a cumulative moving average,
from the earliest data up to the value for each day.
select stock_symbol, closing_date, closing_price,
avg(closing_price) over (partition by stock_symbol order by closing_date
rows between unbounded preceding and current row) as moving_average
from stock_ticker;
+--------------+---------------------+---------------+----------------+
| stock_symbol | closing_date | closing_price | moving_average |
+--------------+---------------------+---------------+----------------+
| JDR | 2014-10-02 00:00:00 | 12.86 | 12.86 |
| JDR | 2014-10-03 00:00:00 | 12.89 | 12.87 |
| JDR | 2014-10-04 00:00:00 | 12.94 | 12.89 |
| JDR | 2014-10-05 00:00:00 | 12.55 | 12.81 |
| JDR | 2014-10-06 00:00:00 | 14.03 | 13.05 |
| JDR | 2014-10-07 00:00:00 | 14.75 | 13.33 |
| JDR | 2014-10-08 00:00:00 | 13.98 | 13.42 |
+--------------+---------------------+---------------+----------------+
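As noted above, when RANGE is used, CURRENT ROW takes in every row that ties with the current row on the ORDER BY expressions, while ROWS counts physical rows. The following sketch contrasts the two kinds of running total; it assumes a hypothetical daily_sales table (not one of the tables used elsewhere in this guide) that contains more than one row per day, so tied rows share the same RANGE-based total but get distinct ROWS-based totals.
-- Sketch only: daily_sales(day, amount) is a hypothetical table with several rows per day.
select day, amount,
  sum(amount) over (order by day
    range between unbounded preceding and current row) as running_total_by_range,
  sum(amount) over (order by day
    rows between unbounded preceding and current row) as running_total_by_rows
from daily_sales
order by day;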
AVG Function - Analytic Context
You can include an OVER clause with a call to this function to use it as an analytic function. See AVG Function on page
481 for details and examples.
COUNT Function - Analytic Context
You can include an OVER clause with a call to this function to use it as an analytic function. See COUNT Function on
page 485 for details and examples.
CUME_DIST Function (CDH 5.5 or higher only)
Returns the cumulative distribution of a value. The value for each row in the result set is greater than 0 and less than
or equal to 1.
Syntax:
CUME_DIST (expr)
OVER ([partition_by_clause] order_by_clause)
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
Within each partition of the result set, the CUME_DIST() value represents an ascending sequence that ends at 1. Each
value represents the proportion of rows in the partition whose values are less than or equal to the value in the current
row.
If the sequence of input values contains ties, the CUME_DIST() results are identical for the tied values.
Impala only supports the CUME_DIST() function in an analytic context, not as a regular aggregate function.
Examples:
This example uses a table with 9 rows. The CUME_DIST() function evaluates the entire table because there is no
PARTITION BY clause, with the rows ordered by the weight of the animal. The sequence of values shows that 1/9 of
the values are less than or equal to the lightest animal (mouse), 2/9 of the values are less than or equal to the
second-lightest animal, and so on up to the heaviest animal (elephant), where 9/9 of the rows are less than or equal
to its weight.
create table animals (name string, kind string, kilos decimal(9,3));
insert into animals values
('Elephant', 'Mammal', 4000), ('Giraffe', 'Mammal', 1200), ('Mouse', 'Mammal', 0.020),
('Condor', 'Bird', 15), ('Horse', 'Mammal', 500), ('Owl', 'Bird', 2.5),
('Ostrich', 'Bird', 145), ('Polar bear', 'Mammal', 700), ('Housecat', 'Mammal', 5);
select name, cume_dist() over (order by kilos) from animals;
+------------+-----------------------+
| name | cume_dist() OVER(...) |
+------------+-----------------------+
| Elephant | 1 |
| Giraffe | 0.8888888888888888 |
| Polar bear | 0.7777777777777778 |
| Horse | 0.6666666666666666 |
| Ostrich | 0.5555555555555556 |
| Condor | 0.4444444444444444 |
| Housecat | 0.3333333333333333 |
| Owl | 0.2222222222222222 |
| Mouse | 0.1111111111111111 |
+------------+-----------------------+
Using a PARTITION BY clause produces a separate sequence for each partition group, in this case one for mammals
and one for birds. Because there are 3 birds and 6 mammals, the sequence illustrates how 1/3 of the “Bird” rows have
a kilos value that is less than or equal to the lightest bird, 1/6 of the “Mammal” rows have a kilos value that is less
than or equal to the lightest mammal, and so on until both the heaviest bird and heaviest mammal have a CUME_DIST()
value of 1.
select name, kind, cume_dist() over (partition by kind order by kilos) from animals;
+------------+--------+-----------------------+
| name | kind | cume_dist() OVER(...) |
+------------+--------+-----------------------+
| Ostrich | Bird | 1 |
| Condor | Bird | 0.6666666666666666 |
| Owl | Bird | 0.3333333333333333 |
| Elephant | Mammal | 1 |
| Giraffe | Mammal | 0.8333333333333334 |
| Polar bear | Mammal | 0.6666666666666666 |
| Horse | Mammal | 0.5 |
| Housecat | Mammal | 0.3333333333333333 |
| Mouse | Mammal | 0.1666666666666667 |
+------------+--------+-----------------------+
We can reverse the ordering within each partition group by using an ORDER BY ... DESC clause within the OVER()
clause. Now the lightest (smallest value of kilos) animal of each kind has a CUME_DIST() value of 1.
select name, kind, cume_dist() over (partition by kind order by kilos desc) from animals;
+------------+--------+-----------------------+
| name | kind | cume_dist() OVER(...) |
+------------+--------+-----------------------+
| Owl | Bird | 1 |
| Condor | Bird | 0.6666666666666666 |
| Ostrich | Bird | 0.3333333333333333 |
| Mouse | Mammal | 1 |
| Housecat | Mammal | 0.8333333333333334 |
| Horse | Mammal | 0.6666666666666666 |
| Polar bear | Mammal | 0.5 |
| Giraffe | Mammal | 0.3333333333333333 |
| Elephant | Mammal | 0.1666666666666667 |
+------------+--------+-----------------------+
The following example manufactures some rows with identical values in the kilos column, to demonstrate how the
results look in case of tie values. For simplicity, it only shows the CUME_DIST() sequence for the “Bird” rows. Now
with 3 rows all with a value of 15, all of those rows have the same CUME_DIST() value. 4/5 of the rows have a value
for kilos that is less than or equal to 15.
insert into animals values ('California Condor', 'Bird', 15), ('Andean Condor', 'Bird', 15);
select name, kind, cume_dist() over (order by kilos) from animals where kind = 'Bird';
+-------------------+------+-----------------------+
| name | kind | cume_dist() OVER(...) |
+-------------------+------+-----------------------+
| Ostrich | Bird | 1 |
| Condor | Bird | 0.8 |
| California Condor | Bird | 0.8 |
| Andean Condor | Bird | 0.8 |
| Owl | Bird | 0.2 |
+-------------------+------+-----------------------+
The following example shows how to use an ORDER BY clause in the outer block to order the result set in case of ties.
Here, all the “Bird” rows are together, then in descending order by the result of the CUME_DIST() function, and all
tied CUME_DIST() values are ordered by the animal name.
select name, kind, cume_dist() over (partition by kind order by kilos) as ordering
from animals
where
kind = 'Bird'
order by kind, ordering desc, name;
+-------------------+------+----------+
| name | kind | ordering |
+-------------------+------+----------+
| Ostrich | Bird | 1 |
| Andean Condor | Bird | 0.8 |
| California Condor | Bird | 0.8 |
| Condor | Bird | 0.8 |
| Owl | Bird | 0.2 |
+-------------------+------+----------+
DENSE_RANK Function
Returns an ascending sequence of integers, starting with 1. The output sequence produces duplicate integers for
duplicate values of the ORDER BY expressions. After generating duplicate output values for the “tied” input values,
the function continues the sequence with the next higher integer. Therefore, the sequence contains duplicates but no
gaps when the input contains duplicates. Starts the sequence over for each group produced by the PARTITION BY clause.
Syntax:
DENSE_RANK() OVER([partition_by_clause] order_by_clause)
The PARTITION BY clause is optional. The ORDER BY clause is required. The window clause is not allowed.
Usage notes:
Often used for top-N and bottom-N queries. For example, it could produce a “top 10” report including all the items
with the 10 highest values, even if several items tied for 1st place.
Similar to ROW_NUMBER and RANK. These functions differ in how they treat duplicate combinations of values.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example demonstrates how the DENSE_RANK() function identifies where each value “places” in the
result set, producing the same result for duplicate values, but with a strict sequence from 1 to the number of groups.
For example, when results are ordered by the X column, both 1 values are tied for first; both 2 values are tied for
second; and so on.
select x, dense_rank() over(order by x) as rank, property from int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 1 | 1 | square |
| 1 | 1 | odd |
| 2 | 2 | even |
| 2 | 2 | prime |
| 3 | 3 | prime |
| 3 | 3 | odd |
| 4 | 4 | even |
| 4 | 4 | square |
| 5 | 5 | odd |
| 5 | 5 | prime |
| 6 | 6 | even |
| 6 | 6 | perfect |
| 7 | 7 | lucky |
| 7 | 7 | lucky |
| 7 | 7 | lucky |
| 7 | 7 | odd |
| 7 | 7 | prime |
| 8 | 8 | even |
| 9 | 9 | square |
| 9 | 9 | odd |
| 10 | 10 | round |
| 10 | 10 | even |
+----+------+----------+
The following examples show how the DENSE_RANK() function is affected by the PARTITION BY clause within the
OVER() clause.
Partitioning by the PROPERTY column groups all the even, odd, and so on values together, and DENSE_RANK() returns
the place of each value within the group, producing several ascending sequences.
select x, dense_rank() over(partition by property order by x) as rank, property from
int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 2 | 1 | even |
| 4 | 2 | even |
| 6 | 3 | even |
| 8 | 4 | even |
| 10 | 5 | even |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 1 | 1 | odd |
| 3 | 2 | odd |
| 5 | 3 | odd |
| 7 | 4 | odd |
| 9 | 5 | odd |
| 6 | 1 | perfect |
| 2 | 1 | prime |
| 3 | 2 | prime |
| 5 | 3 | prime |
| 7 | 4 | prime |
| 10 | 1 | round |
| 1 | 1 | square |
| 4 | 2 | square |
| 9 | 3 | square |
+----+------+----------+
Partitioning by the X column groups all the duplicate numbers together and returns the place of each value within the
group. Most X values occur only once or twice, so DENSE_RANK() designates them as first or second within their group;
the value 7, which occurs five times, produces the ranks 1 through 3 because the three “lucky” rows tie for first place.
select x, dense_rank() over(partition by x order by property) as rank, property from
int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 1 | 1 | odd |
| 1 | 2 | square |
| 2 | 1 | even |
| 2 | 2 | prime |
| 3 | 1 | odd |
| 3 | 2 | prime |
| 4 | 1 | even |
| 4 | 2 | square |
| 5 | 1 | odd |
| 5 | 2 | prime |
| 6 | 1 | even |
| 6 | 2 | perfect |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 2 | odd |
| 7 | 3 | prime |
| 8 | 1 | even |
| 9 | 1 | odd |
| 9 | 2 | square |
| 10 | 1 | even |
| 10 | 2 | round |
+----+------+----------+
The following example shows how DENSE_RANK() produces a continuous sequence while still allowing for ties. In this
case, Croesus and Midas both have the second largest fortune, while Crassus has the third largest. (In RANK Function
on page 521, you see a similar query with the RANK() function that shows that while Crassus has the third largest
fortune, he is the fourth richest person.)
select dense_rank() over (order by net_worth desc) as placement, name, net_worth from
wealth order by placement, name;
+-----------+---------+---------------+
| placement | name | net_worth |
+-----------+---------+---------------+
| 1 | Solomon | 2000000000.00 |
| 2 | Croesus | 1000000000.00 |
| 2 | Midas | 1000000000.00 |
| 3 | Crassus | 500000000.00 |
| 4 | Scrooge | 80000000.00 |
+-----------+---------+---------------+
Related information:
RANK Function on page 521, ROW_NUMBER Function on page 523
FIRST_VALUE Function
Returns the expression value from the first row in the window. If your table has null values, you can use the IGNORE
NULLS clause to return the first non-null value from the window. This same value is repeated for all result rows for the
group. The return value is NULL if the input expression is NULL.
Syntax:
FIRST_VALUE(expr [IGNORE NULLS]) OVER([partition_by_clause] order_by_clause
[window_clause])
The PARTITION BY clause is optional. The ORDER BY clause is required. The window clause is optional.
Usage notes:
If any duplicate values occur in the tuples evaluated by the ORDER BY clause, the result of this function is not
deterministic. Consider adding additional ORDER BY columns to ensure consistent ordering.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example shows a table with a wide variety of country-appropriate greetings. For consistency, we want
to standardize on a single greeting for each country. The FIRST_VALUE() function helps to produce a mail merge
report where every person from the same country is addressed with the same greeting.
select name, country, greeting from mail_merge;
+---------+---------+--------------+
| name | country | greeting |
+---------+---------+--------------+
| Pete | USA | Hello |
| John | USA | Hi |
| Boris | Germany | Guten tag |
| Michael | Germany | Guten morgen |
| Bjorn | Sweden | Hej |
| Mats | Sweden | Tja |
+---------+---------+--------------+
select country, name,
first_value(greeting)
over (partition by country order by name, greeting) as greeting
from mail_merge;
+---------+---------+-----------+
| country | name | greeting |
+---------+---------+-----------+
| Germany | Boris | Guten tag |
| Germany | Michael | Guten tag |
| Sweden | Bjorn | Hej |
| Sweden | Mats | Hej |
| USA | John | Hi |
| USA | Pete | Hi |
+---------+---------+-----------+
Changing the order in which the names are evaluated changes which greeting is applied to each group.
select country, name,
first_value(greeting)
over (partition by country order by name desc, greeting) as greeting
from mail_merge;
+---------+---------+--------------+
| country | name | greeting |
+---------+---------+--------------+
| Germany | Michael | Guten morgen |
| Germany | Boris | Guten morgen |
| Sweden | Mats | Tja |
| Sweden | Bjorn | Tja |
| USA | Pete | Hello |
| USA | John | Hello |
+---------+---------+--------------+
If you introduce null values in the mail_merge table, the FIRST_VALUE() function will produce a different result
with the IGNORE NULLS clause.
select * from mail_merge;
+---------+---------+--------------+
| name | country | greeting |
+---------+---------+--------------+
| Boris | Germany | Guten tag |
| Peng | China | Nihao |
| Mats | Sweden | Tja |
| Bjorn | Sweden | Hej |
| Kei | Japan | NULL |
| Li | China | NULL |
| John | USA | Hi |
| Pete | USA | Hello |
| Michael | Germany | Guten morgen |
+---------+---------+--------------+
select country, name,
first_value(greeting ignore nulls)
over (partition by country order by name,greeting) as greeting
from mail_merge;
+---------+---------+-----------+
| country | name | greeting |
+---------+---------+-----------+
| Japan | Kei | NULL |
| Germany | Boris | Guten tag |
| Germany | Michael | Guten tag |
| China | Li | NULL |
| China | Peng | Nihao |
| Sweden | Bjorn | Hej |
| Sweden | Mats | Hej |
| USA | John | Hi |
| USA | Pete | Hi |
+---------+---------+-----------+
Changing the order in which the names are evaluated changes the result because null values are now encountered in
a different order.
select country, name,
first_value(greeting ignore nulls)
over (partition by country order by name desc, greeting) as greeting
from mail_merge;
+---------+---------+--------------+
| country | name | greeting |
+---------+---------+--------------+
| Japan | Kei | NULL |
| China | Peng | Nihao |
| China | Li | Nihao |
| Sweden | Mats | Tja |
| Sweden | Bjorn | Tja |
| USA | Pete | Hello |
| USA | John | Hello |
| Germany | Michael | Guten morgen |
| Germany | Boris | Guten morgen |
+---------+---------+--------------+
Related information:
LAST_VALUE Function on page 517
LAG Function
This function returns the value of an expression using column values from a preceding row. You specify an integer
offset, which designates a row position some number of rows previous to the current row. Any column references in
the expression argument refer to column values from that prior row. Typically, the table contains a time sequence or
numeric sequence column that clearly distinguishes the ordering of the rows.
Syntax:
LAG (expr [, offset] [, default])
OVER ([partition_by_clause] order_by_clause)
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
Sometimes used as an alternative to a self-join.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example uses the same stock data created in Window Clause on page 508. For each day, the query prints
the closing price alongside the previous day's closing price. The first row for each stock symbol has no previous row,
so that LAG() value is NULL.
select stock_symbol, closing_date, closing_price,
lag(closing_price,1) over (partition by stock_symbol order by closing_date) as
"yesterday closing"
from stock_ticker
order by closing_date;
+--------------+---------------------+---------------+-------------------+
| stock_symbol | closing_date | closing_price | yesterday closing |
+--------------+---------------------+---------------+-------------------+
| JDR | 2014-09-13 00:00:00 | 12.86 | NULL |
| JDR | 2014-09-14 00:00:00 | 12.89 | 12.86 |
| JDR | 2014-09-15 00:00:00 | 12.94 | 12.89 |
| JDR | 2014-09-16 00:00:00 | 12.55 | 12.94 |
| JDR | 2014-09-17 00:00:00 | 14.03 | 12.55 |
| JDR | 2014-09-18 00:00:00 | 14.75 | 14.03 |
| JDR | 2014-09-19 00:00:00 | 13.98 | 14.75 |
+--------------+---------------------+---------------+-------------------+
The following example does an arithmetic operation between the current row and a value from the previous row, to
produce a delta value for each day. This example also demonstrates how ORDER BY works independently in the
different parts of the query. The ORDER BY closing_date in the OVER clause makes the query analyze the rows in
chronological order. Then the outer query block uses ORDER BY closing_date DESC to present the results with
the most recent date first.
select stock_symbol, closing_date, closing_price,
cast(
closing_price - lag(closing_price,1) over
(partition by stock_symbol order by closing_date)
as decimal(8,2)
)
as "change from yesterday"
from stock_ticker
order by closing_date desc;
+--------------+---------------------+---------------+-----------------------+
| stock_symbol | closing_date | closing_price | change from yesterday |
+--------------+---------------------+---------------+-----------------------+
| JDR | 2014-09-19 00:00:00 | 13.98 | -0.76 |
| JDR | 2014-09-18 00:00:00 | 14.75 | 0.72 |
| JDR | 2014-09-17 00:00:00 | 14.03 | 1.47 |
| JDR | 2014-09-16 00:00:00 | 12.55 | -0.38 |
| JDR | 2014-09-15 00:00:00 | 12.94 | 0.04 |
| JDR | 2014-09-14 00:00:00 | 12.89 | 0.03 |
| JDR | 2014-09-13 00:00:00 | 12.86 | NULL |
+--------------+---------------------+---------------+-----------------------+
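The optional default argument shown in the syntax substitutes a value for the NULL that would otherwise appear when there is no preceding row. The following is a minimal sketch reusing the stock_ticker table; the -1 sentinel is an arbitrary choice, and the CAST keeps it the same type as the closing_price column.
-- Sketch only: the third argument to LAG() replaces the NULL for each partition's first row.
select stock_symbol, closing_date, closing_price,
  lag(closing_price, 1, cast(-1 as decimal(8,2)))
    over (partition by stock_symbol order by closing_date) as "yesterday closing"
from stock_ticker
order by closing_date;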
Related information:
This function is the converse of LEAD Function on page 518.
LAST_VALUE Function
Returns the expression value from the last row in the window. If your table has null values, you can use the IGNORE
NULLS clause to return the last non-null value from the window. This same value is repeated for all result rows for the
group. The return value is NULL if the input expression is NULL.
Syntax:
LAST_VALUE(expr [IGNORE NULLS]) OVER([partition_by_clause] order_by_clause
[window_clause])
The PARTITION BY clause is optional. The ORDER BY clause is required. The window clause is optional.
Usage notes:
If any duplicate values occur in the tuples evaluated by the ORDER BY clause, the result of this function is not
deterministic. Consider adding additional ORDER BY columns to ensure consistent ordering.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example uses the same MAIL_MERGE table as in the example for FIRST_VALUE Function on page 514.
Because the default window when ORDER BY is used is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, the
query specifies ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING to look ahead to subsequent rows and
find the last value for each country.
select country, name,
last_value(greeting) over (
partition by country order by name, greeting
rows between unbounded preceding and unbounded following
) as greeting
from mail_merge;
+---------+---------+--------------+
| country | name | greeting |
+---------+---------+--------------+
| Germany | Boris | Guten morgen |
| Germany | Michael | Guten morgen |
| Sweden | Bjorn | Tja |
| Sweden | Mats | Tja |
| USA | John | Hello |
| USA | Pete | Hello |
+---------+---------+--------------+
Introducing null values into the MAIL_MERGE table as in the example for FIRST_VALUE Function on page 514, the result
set changes when you use the IGNORE NULLS clause.
select * from mail_merge;
+---------+---------+--------------+
| name | country | greeting |
+---------+---------+--------------+
| Kei | Japan | NULL |
| Boris | Germany | Guten tag |
| Li | China | NULL |
| Michael | Germany | Guten morgen |
| Bjorn | Sweden | Hej |
| Peng | China | Nihao |
| Pete | USA | Hello |
| Mats | Sweden | Tja |
| John | USA | Hi |
+---------+---------+--------------+
select country, name,
last_value(greeting ignore nulls) over (
partition by country order by name, greeting
rows between unbounded preceding and unbounded following
) as greeting
from mail_merge;
+---------+---------+--------------+
| country | name | greeting |
+---------+---------+--------------+
| Japan | Kei | NULL |
| Germany | Boris | Guten morgen |
| Germany | Michael | Guten morgen |
| China | Li | Nihao |
| China | Peng | Nihao |
| Sweden | Bjorn | Tja |
| Sweden | Mats | Tja |
| USA | John | Hello |
| USA | Pete | Hello |
+---------+---------+--------------+
Related information:
FIRST_VALUE Function on page 514
LEAD Function
This function returns the value of an expression using column values from a following row. You specify an integer
offset, which designates a row position some number of rows after the current row. Any column references in the
expression argument refer to column values from that later row. Typically, the table contains a time sequence or
numeric sequence column that clearly distinguishes the ordering of the rows.
Syntax:
LEAD (expr [, offset] [, default])
OVER ([partition_by_clause] order_by_clause)
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
Sometimes used as an alternative to a self-join.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example uses the same stock data created in Window Clause on page 508. The query analyzes the closing
price for a stock symbol, and for each day evaluates if the closing price for the following day is higher or lower.
select stock_symbol, closing_date, closing_price,
case
(lead(closing_price,1)
over (partition by stock_symbol order by closing_date)
- closing_price) > 0
when true then "higher"
when false then "flat or lower"
end as "trending"
from stock_ticker
order by closing_date;
+--------------+---------------------+---------------+---------------+
| stock_symbol | closing_date | closing_price | trending |
+--------------+---------------------+---------------+---------------+
| JDR | 2014-09-13 00:00:00 | 12.86 | higher |
| JDR | 2014-09-14 00:00:00 | 12.89 | higher |
| JDR | 2014-09-15 00:00:00 | 12.94 | flat or lower |
| JDR | 2014-09-16 00:00:00 | 12.55 | higher |
| JDR | 2014-09-17 00:00:00 | 14.03 | higher |
| JDR | 2014-09-18 00:00:00 | 14.75 | flat or lower |
| JDR | 2014-09-19 00:00:00 | 13.98 | NULL |
+--------------+---------------------+---------------+---------------+
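The offset argument does not have to be 1. The following sketch, again reusing the stock_ticker table, looks two rows ahead; the last two rows for each symbol return NULL because no row exists that far ahead.
-- Sketch only: an offset of 2 compares each day with the closing price two rows later.
select stock_symbol, closing_date, closing_price,
  lead(closing_price, 2)
    over (partition by stock_symbol order by closing_date) as "price two days later"
from stock_ticker
order by closing_date;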
Related information:
This function is the converse of LAG Function on page 516.
MAX Function - Analytic Context
You can include an OVER clause with a call to this function to use it as an analytic function. See MAX Function on page
490 for details and examples.
MIN Function - Analytic Context
You can include an OVER clause with a call to this function to use it as an analytic function. See MIN Function on page
493 for details and examples.
NTILE Function (CDH 5.5 or higher only)
Returns the “bucket number” associated with each row, between 1 and the value of an expression. For example,
creating 100 buckets puts the lowest 1% of values in the first bucket, while creating 10 buckets puts the lowest 10%
of values in the first bucket. Each partition can have a different number of buckets.
Syntax:
NTILE (expr [, offset ...])
OVER ([partition_by_clause] order_by_clause)
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
The “ntile” name is derived from the practice of dividing result sets into fourths (quartile), tenths (decile), and so on.
The NTILE() function divides the result set based on an arbitrary percentile value.
The number of buckets must be a positive integer.
The number of items in each bucket is identical or almost so, varying by at most 1. If the number of items does not
divide evenly between the buckets, each of the first N buckets receives one of the N leftover items.
If the number of buckets is greater than the number of input rows in the partition, each input row is placed in its
own bucket and the remaining buckets are empty.
Examples:
The following example divides the group of animals into 4 buckets based on their weight. The ORDER BY ...
DESC clause in the OVER() clause means that the heaviest 25% are in the first group, and the lightest 25% are in the
fourth group. (The ORDER BY in the outermost part of the query shows how you can order the final result set
independently from the order in which the rows are evaluated by the OVER() clause.) Because there are 9 rows in the
group, divided into 4 buckets, the first bucket receives the extra item.
create table animals (name string, kind string, kilos decimal(9,3));
insert into animals values
('Elephant', 'Mammal', 4000), ('Giraffe', 'Mammal', 1200), ('Mouse', 'Mammal', 0.020),
('Condor', 'Bird', 15), ('Horse', 'Mammal', 500), ('Owl', 'Bird', 2.5),
('Ostrich', 'Bird', 145), ('Polar bear', 'Mammal', 700), ('Housecat', 'Mammal', 5);
select name, ntile(4) over (order by kilos desc) as quarter
from animals
order by quarter desc;
+------------+---------+
| name | quarter |
+------------+---------+
| Owl | 4 |
| Mouse | 4 |
| Condor | 3 |
| Housecat | 3 |
| Horse | 2 |
| Ostrich | 2 |
| Elephant | 1 |
| Giraffe | 1 |
| Polar bear | 1 |
+------------+---------+
The following examples show how the PARTITION BY clause works for the NTILE() function. Here, we divide each kind
of animal (mammal or bird) into 2 buckets, the heavier half and the lighter half.
select name, kind, ntile(2) over (partition by kind order by kilos desc) as half
from animals
order by kind;
+------------+--------+------+
| name | kind | half |
+------------+--------+------+
| Ostrich | Bird | 1 |
| Condor | Bird | 1 |
| Owl | Bird | 2 |
| Elephant | Mammal | 1 |
| Giraffe | Mammal | 1 |
| Polar bear | Mammal | 1 |
| Horse | Mammal | 2 |
| Housecat | Mammal | 2 |
| Mouse | Mammal | 2 |
+------------+--------+------+
Again, the result set can be ordered independently from the analytic evaluation. This next example lists all the animals
heaviest to lightest, showing that elephant and giraffe are in the “top half” of mammals by weight, while housecat and
mouse are in the “bottom half”.
select name, kind, ntile(2) over (partition by kind order by kilos desc) as half
from animals
order by kilos desc;
+------------+--------+------+
| name | kind | half |
+------------+--------+------+
| Elephant | Mammal | 1 |
| Giraffe | Mammal | 1 |
| Polar bear | Mammal | 1 |
| Horse | Mammal | 2 |
| Ostrich | Bird | 1 |
| Condor | Bird | 1 |
| Housecat | Mammal | 2 |
| Owl | Bird | 2 |
| Mouse | Mammal | 2 |
+------------+--------+------+
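As described in the usage notes, when the number of buckets is larger than the number of rows, each row lands in a bucket of its own and the higher-numbered buckets remain empty. The following is a minimal sketch against the same 9-row ANIMALS table:
-- Sketch only: with 20 buckets and 9 rows, each animal gets its own bucket (1 through 9)
-- and buckets 10 through 20 are left empty.
select name, kind, ntile(20) over (order by kilos desc) as bucket
from animals
order by bucket;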
PERCENT_RANK Function (CDH 5.5 or higher only)
Syntax:
PERCENT_RANK (expr)
OVER ([partition_by_clause] order_by_clause)
Calculates the rank, expressed as a percentage, of each row within a group of rows. If rank is the value for that same
row from the RANK() function (from 1 to the total number of rows in the partition group), then the PERCENT_RANK()
value is calculated as (rank - 1) / (rows_in_group - 1) . If there is only a single item in the partition group,
its PERCENT_RANK() value is 0.
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
This function is similar to the RANK and CUME_DIST() functions: it returns an ascending sequence representing the
position of each row within the rows of the same partition group. The actual numeric sequence is calculated differently,
and the handling of duplicate (tied) values is different.
The return values range from 0 to 1 inclusive. The first row in each partition group always has the value 0. A NULL value
is considered the lowest possible value. In the case of duplicate input values, all the corresponding rows in the result
set have an identical value: the lowest PERCENT_RANK() value of those tied rows. (In contrast to CUME_DIST(),
where all tied rows have the highest CUME_DIST() value.)
Examples:
The following example uses the same ANIMALS table as the examples for CUME_DIST() and NTILE(), with a few
additional rows to illustrate the results where some values are NULL or there is only a single row in a partition group.
insert into animals values ('Komodo dragon', 'Reptile', 70);
insert into animals values ('Unicorn', 'Mythical', NULL);
insert into animals values ('Fire-breathing dragon', 'Mythical', NULL);
As with CUME_DIST(), there is an ascending sequence for each kind of animal. For example, the “Birds” and “Mammals”
rows each have a PERCENT_RANK() sequence that ranges from 0 to 1. The “Reptile” row has a PERCENT_RANK() of
0 because that partition group contains only a single item. Both “Mythical” animals have a PERCENT_RANK() of 0
because a NULL is considered the lowest value within its partition group.
select name, kind, percent_rank() over (partition by kind order by kilos) from animals;
+-----------------------+----------+--------------------------+
| name | kind | percent_rank() OVER(...) |
+-----------------------+----------+--------------------------+
| Mouse | Mammal | 0 |
| Housecat | Mammal | 0.2 |
| Horse | Mammal | 0.4 |
| Polar bear | Mammal | 0.6 |
| Giraffe | Mammal | 0.8 |
| Elephant | Mammal | 1 |
| Komodo dragon | Reptile | 0 |
| Owl | Bird | 0 |
| California Condor | Bird | 0.25 |
| Andean Condor | Bird | 0.25 |
| Condor | Bird | 0.25 |
| Ostrich | Bird | 1 |
| Fire-breathing dragon | Mythical | 0 |
| Unicorn | Mythical | 0 |
+-----------------------+----------+--------------------------+
RANK Function
Returns an ascending sequence of integers, starting with 1. The output sequence produces duplicate integers for
duplicate values of the ORDER BY expressions. After generating duplicate output values for the “tied” input values,
the function increments the sequence by the number of tied values. Therefore, the sequence contains both duplicates
and gaps when the input contains duplicates. Starts the sequence over for each group produced by the PARTITION BY
clause.
Syntax:
RANK() OVER([partition_by_clause] order_by_clause)
The PARTITION BY clause is optional. The ORDER BY clause is required. The window clause is not allowed.
Usage notes:
Often used for top-N and bottom-N queries. For example, it could produce a “top 10” report including several items
that were tied for 10th place.
Similar to ROW_NUMBER and DENSE_RANK. These functions differ in how they treat duplicate combinations of values.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example demonstrates how the RANK() function identifies where each value “places” in the result set,
producing the same result for duplicate values, and skipping values in the sequence to account for the number of
duplicates. For example, when results are ordered by the X column, both 1 values are tied for first; both 2 values are
tied for third; and so on.
select x, rank() over(order by x) as rank, property from int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 1 | 1 | square |
| 1 | 1 | odd |
| 2 | 3 | even |
| 2 | 3 | prime |
| 3 | 5 | prime |
| 3 | 5 | odd |
| 4 | 7 | even |
| 4 | 7 | square |
| 5 | 9 | odd |
| 5 | 9 | prime |
| 6 | 11 | even |
| 6 | 11 | perfect |
| 7 | 13 | lucky |
| 7 | 13 | lucky |
| 7 | 13 | lucky |
| 7 | 13 | odd |
| 7 | 13 | prime |
| 8 | 18 | even |
| 9 | 19 | square |
| 9 | 19 | odd |
| 10 | 21 | round |
| 10 | 21 | even |
+----+------+----------+
The following examples show how the RANK() function is affected by the PARTITION BY clause within the OVER()
clause.
Partitioning by the PROPERTY column groups all the even, odd, and so on values together, and RANK() returns the
place of each value within the group, producing several ascending sequences.
select x, rank() over(partition by property order by x) as rank, property from int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 2 | 1 | even |
| 4 | 2 | even |
| 6 | 3 | even |
| 8 | 4 | even |
| 10 | 5 | even |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 1 | 1 | odd |
| 3 | 2 | odd |
| 5 | 3 | odd |
| 7 | 4 | odd |
| 9 | 5 | odd |
| 6 | 1 | perfect |
| 2 | 1 | prime |
| 3 | 2 | prime |
| 5 | 3 | prime |
| 7 | 4 | prime |
| 10 | 1 | round |
| 1 | 1 | square |
| 4 | 2 | square |
| 9 | 3 | square |
+----+------+----------+
Partitioning by the X column groups all the duplicate numbers together and returns the place of each value within the
group. Most X values occur only once or twice, so RANK() designates them as first or second within their group; the
value 7, which occurs five times, produces the ranks 1, 4, and 5 because the three “lucky” rows tie for first place.
select x, rank() over(partition by x order by property) as rank, property from int_t;
+----+------+----------+
| x | rank | property |
+----+------+----------+
| 1 | 1 | odd |
| 1 | 2 | square |
| 2 | 1 | even |
| 2 | 2 | prime |
| 3 | 1 | odd |
| 3 | 2 | prime |
| 4 | 1 | even |
| 4 | 2 | square |
| 5 | 1 | odd |
| 5 | 2 | prime |
| 6 | 1 | even |
| 6 | 2 | perfect |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 1 | lucky |
| 7 | 4 | odd |
| 7 | 5 | prime |
| 8 | 1 | even |
| 9 | 1 | odd |
| 9 | 2 | square |
| 10 | 1 | even |
| 10 | 2 | round |
+----+------+----------+
The following example shows how a magazine might prepare a list of history's wealthiest people. Croesus and Midas
are tied for second, then Crassus is fourth.
select rank() over (order by net_worth desc) as rank, name, net_worth from wealth order
by rank, name;
+------+---------+---------------+
| rank | name | net_worth |
+------+---------+---------------+
| 1 | Solomon | 2000000000.00 |
| 2 | Croesus | 1000000000.00 |
| 2 | Midas | 1000000000.00 |
| 4 | Crassus | 500000000.00 |
| 5 | Scrooge | 80000000.00 |
+------+---------+---------------+
Related information:
DENSE_RANK Function on page 512, ROW_NUMBER Function on page 523
ROW_NUMBER Function
Returns an ascending sequence of integers, starting with 1. Starts the sequence over for each group produced by the
PARTITION BY clause. The output sequence includes different values for duplicate input values. Therefore, the
sequence never contains any duplicates or gaps, regardless of duplicate input values.
Syntax:
ROW_NUMBER() OVER([partition_by_clause] order_by_clause)
The ORDER BY clause is required. The PARTITION BY clause is optional. The window clause is not allowed.
Usage notes:
Often used for top-N and bottom-N queries where the input values are known to be unique, or precisely N rows are
needed regardless of duplicate values.
Because its result value is different for each row in the result set (when used without a PARTITION BY clause),
ROW_NUMBER() can be used to synthesize unique numeric ID values, for example for result sets involving unique values
or tuples.
Similar to RANK and DENSE_RANK. These functions differ in how they treat duplicate combinations of values.
Added in: CDH 5.2.0 / Impala 2.0.0
Examples:
The following example demonstrates how ROW_NUMBER() produces a continuous numeric sequence, even though
some values of X are repeated.
select x, row_number() over(order by x, property) as row_number, property from int_t;
+----+------------+----------+
| x | row_number | property |
+----+------------+----------+
| 1 | 1 | odd |
| 1 | 2 | square |
| 2 | 3 | even |
| 2 | 4 | prime |
| 3 | 5 | odd |
| 3 | 6 | prime |
| 4 | 7 | even |
| 4 | 8 | square |
| 5 | 9 | odd |
| 5 | 10 | prime |
| 6 | 11 | even |
| 6 | 12 | perfect |
| 7 | 13 | lucky |
| 7 | 14 | lucky |
| 7 | 15 | lucky |
| 7 | 16 | odd |
| 7 | 17 | prime |
| 8 | 18 | even |
| 9 | 19 | odd |
| 9 | 20 | square |
| 10 | 21 | even |
| 10 | 22 | round |
+----+------------+----------+
The following example shows how a financial institution might assign customer IDs to some of history's wealthiest
figures. Although two of the people have identical net worth figures, unique IDs are required for this purpose.
ROW_NUMBER() produces a sequence of five different values for the five input rows.
select row_number() over (order by net_worth desc) as account_id, name, net_worth
from wealth order by account_id, name;
+------------+---------+---------------+
| account_id | name | net_worth |
+------------+---------+---------------+
| 1 | Solomon | 2000000000.00 |
| 2 | Croesus | 1000000000.00 |
| 3 | Midas | 1000000000.00 |
| 4 | Crassus | 500000000.00 |
| 5 | Scrooge | 80000000.00 |
+------------+---------+---------------+
Related information:
RANK Function on page 521, DENSE_RANK Function on page 512
SUM Function - Analytic Context
You can include an OVER clause with a call to this function to use it as an analytic function. See SUM Function on page
501 for details and examples.
User-Defined Functions (UDFs)
User-defined functions (frequently abbreviated as UDFs) let you code your own application logic for processing column
values during an Impala query. For example, a UDF could perform calculations using an external math library, combine
several column values into one, do geospatial calculations, or other kinds of tests and transformations that are outside
the scope of the built-in SQL operators and functions.
You can use UDFs to simplify query logic when producing reports, or to transform data in flexible ways when copying
from one table to another with the INSERT ... SELECT syntax.
You might be familiar with this feature from other database products, under names such as stored functions or stored
routines.
Impala support for UDFs is available in Impala 1.2 and higher:
• In Impala 1.1, using UDFs in a query required using the Hive shell. (Because Impala and Hive share the same
metastore database, you could switch to Hive to run just those queries requiring UDFs, then switch back to Impala.)
• Starting in Impala 1.2, Impala can run both high-performance native code UDFs written in C++, and Java-based
Hive UDFs that you might already have written.
• Impala can run scalar UDFs that return a single value for each row of the result set, and user-defined aggregate
functions (UDAFs) that return a value based on a set of rows. Currently, Impala does not support user-defined
table functions (UDTFs) or window functions.
UDF Concepts
Depending on your use case, you might write all-new functions, reuse Java UDFs that you have already written for
Hive, or port Hive Java UDF code to higher-performance native Impala UDFs in C++. You can code either scalar functions
for producing results one row at a time, or more complex aggregate functions for doing analysis across sets of rows. The following
sections discuss these different aspects of working with UDFs.
UDFs and UDAFs
Depending on your use case, the user-defined functions (UDFs) you write might accept or produce different numbers
of input and output values:
• The most general kind of user-defined function (the one typically referred to by the abbreviation UDF) takes a
single input value and produces a single output value. When used in a query, it is called once for each row in the
result set. For example:
select customer_name, is_frequent_customer(customer_id) from customers;
select obfuscate(sensitive_column) from sensitive_data;
• A user-defined aggregate function (UDAF) accepts a group of values and returns a single value. You use UDAFs to
summarize and condense sets of rows, in the same style as the built-in COUNT, MAX(), SUM(), and AVG() functions.
When called in a query that uses the GROUP BY clause, the function is called once for each combination of GROUP
BY values. For example:
-- Evaluates multiple rows but returns a single value.
select closest_restaurant(latitude, longitude) from places;
-- Evaluates batches of rows and returns a separate value for each batch.
select most_profitable_location(store_id, sales, expenses, tax_rate, depreciation) from
franchise_data group by year;
• Currently, Impala does not support other categories of user-defined functions, such as user-defined table functions
(UDTFs) or window functions.
Native Impala UDFs
Impala supports UDFs written in C++, in addition to supporting existing Hive UDFs written in Java. Cloudera recommends
using C++ UDFs because the compiled native code can yield higher performance, with UDF execution time often 10x
faster for a C++ UDF than the equivalent Java UDF.
Using Hive UDFs with Impala
Impala can run Java-based user-defined functions (UDFs), originally written for Hive, with no changes, subject to the
following conditions:
• The parameters and return value must all use scalar data types supported by Impala. For example, complex or
nested types are not supported.
• Hive/Java UDFs must extend the org.apache.hadoop.hive.ql.exec.UDF class.
• Currently, Hive UDFs that accept or return the TIMESTAMP type are not supported.
• Prior to CDH 5.7 / Impala 2.5 the return type must be a “Writable” type such as Text or IntWritable, rather
than a Java primitive type such as String or int. Otherwise, the UDF returns NULL. In CDH 5.7 / Impala 2.5 and
higher, this restriction is lifted, and both UDF arguments and return values can be Java primitive types.
• Hive UDAFs and UDTFs are not supported.
• Typically, a Java UDF will execute several times slower in Impala than the equivalent native UDF written in C++.
• In CDH 5.7 / Impala 2.5 and higher, you can transparently call Hive Java UDFs through Impala, or call Impala Java
UDFs through Hive. This feature does not apply to built-in Hive functions. Any Impala Java UDFs created with older
versions must be re-created using the new CREATE FUNCTION syntax, without any signature for arguments or the
return value, as shown in the example following this list.
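The following is a minimal sketch of that signature-less form; the JAR path and class name are placeholders rather than files shipped with CDH. Impala picks up the supported argument and return types from the Java class itself.
-- Sketch only: the JAR path and class name below are placeholders.
CREATE FUNCTION my_hive_udf
  LOCATION '/user/impala/udfs/my_udfs.jar'
  SYMBOL='com.example.MyHiveUdf';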
To take full advantage of the Impala architecture and performance features, you can also write Impala-specific UDFs
in C++.
For background about Java-based Hive UDFs, see the Hive documentation for UDFs. For examples or tutorials for writing
such UDFs, search the web for related blog posts.
The ideal way to understand how to reuse Java-based UDFs (originally written for Hive) with Impala is to take some of
the Hive built-in functions (implemented as Java UDFs) and take the applicable JAR files through the UDF deployment
process for Impala, creating new UDFs with different names:
1. Take a copy of the Hive JAR file containing the Hive built-in functions. For example, the path might be like
/usr/lib/hive/lib/hive-exec-0.10.0-cdh4.2.0.jar, with different version numbers corresponding to
your specific level of CDH.
2. Use jar tf jar_file to see a list of the classes inside the JAR. You will see names like
org/apache/hadoop/hive/ql/udf/UDFLower.class and
org/apache/hadoop/hive/ql/udf/UDFOPNegative.class. Make a note of the names of the functions you
want to experiment with. When you specify the entry points for the Impala CREATE FUNCTION statement, change
the slash characters to dots and strip off the .class suffix, for example
org.apache.hadoop.hive.ql.udf.UDFLower and org.apache.hadoop.hive.ql.udf.UDFOPNegative.
3. Copy that file to an HDFS location that Impala can read. (In the examples here, we renamed the file to
hive-builtins.jar in HDFS for simplicity.)
4. For each Java-based UDF that you want to call through Impala, issue a CREATE FUNCTION statement, with a
LOCATION clause containing the full HDFS path of the JAR file, and a SYMBOL clause with the fully qualified name
of the class, using dots as separators and without the .class extension. Remember that user-defined functions
are associated with a particular database, so issue a USE statement for the appropriate database first, or specify
the SQL function name as db_name.function_name. Use completely new names for the SQL functions, because
Impala UDFs cannot have the same name as Impala built-in functions.
5. Call the function from your queries, passing arguments of the correct type to match the function signature. These
arguments could be references to columns, arithmetic or other kinds of expressions, the results of CAST functions
to ensure correct data types, and so on.
Note:
In CDH 5.12 / Impala 2.9 and higher, you can refresh the user-defined functions (UDFs) that Impala
recognizes, at the database level, by running the REFRESH FUNCTIONS statement with the database
name as an argument. Java-based UDFs can be added to the metastore database through Hive CREATE
FUNCTION statements, and made visible to Impala by subsequently running REFRESH FUNCTIONS.
For example:
CREATE DATABASE shared_udfs;
USE shared_udfs;
...use CREATE FUNCTION statements in Hive to create some Java-based UDFs
that Impala is not initially aware of...
REFRESH FUNCTIONS shared_udfs;
SELECT udf_created_by_hive(c1) FROM ...
Java UDF Example: Reusing lower() Function
For example, the following impala-shell session creates an Impala UDF my_lower() that reuses the Java code for
the Hive lower() built-in function. We cannot call it lower() because Impala does not allow UDFs to have the same
name as built-in functions. From SQL, we call the function in a basic way (in a query with no WHERE clause), directly on
a column, and on the results of a string expression:
[localhost:21000] > create database udfs;
[localhost:21000] > use udfs;
[localhost:21000] > create function lower(string) returns string location
'/user/hive/udfs/hive.jar' symbol='org.apache.hadoop.hive.ql.udf.UDFLower';
ERROR: AnalysisException: Function cannot have the same name as a builtin: lower
[localhost:21000] > create function my_lower(string) returns string location
'/user/hive/udfs/hive.jar' symbol='org.apache.hadoop.hive.ql.udf.UDFLower';
[localhost:21000] > select my_lower('Some String NOT ALREADY LOWERCASE');
+----------------------------------------------------+
| udfs.my_lower('some string not already lowercase') |
+----------------------------------------------------+
| some string not already lowercase |
+----------------------------------------------------+
Returned 1 row(s) in 0.11s
[localhost:21000] > create table t2 (s string);
[localhost:21000] > insert into t2 values ('lower'),('UPPER'),('Init cap'),('CamelCase');
Inserted 4 rows in 2.28s
[localhost:21000] > select * from t2;
+-----------+
| s |
+-----------+
| lower |
| UPPER |
| Init cap |
| CamelCase |
+-----------+
Returned 4 row(s) in 0.47s
[localhost:21000] > select my_lower(s) from t2;
+------------------+
| udfs.my_lower(s) |
+------------------+
| lower |
| upper |
| init cap |
| camelcase |
+------------------+
Returned 4 row(s) in 0.54s
[localhost:21000] > select my_lower(concat('ABC ',s,' XYZ')) from t2;
+------------------------------------------+
| udfs.my_lower(concat('abc ', s, ' xyz')) |
+------------------------------------------+
| abc lower xyz |
| abc upper xyz |
| abc init cap xyz |
| abc camelcase xyz |
+------------------------------------------+
Returned 4 row(s) in 0.22s
Java UDF Example: Reusing negative() Function
Here is an example that reuses the Hive Java code for the negative() built-in function. This example demonstrates
how the data types of the arguments must match precisely with the function signature. At first, we create an Impala
SQL function that can only accept an integer argument. Impala cannot find a matching function when the query passes
a floating-point argument, although we can call the integer version of the function by casting the argument. Then we
overload the same function name to also accept a floating-point argument.
[localhost:21000] > create table t (x int);
[localhost:21000] > insert into t values (1), (2), (4), (100);
Inserted 4 rows in 1.43s
[localhost:21000] > create function my_neg(bigint) returns bigint location
'/user/hive/udfs/hive.jar' symbol='org.apache.hadoop.hive.ql.udf.UDFOPNegative';
[localhost:21000] > select my_neg(4);
+----------------+
| udfs.my_neg(4) |
+----------------+
| -4 |
+----------------+
[localhost:21000] > select my_neg(x) from t;
+----------------+
| udfs.my_neg(x) |
+----------------+
| -2 |
| -4 |
| -100 |
+----------------+
Returned 3 row(s) in 0.60s
[localhost:21000] > select my_neg(4.0);
ERROR: AnalysisException: No matching function with signature: udfs.my_neg(FLOAT).
[localhost:21000] > select my_neg(cast(4.0 as int));
+-------------------------------+
| udfs.my_neg(cast(4.0 as int)) |
+-------------------------------+
| -4 |
+-------------------------------+
Returned 1 row(s) in 0.11s
[localhost:21000] > create function my_neg(double) returns double location
'/user/hive/udfs/hive.jar' symbol='org.apache.hadoop.hive.ql.udf.UDFOPNegative';
[localhost:21000] > select my_neg(4.0);
+------------------+
| udfs.my_neg(4.0) |
+------------------+
| -4 |
+------------------+
Returned 1 row(s) in 0.11s
You can find the sample files mentioned here in the Impala github repo.
Runtime Environment for UDFs
By default, Impala copies UDFs into /tmp, and you can configure this location through the --local_library_dir
startup flag for the impalad daemon.
Installing the UDF Development Package
To develop UDFs for Impala, download and install the impala-udf-devel package (RHEL-based distributions) or
impala-udf-dev (Ubuntu and Debian). This package contains header files, sample source, and build configuration
files.
1. Start at:
• https://archive.cloudera.com/cdh5/ for CDH 5
• https://archive.cloudera.com/cdh6/ for CDH 6
2. Locate the appropriate .repo or list file for your operating system version, such as the .repo file for CDH 5 on
RHEL 7.
3. Use the yum, zypper, or apt-get commands depending on your operating system. For the package name, specify
impala-udf-devel (RHEL-based distributions) or impala-udf-dev (Ubuntu and Debian).
Note: The UDF development code does not rely on Impala being installed on the same machine. You
can write and compile UDFs on a minimal development system, then deploy them on a different one
for use with Impala. If you develop UDFs on a server managed by Cloudera Manager through the
parcel mechanism, you still install the UDF development kit through the package mechanism; this
small standalone package does not interfere with the parcels containing the main Impala code.
When you are ready to start writing your own UDFs, download the sample code and build scripts from the Cloudera
sample UDF github. Then see Writing User-Defined Functions (UDFs) on page 529 for how to code UDFs, and Examples
of Creating and Using UDFs on page 535 for how to build and run UDFs.
Writing User-Defined Functions (UDFs)
Before starting UDF development, make sure to install the development package and download the UDF code samples,
as described in Installing the UDF Development Package on page 528.
When writing UDFs:
• Keep in mind the data type differences as you transfer values from the high-level SQL to your lower-level UDF
code. For example, in the UDF code you might be much more aware of how many bytes different kinds of integers
require.
• Use best practices for function-oriented programming: choose arguments carefully, avoid side effects, make each
function do a single thing, and so on.
Getting Started with UDF Coding
To understand the layout and member variables and functions of the predefined UDF data types, examine the header
file /usr/include/impala_udf/udf.h:
// This is the only Impala header required to develop UDFs and UDAs. This header
// contains the types that need to be used and the FunctionContext object. The context
// object serves as the interface object between the UDF/UDA and the impala process.
For the basic declarations needed to write a scalar UDF, see the header file udf-sample.h within the sample build
environment, which defines a simple function named AddUdf():
#ifndef IMPALA_UDF_SAMPLE_UDF_H
#define IMPALA_UDF_SAMPLE_UDF_H
#include <impala_udf/udf.h>
using namespace impala_udf;
IntVal AddUdf(FunctionContext* context, const IntVal& arg1, const IntVal& arg2);
#endif
For sample C++ code for a simple function named AddUdf(), see the source file udf-sample.cc within the sample
build environment:
#include "udf-sample.h"
// In this sample we are declaring a UDF that adds two ints and returns an int.
IntVal AddUdf(FunctionContext* context, const IntVal& arg1, const IntVal& arg2) {
if (arg1.is_null || arg2.is_null) return IntVal::null();
return IntVal(arg1.val + arg2.val);
}
// Multiple UDFs can be defined in the same file
Data Types for Function Arguments and Return Values
Each value that a user-defined function can accept as an argument or return as a result value must map to a SQL data
type that you could specify for a table column.
Currently, Impala UDFs cannot accept arguments or return values of the Impala complex types (STRUCT, ARRAY, or
MAP).
Each data type has a corresponding structure defined in the C++ and Java header files, with two member fields and
some predefined comparison operators and constructors:
• is_null indicates whether the value is NULL or not. val holds the actual argument or return value when it is
non-NULL.
• Each struct also defines a null() member function that constructs an instance of the struct with the is_null
flag set.
• The built-in SQL comparison operators and clauses such as <, >=, BETWEEN, and ORDER BY all work automatically
based on the SQL return type of each UDF. For example, Impala knows how to evaluate BETWEEN 1 AND
udf_returning_int(col1) or ORDER BY udf_returning_string(col2) without you declaring any
comparison operators within the UDF itself.
For convenience within your UDF code, each struct defines == and != operators for comparisons with other structs
of the same type. These are for typical C++ comparisons within your own code, not necessarily reproducing SQL
semantics. For example, if the is_null flag is set in both structs, they compare as equal. That behavior of null
comparisons is different from SQL (where NULL == NULL is NULL rather than true), but more in line with typical
C++ behavior.
• Each kind of struct has one or more constructors that define a filled-in instance of the struct, optionally with
default values.
• Impala cannot process UDFs that accept the composite or nested types as arguments or return them as result
values. This limitation applies both to Impala UDFs written in C++ and Java-based Hive UDFs.
• You can overload functions by creating multiple functions with the same SQL name but different argument types.
For overloaded functions, you must use different C++ or Java entry point names in the underlying functions.
The data types defined on the C++ side (in /usr/include/impala_udf/udf.h) are:
• IntVal represents an INT column.
• BigIntVal represents a BIGINT column. Even if you do not need the full range of a BIGINT value, it can be
useful to code your function arguments as BigIntVal to make it convenient to call the function with different
kinds of integer columns and expressions as arguments. Impala automatically casts smaller integer types to larger
ones when appropriate, but does not implicitly cast large integer types to smaller ones.
• SmallIntVal represents a SMALLINT column.
• TinyIntVal represents a TINYINT column.
• StringVal represents a STRING column. It has a len field representing the length of the string, and a ptr field
pointing to the string data. It has constructors that create a new StringVal struct based on a null-terminated
C-style string, or a pointer plus a length; these new structs still refer to the original string data rather than allocating
a new buffer for the data. It also has a constructor that takes a pointer to a FunctionContext struct and a length,
that does allocate space for a new copy of the string data, for use in UDFs that return string values.
• BooleanVal represents a BOOLEAN column.
• FloatVal represents a FLOAT column.
• DoubleVal represents a DOUBLE column.
• TimestampVal represents a TIMESTAMP column. It has a date field, a 32-bit integer representing the Gregorian
date, that is, the days past the epoch date. It also has a time_of_day field, a 64-bit integer representing the
current time of day in nanoseconds.
Variable-Length Argument Lists
UDFs typically take a fixed number of arguments, with each one named explicitly in the signature of your C++ function.
Your function can also accept additional optional arguments, all of the same type. For example, you can concatenate
two strings, three strings, four strings, and so on. Or you can compare two numbers, three numbers, four numbers,
and so on.
To accept a variable-length argument list, code the signature of your function like this:
StringVal Concat(FunctionContext* context, const StringVal& separator,
int num_var_args, const StringVal* args);
In the CREATE FUNCTION statement, after the type of the first optional argument, include ... to indicate it could be
followed by more arguments of the same type. For example, the following function accepts a STRING argument,
followed by one or more additional STRING arguments:
[localhost:21000] > create function my_concat(string, string ...) returns string location
'/user/test_user/udfs/sample.so' symbol='Concat';
The call from the SQL query must pass at least one argument to the variable-length portion of the argument list.
When Impala calls the function, it fills in the initial set of required arguments, then passes the number of extra arguments
and a pointer to the first of those optional arguments.
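The sample build environment contains a complete implementation; the following is only a minimal sketch (not necessarily the sample's code) of how such a function might consume the num_var_args count and the args pointer, using the StringVal constructor that allocates memory through the FunctionContext (described later under Memory Allocation for UDFs):
#include <impala_udf/udf.h>
#include <cstring>
using namespace impala_udf;
StringVal Concat(FunctionContext* context, const StringVal& separator,
    int num_var_args, const StringVal* args) {
  if (separator.is_null) return StringVal::null();
  // First pass: compute the total length of the result. Any NULL argument
  // makes the whole result NULL in this sketch.
  int total_len = 0;
  for (int i = 0; i < num_var_args; ++i) {
    if (args[i].is_null) return StringVal::null();
    total_len += args[i].len;
  }
  if (num_var_args > 1) total_len += separator.len * (num_var_args - 1);
  // Allocate the result through the FunctionContext so it remains valid
  // after this function returns.
  StringVal result(context, total_len);
  uint8_t* dest = result.ptr;
  // Second pass: copy the arguments, inserting the separator between them.
  for (int i = 0; i < num_var_args; ++i) {
    if (i > 0) {
      memcpy(dest, separator.ptr, separator.len);
      dest += separator.len;
    }
    memcpy(dest, args[i].ptr, args[i].len);
    dest += args[i].len;
  }
  return result;
}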
Handling NULL Values
For correctness, performance, and reliability, it is important for each UDF to handle all situations where any NULL
values are passed to your function. For example, when passed a NULL, UDFs typically also return NULL. In an aggregate
function, which could be passed a combination of real and NULL values, you might make the final value into a NULL
(as in CONCAT()), ignore the NULL value (as in AVG()), or treat it the same as a numeric zero or empty string.
Each parameter type, such as IntVal or StringVal, has an is_null Boolean member. Test this flag immediately
for each argument to your function, and if it is set, do not refer to the val field of the argument structure. The val
field is undefined when the argument is NULL, so your function could go into an infinite loop or produce incorrect
results if you skip the special handling for NULL.
If your function returns NULL when passed a NULL value, or in other cases such as when a search string is not found,
you can construct a null instance of the return type by using its null() member function.
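For example, a hypothetical substring-search UDF might combine both patterns: return NULL when either input is NULL, and also return a null IntVal when the search string is not found.
#include <impala_udf/udf.h>
#include <cstring>
using namespace impala_udf;
// Returns the zero-based position of the first occurrence of 'needle' within
// 'haystack', or NULL if either input is NULL or the needle is not found.
IntVal FindPos(FunctionContext* context, const StringVal& haystack,
    const StringVal& needle) {
  if (haystack.is_null || needle.is_null) return IntVal::null();
  for (int i = 0; i + needle.len <= haystack.len; ++i) {
    if (memcmp(haystack.ptr + i, needle.ptr, needle.len) == 0) return IntVal(i);
  }
  return IntVal::null();  // Search string not found.
}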
Memory Allocation for UDFs
By default, memory allocated within a UDF is deallocated when the function exits, which could be before the query is
finished. The input arguments remain allocated for the lifetime of the function, so you can refer to them in the
expressions for your return values. If you use temporary variables to construct all-new string values, use the
StringVal() constructor that takes an initial FunctionContext* argument followed by a length, and copy the data
into the newly allocated memory buffer.
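A minimal sketch of that pattern follows; the ToUpperAscii() function is hypothetical, and the StripVowels() example later in this section shows the same technique in a complete UDF.
#include <impala_udf/udf.h>
#include <cctype>
#include <cstring>
#include <string>
using namespace impala_udf;
StringVal ToUpperAscii(FunctionContext* context, const StringVal& input) {
  if (input.is_null) return StringVal::null();
  // Build the new value in a local std::string, which is destroyed when the
  // function returns.
  std::string tmp(reinterpret_cast<const char*>(input.ptr), input.len);
  for (size_t i = 0; i < tmp.size(); ++i) {
    tmp[i] = static_cast<char>(toupper(static_cast<unsigned char>(tmp[i])));
  }
  // This constructor allocates memory through the FunctionContext, so the
  // returned data stays valid after the function exits.
  StringVal result(context, tmp.size());
  memcpy(result.ptr, tmp.c_str(), tmp.size());
  return result;
}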
Thread-Safe Work Area for UDFs
One way to improve performance of UDFs is to specify the optional PREPARE_FN and CLOSE_FN clauses on the CREATE
FUNCTION statement. The “prepare” function sets up a thread-safe data structure in memory that you can use as a
work area. The “close” function deallocates that memory. Each subsequent call to the UDF within the same thread can
access that same memory area. There might be several such memory areas allocated on the same host, as UDFs are
parallelized using multiple threads.
Within this work area, you can set up predefined lookup tables, or record the results of complex operations on data
types such as STRING or TIMESTAMP. Saving the results of previous computations rather than repeating the computation
each time is an optimization known as memoization (see http://en.wikipedia.org/wiki/Memoization). For example, if your UDF performs
a regular expression match or date manipulation on a column that repeats the same value over and over, you could
store the last-computed value or a hash table of already-computed values, and do a fast lookup to find the result for
subsequent iterations of the UDF.
Each such function must have the signature:
void function_name(impala_udf::FunctionContext*,
impala_udf::FunctionContext::FunctionScope)
Currently, only THREAD_SCOPE is implemented, not FRAGMENT_SCOPE. See udf.h for details about the scope values.
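The following is a sketch only of how the prepare and close functions might manage such a work area. It assumes the Allocate(), Free(), SetFunctionState(), and GetFunctionState() members of FunctionContext and its FunctionStateScope enum, as declared in udf.h in the UDF development package; the WorkArea struct and the MyUdf* entry point names are hypothetical.
#include <impala_udf/udf.h>
#include <cstring>
using namespace impala_udf;
// Per-thread work area: a small lookup table computed once per thread.
struct WorkArea {
  int lookup[256];
};
void MyUdfPrepare(FunctionContext* context,
    FunctionContext::FunctionStateScope scope) {
  if (scope != FunctionContext::THREAD_LOCAL) return;
  WorkArea* area =
      reinterpret_cast<WorkArea*>(context->Allocate(sizeof(WorkArea)));
  memset(area->lookup, 0, sizeof(area->lookup));
  // ... fill in area->lookup with precomputed values ...
  context->SetFunctionState(scope, area);
}
IntVal MyUdf(FunctionContext* context, const IntVal& arg) {
  if (arg.is_null) return IntVal::null();
  WorkArea* area = reinterpret_cast<WorkArea*>(
      context->GetFunctionState(FunctionContext::THREAD_LOCAL));
  // Fast lookup instead of recomputing the value for every row.
  return IntVal(area->lookup[arg.val & 0xFF]);
}
void MyUdfClose(FunctionContext* context,
    FunctionContext::FunctionStateScope scope) {
  if (scope != FunctionContext::THREAD_LOCAL) return;
  WorkArea* area =
      reinterpret_cast<WorkArea*>(context->GetFunctionState(scope));
  context->Free(reinterpret_cast<uint8_t*>(area));
  context->SetFunctionState(scope, NULL);
}
You would then name MyUdfPrepare and MyUdfClose in the PREPARE_FN and CLOSE_FN clauses of the CREATE FUNCTION statement.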
Error Handling for UDFs
To handle errors in UDFs, you call functions that are members of the initial FunctionContext* argument passed to
your function.
A UDF can record one or more warnings, for conditions that indicate minor, recoverable problems that do not cause
the query to stop. The signature for this function is:
bool AddWarning(const char* warning_msg);
For a serious problem that requires cancelling the query, a UDF can set an error flag that prevents the query from
returning any results. The signature for this function is:
void SetError(const char* error_msg);
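For example, a hypothetical SafeDivide() UDF might use both mechanisms: a warning for a recoverable per-row condition, and an error for a condition that should cancel the query.
#include <impala_udf/udf.h>
#include <cmath>
using namespace impala_udf;
DoubleVal SafeDivide(FunctionContext* context, const DoubleVal& numerator,
    const DoubleVal& denominator) {
  if (numerator.is_null || denominator.is_null) return DoubleVal::null();
  if (std::isnan(numerator.val) || std::isnan(denominator.val)) {
    // This hypothetical UDF treats NaN input as a fatal condition: flag an
    // error so the query is cancelled and returns no results.
    context->SetError("SafeDivide: NaN input encountered");
    return DoubleVal::null();
  }
  if (denominator.val == 0) {
    // Recoverable condition: record a warning and return NULL for this row.
    context->AddWarning("SafeDivide: division by zero, returning NULL");
    return DoubleVal::null();
  }
  return DoubleVal(numerator.val / denominator.val);
}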
Writing User-Defined Aggregate Functions (UDAFs)
User-defined aggregate functions (UDAFs or UDAs) are a powerful and flexible category of user-defined functions. If
a query processes N rows, calling a UDAF during the query condenses the result set to anywhere from a single value
(such as with the SUM or MAX functions) to some number of values less than or equal to N (as in queries using the
GROUP BY or HAVING clause).
The Underlying Functions for a UDA
A UDAF must maintain a state value across subsequent calls, so that it can accumulate a result across a set of calls,
rather than derive it purely from one set of arguments. For that reason, a UDAF is represented by multiple underlying
functions:
• An initialization function that sets any counters to zero, creates empty buffers, and does any other one-time setup
for a query.
• An update function that processes the arguments for each row in the query result set and accumulates an
intermediate result for each node. For example, this function might increment a counter, append to a string buffer,
or set flags.
• A merge function that combines the intermediate results from two different nodes.
• A serialize function that flattens any intermediate values containing pointers, and frees any memory allocated
during the init, update, and merge phases.
• A finalize function that either passes through the combined result unchanged, or does one final transformation.
In the SQL syntax, you create a UDAF by using the statement CREATE AGGREGATE FUNCTION. You specify the entry
points of the underlying C++ functions using the clauses INIT_FN, UPDATE_FN, MERGE_FN, SERIALIZE_FN, and
FINALIZE_FN.
For convenience, you can use a naming convention for the underlying functions and Impala automatically recognizes
those entry points. Specify the UPDATE_FN clause, using an entry point name containing the string update or Update.
When you omit the other _FN clauses from the SQL statement, Impala looks for entry points with names formed by
substituting init, merge, serialize, or finalize (or Init, Merge, Serialize, Finalize) in place of the update or
Update portion of the specified name.
uda-sample.h:
See this file online at: uda-sample.h
uda-sample.cc:
See this file online at: uda-sample.cc
Intermediate Results for UDAs
A user-defined aggregate function might produce and combine intermediate results during some phases of processing,
using a different data type than the final return value. For example, if you implement a function similar to the built-in
AVG() function, it must keep track of two values, the number of values counted and the sum of those values. Or, you
might accumulate a string value over the course of a UDA, then in the end return a numeric or Boolean result.
In such a case, specify the data type of the intermediate results using the optional INTERMEDIATE type_name clause
of the CREATE AGGREGATE FUNCTION statement. If the intermediate data is a typeless byte array (for example, to
represent a C++ struct or array), specify the type name as CHAR(n), with n representing the number of bytes in the
intermediate result buffer.
For an example of this technique, see the trunc_sum() aggregate function, which accumulates intermediate results
of type DOUBLE and returns BIGINT at the end. View the appropriate CREATE FUNCTION statement and the
implementation of the underlying TruncSum*() functions on Github.
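The sketch below illustrates the general shape of an AVG-like UDA that carries its intermediate (sum, count) state in a byte buffer wrapped in a StringVal, which is the same idea the INTERMEDIATE clause expresses at the SQL level. It is modeled loosely on the sample uda-sample.cc rather than copied from it; the AvgState struct and the MyAvg* entry point names are hypothetical, and the Allocate() and Free() calls assume the FunctionContext members declared in udf.h.
#include <impala_udf/udf.h>
#include <cstdint>
#include <cstring>
using namespace impala_udf;
// Intermediate state: 16 bytes holding a running sum and a row count.
struct AvgState {
  double sum;
  int64_t count;
};
void MyAvgInit(FunctionContext* context, StringVal* val) {
  val->is_null = false;
  val->len = sizeof(AvgState);
  val->ptr = context->Allocate(val->len);
  memset(val->ptr, 0, val->len);
}
void MyAvgUpdate(FunctionContext* context, const DoubleVal& input,
    StringVal* val) {
  if (input.is_null) return;
  AvgState* state = reinterpret_cast<AvgState*>(val->ptr);
  state->sum += input.val;
  ++state->count;
}
void MyAvgMerge(FunctionContext* context, const StringVal& src, StringVal* dst) {
  if (src.is_null) return;
  const AvgState* src_state = reinterpret_cast<const AvgState*>(src.ptr);
  AvgState* dst_state = reinterpret_cast<AvgState*>(dst->ptr);
  dst_state->sum += src_state->sum;
  dst_state->count += src_state->count;
}
// The serialize function copies the intermediate buffer into memory owned by
// the FunctionContext and frees the buffer allocated in the init function.
StringVal MyAvgSerialize(FunctionContext* context, const StringVal& val) {
  StringVal result(context, val.len);
  memcpy(result.ptr, val.ptr, val.len);
  context->Free(val.ptr);
  return result;
}
DoubleVal MyAvgFinalize(FunctionContext* context, const StringVal& val) {
  const AvgState* state = reinterpret_cast<const AvgState*>(val.ptr);
  DoubleVal result = state->count == 0
      ? DoubleVal::null() : DoubleVal(state->sum / state->count);
  context->Free(val.ptr);
  return result;
}
The corresponding CREATE AGGREGATE FUNCTION statement would name these entry points in its INIT_FN, UPDATE_FN, MERGE_FN, SERIALIZE_FN, and FINALIZE_FN clauses, and would declare the intermediate type with the INTERMEDIATE clause.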
Building and Deploying UDFs
This section explains the steps to compile Impala UDFs from C++ source code, and deploy the resulting libraries for
use in Impala queries.
The Impala UDF development package ships with a sample build environment for UDFs that you can study, experiment
with, and adapt for your own use.
To build the sample environment:
1. Install the Impala UDF development package as described in Installing the UDF Development Package on page 528
2. Run the following commands:
cmake .
make
The cmake configuration command reads the file CMakeLists.txt and generates a Makefile customized for your
particular directory paths. Then the make command runs the actual build steps based on the rules in the Makefile.
Impala loads the shared library from an HDFS location. After building a shared library containing one or more UDFs,
use hdfs dfs or hadoop fs commands to copy the binary file to an HDFS location readable by Impala.
The final step in deployment is to issue a CREATE FUNCTION statement in the impala-shell interpreter to make
Impala aware of the new function. See CREATE FUNCTION Statement on page 228 for syntax details. Because each
function is associated with a particular database, always issue a USE statement to the appropriate database before
creating a function, or specify a fully qualified name, that is, CREATE FUNCTION db_name.function_name.
As you update the UDF code and redeploy updated versions of a shared library, use DROP FUNCTION and CREATE
FUNCTION to let Impala pick up the latest version of the code.
Note:
In CDH 5.7 / Impala 2.5 and higher, Impala UDFs and UDAs written in C++ are persisted in the metastore
database. Java UDFs are also persisted, if they were created with the new CREATE FUNCTION syntax
for Java UDFs, where the Java function argument and return types are omitted. Java-based UDFs
created with the old CREATE FUNCTION syntax do not persist across restarts because they are held
in the memory of the catalogd daemon. Until you re-create such Java UDFs using the new CREATE
FUNCTION syntax, you must reload those Java-based UDFs by running the original CREATE FUNCTION
statements again each time you restart the catalogd daemon. Prior to CDH 5.7 / Impala 2.5 the
requirement to reload functions after a restart applied to both C++ and Java functions.
See CREATE FUNCTION Statement on page 228 and DROP FUNCTION Statement on page 263 for the
new syntax for the persistent Java UDFs.
Prerequisites for the build environment are:
1. Install the packages using the appropriate package installation command for your Linux distribution.
sudo yum install gcc-c++ cmake boost-devel
sudo yum install impala-udf-devel
# The package name on Ubuntu and Debian is impala-udf-dev.
2. Download the UDF sample code:
git clone https://github.com/cloudera/impala-udf-samples
cd impala-udf-samples && cmake . && make
3. Unpack the sample code in udf_samples.tar.gz and use that as a template to set up your build environment.
To build the original samples:
# Process CMakeLists.txt and set up appropriate Makefiles.
cmake .
# Generate shared libraries from UDF and UDAF sample code,
# udf_samples/libudfsample.so and udf_samples/libudasample.so
make
The sample code to examine, experiment with, and adapt is in these files:
• udf-sample.h: Header file that declares the signature for a scalar UDF (AddUdf()).
• udf-sample.cc: Sample source for a simple UDF that adds two integers. Because Impala can reference multiple
function entry points from the same shared library, you could add other UDF functions in this file and add their
signatures to the corresponding header file.
• udf-sample-test.cc: Basic unit tests for the sample UDF.
• uda-sample.h: Header file that declares the signature for sample aggregate functions. The SQL functions will
be called COUNT, AVG, and STRINGCONCAT. Because aggregate functions require more elaborate coding to handle
the processing for multiple phases, there are several underlying C++ functions such as CountInit, AvgUpdate,
and StringConcatFinalize.
• uda-sample.cc: Sample source for simple UDAFs that demonstrate how to manage the state transitions as the
underlying functions are called during the different phases of query processing.
– The UDAF that imitates the COUNT function keeps track of a single incrementing number; the merge functions
combine the intermediate count values from each Impala node, and the combined number is returned
verbatim by the finalize function.
– The UDAF that imitates the AVG function keeps track of two numbers, a count of rows processed and the
sum of values for a column. These numbers are updated and merged as with COUNT, then the finalize function
divides them to produce and return the final average value.
– The UDAF that concatenates string values into a comma-separated list demonstrates how to manage storage
for a string that increases in length as the function is called for multiple rows.
• uda-sample-test.cc: basic unit tests for the sample UDAFs.
Performance Considerations for UDFs
Because a UDF typically processes each row of a table, potentially being called billions of times, the performance of
each UDF is a critical factor in the speed of the overall ETL or ELT pipeline. Tiny optimizations you can make within the
function body can pay off in a big way when the function is called over and over when processing a huge result set.
Examples of Creating and Using UDFs
This section demonstrates how to create and use all kinds of user-defined functions (UDFs).
For downloadable examples that you can experiment with, adapt, and use as templates for your own functions, see
the Cloudera sample UDF github. You must have already installed the appropriate header files, as explained in Installing
the UDF Development Package on page 528.
Sample C++ UDFs: HasVowels, CountVowels, StripVowels
This example shows 3 separate UDFs that operate on strings and return different data types. In the C++ code, the
functions are HasVowels() (checks if a string contains any vowels), CountVowels() (returns the number of vowels
in a string), and StripVowels() (returns a new string with vowels removed).
First, we add the signatures for these functions to udf-sample.h in the demo build environment:
BooleanVal HasVowels(FunctionContext* context, const StringVal& input);
IntVal CountVowels(FunctionContext* context, const StringVal& arg1);
StringVal StripVowels(FunctionContext* context, const StringVal& arg1);
Then, we add the bodies of these functions to udf-sample.cc:
BooleanVal HasVowels(FunctionContext* context, const StringVal& input)
{
if (input.is_null) return BooleanVal::null();
int index;
uint8_t *ptr;
for (ptr = input.ptr, index = 0; index < input.len; index++, ptr++)
{
uint8_t c = tolower(*ptr);
if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
{
return BooleanVal(true);
}
}
return BooleanVal(false);
}
IntVal CountVowels(FunctionContext* context, const StringVal& arg1)
{
if (arg1.is_null) return IntVal::null();
int count;
int index;
uint8_t *ptr;
for (ptr = arg1.ptr, count = 0, index = 0; index < arg1.len; index++, ptr++)
{
uint8_t c = tolower(*ptr);
if (c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u')
{
count++;
}
}
return IntVal(count);
}
StringVal StripVowels(FunctionContext* context, const StringVal& arg1)
{
if (arg1.is_null) return StringVal::null();
int index;
std::string original((const char *)arg1.ptr,arg1.len);
std::string shorter("");
for (index = 0; index < original.length(); index++)
{
uint8_t c = original[index];
uint8_t l = tolower(c);
if (l == 'a' || l == 'e' || l == 'i' || l == 'o' || l == 'u')
{
;
}
else
{
shorter.append(1, (char)c);
}
}
// The modified string is stored in 'shorter', which is destroyed when this
// function ends. We need to make a StringVal and copy the contents.
// Only the version of the constructor that takes a context object allocates
// new memory.
StringVal result(context, shorter.size());
memcpy(result.ptr, shorter.c_str(), shorter.size());
return result;
}
We build a shared library, libudfsample.so, and put the library file into HDFS where Impala can read it:
$ make
[ 0%] Generating udf_samples/uda-sample.ll
[ 16%] Built target uda-sample-ir
[ 33%] Built target udasample
[ 50%] Built target uda-sample-test
[ 50%] Generating udf_samples/udf-sample.ll
[ 66%] Built target udf-sample-ir
Scanning dependencies of target udfsample
[ 83%] Building CXX object CMakeFiles/udfsample.dir/udf-sample.o
Linking CXX shared library udf_samples/libudfsample.so
[ 83%] Built target udfsample
Linking CXX executable udf_samples/udf-sample-test
[100%] Built target udf-sample-test
$ hdfs dfs -put ./udf_samples/libudfsample.so /user/hive/udfs/libudfsample.so
Finally, we go into the impala-shell interpreter where we set up some sample data, issue CREATE FUNCTION
statements to set up the SQL function names, and call the functions in some queries:
[localhost:21000] > create database udf_testing;
[localhost:21000] > use udf_testing;
[localhost:21000] > create function has_vowels (string) returns boolean location
'/user/hive/udfs/libudfsample.so' symbol='HasVowels';
[localhost:21000] > select has_vowels('abc');
+------------------------+
| udfs.has_vowels('abc') |
+------------------------+
| true |
+------------------------+
Returned 1 row(s) in 0.13s
[localhost:21000] > select has_vowels('zxcvbnm');
+----------------------------+
| udfs.has_vowels('zxcvbnm') |
+----------------------------+
| false |
+----------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > select has_vowels(null);
+-----------------------+
| udfs.has_vowels(null) |
+-----------------------+
| NULL |
+-----------------------+
Returned 1 row(s) in 0.11s
[localhost:21000] > select s, has_vowels(s) from t2;
+-----------+--------------------+
| s | udfs.has_vowels(s) |
+-----------+--------------------+
| lower | true |
| UPPER | true |
| Init cap | true |
| CamelCase | true |
+-----------+--------------------+
Returned 4 row(s) in 0.24s
[localhost:21000] > create function count_vowels (string) returns int location
'/user/hive/udfs/libudfsample.so' symbol='CountVowels';
[localhost:21000] > select count_vowels('cat in the hat');
+-------------------------------------+
| udfs.count_vowels('cat in the hat') |
+-------------------------------------+
| 4 |
+-------------------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > select s, count_vowels(s) from t2;
+-----------+----------------------+
| s | udfs.count_vowels(s) |
+-----------+----------------------+
| lower | 2 |
| UPPER | 2 |
| Init cap | 3 |
| CamelCase | 4 |
+-----------+----------------------+
Returned 4 row(s) in 0.23s
[localhost:21000] > select count_vowels(null);
+-------------------------+
| udfs.count_vowels(null) |
+-------------------------+
| NULL |
+-------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > create function strip_vowels (string) returns string location
'/user/hive/udfs/libudfsample.so' symbol='StripVowels';
[localhost:21000] > select strip_vowels('abcdefg');
+------------------------------+
| udfs.strip_vowels('abcdefg') |
+------------------------------+
| bcdfg |
+------------------------------+
Returned 1 row(s) in 0.11s
[localhost:21000] > select strip_vowels('ABCDEFG');
+------------------------------+
| udfs.strip_vowels('ABCDEFG') |
+------------------------------+
| BCDFG |
+------------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > select strip_vowels(null);
+-------------------------+
| udfs.strip_vowels(null) |
+-------------------------+
| NULL |
+-------------------------+
Returned 1 row(s) in 0.16s
[localhost:21000] > select s, strip_vowels(s) from t2;
+-----------+----------------------+
| s | udfs.strip_vowels(s) |
+-----------+----------------------+
| lower | lwr |
| UPPER | PPR |
| Init cap | nt cp |
| CamelCase | CmlCs |
+-----------+----------------------+
Returned 4 row(s) in 0.24s
Sample C++ UDA: SumOfSquares
This example demonstrates a user-defined aggregate function (UDA) that produces the sum of the squares of its input
values.
The coding for a UDA is a little more involved than a scalar UDF, because the processing is split into several phases,
each implemented by a different function. Each phase is relatively straightforward: the “update” and “merge” phases,
where most of the work is done, read an input value and combine it with some accumulated intermediate value.
As in our sample UDF from the previous example, we add function signatures to a header file (in this case,
uda-sample.h). Because this is a math-oriented UDA, we make two versions of each function, one accepting an
integer value and the other accepting a floating-point value.
void SumOfSquaresInit(FunctionContext* context, BigIntVal* val);
void SumOfSquaresInit(FunctionContext* context, DoubleVal* val);
void SumOfSquaresUpdate(FunctionContext* context, const BigIntVal& input, BigIntVal*
val);
void SumOfSquaresUpdate(FunctionContext* context, const DoubleVal& input, DoubleVal*
val);
void SumOfSquaresMerge(FunctionContext* context, const BigIntVal& src, BigIntVal* dst);
void SumOfSquaresMerge(FunctionContext* context, const DoubleVal& src, DoubleVal* dst);
BigIntVal SumOfSquaresFinalize(FunctionContext* context, const BigIntVal& val);
DoubleVal SumOfSquaresFinalize(FunctionContext* context, const DoubleVal& val);
We add the function bodies to a C++ source file (in this case, uda-sample.cc):
void SumOfSquaresInit(FunctionContext* context, BigIntVal* val) {
val->is_null = false;
val->val = 0;
}
void SumOfSquaresInit(FunctionContext* context, DoubleVal* val) {
val->is_null = false;
val->val = 0.0;
}
void SumOfSquaresUpdate(FunctionContext* context, const BigIntVal& input, BigIntVal*
val) {
if (input.is_null) return;
val->val += input.val * input.val;
}
void SumOfSquaresUpdate(FunctionContext* context, const DoubleVal& input, DoubleVal*
val) {
if (input.is_null) return;
val->val += input.val * input.val;
}
void SumOfSquaresMerge(FunctionContext* context, const BigIntVal& src, BigIntVal* dst)
{
dst->val += src.val;
}
void SumOfSquaresMerge(FunctionContext* context, const DoubleVal& src, DoubleVal* dst)
{
dst->val += src.val;
}
BigIntVal SumOfSquaresFinalize(FunctionContext* context, const BigIntVal& val) {
return val;
}
DoubleVal SumOfSquaresFinalize(FunctionContext* context, const DoubleVal& val) {
return val;
}
As with the sample UDF, we build a shared library and put it into HDFS:
$ make
[ 0%] Generating udf_samples/uda-sample.ll
[ 16%] Built target uda-sample-ir
Scanning dependencies of target udasample
[ 33%] Building CXX object CMakeFiles/udasample.dir/uda-sample.o
Linking CXX shared library udf_samples/libudasample.so
[ 33%] Built target udasample
Scanning dependencies of target uda-sample-test
[ 50%] Building CXX object CMakeFiles/uda-sample-test.dir/uda-sample-test.o
Linking CXX executable udf_samples/uda-sample-test
[ 50%] Built target uda-sample-test
[ 50%] Generating udf_samples/udf-sample.ll
[ 66%] Built target udf-sample-ir
[ 83%] Built target udfsample
[100%] Built target udf-sample-test
$ hdfs dfs -put ./udf_samples/libudasample.so /user/hive/udfs/libudasample.so
To create the SQL function, we issue a CREATE AGGREGATE FUNCTION statement and specify the underlying C++
function names for the different phases:
[localhost:21000] > use udf_testing;
[localhost:21000] > create table sos (x bigint, y double);
[localhost:21000] > insert into sos values (1, 1.1), (2, 2.2), (3, 3.3), (4, 4.4);
Inserted 4 rows in 1.10s
[localhost:21000] > create aggregate function sum_of_squares(bigint) returns bigint
> location '/user/hive/udfs/libudasample.so'
> init_fn='SumOfSquaresInit'
> update_fn='SumOfSquaresUpdate'
> merge_fn='SumOfSquaresMerge'
> finalize_fn='SumOfSquaresFinalize';
[localhost:21000] > -- Compute the same value using literals or the UDA;
[localhost:21000] > select 1*1 + 2*2 + 3*3 + 4*4;
+-------------------------------+
| 1 * 1 + 2 * 2 + 3 * 3 + 4 * 4 |
+-------------------------------+
| 30 |
+-------------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > select sum_of_squares(x) from sos;
+------------------------+
| udfs.sum_of_squares(x) |
+------------------------+
| 30 |
+------------------------+
Returned 1 row(s) in 0.35s
Until we create the overloaded version of the UDA, it can only handle a single data type. To allow it to handle DOUBLE
as well as BIGINT, we issue another CREATE AGGREGATE FUNCTION statement:
[localhost:21000] > select sum_of_squares(y) from sos;
ERROR: AnalysisException: No matching function with signature:
udfs.sum_of_squares(DOUBLE).
[localhost:21000] > create aggregate function sum_of_squares(double) returns double
> location '/user/hive/udfs/libudasample.so'
> init_fn='SumOfSquaresInit'
> update_fn='SumOfSquaresUpdate'
> merge_fn='SumOfSquaresMerge'
> finalize_fn='SumOfSquaresFinalize';
[localhost:21000] > -- Compute the same value using literals or the UDA;
[localhost:21000] > select 1.1*1.1 + 2.2*2.2 + 3.3*3.3 + 4.4*4.4;
+-----------------------------------------------+
| 1.1 * 1.1 + 2.2 * 2.2 + 3.3 * 3.3 + 4.4 * 4.4 |
+-----------------------------------------------+
| 36.3 |
+-----------------------------------------------+
Returned 1 row(s) in 0.12s
[localhost:21000] > select sum_of_squares(y) from sos;
+------------------------+
| udfs.sum_of_squares(y) |
+------------------------+
| 36.3 |
+------------------------+
Returned 1 row(s) in 0.35s
Typically, you use a UDA in queries with GROUP BY clauses, to produce a result set with a separate aggregate value
for each combination of values from the GROUP BY clause. Let's change our sample table to use 0 to indicate rows
containing even values, and 1 to flag rows containing odd values. Then the GROUP BY query can return two values,
the sum of the squares for the even values, and the sum of the squares for the odd values:
[localhost:21000] > insert overwrite sos values (1, 1), (2, 0), (3, 1), (4, 0);
Inserted 4 rows in 1.24s
[localhost:21000] > -- Compute 1 squared + 3 squared, and 2 squared + 4 squared;
[localhost:21000] > select y, sum_of_squares(x) from sos group by y;
+---+------------------------+
| y | udfs.sum_of_squares(x) |
+---+------------------------+
| 1 | 10 |
| 0 | 20 |
+---+------------------------+
Returned 2 row(s) in 0.43s
Security Considerations for User-Defined Functions
When the Impala authorization feature is enabled:
• To call a UDF in a query, you must have the required read privilege for any databases and tables used in the query.
• The CREATE FUNCTION statement requires:
– The CREATE privilege on the database.
– The ALL privilege on two URIs where the URIs are:
– The JAR file on the file system. For example:
GRANT ALL ON URI 'file:///path_to_my.jar' TO ROLE my_role;
– The JAR on HDFS. For example:
GRANT ALL ON URI 'hdfs:///path/to/jar' TO ROLE my_role;
See Enabling Sentry Authorization for Impala on page 87 for details about authorization in Impala.
Limitations and Restrictions for Impala UDFs
The following limitations and restrictions apply to Impala UDFs in the current release:
• Impala does not support Hive UDFs that accept or return composite or nested types, or other types not available
in Impala tables.
• The Hive current_user() function cannot be called from a Java UDF through Impala.
• All Impala UDFs must be deterministic, that is, produce the same output each time when passed the same argument
values. For example, an Impala UDF must not call functions such as rand() to produce different values for each
invocation. It must not retrieve data from external sources, such as from disk or over the network.
• An Impala UDF must not spawn other threads or processes.
• Prior to CDH 5.7 / Impala 2.5 when the catalogd process is restarted, all UDFs become undefined and must be
reloaded. In CDH 5.7 / Impala 2.5 and higher, this limitation only applies to older Java UDFs. Re-create those UDFs
using the new CREATE FUNCTION syntax for Java UDFs, which excludes the function signature, to remove the
limitation entirely.
• Impala currently does not support user-defined table functions (UDTFs).
• The CHAR and VARCHAR types cannot be used as input arguments or return values for UDFs.
Converting Legacy UDFs During Upgrade to CDH 5.12 or Higher
In CDH 5.7 / Impala 2.5 and higher, the CREATE FUNCTION Statement on page 228 is available for creating Java-based
UDFs. UDFs created with the new syntax persist across Impala restarts, and are more compatible with Hive UDFs.
Because the replication features in CDH 5.12 and higher only work with the new-style syntax, convert any older Java
UDFs to use the new syntax at the same time you upgrade to CDH 5.12 or higher.
Follow these steps to convert old-style Java UDFs to the new persistent kind:
• Use SHOW FUNCTIONS to identify all UDFs and UDAs.
• For each function, use SHOW CREATE FUNCTION and save the statement in a script file.
• For Java UDFs, change the output of SHOW CREATE FUNCTION to use the new CREATE FUNCTION syntax (without
argument types), which makes the UDF persistent.
• For each function, drop it and re-create it, using the new CREATE FUNCTION syntax for all Java UDFs.
SQL Differences Between Impala and Hive
Impala's SQL syntax follows the SQL-92 standard, and includes many industry extensions in areas such as built-in
functions. See Porting SQL from Other Database Systems to Impala on page 543 for a general discussion of adapting
SQL code from a variety of database systems to Impala.
Because Impala and Hive share the same metastore database and their tables are often used interchangeably, the
following section covers differences between Impala and Hive in detail.
HiveQL Features not Available in Impala
The current release of Impala does not support the following SQL features that you might be familiar with from HiveQL:
• Extensibility mechanisms such as TRANSFORM, custom file formats, or custom SerDes.
• The DATE data type.
• The BINARY data type.
• XML functions.
• Certain aggregate functions from HiveQL: covar_pop, covar_samp, corr, percentile, percentile_approx,
histogram_numeric, collect_set; Impala supports the set of aggregate functions listed in Impala Aggregate
Functions on page 479 and analytic functions listed in Impala Analytic Functions on page 506.
• Sampling.
• Lateral views. In CDH 5.5 / Impala 2.3 and higher, Impala supports queries on complex types (STRUCT, ARRAY, or
MAP), using join notation rather than the EXPLODE() keyword. See Complex Types (CDH 5.5 or higher only) on
page 139 for details about Impala support for complex types.
User-defined functions (UDFs) are supported starting in Impala 1.2. See User-Defined Functions (UDFs) on page 525 for
full details on Impala UDFs.
• Impala supports high-performance UDFs written in C++, as well as reusing some Java-based Hive UDFs.
• Impala supports scalar UDFs and user-defined aggregate functions (UDAFs). Impala does not currently support
user-defined table generating functions (UDTFs).
• Only Impala-supported column types are supported in Java-based UDFs.
• The Hive current_user() function cannot be called from a Java UDF through Impala.
Impala does not currently support these HiveQL statements:
• ANALYZE TABLE (the Impala equivalent is COMPUTE STATS)
• DESCRIBE COLUMN
• DESCRIBE DATABASE
• EXPORT TABLE
• IMPORT TABLE
• SHOW TABLE EXTENDED
• SHOW TBLPROPERTIES
• SHOW INDEXES
• SHOW COLUMNS
• INSERT OVERWRITE DIRECTORY; use INSERT OVERWRITE table_name or CREATE TABLE AS SELECT to
materialize query results into the HDFS directory associated with an Impala table.
Impala respects the serialization.null.format table property only for TEXT tables and ignores the property for
Parquet and other formats. Hive respects the serialization.null.format property for Parquet and other formats
and converts matching values to NULL during the scan. See Data Files for Text Tables on page 638 for using the table
property in Impala.
Semantic Differences Between Impala and HiveQL Features
This section covers instances where Impala and Hive have similar functionality, sometimes including the same syntax,
but there are differences in the runtime semantics of those features.
Security:
Impala utilizes the Apache Sentry authorization framework, which provides fine-grained role-based access control to
protect data against unauthorized access or tampering.
The Hive component included in CDH 5.1 and higher now includes Sentry-enabled GRANT, REVOKE, and CREATE/DROP
ROLE statements. Earlier Hive releases had a privilege system with GRANT and REVOKE statements that were primarily
intended to prevent accidental deletion of data, rather than a security mechanism to protect against malicious users.
Impala can make use of privileges set up through Hive GRANT and REVOKE statements. Impala has its own GRANT and
REVOKE statements in Impala 2.0 and higher. See Enabling Sentry Authorization for Impala on page 87 for the details
of authorization in Impala, including how to switch from the original policy file-based privilege model to the Sentry
service using privileges stored in the metastore database.
SQL statements and clauses:
The semantics of Impala SQL statements varies from HiveQL in some cases where they use similar SQL statement and
clause names:
• Impala uses different syntax and names for query hints, [SHUFFLE] and [NOSHUFFLE] rather than MapJoin or
StreamJoin. See Joins in Impala SELECT Statements on page 296 for the Impala details.
• Impala does not expose MapReduce specific features of SORT BY, DISTRIBUTE BY, or CLUSTER BY.
• Impala does not require queries to include a FROM clause.
Data types:
• Impala supports a limited set of implicit casts. This can help avoid undesired results from unexpected casting
behavior.
– Impala does not implicitly cast between string and numeric or Boolean types. Always use CAST() for these
conversions.
– Impala does perform implicit casts among the numeric types, when going from a smaller or less precise type
to a larger or more precise one. For example, Impala will implicitly convert a SMALLINT to a BIGINT or FLOAT,
but to convert from DOUBLE to FLOAT or INT to TINYINT requires a call to CAST() in the query.
– Impala does perform implicit casts from string to timestamp. Impala has a restricted set of literal formats for
the TIMESTAMP data type and the from_unixtime() format string; see TIMESTAMP Data Type on page 130
for details.
See the topics under Data Types on page 101 for full details on implicit and explicit casting for each data type, and
Impala Type Conversion Functions on page 423 for details about the CAST() function.
• Impala does not store or interpret timestamps using the local timezone, to avoid undesired results from unexpected
time zone issues. Timestamps are stored and interpreted relative to UTC. This difference can produce different
results for some calls to similarly named date/time functions between Impala and Hive. See Impala Date and Time
Functions on page 424 for details about the Impala functions. See TIMESTAMP Data Type on page 130 for a discussion
of how Impala handles time zones, and configuration options you can use to make Impala match the Hive behavior
more closely when dealing with Parquet-encoded TIMESTAMP data or when converting between the local time
zone and UTC.
• The Impala TIMESTAMP type can represent dates ranging from 1400-01-01 to 9999-12-31. This is different from
the Hive date range, which is 0000-01-01 to 9999-12-31.
• Impala does not return column overflows as NULL, so that customers can distinguish between NULL data and
overflow conditions similar to how they do so with traditional database systems. Impala returns the largest or
smallest value in the range for the type. For example, valid values for a tinyint range from -128 to 127. In Impala,
a tinyint with a value of -200 returns -128 rather than NULL. A tinyint with a value of 200 returns 127.
Miscellaneous features:
• Impala does not provide virtual columns.
• Impala does not expose locking.
• Impala does not expose some configuration properties.
Porting SQL from Other Database Systems to Impala
Although Impala uses standard SQL for queries, you might need to modify SQL source when bringing applications to
Impala, due to variations in data types, built-in functions, vendor language extensions, and Hadoop-specific syntax.
Even when SQL is working correctly, you might make further minor modifications for best performance.
Porting DDL and DML Statements
When adapting SQL code from a traditional database system to Impala, expect to find a number of differences in the
DDL statements that you use to set up the schema. Clauses related to physical layout of files, tablespaces, and indexes
have no equivalent in Impala. You might restructure your schema considerably to account for the Impala partitioning
scheme and Hadoop file formats.
Expect SQL queries to have a much higher degree of compatibility. With modest rewriting to address vendor extensions
and features not yet supported in Impala, you might be able to run identical or almost-identical query text on both
systems.
Therefore, consider separating out the DDL into a separate Impala-specific setup script. Focus your reuse and ongoing
tuning efforts on the code for SQL queries.
Porting Data Types from Other Database Systems
• Change any VARCHAR, VARCHAR2, and CHAR columns to STRING. Remove any length constraints from the column
declarations; for example, change VARCHAR(32) or CHAR(1) to STRING. Impala is very flexible about the length
of string values; it does not impose any length constraints or do any special processing (such as blank-padding)
for STRING columns. (In Impala 2.0 and higher, there are data types VARCHAR and CHAR, with length constraints
for both types and blank-padding for CHAR. However, for performance reasons, it is still preferable to use STRING
columns where practical.)
• For national language character types such as NCHAR, NVARCHAR, or NCLOB, be aware that while Impala can store
and query UTF-8 character data, currently some string manipulation operations only work correctly with ASCII
data. See STRING Data Type on page 123 for details.
• Change any DATE, DATETIME, or TIME columns to TIMESTAMP. Remove any precision constraints. Remove any
timezone clauses, and make sure your application logic or ETL process accounts for the fact that Impala expects
all TIMESTAMP values to be in Coordinated Universal Time (UTC). See TIMESTAMP Data Type on page 130 for
information about the TIMESTAMP data type, and Impala Date and Time Functions on page 424 for conversion
functions for different date and time formats.
You might also need to adapt date- and time-related literal values and format strings to use the supported Impala
date and time formats. If you have date and time literals with different separators or different numbers of YY,
MM, and so on placeholders than Impala expects, consider using calls to regexp_replace() to transform those
values to the Impala-compatible format. See TIMESTAMP Data Type on page 130 for information about the allowed
formats for date and time literals, and Impala String Functions on page 462 for string conversion functions such
as regexp_replace().
Instead of SYSDATE, call the function NOW().
Instead of adding or subtracting directly from a date value to produce a value N days in the past or future, use an
INTERVAL expression, for example NOW() + INTERVAL 30 DAYS.
• Although Impala supports INTERVAL expressions for datetime arithmetic, as shown in TIMESTAMP Data Type on
page 130, INTERVAL is not available as a column data type in Impala. For any INTERVAL values stored in tables,
convert them to numeric values that you can add or subtract using the functions in Impala Date and Time Functions
on page 424. For example, if you had a table DEADLINES with an INT column TIME_PERIOD, you could construct
dates N days in the future like so:
SELECT NOW() + INTERVAL time_period DAYS from deadlines;
• For YEAR columns, change to the smallest Impala integer type that has sufficient range. See Data Types on page
101 for details about ranges, casting, and so on for the various numeric data types.
• Change any DECIMAL and NUMBER types. If fixed-point precision is not required, you can use FLOAT or DOUBLE
on the Impala side depending on the range of values. For applications that require precise decimal values, such
as financial data, you might need to make more extensive changes to table structure and application logic, such
as using separate integer columns for dollars and cents, or encoding numbers as string values and writing UDFs
to manipulate them. See Data Types on page 101 for details about ranges, casting, and so on for the various numeric
data types.
• FLOAT, DOUBLE, and REAL types are supported in Impala. Remove any precision and scale specifications. (In
Impala, REAL is just an alias for DOUBLE; columns declared as REAL are turned into DOUBLE behind the scenes.)
See Data Types on page 101 for details about ranges, casting, and so on for the various numeric data types.
• Most integer types from other systems have equivalents in Impala, perhaps under different names such as BIGINT
instead of INT8. For any that are unavailable, for example MEDIUMINT, switch to the smallest Impala integer type
that has sufficient range. Remove any precision specifications. See Data Types on page 101 for details about ranges,
casting, and so on for the various numeric data types.
• Remove any UNSIGNED constraints. All Impala numeric types are signed. See Data Types on page 101 for details
about ranges, casting, and so on for the various numeric data types.
• For any types holding bitwise values, use an integer type with enough range to hold all the relevant bits within a
positive integer. See Data Types on page 101 for details about ranges, casting, and so on for the various numeric
data types.
For example, TINYINT has a maximum positive value of 127, not 256, so to manipulate 8-bit bitfields as positive
numbers switch to the next largest type SMALLINT.
[localhost:21000] > select cast(127*2 as tinyint);
+--------------------------+
| cast(127 * 2 as tinyint) |
+--------------------------+
| -2 |
+--------------------------+
[localhost:21000] > select cast(128 as tinyint);
+----------------------+
| cast(128 as tinyint) |
+----------------------+
| -128 |
+----------------------+
[localhost:21000] > select cast(127*2 as smallint);
+---------------------------+
| cast(127 * 2 as smallint) |
+---------------------------+
| 254 |
+---------------------------+
Impala does not support notation such as b'0101' for bit literals.
• For BLOB values, use STRING to represent CLOB or TEXT types (character based large objects) up to 32 KB in size.
Binary large objects such as BLOB, RAW BINARY, and VARBINARY do not currently have an equivalent in Impala.
• For Boolean-like types such as BOOL, use the Impala BOOLEAN type.
• Because Impala currently does not support composite or nested types, any spatial data types in other database
systems do not have direct equivalents in Impala. You could represent spatial values in string format and write
UDFs to process them. See User-Defined Functions (UDFs) on page 525 for details. Where practical, separate spatial
types into separate tables so that Impala can still work with the non-spatial data.
• Take out any DEFAULT clauses. Impala can use data files produced from many different sources, such as Pig, Hive,
or MapReduce jobs. The fast import mechanisms of LOAD DATA and external tables mean that Impala is flexible
about the format of data files, and Impala does not necessarily validate or cleanse data before querying it. When
copying data through Impala INSERT statements, you can use conditional functions such as CASE or NVL to
substitute some other value for NULL fields; see Impala Conditional Functions on page 457 for details.
• Take out any constraints from your CREATE TABLE and ALTER TABLE statements, for example PRIMARY KEY,
FOREIGN KEY, UNIQUE, NOT NULL, UNSIGNED, or CHECK constraints. Impala can use data files produced from
many different sources, such as Pig, Hive, or MapReduce jobs. Therefore, Impala expects initial data validation to
happen earlier during the ETL or ELT cycle. After data is loaded into Impala tables, you can perform queries to test
for NULL values. When copying data through Impala INSERT statements, you can use conditional functions such
as CASE or NVL to substitute some other value for NULL fields; see Impala Conditional Functions on page 457 for
details.
Do as much verification as practical before loading data into Impala. After data is loaded into Impala, you can do
further verification using SQL queries to check if values have expected ranges, if values are NULL or not, and so
on. If there is a problem with the data, you will need to re-run earlier stages of the ETL process, or do an INSERT
... SELECT statement in Impala to copy the faulty data to a new table and transform or filter out the bad values.
• Take out any CREATE INDEX, DROP INDEX, and ALTER INDEX statements, and equivalent ALTER TABLE
statements. Remove any INDEX, KEY, or PRIMARY KEY clauses from CREATE TABLE and ALTER TABLE statements.
Impala is optimized for bulk read operations for data warehouse-style queries, and therefore does not support
indexes for its tables.
• Calls to built-in functions with out-of-range or otherwise incorrect arguments return NULL in Impala, as opposed
to raising exceptions. (This rule applies even when the ABORT_ON_ERROR=true query option is in effect.) Run
small-scale queries using representative data to doublecheck that calls to built-in functions are returning expected
values rather than NULL. For example, unsupported CAST operations do not raise an error in Impala:
select cast('foo' as int);
+--------------------+
| cast('foo' as int) |
+--------------------+
| NULL |
+--------------------+
• For any other type not supported in Impala, you could represent their values in string format and write UDFs to
process them. See User-Defined Functions (UDFs) on page 525 for details.
• To detect the presence of unsupported or unconvertable data types in data files, do initial testing with the
ABORT_ON_ERROR=true query option in effect. This option causes queries to fail immediately if they encounter
disallowed type conversions. See ABORT_ON_ERROR Query Option on page 323 for details. For example:
set abort_on_error=true;
select count(*) from (select * from t1);
-- The above query will fail if the data files for T1 contain any
-- values that can't be converted to the expected Impala data types.
-- For example, if T1.C1 is defined as INT but the column contains
-- floating-point values like 1.1, the query will return an error.
SQL Statements to Remove or Adapt
The following SQL statements or clauses are not currently supported in Impala, or are supported only with limitations:
• Impala supports the DELETE statement only for Kudu tables.
Impala is intended for data warehouse-style operations where you do bulk moves and transforms of large quantities
of data. When not using Kudu tables, instead of DELETE, use INSERT OVERWRITE to entirely replace the contents
of a table or partition, or use INSERT ... SELECT to copy a subset of data (everything but the rows you intended
to delete) from one table to another. See DML Statements on page 204 for an overview of Impala DML statements.
• Impala supports the UPDATE statement only for Kudu tables.
When not using Kudu tables, instead of UPDATE, do all necessary transformations early in the ETL process, such
as in the job that generates the original data, or when copying from one table to another to convert to a particular
file format or partitioning scheme. See DML Statements on page 204 for an overview of Impala DML statements.
• Impala has no transactional statements, such as COMMIT or ROLLBACK.
Impala effectively works like the AUTOCOMMIT mode in some database systems, where changes take effect as
soon as they are made.
• If your database, table, column, or other names conflict with Impala reserved words, use different names or quote
the names with backticks.
See Impala Reserved Words on page 745 for the current list of Impala reserved words.
Conversely, if you use a keyword that Impala does not recognize, it might be interpreted as a table or column
alias.
For example, in SELECT * FROM t1 NATURAL JOIN t2, Impala does not recognize the NATURAL keyword and
interprets it as an alias for the table t1. If you experience any unexpected behavior with queries, check the list of
reserved words to make sure all keywords in join and WHERE clauses are supported keywords in Impala.
• Impala has some restrictions on subquery support. See Subqueries in Impala SELECT Statements for the current
details.
• Impala supports UNION and UNION ALL set operators, but not INTERSECT.
Prefer UNION ALL over UNION when you know the data sets are disjoint or duplicate values are not a problem;
UNION ALL is more efficient because it avoids materializing and sorting the entire result set to eliminate duplicate
values.
• Impala requires query aliases for the subqueries used as inline views in the FROM clause.
For example, without the alias contents_of_t1 at the end, the following query gives a syntax error:
SELECT COUNT(*) FROM (SELECT * FROM t1) contents_of_t1;
Aliases are not required for the subqueries used in other parts of queries. For example:
SELECT * FROM functional.alltypes WHERE id = (SELECT MIN(id) FROM functional.alltypes);
• When an alias is declared for an expression in a query, that alias cannot be referenced again within the same
SELECT list.
For example, the average alias cannot be referenced twice in the SELECT list as below. You will receive an error:
SELECT AVG(x) AS average, average+1 FROM t1 GROUP BY x;
An alias can be referenced again in the same query if not in the SELECT list. For example, the average alias can
be referenced twice as shown below:
SELECT AVG(x) AS average FROM t1 GROUP BY x HAVING average > 3;
• Impala does not support NATURAL JOIN, and it does not support the USING clause in joins. See Joins in Impala
SELECT Statements on page 296 for details on the syntax for Impala join clauses.
• Impala supports a limited choice of partitioning types.
Partitions are defined based on each distinct combination of values for one or more partition key columns. Impala
does not redistribute or check data to create evenly distributed partitions. You must choose partition key columns
based on your knowledge of the data volume and distribution. Adapt any tables that use range, list, hash, or key
partitioning to use the Impala partition syntax for CREATE TABLE and ALTER TABLE statements.
Impala partitioning is similar to range partitioning where every range has exactly one value, or key partitioning
where the hash function produces a separate bucket for every combination of key values. See Partitioning for
Impala Tables on page 625 for usage details, and CREATE TABLE Statement on page 234 and ALTER TABLE Statement
on page 205 for syntax.
Note: Because the number of separate partitions is potentially higher than in other database
systems, keep a close eye on the number of partitions and the volume of data in each one; scale
back the number of partition key columns if you end up with too many partitions with a small
volume of data in each one.
To distribute work for a query across a cluster, you need at least one HDFS block per node. HDFS
blocks are typically multiple megabytes, especially for Parquet files. Therefore, if each partition
holds only a few megabytes of data, you are unlikely to see much parallelism in the query because
such a small amount of data is typically processed by a single node.
• For “top-N” queries, Impala uses the LIMIT clause rather than comparing against a pseudocolumn named
ROWNUM or ROW_NUM.
See LIMIT Clause on page 308 for details.
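A minimal sketch of this pattern, using illustrative table and column names:
SELECT customer_id, SUM(amount) AS total_spend
FROM sales
GROUP BY customer_id
ORDER BY total_spend DESC
LIMIT 10;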
SQL Constructs to Double-check
Some SQL constructs that are supported have behavior or defaults more oriented towards convenience than optimal
performance. Also, sometimes machine-generated SQL, perhaps issued through JDBC or ODBC applications, might
have inefficiencies or exceed internal Impala limits. As you port SQL code, examine and possibly update the following
where appropriate:
• A CREATE TABLE statement with no STORED AS clause creates data files in plain text format, which is convenient
for data interchange but not a good choice for high-volume data with high-performance queries. See How Impala
Works with Hadoop File Formats on page 634 for why and how to use specific file formats for compact data and
high-performance queries. Especially see Using the Parquet File Format with Impala Tables on page 643, for details
about the file format most heavily optimized for large-scale data warehouse queries.
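For example, a table intended for large-scale analytic queries could be declared with an explicit Parquet format rather
than relying on the text default (the table and column names here are illustrative):
CREATE TABLE sales_parquet (
  sale_id BIGINT,
  amount DECIMAL(10,2),
  sale_date TIMESTAMP
)
STORED AS PARQUET;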
• Adapting tables that were already partitioned in a different database system could produce an Impala table with
a high number of partitions and not enough data in each one, leading to underutilization of Impala's parallel query
features.
See Partitioning for Impala Tables on page 625 for details about setting up partitioning and tuning the performance
of queries on partitioned tables.
• The INSERT ... VALUES syntax is suitable for setting up toy tables with a few rows for functional testing when
used with HDFS. Each such statement creates a separate tiny file in HDFS, and it is not a scalable technique for
loading megabytes or gigabytes (let alone petabytes) of data.
Consider revising your data load process to produce raw data files outside of Impala, then setting up Impala
external tables or using the LOAD DATA statement to use those data files instantly in Impala tables, with no
conversion or indexing stage. See External Tables on page 197 and LOAD DATA Statement on page 288 for details
about the Impala techniques for working with data files produced outside of Impala; see Data Loading and Querying
Examples on page 52 for examples of ETL workflow for Impala.
For Kudu tables, INSERT works acceptably, although it is not particularly fast.
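For example, assuming an ETL process has already written Parquet data files into an HDFS directory (the paths, table,
and column names below are illustrative), the files can be exposed to Impala with no conversion step:
CREATE EXTERNAL TABLE raw_events (
  event_id BIGINT,
  payload STRING
)
STORED AS PARQUET
LOCATION '/user/etl/raw_events';

-- Or, move files already staged in HDFS into an existing table's directory:
LOAD DATA INPATH '/user/etl/staging/events' INTO TABLE raw_events;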
• If your ETL process is not optimized for Hadoop, you might end up with highly fragmented small data files, or a
single giant data file that cannot take advantage of distributed parallel queries or partitioning. In this case, use an
INSERT ... SELECT statement to copy the data into a new table and reorganize into a more efficient layout in
the same operation. See INSERT Statement on page 277 for details about the INSERT statement.
You can do INSERT ... SELECT into a table with a more efficient file format (see How Impala Works with
Hadoop File Formats on page 634) or from an unpartitioned table into a partitioned one. See Partitioning for Impala
Tables on page 625.
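A minimal sketch of this kind of reorganization, using illustrative table names, copies data from a text table into a
partitioned Parquet table in a single INSERT ... SELECT operation:
CREATE TABLE events_parquet (event_id BIGINT, payload STRING)
  PARTITIONED BY (event_date STRING)
  STORED AS PARQUET;

INSERT INTO events_parquet PARTITION (event_date)
  SELECT event_id, payload, event_date FROM events_text;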
• Complex queries may have high codegen time. As a workaround, set the query option DISABLE_CODEGEN=true
if queries fail for this reason. See DISABLE_CODEGEN Query Option on page 327 for details.
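For example, you can set the option for the current session before re-running the statement that failed:
SET DISABLE_CODEGEN=true;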
• If practical, rewrite UNION queries to use the UNION ALL operator instead. As noted earlier, UNION ALL is more
efficient because it avoids materializing and sorting the entire result set to eliminate duplicate values, so use it
whenever the data sets are disjoint or duplicates are acceptable.
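For example, when the two branches cannot produce overlapping rows, the first form below can be rewritten as the
second (the region column is illustrative):
SELECT id FROM t1 WHERE region = 'east'
UNION
SELECT id FROM t1 WHERE region = 'west';

SELECT id FROM t1 WHERE region = 'east'
UNION ALL
SELECT id FROM t1 WHERE region = 'west';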
Next Porting Steps after Verifying Syntax and Semantics
Some of the decisions you make during the porting process can have an impact on performance. After your SQL code
is ported and working correctly, examine the performance-related aspects of your schema design, physical layout, and
queries to make sure that the ported application is taking full advantage of Impala's parallelism, performance-related
SQL features, and integration with Hadoop components. The following are a few of the areas you should examine:
• For optimal performance, run COMPUTE STATS on all tables.
• Use the most efficient file format for your data volumes, table structure, and query characteristics.
• Partition on columns that are often used for filtering in WHERE clauses.
• Your ETL process should produce a relatively small number of multi-megabyte data files rather than a huge number
of small files.
See Tuning Impala for Performance on page 565 for details about the performance tuning process.
Resource Management
Impala includes features that balance and maximize resources in CDH clusters. This topic describes how you can
improve the efficiency of your CDH cluster using those features.
• Static service pools
Use the static service pools to allocate dedicated resources for Impala to manage and prioritize workloads on
clusters.
• Admission control
Within the constraints of the static service pool, you can further subdivide Impala's resources using dynamic
resource pools and admission control.
See Admission Control and Query Queuing on page 549 for an overview and guidelines on admission control.
See Configuring Resource Pools and Admission Control on page 554 for configuring resource pools and managing
admission control.
Admission Control and Query Queuing
Admission control is an Impala feature that imposes limits on concurrent SQL queries, to avoid resource usage spikes
and out-of-memory conditions on busy CDH clusters. The admission control feature lets you set an upper limit on the
number of concurrent Impala queries and on the memory used by those queries. Any additional queries are queued
until the earlier ones finish, rather than being cancelled or running slowly and causing contention. As other queries
finish, the queued queries are allowed to proceed.
In CDH 5.7 / Impala 2.5 and higher, you can specify these limits and thresholds for each pool rather than globally. That
way, you can balance the resource usage and throughput between steady well-defined workloads, rare resource-intensive
queries, and ad-hoc exploratory queries.
In addition to the threshold values for currently executing queries, you can place limits on the maximum number of
queries that are queued (waiting) and a limit on the amount of time they might wait before returning with an error.
These queue settings let you ensure that queries do not wait indefinitely so that you can detect and correct “starvation”
scenarios.
Queries, DML statements, and some DDL statements, including CREATE TABLE AS SELECT and COMPUTE STATS,
are affected by admission control.
On a busy CDH cluster, you might find there is an optimal number of Impala queries that run concurrently. For example,
when the I/O capacity is fully utilized by I/O-intensive queries, you might not find any throughput benefit in running
more concurrent queries. By allowing some queries to run at full speed while others wait, rather than having all queries
contend for resources and run slowly, admission control can result in higher overall throughput.
For another example, consider a memory-bound workload such as many large joins or aggregation queries. Each such
query could briefly use many gigabytes of memory to process intermediate results. Because Impala by default cancels
queries that exceed the specified memory limit, running multiple large-scale queries at once might require re-running
some queries that are cancelled. In this case, admission control improves the reliability and stability of the overall
workload by only allowing as many concurrent queries as the overall memory of the cluster can accommodate.
Concurrent Queries and Admission Control
One way to limit resource usage through admission control is to set an upper limit on the number of concurrent queries.
This is the initial technique you might use when you do not have extensive information about memory usage for your
workload. The settings can be specified separately for each dynamic resource pool.
Max Running Queries
Maximum number of concurrently running queries in this pool. The default value is unlimited for CDH 5.7 or higher.
(optional)
The maximum number of queries that can run concurrently in this pool. The default value is unlimited. Any queries
for this pool that exceed Max Running Queries are added to the admission control queue until other queries finish.
You can use Max Running Queries in the early stages of resource management, when you do not have extensive
data about query memory usage, to determine if the cluster performs better overall if throttling is applied to Impala
queries.
For a workload with many small queries, you typically specify a high value for this setting, or leave the default setting
of “unlimited”. For a workload with expensive queries, where some number of concurrent queries saturate the
memory, I/O, CPU, or network capacity of the cluster, set the value low enough that the cluster resources are not
overcommitted for Impala.
Once you have enabled memory-based admission control using other pool settings, you can still use Max Running
Queries as a safeguard. If queries exceed either the total estimated memory or the maximum number of concurrent
queries, they are added to the queue.
Max Queued Queries
Maximum number of queries that can be queued in this pool. The default value is 200 for CDH 5.3 or higher and
50 for previous versions of Impala. (optional)
Queue Timeout
The amount of time, in milliseconds, that a query waits in the admission control queue for this pool before being
canceled. The default value is 60,000 milliseconds.
In the following cases, Queue Timeout is not significant, and you can specify a high value to avoid canceling queries
unexpectedly:
• In a low-concurrency workload where few or no queries are queued
• In an environment without a strict SLA, where it does not matter if queries occasionally take longer than usual
because they are held in admission control
You might also need to increase the value to use Impala with some business intelligence tools that have their own
timeout intervals for queries.
In a high-concurrency workload, especially for queries with a tight SLA, long wait times in admission control can
cause a serious problem. For example, if a query needs to run in 10 seconds, and you have tuned it so that it runs
in 8 seconds, it violates its SLA if it waits in the admission control queue longer than 2 seconds. In a case like this,
set a low timeout value and monitor how many queries are cancelled because of timeouts. This technique helps
you to discover capacity, tuning, and scaling problems early, and helps avoid wasting resources by running expensive
queries that have already missed their SLA.
If you identify some queries that can have a high timeout value, and others that benefit from a low timeout value,
you can create separate pools with different values for this setting.
You can combine these settings with the memory-based approach described in Memory Limits and Admission Control
on page 550. If either the maximum number of or the expected memory usage of the concurrent queries is exceeded,
subsequent queries are queued until the concurrent workload falls below the threshold again.
Memory Limits and Admission Control
Each dynamic resource pool can have an upper limit on the cluster-wide memory used by queries executing in that
pool.
Use the following settings to manage memory-based admission control.
Max Memory
The maximum amount of aggregate memory available across the cluster to all queries executing in this pool. This
should be a portion of the aggregate configured memory for Impala daemons, which will be shown in the settings
dialog next to this option for convenience. Setting this to a non-zero value enables memory based admission control.
Impala determines the expected maximum memory used by all queries in the pool and holds back any further
queries that would result in Max Memory being exceeded.
If you specify Max Memory, you should specify the amount of memory to allocate to each query in this pool. You
can do this in two ways:
• By setting Maximum Query Memory Limit and Minimum Query Memory Limit. This is preferred in CDH 6.1
/ Impala 3.1 and greater and gives Impala flexibility to set aside more memory to queries that are expected to
be memory-hungry.
• By setting Default Query Memory Limit to the exact amount of memory that Impala should set aside for queries
in that pool.
Note that in the following cases, Impala will rely entirely on memory estimates to determine how much memory
to set aside for each query. This is not recommended because it can result in queries not running or being starved
for memory if the estimates are inaccurate. And it can affect other queries running on the same node.
• Max Memory, Maximum Query Memory Limit, and Minimum Query Memory Limit are not set, and the
MEM_LIMIT query option is not set for the query.
• Default Query Memory Limit is set to 0, and the MEM_LIMIT query option is not set for the query.
Minimum Query Memory Limit and Maximum Query Memory Limit
These two options determine the minimum and maximum per-host memory limit that will be chosen by Impala
Admission control for queries in this resource pool. If set, Impala admission control will choose a memory limit
between the minimum and maximum values based on the per-host memory estimate for the query. The memory
limit chosen determines the amount of memory that Impala admission control will set aside for this query on each
host that the query is running on. The aggregate memory across all of the hosts that the query is running on is
counted against the pool’s Max Memory.
Minimum Query Memory Limit must be less than or equal to Maximum Query Memory Limit and Max Memory.
A user can override Impala’s choice of memory limit by setting the MEM_LIMIT query option. If the Clamp MEM_LIMIT
Query Option setting is set to TRUE and the user sets MEM_LIMIT to a value that is outside of the range specified
by these two options, then the effective memory limit will be either the minimum or maximum, depending on
whether MEM_LIMIT is lower than or higher than the range.
For example, assume a resource pool with the following parameters set:
• Minimum Query Memory Limit = 2GB
• Maximum Query Memory Limit = 10GB
If a user tries to submit a query with the MEM_LIMIT query option set to 14 GB, the following would happen:
• If Clamp MEM_LIMIT Query Option = true, admission controller would override MEM_LIMIT with 10 GB and
attempt admission using that value.
• If Clamp MEM_LIMIT Query Option = false, the admission controller will retain the MEM_LIMIT of 14 GB set
by the user and will attempt admission using the value.
Default Query Memory Limit
The default memory limit applied to queries executing in this pool when no explicit MEM_LIMIT query option is set.
The memory limit chosen determines the amount of memory that Impala Admission control will set aside for this
query on each host that the query is running on. The aggregate memory across all of the hosts that the query is
running on is counted against the pool’s Max Memory. This option is deprecated in CDH 6.1 / Impala 3.1 and higher
and is replaced by Maximum Query Memory Limit and Minimum Query Memory Limit.
Do not set this if either Maximum Query Memory Limit or Minimum Query Memory Limit is set.
Clamp MEM_LIMIT Query Option
If this field is not selected, the MEM_LIMIT query option will not be bounded by the Maximum Query Memory
Limit and the Minimum Query Memory Limit values specified for this resource pool. By default, this field is selected
in CDH 6.1 and higher. The field is disabled if both Minimum Query Memory Limit and Maximum Query Memory
Limit are not set.
For example, consider the following scenario:
• The cluster is running impalad daemons on five hosts.
• A dynamic resource pool has Max Memory set to 100 GB.
• The Maximum Query Memory Limit for the pool is 10 GB and Minimum Query Memory Limit is 2 GB. Therefore,
any query running in this pool could use up to 50 GB of memory (Maximum Query Memory Limit * number of
Impala nodes).
• Impala will execute varying numbers of queries concurrently because queries may be given memory limits anywhere
between 2 GB and 10 GB, depending on the estimated memory requirements. For example, Impala may execute
up to 10 small queries with 2 GB memory limits or two large queries with 10 GB memory limits because that is
what will fit in the 100 GB cluster-wide limit when executing on five hosts.
• The executing queries may use less memory than the per-host memory limit or the Max Memory cluster-wide
limit if they do not need that much memory. In general this is not a problem so long as you are able to execute
enough queries concurrently to meet your needs.
You can combine the memory-based settings with the upper limit on concurrent queries described in Concurrent
Queries and Admission Control on page 549. If either the maximum number of or the expected memory usage of the
concurrent queries is exceeded, subsequent queries are queued until the concurrent workload falls below the threshold
again.
How Impala Admission Control Relates to Other Resource Management Tools
The admission control feature is similar in some ways to the Cloudera Manager static partitioning feature, as well as
the YARN resource management framework. These features can be used separately or together. This section describes
some similarities and differences, to help you decide which combination of resource management features to use for
Impala.
Admission control is a lightweight, decentralized system that is suitable for workloads consisting primarily of Impala
queries and other SQL statements. It sets “soft” limits that smooth out Impala memory usage during times of heavy
load, rather than taking an all-or-nothing approach that cancels jobs that are too resource-intensive.
Because the admission control system does not interact with other Hadoop workloads such as MapReduce jobs, you
might use YARN with static service pools on CDH clusters where resources are shared between Impala and other
Hadoop components. This configuration is recommended when using Impala in a multitenant cluster. Devote a
percentage of cluster resources to Impala, and allocate another percentage for MapReduce and other batch-style
workloads. Let admission control handle the concurrency and memory usage for the Impala work within the cluster,
and let YARN manage the work for other components within the cluster. In this scenario, Impala's resources are not
managed by YARN.
The Impala admission control feature uses the same configuration mechanism as the YARN resource manager to map
users to pools and authenticate them.
Although the Impala admission control feature uses a fair-scheduler.xml configuration file behind the scenes,
this file does not depend on which scheduler is used for YARN. You still use this file, and Cloudera Manager can generate
it for you, even when YARN is using the capacity scheduler.
How Impala Schedules and Enforces Limits on Concurrent Queries
The admission control system is decentralized, embedded in each Impala daemon and communicating through the
statestore mechanism. Although the limits you set for memory usage and number of concurrent queries apply
cluster-wide, each Impala daemon makes its own decisions about whether to allow each query to run immediately or
to queue it for a less-busy time. These decisions are fast, meaning the admission control mechanism is low-overhead,
but might be imprecise during times of heavy load across many coordinators. There could be times when more
queries are queued (in aggregate across the cluster) than the specified limit, or when the number of admitted queries
exceeds the expected number. Thus, you typically err on the high side for the size of the queue, because there is not
a big penalty for having a large number of queued queries; and you typically err on the low side for configuring memory
resources, to leave some headroom in case more queries are admitted than expected, without running out of memory
and being cancelled as a result.
To avoid a large backlog of queued requests, you can set an upper limit on the size of the queue for queries that are
queued. When the number of queued queries exceeds this limit, further queries are cancelled rather than being queued.
You can also configure a timeout period per pool, after which queued queries are cancelled, to avoid indefinite waits.
If a cluster reaches this state where queries are cancelled due to too many concurrent requests or long waits for query
execution to begin, that is a signal for an administrator to take action, either by provisioning more resources, scheduling
work on the cluster to smooth out the load, or by doing Impala performance tuning to enable higher throughput.
How Admission Control works with Impala Clients (JDBC, ODBC, HiveServer2)
Most aspects of admission control work transparently with client interfaces such as JDBC and ODBC:
• If a SQL statement is put into a queue rather than running immediately, the API call blocks until the statement is
dequeued and begins execution. At that point, the client program can request to fetch results, which might also
block until results become available.
• If a SQL statement is cancelled because it has been queued for too long or because it exceeded the memory limit
during execution, the error is returned to the client program with a descriptive error message.
In Impala 2.0 and higher, you can submit a SQL SET statement from the client application to change the REQUEST_POOL
query option. This option lets you submit queries to different resource pools, as described in REQUEST_POOL Query
Option on page 355.
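For example, a client application could issue a statement such as the following before submitting its queries (the
pool name here is hypothetical):
SET REQUEST_POOL='root.reporting';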
At any time, the set of queued queries could include queries submitted through multiple different Impala daemon
hosts. All the queries submitted through a particular host will be executed in order, so a CREATE TABLE followed by
an INSERT on the same table would succeed. Queries submitted through different hosts are not guaranteed to be
executed in the order they were received. Therefore, if you are using load-balancing or other round-robin scheduling
where different statements are submitted through different hosts, set up all table structures ahead of time so that
the statements controlled by the queuing system are primarily queries, where order is not significant. Or, if a sequence
of statements needs to happen in strict order (such as an INSERT followed by a SELECT), submit all those statements
through a single session, while connected to the same Impala daemon host.
Admission control has the following limitations or special behavior when used with JDBC or ODBC applications:
• The other resource-related query options, RESERVATION_REQUEST_TIMEOUT and V_CPU_CORES, are no longer
used. Those query options only applied to using Impala with Llama, which is no longer supported.
SQL and Schema Considerations for Admission Control
When queries complete quickly and are tuned for optimal memory usage, there is less chance of performance or
capacity problems during times of heavy load. Before setting up admission control, tune your Impala queries to ensure
that the query plans are efficient and the memory estimates are accurate. Understanding the nature of your workload,
and which queries are the most resource-intensive, helps you to plan how to divide the queries into different pools
and decide what limits to define for each pool.
For large tables, especially those involved in join queries, keep their statistics up to date after loading substantial
amounts of new data or adding new partitions. Use the COMPUTE STATS statement for unpartitioned tables, and
COMPUTE INCREMENTAL STATS for partitioned tables.
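For example, assuming store_sales is unpartitioned and web_logs is partitioned (both table names are illustrative):
COMPUTE STATS store_sales;
COMPUTE INCREMENTAL STATS web_logs;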
When you use dynamic resource pools with a Max Memory setting enabled, you typically override the memory
estimates that Impala makes based on the statistics from the COMPUTE STATS statement. You can set the MEM_LIMIT
query option within a particular session to set an upper memory limit for queries within that session, set a default
MEM_LIMIT for all queries processed by the impalad instance, or set a default MEM_LIMIT for all queries assigned
to a particular dynamic resource pool. By designating a consistent memory limit for a set of similar queries
that use the same resource pool, you avoid unnecessary query queuing or out-of-memory conditions that can arise
during high-concurrency workloads when memory estimates for some queries are inaccurate.
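For example, to apply a consistent 2 GB per-host limit to all queries submitted in the current session (the value is
illustrative; base it on your own workload measurements):
SET MEM_LIMIT=2g;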
Follow other steps from Tuning Impala for Performance on page 565 to tune your queries.
Guidelines for Using Admission Control
The limits imposed by admission control are “soft” limits managed in a decentralized way. Each Impala coordinator node makes
its own decisions about whether to allow queries to run immediately or to queue them. These decisions rely on
information passed back and forth between nodes by the StateStore service. If a sudden surge in requests causes more
queries than anticipated to run concurrently, then the throughput could decrease due to queries spilling to disk or
contending for resources. Or queries could be cancelled if they exceed the MEM_LIMIT setting while running.
In impala-shell, you can also specify which resource pool to direct queries to by setting the REQUEST_POOL query
option.
If you set up different resource pools for different users and groups, consider reusing any classifications you developed
for use with Sentry security. See Enabling Sentry Authorization for Impala on page 87 for details.
Where practical, use Cloudera Manager to configure the admission control parameters. The Cloudera Manager GUI is
much simpler than editing the configuration files directly.
To see how admission control works for particular queries, examine the profile output or the summary output for the
query.
• Profile
The information is available through the PROFILE statement in impala-shell immediately after running a query
in the shell, on the queries page of the Impala debug web UI, or in the Impala log file (basic information at log
level 1, more detailed information at log level 2).
The profile output contains details about the admission decision, such as whether the query was queued or not
and which resource pool it was assigned to. It also includes the estimated and actual memory usage for the query,
so you can fine-tune the configuration for the memory limits of the resource pools.
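For example, in impala-shell you can display the profile immediately after running the statement of interest (the
query shown is illustrative):
SELECT COUNT(*) FROM web_logs;
PROFILE;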
• Summary
Starting in CDH 6.1, the information is available in impala-shell when the LIVE_PROGRESS or LIVE_SUMMARY
query option is set to TRUE.
You can also start an impala-shell session with the --live_progress or --live_summary flags to monitor
all queries in that impala-shell session.
The summary output includes the queuing status consisting of whether the query was queued and what was the
latest queuing reason.
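For example, within an impala-shell session you might enable the live summary before running the workload you
want to observe (the query is illustrative):
SET LIVE_SUMMARY=TRUE;
SELECT COUNT(*) FROM web_logs;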
For details about all the Fair Scheduler configuration settings, see Fair Scheduler Configuration, in particular the
<queue> and <aclSubmitApps> tags that map users and groups to particular resource pools (queues).
Configuring Resource Pools and Admission Control
Impala includes features that balance and maximize resources in your CDH cluster. This topic describes how you can
improve the efficiency of your CDH cluster using those features.
A typical deployment uses the following:
• Creating Static Service Pools
• Using Admission Control
– Setting Per-query Memory Limits
– Creating Dynamic Resource Pools
Creating Static Service Pools
To manage and prioritize workloads on clusters, use the static service pools to allocate dedicated resources for Impala
for predictable resource availability. When static service pools are used, Cloudera Manager creates a cgroup in which
Impala runs. This cgroup limits memory, CPU, and disk I/O according to the static partitioning policy.
When you configure Static Service Pools in Cloudera Manager, the following allocation guidelines are typical:
• HDFS always needs to have a minimum of 5-10% of the resources.
• Generally, YARN and Impala split the rest of the resources.
– For mostly batch workloads, you might allocate YARN 60%, Impala 30%, and HDFS 10%.
– For mostly ad-hoc query workloads, you might allocate Impala 60%, YARN 30%, and HDFS 10%.
Using Admission Control
Within the constraints of the static service pool, you can use dynamic resource pools and admission control to further
subdivide Impala's resource usage in multitenant use cases.
Allocating resources judiciously allows your most important queries to run faster and more reliably.
Note: Impala dynamic resource pools are different from the default YARN dynamic resource pools.
You can turn on dynamic resource pools that are exclusively for use by Impala.
Enabling or Disabling Impala Admission Control in Cloudera Manager
We recommend enabling admission control on all production clusters to alleviate possible capacity issues. The capacity
issues could be because of a high volume of concurrent queries, because of heavy-duty join and aggregation queries
that require large amounts of memory, or because Impala is being used alongside other Hadoop data management
components and the resource usage of Impala must be constrained to work well in a multitenant deployment.
Important:
In CDH 5.8 and higher, admission control and dynamic resource pools are enabled by default. However,
until you configure the settings for the dynamic resource pools, the admission control feature is
effectively not enabled because, with the default settings, each dynamic pool will allow all queries of
any memory requirement to execute in the pool.
1. Go to the Impala service.
2. In the Configuration tab, select Category > Admission Control.
3. Select or clear both the Enable Impala Admission Control checkbox and the Enable Dynamic Resource Pools
checkbox.
4. Enter a Reason for change, and then click Save Changes to commit the changes.
5. Restart the Impala service.
After completing this task, customize the configuration settings for the dynamic resource pools, as described below.
Creating an Impala Dynamic Resource Pool
There is always a resource pool designated as root.default. By default, all Impala queries run in this pool when the
dynamic resource pool feature is enabled for Impala. You create additional pools when your workload includes
identifiable groups of queries (such as from a particular application, or a particular group within your organization)
that have their own requirements for concurrency, memory use, or service level agreement (SLA). Each pool has its
own settings related to memory, number of queries, and timeout interval.
1. Select Clusters > Cluster name > Dynamic Resource Pool Configuration. If the cluster has an Impala service, the
Resource Pools tab displays under the Impala Admission Control tab.
2. Click the Impala Admission Control tab.
3. Click Create Resource Pool.
4. Specify a name and resource limits for the pool:
• In the Resource Pool Name field, type a unique name containing only alphanumeric characters.
• Optionally, click the Submission Access Control tab to specify which users and groups can submit queries.
By default, anyone can submit queries. To restrict this permission, select the Allow these users and groups
option and provide a comma-delimited list of users and groups in the Users and Groups fields respectively.
5. Click Create.
6. Click Refresh Dynamic Resource Pools.
Configuration Settings for Impala Dynamic Resource Pool
Impala dynamic resource pools support the following settings.
Max Memory
Maximum amount of aggregate memory available across the cluster to all queries executing in this pool. This should
be a portion of the aggregate configured memory for Impala daemons, which will be shown in the settings dialog
next to this option for convenience. Setting this to a non-zero value enables memory based admission control.
Impala determines the expected maximum memory used by all queries in the pool and holds back any further
queries that would result in Max Memory being exceeded.
If you specify Max Memory, you should specify the amount of memory to allocate to each query in this pool. You
can do this in two ways:
• By setting Maximum Query Memory Limit and Minimum Query Memory Limit. This is preferred in CDH 6.1
and higher and gives Impala flexibility to set aside more memory to queries that are expected to be
memory-hungry.
• By setting Default Query Memory Limit to the exact amount of memory that Impala should set aside for queries
in that pool.
Note that if you do not set any of the above options, or set Default Query Memory Limit to 0, Impala will rely
entirely on memory estimates to determine how much memory to set aside for each query. This is not recommended
because it can result in queries not running or being starved for memory if the estimates are inaccurate.
For example, consider the following scenario:
• The cluster is running impalad daemons on five hosts.
• A dynamic resource pool has Max Memory set to 100 GB.
• The Maximum Query Memory Limit for the pool is 10 GB and Minimum Query Memory Limit is 2 GB. Therefore,
any query running in this pool could use up to 50 GB of memory (Maximum Query Memory Limit * number
of Impala nodes).
• Impala will execute varying numbers of queries concurrently because queries may be given memory limits
anywhere between 2 GB and 10 GB, depending on the estimated memory requirements. For example, Impala
may execute up to 10 small queries with 2 GB memory limits or two large queries with 10 GB memory limits
because that is what will fit in the 100 GB cluster-wide limit when executing on five hosts.
• The executing queries may use less memory than the per-host memory limit or the Max Memory cluster-wide
limit if they do not need that much memory. In general this is not a problem so long as you are able to execute
enough queries concurrently to meet your needs.
Minimum Query Memory Limit and Maximum Query Memory Limit
These two options determine the minimum and maximum per-host memory limit that will be chosen by Impala
Admission control for queries in this resource pool. If set, Impala admission control will choose a memory limit
between the minimum and maximum value based on the per-host memory estimate for the query. The memory
limit chosen determines the amount of memory that Impala admission control will set aside for this query on each
host that the query is running on. The aggregate memory across all of the hosts that the query is running on is
counted against the pool’s Max Memory.
Minimum Query Memory Limit must be less than or equal to Maximum Query Memory Limit and Max Memory.
You can override Impala’s choice of memory limit by setting the MEM_LIMIT query option. If the Clamp MEM_LIMIT
Query Option is selected and the user sets MEM_LIMIT to a value that is outside of the range specified by these
two options, then the effective memory limit will be either the minimum or maximum, depending on whether
MEM_LIMIT is lower than or higher than the range.
Default Query Memory Limit
The default memory limit applied to queries executing in this pool when no explicit MEM_LIMIT query option is set.
The memory limit chosen determines the amount of memory that Impala Admission control will set aside for this
query on each host that the query is running on. The aggregate memory across all of the hosts that the query is
running on is counted against the pool’s Max Memory.
This option is deprecated from CDH 6.1 and higher and is replaced by Maximum Query Memory Limit and Minimum
Query Memory Limit. Do not set this field if either Maximum Query Memory Limit or Minimum Query Memory
Limit is set.
Max Running Queries
Maximum number of concurrently running queries in this pool. The default value is unlimited for CDH 5.7 or higher.
(optional)
The maximum number of queries that can run concurrently in this pool. The default value is unlimited. Any queries
for this pool that exceed Max Running Queries are added to the admission control queue until other queries finish.
You can use Max Running Queries in the early stages of resource management, when you do not have extensive
data about query memory usage, to determine if the cluster performs better overall if throttling is applied to Impala
queries.
For a workload with many small queries, you typically specify a high value for this setting, or leave the default setting
of “unlimited”. For a workload with expensive queries, where some number of concurrent queries saturate the
memory, I/O, CPU, or network capacity of the cluster, set the value low enough that the cluster resources are not
overcommitted for Impala.
Once you have enabled memory-based admission control using other pool settings, you can still use Max Running
Queries as a safeguard. If queries exceed either the total estimated memory or the maximum number of concurrent
queries, they are added to the queue.
Max Queued Queries
Maximum number of queries that can be queued in this pool. The default value is 200 for CDH 5.3 or higher and
50 for previous versions of Impala. (optional)
Queue Timeout
The amount of time, in milliseconds, that a query waits in the admission control queue for this pool before being
canceled. The default value is 60,000 milliseconds.
In the following cases, Queue Timeout is not significant, and you can specify a high value to avoid canceling queries
unexpectedly:
• In a low-concurrency workload where few or no queries are queued
• In an environment without a strict SLA, where it does not matter if queries occasionally take longer than usual
because they are held in admission control
You might also need to increase the value to use Impala with some business intelligence tools that have their own
timeout intervals for queries.
In a high-concurrency workload, especially for queries with a tight SLA, long wait times in admission control can
cause a serious problem. For example, if a query needs to run in 10 seconds, and you have tuned it so that it runs
in 8 seconds, it violates its SLA if it waits in the admission control queue longer than 2 seconds. In a case like this,
set a low timeout value and monitor how many queries are cancelled because of timeouts. This technique helps
you to discover capacity, tuning, and scaling problems early, and helps avoid wasting resources by running expensive
queries that have already missed their SLA.
If you identify some queries that can have a high timeout value, and others that benefit from a low timeout value,
you can create separate pools with different values for this setting.
Clamp MEM_LIMIT Query Option
If this field is not selected, the MEM_LIMIT query option will not be bounded by the Maximum Query Memory
Limit and the Minimum Query Memory Limit values specified for this resource pool. By default, this field is selected
in CDH 6.1 and higher. The field is disabled if both Minimum Query Memory Limit and Maximum Query Memory
Limit are not set.
Setting Per-query Memory Limits
Use per-query memory limits to prevent queries from consuming excessive memory resources that impact other
queries. Cloudera recommends that you set the query memory limits whenever possible.
If you set the Max Memory for a resource pool, Impala attempts to throttle queries if there is not enough memory to
run them within the specified resources.
Only use admission control with maximum memory resources if you can ensure there are query memory limits. Set
the pool Maximum Query Memory Limit to be certain. You can override this setting with the MEM_LIMIT query option,
if necessary.
Typically, you set query memory limits using the set MEM_LIMIT=Xg; query option. When you find the right value
for your business case, memory-based admission control works well. The potential downside is that queries that
attempt to use more memory might perform poorly or even be cancelled.
To find a reasonable Maximum Query Memory Limit:
1. Run the workload.
2. In Cloudera Manager, go to Impala > Queries.
3. Click Select Attributes.
4. Select Per Node Peak Memory Usage and click Update.
5. Allow the system time to gather information, then click the Show Histogram icon to see the results.
6. Use the histogram to find a value that accounts for most queries. Queries that require more resources than this
limit should explicitly set the memory limit to ensure they can run to completion.
Configuring Admission Control in Command Line Interface
To configure admission control, use a combination of startup options for the Impala daemon and edit or create the
configuration files fair-scheduler.xml and llama-site.xml.
For a straightforward configuration using a single resource pool named default, you can specify configuration options
on the command line and skip the fair-scheduler.xml and llama-site.xml configuration files.
For an advanced configuration with multiple resource pools using different settings:
1. Set up the fair-scheduler.xml and llama-site.xml configuration files manually.
2. Provide the paths to each one using the impalad command-line options, --fair_scheduler_allocation_path
and --llama_site_path respectively.
The Impala admission control feature only uses the Fair Scheduler configuration settings to determine how to map
users and groups to different resource pools. For example, you might set up different resource pools with separate
memory limits, and maximum number of concurrent and queued queries, for different categories of users within your
organization. For details about all the Fair Scheduler configuration settings, see the Apache wiki.
The Impala admission control feature only uses a small subset of possible settings from the llama-site.xml
configuration file:
llama.am.throttling.maximum.placed.reservations.queue_name
llama.am.throttling.maximum.queued.reservations.queue_name
impala.admission-control.pool-default-query-options.queue_name
impala.admission-control.pool-queue-timeout-ms.queue_name
The impala.admission-control.pool-queue-timeout-ms setting specifies the timeout value for this pool, in
milliseconds.
The impala.admission-control.pool-default-query-options setting designates the default query options
for all queries that run in this pool. Its argument value is a comma-delimited string of 'key=value' pairs, for
example, 'key1=val1,key2=val2'. This is where you might set a default memory limit for all queries
in the pool, using an argument such as MEM_LIMIT=5G.
The impala.admission-control.* configuration settings are available in CDH 5.7 / Impala 2.5 and higher.
Example Admission Control Configuration Files
For clusters not managed by Cloudera Manager, here are sample fair-scheduler.xml and llama-site.xml files
that define resource pools root.default, root.development, and root.production. These files define resource
pools for Impala admission control and are separate from the similar fair-scheduler.xml that defines resource
pools for YARN.
fair-scheduler.xml:
Although Impala does not use the vcores value, you must still specify it to satisfy YARN requirements for the file
contents.
Each <aclSubmitApps> tag (other than the one for root) contains a comma-separated list of users, then a space,
then a comma-separated list of groups; these are the users and groups allowed to submit Impala statements to the
corresponding resource pool.
If you leave the <aclSubmitApps> element empty for a pool, nobody can submit directly to that pool; child pools
can specify their own values to authorize users and groups to submit to those pools.
<allocations>
    <queue name="root">
        <aclSubmitApps> </aclSubmitApps>
        <queue name="default">
            <maxResources>50000 mb, 0 vcores</maxResources>
            <aclSubmitApps>*</aclSubmitApps>
        </queue>
        <queue name="development">
            <maxResources>200000 mb, 0 vcores</maxResources>
            <aclSubmitApps>user1,user2 dev,ops,admin</aclSubmitApps>
        </queue>
        <queue name="production">
            <maxResources>1000000 mb, 0 vcores</maxResources>
            <aclSubmitApps>ops,admin</aclSubmitApps>
        </queue>
    </queue>
</allocations>
llama-site.xml:
<configuration>
  <property>
    <name>llama.am.throttling.maximum.placed.reservations.root.default</name>
    <value>10</value>
  </property>
  <property>
    <name>llama.am.throttling.maximum.queued.reservations.root.default</name>
    <value>50</value>
  </property>
  <property>
    <name>impala.admission-control.pool-default-query-options.root.default</name>
    <value>mem_limit=128m,query_timeout_s=20,max_io_buffers=10</value>
  </property>
  <property>
    <name>impala.admission-control.pool-queue-timeout-ms.root.default</name>
    <value>30000</value>
  </property>
  <property>
    <name>impala.admission-control.max-query-mem-limit.root.default.regularPool</name>
    <value>1610612736</value>
  </property>
  <property>
    <name>impala.admission-control.min-query-mem-limit.root.default.regularPool</name>
    <value>52428800</value>
  </property>
  <property>
    <name>impala.admission-control.clamp-mem-limit-query-option.root.default.regularPool</name>
    <value>true</value>
  </property>
</configuration>
Configuring Cluster-wide Admission Control
Important: Although the following options are still present in the Cloudera Manager interface under
the Admission Control configuration settings dialog, avoid using them in CDH 5.7 / Impala 2.5 and
higher. These settings only apply if you enable admission control but leave dynamic resource pools
disabled. In CDH 5.7 / Impala 2.5 and higher, we recommend that you set up dynamic resource pools
and customize the settings for each pool as described in Using Admission Control on page 555.
The following Impala configuration options let you adjust the settings of the admission control feature. When supplying
the options on the impalad command line, prepend the option name with --.
queue_wait_timeout_ms
Purpose: Maximum amount of time (in milliseconds) that a request waits to be admitted before timing out.
Type: int64
Default: 60000
default_pool_max_requests
Purpose: Maximum number of concurrent outstanding requests allowed to run before incoming requests are
queued. Because this limit applies cluster-wide, but each Impala node makes independent decisions to run queries
immediately or queue them, it is a soft limit; the overall number of concurrent queries might be slightly higher
during times of heavy load. A negative value indicates no limit. Ignored if fair_scheduler_config_path and
llama_site_path are set.
Type: int64
Default: -1, meaning unlimited (prior to CDH 5.7 / Impala 2.5 the default was 200)
default_pool_max_queued
Purpose: Maximum number of requests allowed to be queued before rejecting requests. Because this limit applies
cluster-wide, but each Impala node makes independent decisions to run queries immediately or queue them, it is
a soft limit; the overall number of queued queries might be slightly higher during times of heavy load. A negative
value or 0 indicates requests are always rejected once the maximum concurrent requests are executing. Ignored if
fair_scheduler_config_path and llama_site_path are set.
Type: int64
Default: unlimited
default_pool_mem_limit
Purpose: Maximum amount of memory (across the entire cluster) that all outstanding requests in this pool can use
before new requests to this pool are queued. Specified in bytes, megabytes, or gigabytes by a number followed by
the suffix b (optional), m, or g, either uppercase or lowercase. You can specify floating-point values for megabytes
and gigabytes, to represent fractional numbers such as 1.5. You can also specify it as a percentage of the physical
memory by specifying the suffix %. 0 or no setting indicates no limit. Defaults to bytes if no unit is given. Because
this limit applies cluster-wide, but each Impala node makes independent decisions to run queries immediately or
queue them, it is a soft limit; the overall memory used by concurrent queries might be slightly higher during times
of heavy load. Ignored if fair_scheduler_config_path and llama_site_path are set.
Note: Impala relies on the statistics produced by the COMPUTE STATS statement to estimate
memory usage for each query. See COMPUTE STATS Statement on page 219 for guidelines about
how and when to use this statement.
Type: string
Default: "" (empty string, meaning unlimited)
disable_pool_max_requests
Purpose: Disables all per-pool limits on the maximum number of running requests.
Type: Boolean
Default: false
disable_pool_mem_limits
Purpose: Disables all per-pool mem limits.
Type: Boolean
Default: false
fair_scheduler_allocation_path
Purpose: Path to the fair scheduler allocation file (fair-scheduler.xml).
Type: string
Default: "" (empty string)
Usage notes: Admission control only uses a small subset of the settings that can go in this file, as described earlier in this section.
For details about all the Fair Scheduler configuration settings, see the Apache wiki.
llama_site_path
Purpose: Path to the configuration file used by admission control (llama-site.xml). If set,
fair_scheduler_allocation_path must also be set.
Type: string
Default: "" (empty string)
Usage notes: Admission control only uses a few of the settings that can go in this file, as described earlier in this section.
Admission Control Sample Scenario
Anne Chang is administrator for an enterprise data hub that runs a number of workloads, including Impala.
Anne has a 20-node cluster that uses Cloudera Manager static partitioning. Because of the heavy Impala workload,
Anne needs to make sure Impala gets enough resources. While the best configuration values might not be known in
advance, she decides to start by allocating 50% of resources to Impala. Each node has 128 GiB dedicated to each
impalad. Impala has 2560 GiB in aggregate that can be shared across the resource pools she creates.
Next, Anne studies the workload in more detail. After some research, she might choose to revisit these initial values
for static partitioning.
To figure out how to further allocate Impala’s resources, Anne needs to consider the workloads and users, and determine
their requirements. There are a few main sources of Impala queries:
• Large reporting queries executed by an external process/tool. These are critical business intelligence queries that
are important for business decisions. It is important that they get the resources they need to run. There typically
are not many of these queries at a given time.
• Frequent, small queries generated by a web UI. These queries scan a limited amount of data and do not require
expensive joins or aggregations. These queries are important, but not as critical; if one fails or is delayed, the client
can resend the query or the end user can refresh the page.
• Occasionally, expert users might run ad-hoc queries. The queries can vary significantly in their resource
requirements. While Anne wants a good experience for these users, it is hard to control what they do (for example,
submitting inefficient or incorrect queries by mistake). Anne restricts these queries by default and tells users to
reach out to her if they need more resources.
To set up admission control for this workload, Anne first runs the workloads independently, so that she can observe
the workload’s resource usage in Cloudera Manager. If they could not easily be run manually, but had been run in the
past, Anne uses the history information from Cloudera Manager. It can be helpful to use other search criteria (for
example, user) to isolate queries by workload. Anne uses the Cloudera Manager chart for Per-Node Peak Memory
usage to identify the maximum memory requirements for the queries.
From this data, Anne observes the following about the queries in the groups above:
• Large reporting queries use up to 32 GiB per node. There are typically 1 or 2 queries running at a time. On one
occasion, she observed that 3 of these queries were running concurrently. Queries can take 3 minutes to complete.
• Web UI-generated queries usually use between 100 MiB and 4 GiB of memory per node, but occasionally as much
as 10 GiB per node. Queries take, on average, 5 seconds, and there can be as many as 140 incoming queries per
minute.
• Anne has little data on ad hoc queries, but some are trivial (approximately 100 MiB per node), others join several
tables (requiring a few GiB per node), and one user submitted a huge cross join of all tables that used all system
resources (that was likely a mistake).
Based on these observations, Anne creates the admission control configuration with the following pools:
XL_Reporting
Property                        Value
Max Memory                      1280 GiB
Maximum Query Memory Limit      32 GiB
Minimum Query Memory Limit      32 GiB
Max Running Queries             2
Queue Timeout                   5 minutes
This pool is for large reporting queries. To support running 2 queries at a time, the pool memory resources are set to
1280 GiB (aggregate cluster memory). This is for 2 queries, each with 32 GiB per node, across 20 nodes. Anne sets the
pool’s Maximum Query Memory Limit to 32 GiB so that no query uses more than 32 GiB on any given node. She sets
Max Running Queries to 2 (though it is not strictly necessary that she do so). She increases the pool's queue timeout to 5 minutes
in case a third query comes in and has to wait. She does not expect more than 3 concurrent queries, and she does not
want them to wait longer than that anyway, so she does not increase the queue timeout beyond 5 minutes. If the workload increases in the
future, she might choose to adjust the configuration or buy more hardware.
HighThroughput_UI
Property                        Value
Max Memory                      960 GiB (inferred)
Maximum Query Memory Limit      4 GiB
Minimum Query Memory Limit      2 GiB
Max Running Queries             12
Queue Timeout                   5 minutes
This pool is used for the small, high throughput queries generated by the web tool. Anne sets the Maximum Query
Memory Limit to 4 GiB per node, and sets Max Running Queries to 12. This implies a maximum amount of memory
per node used by the queries in this pool: 48 GiB per node (12 queries * 4 GiB per node memory limit).
Notice that Anne does not set the pool memory resources, but does set the pool’s Maximum Query Memory Limit.
This is intentional: admission control processes queries faster when a pool uses the Max Running Queries limit instead
of the peak memory resources.
This should be enough memory for most queries, since only a few go over 4 GiB per node. For those that do require
more memory, they can probably still complete with less memory (spilling if necessary). If, on occasion, a query cannot
run with this much memory and it fails, Anne might reconsider this configuration later, or perhaps she does not need
to worry about a few rare failures from this web UI.
With regard to throughput, since these queries take around 5 seconds and she is allowing 12 concurrent queries, the
pool should be able to handle approximately 144 queries per minute, which is enough for the peak maximum expected
of 140 queries per minute. In case there is a large burst of queries, Anne wants them to queue. The default maximum
size of the queue is already 200, which should be more than large enough. Anne does not need to change it.
Default
Property                        Value
Max Memory                      320 GiB
Maximum Query Memory Limit      4 GiB
Minimum Query Memory Limit      2 GiB
Max Running Queries             Unlimited
Queue Timeout                   60 seconds
The default pool (which already exists) is a catch all for ad-hoc queries. Anne wants to use the remaining memory not
used by the first two pools, 16 GiB per node (XL_Reporting uses 64 GiB per node, HighThroughput_UI uses 48 GiB per
node). For the other pools to get the resources they expect, she must still set the Max Memory resources and the
Maximum Query Memory Limit. She sets the Max Memory resources to 320 GiB (16 * 20). She sets the Maximum
Query Memory Limit to 4 GiB per node for now. That is somewhat arbitrary, but satisfies some of the ad hoc queries
she observed. If someone writes a bad query by mistake, she does not actually want it using all the system resources.
If a user has a large query to submit, an expert user can override the Maximum Query Memory Limit (up to 16 GiB per
node, since that is bound by the pool Max Memory resources). If that is still insufficient for this user’s workload, the
user should work with Anne to adjust the settings and perhaps create a dedicated pool for the workload.
Tuning Impala for Performance
The following sections explain the factors affecting the performance of Impala features, and procedures for tuning,
monitoring, and benchmarking Impala queries and other SQL operations.
This section also describes techniques for maximizing Impala scalability. Scalability is tied to performance: it means
that performance remains high as the system workload increases. For example, reducing the disk I/O performed by a
query can speed up an individual query, and at the same time improve scalability by making it practical to run more
queries simultaneously. Sometimes, an optimization technique improves scalability more than performance. For
example, reducing memory usage for a query might not change the query performance much, but might improve
scalability by allowing more Impala queries or other kinds of jobs to run at the same time without running out of
memory.
Note:
Before starting any performance tuning or benchmarking, make sure your system is configured with
all the recommended minimum hardware requirements from Hardware Requirements on page 24
and software settings from Post-Installation Configuration for Impala on page 36.
• Partitioning for Impala Tables on page 625. This technique physically divides the data based on the different values
in frequently queried columns, allowing queries to skip reading a large percentage of the data in a table.
• Performance Considerations for Join Queries on page 568. Joins are the main class of queries that you can tune at
the SQL level, as opposed to changing physical factors such as the file format or the hardware configuration. The
related topics Overview of Column Statistics on page 576 and Overview of Table Statistics on page 575 are also
important primarily for join performance.
• Overview of Table Statistics on page 575 and Overview of Column Statistics on page 576. Gathering table and column
statistics, using the COMPUTE STATS statement, helps Impala automatically optimize the performance for join
queries, without requiring changes to SQL query statements. (This process is greatly simplified in Impala 1.2.2 and
higher, because the COMPUTE STATS statement gathers both kinds of statistics in one operation, and does not
require any setup and configuration as was previously necessary for the ANALYZE TABLE statement in Hive.)
• Testing Impala Performance on page 601. Do some post-setup testing to ensure Impala is using optimal settings
for performance, before conducting any benchmark tests.
• Benchmarking Impala Queries on page 588. The configuration and sample data that you use for initial experiments
with Impala is often not appropriate for doing performance tests.
• Controlling Impala Resource Usage on page 588. The more memory Impala can utilize, the better query performance
you can expect. In a cluster running other kinds of workloads as well, you must make tradeoffs to make sure all
Hadoop components have enough memory to perform well, so you might cap the memory that Impala can use.
• Using Impala with the Amazon S3 Filesystem on page 692. Queries against data stored in the Amazon Simple
Storage Service (S3) have different performance characteristics than when the data is stored in HDFS.
A good source of tips related to scalability and performance tuning is the Impala Cookbook presentation. These slides
are updated periodically as new features come out and new benchmarks are performed.
Impala Performance Guidelines and Best Practices
Here are performance guidelines and best practices that you can use during planning, experimentation, and performance
tuning for an Impala-enabled CDH cluster. All of this information is also available in more detail elsewhere in the Impala
documentation; it is gathered together here to serve as a cookbook and emphasize which performance techniques
typically provide the highest return on investment.
Choose the appropriate file format for the data
Typically, for large volumes of data (multiple gigabytes per table or partition), the Parquet file format performs best
because of its combination of columnar storage layout, large I/O request size, and compression and encoding. See
How Impala Works with Hadoop File Formats on page 634 for comparisons of all file formats supported by Impala, and
Using the Parquet File Format with Impala Tables on page 643 for details about the Parquet file format.
Note: For smaller volumes of data, a few gigabytes or less for each table or partition, you might not
see significant performance differences between file formats. At small data volumes, reduced I/O
from an efficient compressed file format can be counterbalanced by reduced opportunity for parallel
execution. When planning for a production deployment or conducting benchmarks, always use realistic
data volumes to get a true picture of performance and scalability.
Avoid data ingestion processes that produce many small files
When producing data files outside of Impala, prefer either text format or Avro, where you can build up the files row
by row. Once the data is in Impala, you can convert it to the more efficient Parquet format and split into multiple data
files using a single INSERT ... SELECT statement. Or, if you have the infrastructure to produce multi-megabyte
Parquet files as part of your data preparation process, do that and skip the conversion step inside Impala.
Always use INSERT ... SELECT to copy significant volumes of data from table to table within Impala. Avoid INSERT
... VALUES for any substantial volume of data or performance-critical tables, because each such statement produces
a separate tiny data file. See INSERT Statement on page 277 for examples of the INSERT ... SELECT syntax.
For example, if you have thousands of partitions in a Parquet table, each with less than 256 MB of data, consider
partitioning in a less granular way, such as by year / month rather than year / month / day. If an inefficient data ingestion
process produces thousands of data files in the same table or partition, consider compacting the data by performing
an INSERT ... SELECT to copy all the data to a different table; the data will be reorganized into a smaller number
of larger files by this process.
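As a rough illustration of this compaction step, either a CREATE TABLE ... AS SELECT or an INSERT ... SELECT into an
existing table works; the table names below are illustrative:

-- Copy the data into a new Parquet table, producing fewer, larger files.
create table sales_compacted stored as parquet as
  select * from sales_many_small_files;

-- Or, if the target table already exists:
insert overwrite table sales_compacted
  select * from sales_many_small_files;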
Choose partitioning granularity based on actual data volume
Partitioning is a technique that physically divides the data based on values of one or more columns, such as by year,
month, day, region, city, section of a web site, and so on. When you issue queries that request a specific value or range
of values for the partition key columns, Impala can avoid reading the irrelevant data, potentially yielding a huge savings
in disk I/O.
When deciding which column(s) to use for partitioning, choose the right level of granularity. For example, should you
partition by year, month, and day, or only by year and month? Choose a partitioning strategy that puts at least 256
MB of data in each partition, to take advantage of HDFS bulk I/O and Impala distributed queries.
Over-partitioning can also cause query planning to take longer than necessary, as Impala prunes the unnecessary
partitions. Ideally, keep the number of partitions in the table under 30 thousand.
When preparing data files to go in a partition directory, create several large files rather than many small ones. If you
receive data in the form of many small files and have no control over the input format, consider using the INSERT
... SELECT syntax to copy data from one table or partition to another, which compacts the files into a relatively
small number (based on the number of nodes in the cluster).
If you need to reduce the overall number of partitions and increase the amount of data in each partition, first look for
partition key columns that are rarely referenced or are referenced in non-critical queries (not subject to an SLA). For
example, your web site log data might be partitioned by year, month, day, and hour, but if most queries roll up the
results by day, perhaps you only need to partition by year, month, and day.
If you need to reduce the granularity even more, consider creating “buckets”, computed values corresponding to
different sets of partition key values. For example, you can use the TRUNC() function with a TIMESTAMP column to
group date and time values based on intervals such as week or quarter. See Impala Date and Time Functions on page
424 for details.
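For example, here is a hedged sketch of deriving coarser bucket values with TRUNC(); the events table and event_ts
column are illustrative:

-- Group TIMESTAMP values into quarter and week buckets.
select trunc(event_ts, 'Q')  as quarter_start,
       trunc(event_ts, 'WW') as week_start,
       count(*) as events_in_bucket
  from events
 group by trunc(event_ts, 'Q'), trunc(event_ts, 'WW');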
See Partitioning for Impala Tables on page 625 for full details and performance considerations for partitioning.
Use smallest appropriate integer types for partition key columns
Although it is tempting to use strings for partition key columns, since those values are turned into HDFS directory
names anyway, you can minimize memory usage by using numeric values for common partition key fields such as
YEAR, MONTH, and DAY. Use the smallest integer type that holds the appropriate range of values, typically TINYINT
for MONTH and DAY, and SMALLINT for YEAR. Use the EXTRACT() function to pull out individual date and time fields
from a TIMESTAMP value, and CAST() the return value to the appropriate integer type.
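For example, a minimal sketch of loading a table partitioned by numeric YEAR, MONTH, and DAY columns; the
events_by_day and events_raw tables and the event_ts column are illustrative:

-- Derive small integer partition key values from a TIMESTAMP column.
insert into events_by_day partition (year, month, day)
  select val,
         cast(extract(year from event_ts) as smallint) as year,
         cast(extract(month from event_ts) as tinyint) as month,
         cast(extract(day from event_ts) as tinyint) as day
    from events_raw;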
Choose an appropriate Parquet block size
By default, the Impala INSERT ... SELECT statement creates Parquet files with a 256 MB block size. (This default
was changed in Impala 2.0. Formerly, the limit was 1 GB, but Impala made conservative estimates about compression,
resulting in files that were smaller than 1 GB.)
Each Parquet file written by Impala is a single block, allowing the whole file to be processed as a unit by a single host.
As you copy Parquet files into HDFS or between HDFS filesystems, use hdfs dfs -pb to preserve the original block
size.
If there are only one or a few data blocks in your Parquet table, or in a partition that is the only one accessed by a query,
then you might experience a slowdown for a different reason: not enough data to take advantage of Impala's parallel
distributed queries. Each data block is processed by a single core on one of the DataNodes. In a 100-node cluster of
16-core machines, you could potentially process thousands of data files simultaneously. You want to find a sweet spot
between “many tiny files” and “single giant file” that balances bulk I/O and parallel processing. You can set the
PARQUET_FILE_SIZE query option before doing an INSERT ... SELECT statement to reduce the size of each
generated Parquet file. (Specify the file size as an absolute number of bytes, or in Impala 2.0 and later, in units ending
with m for megabytes or g for gigabytes.) Run benchmarks with different file sizes to find the right balance point for
your particular data volume.
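For example, to generate smaller files when rewriting a table (a minimal sketch; the table names are illustrative and
128 MB is just one possible setting):

-- Write roughly 128 MB Parquet files instead of the default 256 MB.
-- The setting applies to the rest of the session.
set parquet_file_size=128m;
insert overwrite table sales_parquet select * from sales_staging;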
Gather statistics for all tables used in performance-critical or high-volume join queries
Gather the statistics with the COMPUTE STATS statement. See Performance Considerations for Join Queries on page
568 for details.
Minimize the overhead of transmitting results back to the client
Use techniques such as the following; a brief SQL example appears after this list:
• Aggregation. If you need to know how many rows match a condition, the total values of matching values from
some column, the lowest or highest matching value, and so on, call aggregate functions such as COUNT(), SUM(),
and MAX() in the query rather than sending the result set to an application and doing those computations there.
Remember that the size of an unaggregated result set could be huge, requiring substantial time to transmit across
the network.
• Filtering. Use all applicable tests in the WHERE clause of a query to eliminate rows that are not relevant, rather
than producing a big result set and filtering it using application logic.
• LIMIT clause. If you only need to see a few sample values from a result set, or the top or bottom values from a
query using ORDER BY, include the LIMIT clause to reduce the size of the result set rather than asking for the
full result set and then throwing most of the rows away.
• Avoid overhead from pretty-printing the result set and displaying it on the screen. When you retrieve the results
through impala-shell, use impala-shell options such as -B and --output_delimiter to produce results
without special formatting, and redirect output to a file rather than printing to the screen. Consider using INSERT
... SELECT to write the results directly to new files in HDFS. See impala-shell Configuration Options on page
714 for details about the impala-shell command-line options.
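The following sketch contrasts fetching a raw result set with pushing the aggregation, filtering, and LIMIT into the
query itself; the web_logs table and its columns are illustrative:

-- Expensive: ships every matching row back to the client.
select * from web_logs where status_code = 404;

-- Cheaper: the work stays in Impala and only small results are returned.
select count(*) from web_logs where status_code = 404;

select url, count(*) as hits
  from web_logs
 where status_code = 404
 group by url
 order by hits desc
 limit 10;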
Verify that your queries are planned in an efficient logical manner
Examine the EXPLAIN plan for a query before actually running it. See EXPLAIN Statement on page 271 and Using the
EXPLAIN Plan for Performance Tuning on page 602 for details.
Verify performance characteristics of queries
Verify that the low-level aspects of I/O, memory usage, network bandwidth, CPU utilization, and so on are within
expected ranges by examining the query profile for a query after running it. See Using the Query Profile for Performance
Tuning on page 604 for details.
Use appropriate operating system settings
See Optimizing Performance in CDH for recommendations about operating system settings that you can change to
influence Impala performance. In particular, you might find that changing the vm.swappiness Linux kernel setting
to a non-zero value improves overall performance.
Hotspot analysis
In the context of Impala, a hotspot is defined as “an Impala daemon that for a single query or a workload is spending
a far greater amount of time processing data relative to its neighbours”.
Before discussing the options to tackle this issue some background is first required to understand how this problem
can occur.
By default, the scheduling of scan based plan fragments is deterministic. This means that for multiple queries needing
to read the same block of data, the same node will be picked to host the scan. The default scheduling logic does not
take into account node workload from prior queries. The complexity of materializing a tuple depends on a few factors,
namely: decoding and decompression. If the tuples are densely packed into data pages due to good
encoding/compression ratios, there will be more work required when reconstructing the data. Each compression codec
offers different performance tradeoffs and should be considered before writing the data. Due to the deterministic
nature of the scheduler, single nodes can become bottlenecks for highly concurrent queries that use the same tables.
If, for example, a Parquet based dataset is tiny, e.g. a small dimension table, such that it fits into a single HDFS block
(Impala by default will create 256 MB blocks when Parquet is used, each containing a single row group) then there are
a number of options that can be considered to resolve the potential scheduling hotspots when querying this data:
• In CDH 5.7 and higher, the scheduler’s deterministic behaviour can be changed using the following query options:
REPLICA_PREFERENCE and RANDOM_REPLICA. For a detailed description of each of these modes see IMPALA-2696.
A brief sketch appears after this list.
• HDFS caching can be used to cache block replicas. This will cause the Impala scheduler to randomly pick (from
CDH 5.4 and higher) a node that is hosting a cached block replica for the scan. Note, although HDFS caching has
benefits, it serves only to help with the reading of raw block data and not cached tuple data, but with the right
number of cached replicas (by default, HDFS only caches one replica), even load distribution can be achieved for
smaller datasets.
• Do not compress the table data. The uncompressed table data spans more nodes and eliminates skew caused by
compression.
• Reduce the Parquet file size via the PARQUET_FILE_SIZE query option when writing the table data. Using this
approach the data will span more nodes. However it’s not recommended to drop the size below 32 MB.
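For example, the first option can be applied per session before querying the small table; a minimal sketch assuming
the REPLICA_PREFERENCE query option described above, with an illustrative table name:

-- Treat local disk replicas as equally preferable to cached replicas,
-- so the scan can be scheduled on more of the hosts holding replicas.
set replica_preference=disk_local;
select count(*) from small_dimension_table;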
Performance Considerations for Join Queries
Queries involving join operations often require more tuning than queries that refer to only one table. The maximum
size of the result set from a join query is the product of the number of rows in all the joined tables. When joining several
tables with millions or billions of rows, any missed opportunity to filter the result set, or other inefficiency in the query,
could lead to an operation that does not finish in a practical time and has to be cancelled.
The simplest technique for tuning an Impala join query is to collect statistics on each table involved in the join using
the COMPUTE STATS statement, and then let Impala automatically optimize the query based on the size of each table,
number of distinct values of each column, and so on. The COMPUTE STATS statement and the join optimization are
new features introduced in Impala 1.2.2. For accurate statistics about each table, issue the COMPUTE STATS statement
after loading the data into that table, and again if the amount of data changes substantially due to an INSERT, LOAD
DATA, adding a partition, and so on.
If statistics are not available for all the tables in the join query, or if Impala chooses a join order that is not the most
efficient, you can override the automatic join order optimization by specifying the STRAIGHT_JOIN keyword immediately
after the SELECT and any DISTINCT or ALL keywords. In this case, Impala uses the order the tables appear in the
query to guide how the joins are processed.
When you use the STRAIGHT_JOIN technique, you must order the tables in the join query manually instead of relying
on the Impala optimizer. The optimizer uses sophisticated techniques to estimate the size of the result set at each
stage of the join. For manual ordering, use this heuristic approach to start with, and then experiment to fine-tune the
order:
• Specify the largest table first. This table is read from disk by each Impala node and so its size is not significant in
terms of memory usage during the query.
• Next, specify the smallest table. The contents of the second, third, and so on tables are all transmitted across the
network. You want to minimize the size of the result set from each subsequent stage of the join query. The most
likely approach involves joining a small table first, so that the result set remains small even as subsequent larger
tables are processed.
• Join the next smallest table, then the next smallest, and so on.
For example, if you had tables BIG, MEDIUM, SMALL, and TINY, the logical join order to try would be BIG, TINY, SMALL,
MEDIUM.
The terms “largest” and “smallest” refer to the size of the intermediate result set based on the number of rows and
columns from each table that are part of the result set. For example, if you join one table sales with another table
customers, a query might find results from 100 different customers who made a total of 5000 purchases. In that case,
you would specify SELECT ... FROM sales JOIN customers ..., putting customers on the right side because
it is smaller in the context of this query.
The Impala query planner chooses between different techniques for performing join queries, depending on the absolute
and relative sizes of the tables. Broadcast joins are the default, where the right-hand table is considered to be smaller
than the left-hand table, and its contents are sent to all the other nodes involved in the query. The alternative technique
is known as a partitioned join (not related to a partitioned table), which is more suitable for large tables of roughly
equal size. With this technique, portions of each table are sent to appropriate other nodes where those subsets of
rows can be processed in parallel. The choice of broadcast or partitioned join also depends on statistics being available
for all tables in the join, gathered by the COMPUTE STATS statement.
To see which join strategy is used for a particular query, issue an EXPLAIN statement for the query. If you find that a
query uses a broadcast join when you know through benchmarking that a partitioned join would be more efficient, or
vice versa, add a hint to the query to specify the precise join mechanism to use. See Optimizer Hints in Impala on page
387 for details.
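For example, a hint can be placed immediately after the JOIN keyword. Here is a brief sketch using the square-bracket
hint style with the sales and customers tables mentioned above; the column names are illustrative, and STRAIGHT_JOIN
is included so that the join order written in the query is preserved:

-- Force a partitioned (shuffle) join instead of the default broadcast join.
select straight_join c.name, count(*) as purchases
  from sales s join [shuffle] customers c
    on s.customer_id = c.id
 group by c.name;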
How Joins Are Processed when Statistics Are Unavailable
If table or column statistics are not available for some tables in a join, Impala still reorders the tables using the
information that is available. Tables with statistics are placed on the left side of the join order, in descending order of
cost based on overall size and cardinality. Tables without statistics are treated as zero-size, that is, they are always
placed on the right side of the join order.
Overriding Join Reordering with STRAIGHT_JOIN
If an Impala join query is inefficient because of outdated statistics or unexpected data distribution, you can keep Impala
from reordering the joined tables by using the STRAIGHT_JOIN keyword immediately after the SELECT and any
DISTINCT or ALL keywords. The STRAIGHT_JOIN keyword turns off the reordering of join clauses that Impala does
internally, and produces a plan that relies on the join clauses being ordered optimally in the query text.
Note:
The STRAIGHT_JOIN hint affects the join order of table references in the query block containing the
hint. It does not affect the join order of nested queries, such as views, inline views, or WHERE-clause
subqueries. To use this hint for performance tuning of complex queries, apply the hint to all query
blocks that need a fixed join order.
In this example, the subselect from the BIG table produces a very small result set, but the table might still be treated
as if it were the biggest and placed first in the join order. Using STRAIGHT_JOIN for the last join clause prevents the
final table from being reordered, keeping it as the rightmost table in the join order.
select straight_join x from medium join small join (select * from big where c1 < 10) as
big
where medium.id = small.id and small.id = big.id;
-- If the query contains [DISTINCT | ALL], the hint goes after those keywords.
select distinct straight_join x from medium join small join (select * from big where c1
< 10) as big
where medium.id = small.id and small.id = big.id;
Examples of Join Order Optimization
Here are examples showing joins between tables with 1 billion, 200 million, and 1 million rows. (In this case, the tables
are unpartitioned and using Parquet format.) The smaller tables contain subsets of data from the largest one, for
convenience of joining on the unique ID column. The smallest table only contains a subset of columns from the others.
[localhost:21000] > create table big stored as parquet as select * from raw_data;
+----------------------------+
| summary |
+----------------------------+
| Inserted 1000000000 row(s) |
+----------------------------+
Returned 1 row(s) in 671.56s
[localhost:21000] > desc big;
+-----------+---------+---------+
| name | type | comment |
+-----------+---------+---------+
| id | int | |
| val | int | |
| zfill | string | |
| name | string | |
| assertion | boolean | |
+-----------+---------+---------+
Returned 5 row(s) in 0.01s
[localhost:21000] > create table medium stored as parquet as select * from big limit
200 * floor(1e6);
+---------------------------+
| summary |
+---------------------------+
| Inserted 200000000 row(s) |
+---------------------------+
Returned 1 row(s) in 138.31s
[localhost:21000] > create table small stored as parquet as select id,val,name from big
where assertion = true limit 1 * floor(1e6);
+-------------------------+
| summary |
+-------------------------+
| Inserted 1000000 row(s) |
+-------------------------+
Returned 1 row(s) in 6.32s
For any kind of performance experimentation, use the EXPLAIN statement to see how any expensive query will be
performed without actually running it, and enable verbose EXPLAIN plans containing more performance-oriented
detail: The most interesting plan lines are highlighted in bold, showing that without statistics for the joined tables,
Impala cannot make a good estimate of the number of rows involved at each stage of processing, and is likely to stick
with the BROADCAST join mechanism that sends a complete copy of one of the tables to each node.
[localhost:21000] > set explain_level=verbose;
EXPLAIN_LEVEL set to verbose
[localhost:21000] > explain select count(*) from big join medium where big.id = medium.id;
+----------------------------------------------------------+
| Explain String |
+----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=2.10GB VCores=2 |
| |
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 6:AGGREGATE (merge finalize) |
| | output: SUM(COUNT(*)) |
| | cardinality: 1 |
| | per-host memory: unavailable |
| | tuple ids: 2 |
| | |
| 5:EXCHANGE |
| cardinality: 1 |
| per-host memory: unavailable |
| tuple ids: 2 |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 5 |
| UNPARTITIONED |
| |
| 3:AGGREGATE |
| | output: COUNT(*) |
| | cardinality: 1 |
| | per-host memory: 10.00MB |
| | tuple ids: 2 |
| | |
| 2:HASH JOIN |
| | join op: INNER JOIN (BROADCAST) |
| | hash predicates: |
| | big.id = medium.id |
| | cardinality: unavailable |
| | per-host memory: 2.00GB |
| | tuple ids: 0 1 |
| | |
| |----4:EXCHANGE |
| | cardinality: unavailable |
| | per-host memory: 0B |
| | tuple ids: 1 |
| | |
| 0:SCAN HDFS |
| table=join_order.big #partitions=1/1 size=23.12GB |
| table stats: unavailable |
| column stats: unavailable |
| cardinality: unavailable |
| per-host memory: 88.00MB |
| tuple ids: 0 |
| |
| PLAN FRAGMENT 2 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 4 |
| UNPARTITIONED |
| |
| 1:SCAN HDFS |
| table=join_order.medium #partitions=1/1 size=4.62GB |
| table stats: unavailable |
| column stats: unavailable |
| cardinality: unavailable |
| per-host memory: 88.00MB |
| tuple ids: 1 |
+----------------------------------------------------------+
Returned 64 row(s) in 0.04s
Gathering statistics for all the tables is straightforward, one COMPUTE STATS statement per table:
[localhost:21000] > compute stats small;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 3 column(s). |
+-----------------------------------------+
Returned 1 row(s) in 4.26s
[localhost:21000] > compute stats medium;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 5 column(s). |
+-----------------------------------------+
Returned 1 row(s) in 42.11s
[localhost:21000] > compute stats big;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 5 column(s). |
+-----------------------------------------+
Returned 1 row(s) in 165.44s
With statistics in place, Impala can choose a more effective join order rather than following the left-to-right sequence
of tables in the query, and can choose BROADCAST or PARTITIONED join strategies based on the overall sizes and
number of rows in the table:
[localhost:21000] > explain select count(*) from medium join big where big.id = medium.id;
Query: explain select count(*) from medium join big where big.id = medium.id
+-----------------------------------------------------------+
| Explain String |
+-----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=937.23MB VCores=2 |
| |
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 6:AGGREGATE (merge finalize) |
| | output: SUM(COUNT(*)) |
| | cardinality: 1 |
| | per-host memory: unavailable |
| | tuple ids: 2 |
| | |
| 5:EXCHANGE |
| cardinality: 1 |
| per-host memory: unavailable |
| tuple ids: 2 |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 5 |
| UNPARTITIONED |
| |
| 3:AGGREGATE |
| | output: COUNT(*) |
| | cardinality: 1 |
| | per-host memory: 10.00MB |
| | tuple ids: 2 |
| | |
| 2:HASH JOIN |
| | join op: INNER JOIN (BROADCAST) |
| | hash predicates: |
| | big.id = medium.id |
| | cardinality: 1443004441 |
| | per-host memory: 839.23MB |
| | tuple ids: 1 0 |
| | |
| |----4:EXCHANGE |
| | cardinality: 200000000 |
| | per-host memory: 0B |
| | tuple ids: 0 |
| | |
| 1:SCAN HDFS |
| table=join_order.big #partitions=1/1 size=23.12GB |
| table stats: 1000000000 rows total |
| column stats: all |
| cardinality: 1000000000 |
| per-host memory: 88.00MB |
| tuple ids: 1 |
| |
| PLAN FRAGMENT 2 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 4 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=join_order.medium #partitions=1/1 size=4.62GB |
| table stats: 200000000 rows total |
| column stats: all |
| cardinality: 200000000 |
| per-host memory: 88.00MB |
| tuple ids: 0 |
+-----------------------------------------------------------+
Returned 64 row(s) in 0.04s
[localhost:21000] > explain select count(*) from small join big where big.id = small.id;
Query: explain select count(*) from small join big where big.id = small.id
+-----------------------------------------------------------+
| Explain String |
+-----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=101.15MB VCores=2 |
| |
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 6:AGGREGATE (merge finalize) |
| | output: SUM(COUNT(*)) |
| | cardinality: 1 |
| | per-host memory: unavailable |
| | tuple ids: 2 |
| | |
| 5:EXCHANGE |
| cardinality: 1 |
| per-host memory: unavailable |
| tuple ids: 2 |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 5 |
| UNPARTITIONED |
| |
| 3:AGGREGATE |
| | output: COUNT(*) |
| | cardinality: 1 |
| | per-host memory: 10.00MB |
| | tuple ids: 2 |
| | |
| 2:HASH JOIN |
| | join op: INNER JOIN (BROADCAST) |
| | hash predicates: |
| | big.id = small.id |
| | cardinality: 1000000000 |
| | per-host memory: 3.15MB |
| | tuple ids: 1 0 |
| | |
| |----4:EXCHANGE |
| | cardinality: 1000000 |
| | per-host memory: 0B |
| | tuple ids: 0 |
| | |
| 1:SCAN HDFS |
| table=join_order.big #partitions=1/1 size=23.12GB |
| table stats: 1000000000 rows total |
| column stats: all |
| cardinality: 1000000000 |
| per-host memory: 88.00MB |
| tuple ids: 1 |
| |
| PLAN FRAGMENT 2 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 4 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=join_order.small #partitions=1/1 size=17.93MB |
| table stats: 1000000 rows total |
| column stats: all |
| cardinality: 1000000 |
| per-host memory: 32.00MB |
| tuple ids: 0 |
+-----------------------------------------------------------+
Returned 64 row(s) in 0.03s
When queries like these are actually run, the execution times are relatively consistent regardless of the table order in
the query text. Here are examples using both the unique ID column and the VAL column containing duplicate values:
[localhost:21000] > select count(*) from big join small on (big.id = small.id);
Query: select count(*) from big join small on (big.id = small.id)
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
Returned 1 row(s) in 21.68s
[localhost:21000] > select count(*) from small join big on (big.id = small.id);
Query: select count(*) from small join big on (big.id = small.id)
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
Returned 1 row(s) in 20.45s
[localhost:21000] > select count(*) from big join small on (big.val = small.val);
+------------+
| count(*) |
+------------+
| 2000948962 |
+------------+
Returned 1 row(s) in 108.85s
[localhost:21000] > select count(*) from small join big on (big.val = small.val);
+------------+
| count(*) |
+------------+
| 2000948962 |
+------------+
Returned 1 row(s) in 100.76s
Note: When examining the performance of join queries and the effectiveness of the join order
optimization, make sure the query involves enough data and cluster resources to see a difference
depending on the query plan. For example, a single data file of just a few megabytes will reside in a
single HDFS block and be processed on a single node. Likewise, if you use a single-node or two-node
cluster, there might not be much difference in efficiency for the broadcast or partitioned join strategies.
Table and Column Statistics
Impala can do better optimization for complex or multi-table queries when it has access to statistics about the volume
of data and how the values are distributed. Impala uses this information to help parallelize and distribute the work for
a query. For example, optimizing join queries requires a way of determining if one table is “bigger” than another, which
is a function of the number of rows and the average row size for each table. The following sections describe the
categories of statistics Impala can work with, and how to produce them and keep them up to date.
Overview of Table Statistics
The Impala query planner can make use of statistics about entire tables and partitions. This information includes
physical characteristics such as the number of rows, number of data files, the total size of the data files, and the file
format. For partitioned tables, the numbers are calculated per partition, and as totals for the whole table. This metadata
is stored in the metastore database, and can be updated by either Impala or Hive. If a number is not available, the
value -1 is used as a placeholder. Some numbers, such as number and total sizes of data files, are always kept up to
date because they can be calculated cheaply, as part of gathering HDFS block metadata.
The following example shows table stats for an unpartitioned Parquet table. The values for the number and sizes of
files are always available. Initially, the number of rows is not known, because it requires a potentially expensive scan
through the entire table, and so that value is displayed as -1. The COMPUTE STATS statement fills in any unknown
table stats values.
show table stats parquet_snappy;
+-------+--------+---------+--------------+-------------------+---------+-------------------+...
| #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental
stats |...
+-------+--------+---------+--------------+-------------------+---------+-------------------+...
| -1 | 96 | 23.35GB | NOT CACHED | NOT CACHED | PARQUET | false
|...
+-------+--------+---------+--------------+-------------------+---------+-------------------+...
compute stats parquet_snappy;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 6 column(s). |
+-----------------------------------------+
show table stats parquet_snappy;
+------------+--------+---------+--------------+-------------------+---------+-------------------+...
| #Rows | #Files | Size | Bytes Cached | Cache Replication | Format | Incremental
stats |...
+------------+--------+---------+--------------+-------------------+---------+-------------------+...
| 1000000000 | 96 | 23.35GB | NOT CACHED | NOT CACHED | PARQUET | false
|...
+------------+--------+---------+--------------+-------------------+---------+-------------------+...
Impala performs some optimizations using this metadata on its own, and other optimizations by using a combination
of table and column statistics.
To check that table statistics are available for a table, and see the details of those statistics, use the statement SHOW
TABLE STATS table_name. See SHOW Statement on page 363 for details.
If you use the Hive-based methods of gathering statistics, see the Hive wiki for information about the required
configuration on the Hive side. Where practical, use the Impala COMPUTE STATS statement to avoid potential
configuration and scalability issues with the statistics-gathering process.
If you run the Hive statement ANALYZE TABLE COMPUTE STATISTICS FOR COLUMNS, Impala can only use the
resulting column statistics if the table is unpartitioned. Impala cannot use Hive-generated column statistics for a
partitioned table.
Overview of Column Statistics
The Impala query planner can make use of statistics about individual columns when that metadata is available in the
metastore database. This technique is most valuable for columns compared across tables in join queries, to help
estimate how many rows the query will retrieve from each table. These statistics are also important for correlated
subqueries using the EXISTS() or IN() operators, which are processed internally the same way as join queries.
The following example shows column stats for an unpartitioned Parquet table. The values for the maximum and average
sizes of some types are always available, because those figures are constant for numeric and other fixed-size types.
Initially, the number of distinct values is not known, because it requires a potentially expensive scan through the entire
table, and so that value is displayed as -1. The same applies to maximum and average sizes of variable-sized types,
such as STRING. The COMPUTE STATS statement fills in most unknown column stats values. (It does not record the
number of NULL values, because currently Impala does not use that figure for query optimization.)
show column stats parquet_snappy;
+-------------+----------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-------------+----------+------------------+--------+----------+----------+
| id | BIGINT | -1 | -1 | 8 | 8 |
| val | INT | -1 | -1 | 4 | 4 |
| zerofill | STRING | -1 | -1 | -1 | -1 |
| name | STRING | -1 | -1 | -1 | -1 |
| assertion | BOOLEAN | -1 | -1 | 1 | 1 |
| location_id | SMALLINT | -1 | -1 | 2 | 2 |
+-------------+----------+------------------+--------+----------+----------+
compute stats parquet_snappy;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 6 column(s). |
+-----------------------------------------+
show column stats parquet_snappy;
+-------------+----------+------------------+--------+----------+-------------------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-------------+----------+------------------+--------+----------+-------------------+
| id | BIGINT | 183861280 | -1 | 8 | 8 |
| val | INT | 139017 | -1 | 4 | 4 |
| zerofill | STRING | 101761 | -1 | 6 | 6 |
| name | STRING | 145636240 | -1 | 22 | 13.00020027160645 |
| assertion | BOOLEAN | 2 | -1 | 1 | 1 |
| location_id | SMALLINT | 339 | -1 | 2 | 2 |
+-------------+----------+------------------+--------+----------+-------------------+
Note:
For column statistics to be effective in Impala, you also need to have table statistics for the applicable
tables, as described in Overview of Table Statistics on page 575. When you use the Impala COMPUTE
STATS statement, both table and column statistics are automatically gathered at the same time, for
all columns in the table.
Note: Prior to Impala 1.4.0, COMPUTE STATS counted the number of NULL values in each column
and recorded that figure in the metastore database. Because Impala does not currently use the NULL
count during query planning, Impala 1.4.0 and higher speeds up the COMPUTE STATS statement by
skipping this NULL counting.
To check whether column statistics are available for a particular set of columns, use the SHOW COLUMN STATS
table_name statement, or check the extended EXPLAIN output for a query against that table that refers to those
columns. See SHOW Statement on page 363 and EXPLAIN Statement on page 271 for details.
If you run the Hive statement ANALYZE TABLE COMPUTE STATISTICS FOR COLUMNS, Impala can only use the
resulting column statistics if the table is unpartitioned. Impala cannot use Hive-generated column statistics for a
partitioned table.
How Table and Column Statistics Work for Partitioned Tables
When you use Impala for “big data”, you are highly likely to use partitioning for your biggest tables, the ones representing
data that can be logically divided based on dates, geographic regions, or similar criteria. The table and column statistics
are especially useful for optimizing queries on such tables. For example, a query involving one year might involve
substantially more or less data than a query involving a different year, or a range of several years. Each query might
be optimized differently as a result.
The following examples show how table and column stats work with a partitioned table. The table for this example is
partitioned by year, month, and day. For simplicity, the sample data consists of 5 partitions, all from the same year
and month. Table stats are collected independently for each partition. (In fact, the SHOW PARTITIONS statement
displays exactly the same information as SHOW TABLE STATS for a partitioned table.) Column stats apply to the entire
table, not to individual partitions. Because the partition key column values are represented as HDFS directories, their
characteristics are typically known in advance, even when the values for non-key columns are shown as -1.
show partitions year_month_day;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| year | month | day | #Rows | #Files | Size | Bytes Cached | Cache Replication |
Format |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| 2013 | 12 | 1 | -1 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 2 | -1 | 1 | 2.53MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 3 | -1 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 4 | -1 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 5 | -1 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| Total | | | -1 | 5 | 12.58MB | 0B | |
|...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
show table stats year_month_day;
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| year | month | day | #Rows | #Files | Size | Bytes Cached | Cache Replication |
Format |...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
| 2013 | 12 | 1 | -1 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 2 | -1 | 1 | 2.53MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 3 | -1 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 4 | -1 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 5 | -1 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| Total | | | -1 | 5 | 12.58MB | 0B | |
|...
+-------+-------+-----+-------+--------+---------+--------------+-------------------+---------+...
show column stats year_month_day;
+-----------+---------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-----------+---------+------------------+--------+----------+----------+
| id | INT | -1 | -1 | 4 | 4 |
| val | INT | -1 | -1 | 4 | 4 |
| zfill | STRING | -1 | -1 | -1 | -1 |
| name | STRING | -1 | -1 | -1 | -1 |
| assertion | BOOLEAN | -1 | -1 | 1 | 1 |
| year | INT | 1 | 0 | 4 | 4 |
| month | INT | 1 | 0 | 4 | 4 |
| day | INT | 5 | 0 | 4 | 4 |
+-----------+---------+------------------+--------+----------+----------+
compute stats year_month_day;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 5 partition(s) and 5 column(s). |
+-----------------------------------------+
show table stats year_month_day;
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
| year | month | day | #Rows | #Files | Size | Bytes Cached | Cache Replication |
Format |...
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
| 2013 | 12 | 1 | 93606 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 2 | 94158 | 1 | 2.53MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 3 | 94122 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 4 | 93559 | 1 | 2.51MB | NOT CACHED | NOT CACHED |
PARQUET |...
| 2013 | 12 | 5 | 93845 | 1 | 2.52MB | NOT CACHED | NOT CACHED |
PARQUET |...
| Total | | | 469290 | 5 | 12.58MB | 0B | |
|...
+-------+-------+-----+--------+--------+---------+--------------+-------------------+---------+...
show column stats year_month_day;
+-----------+---------+------------------+--------+----------+-------------------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+-----------+---------+------------------+--------+----------+-------------------+
| id | INT | 511129 | -1 | 4 | 4 |
| val | INT | 364853 | -1 | 4 | 4 |
| zfill | STRING | 311430 | -1 | 6 | 6 |
| name | STRING | 471975 | -1 | 22 | 13.00160026550293 |
| assertion | BOOLEAN | 2 | -1 | 1 | 1 |
| year | INT | 1 | 0 | 4 | 4 |
| month | INT | 1 | 0 | 4 | 4 |
| day | INT | 5 | 0 | 4 | 4 |
+-----------+---------+------------------+--------+----------+-------------------+
If you run the Hive statement ANALYZE TABLE COMPUTE STATISTICS FOR COLUMNS, Impala can only use the
resulting column statistics if the table is unpartitioned. Impala cannot use Hive-generated column statistics for a
partitioned table.
Generating Table and Column Statistics
Use the COMPUTE STATS family of commands to collect table and column statistics. The COMPUTE STATS variants
offer different tradeoffs between computation cost, staleness, and maintenance workflows which are explained below.
Important:
For a particular table, use either COMPUTE STATS or COMPUTE INCREMENTAL STATS. The two kinds
of stats do not interoperate with each other at the table level. Without dropping the existing stats, running COMPUTE
INCREMENTAL STATS overwrites any full stats from a prior COMPUTE STATS, and running COMPUTE STATS drops all
incremental stats, for consistency.
COMPUTE STATS
The COMPUTE STATS command collects and sets the table-level and partition-level row counts as well as all column
statistics for a given table. The collection process is CPU-intensive and can take a long time to complete for very large
tables.
To speed up COMPUTE STATS, consider the following options, which can be combined; a brief sketch follows the list.
• Limit the number of columns for which statistics are collected to increase the efficiency of COMPUTE STATS.
Queries benefit from statistics for those columns involved in filters, join conditions, group by or partition by clauses.
Other columns are good candidates to exclude from COMPUTE STATS. This feature is available since Impala 2.12.
• Set the MT_DOP query option to use more threads within each participating impalad to compute the statistics
faster - but not more efficiently. Note that computing stats on a large table with a high MT_DOP value can negatively
affect other queries running at the same time if the COMPUTE STATS claims most CPU cycles. This feature is
available since Impala 2.8.
• Consider the experimental extrapolation and sampling features (see below) to further increase the efficiency of
computing stats.
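For example, both options can be applied within a single session. A minimal sketch, assuming the column-list form of
COMPUTE STATS described in the first bullet; the table and column names are illustrative:

-- Collect column statistics only for columns used in joins and filters.
compute stats sales (customer_id, store_id, sale_date);

-- Alternatively, use more threads per impalad for this statement only.
set mt_dop=4;
compute stats sales;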
COMPUTE STATS is intended to be run periodically, e.g. weekly, or on-demand when the contents of a table have
changed significantly. Due to the high resource utilization and long response time of COMPUTE STATS, it is most
practical to run it in a scheduled maintenance window where the Impala cluster is idle enough to accommodate the
expensive operation. The degree of change that qualifies as “significant” depends on the query workload, but typically,
if 30% of the rows have changed then it is recommended to recompute statistics.
If you reload a complete new set of data for a table, but the number of rows and number of distinct values for each
column is relatively unchanged from before, you do not need to recompute stats for the table.
Experimental: Extrapolation and Sampling
Impala 2.12 and higher includes two experimental features to alleviate common issues for computing and maintaining
statistics on very large tables. The following shortcomings are improved upon:
• Newly added partitions do not have row count statistics. Table scans that only access those new partitions are
treated as not having stats. Similarly, table scans that access both new and old partitions estimate the scan
cardinality based on those old partitions that have stats, and the new partitions without stats are treated as having
0 rows.
• The row counts of existing partitions become stale when data is added or dropped.
• Computing stats for tables with 100,000 or more partitions might fail or be very slow due to the high cost of
updating the partition metadata in the Hive Metastore.
• With transient compute resources it is important to minimize the time from starting a new cluster to successfully
running queries. Since the cluster might be relatively short-lived, users might prefer to quickly collect stats that
are "good enough" as opposed to spending a lot of time and resouces on computing full-fidelity stats.
For very large tables, it is often wasteful or impractical to run a full COMPUTE STATS to address the scenarios above
on a frequent basis.
The sampling feature makes COMPUTE STATS more efficient by processing a fraction of the table data, and the
extrapolation feature aims to reduce the frequency at which COMPUTE STATS needs to be re-run by estimating the
row count of new and modified partitions.
The sampling and extrapolation features are disabled by default. They can be enabled globally or for specific tables,
as follows. Set the impalad start-up configuration "--enable_stats_extrapolation" to enable the features globally. To
enable them only for a specific table, set the "impala.enable.stats.extrapolation" table property to "true" for the desired
table. The table-level property overrides the global setting, so it is also possible to enable sampling and extrapolation
globally, but disable it for specific tables by setting the table property to "false". Example: ALTER TABLE mytable
SET TBLPROPERTIES("impala.enable.stats.extrapolation"="true")
Note: Why are these features experimental? Due to their probabilistic nature it is possible that these
features perform pathologically poorly on tables with extreme data/file/size distributions. Since it is
not feasible for us to test all possible scenarios we only cautiously advertise these new capabilities.
That said, the features have been thoroughly tested and are considered functionally stable. If you
decide to give these features a try, please tell us about your experience at user@impala.apache.org!
We rely on user feedback to guide future improvements in statistics collection.
Stats Extrapolation
The main idea of stats extrapolation is to estimate the row count of new and modified partitions based on the result
of the last COMPUTE STATS. Enabling stats extrapolation changes the behavior of COMPUTE STATS, as well as the
cardinality estimation of table scans. COMPUTE STATS no longer computes and stores per-partition row counts, and
instead, only computes a table-level row count together with the total number of file bytes in the table at that time.
No partition metadata is modified. The input cardinality of a table scan is estimated by converting the data volume of
relevant partitions to a row count, based on the table-level row count and file bytes statistics. It is assumed that within
the same table, different sets of files with the same data volume correspond to a similar number of rows on average.
With extrapolation enabled, the scan cardinality estimation ignores per-partition row counts. It only relies on the
table-level statistics and the scanned data volume.
The SHOW TABLE STATS and EXPLAIN commands distinguish between row counts stored in the Hive Metastore, and
the row counts extrapolated based on the above process. Consult the SHOW TABLE STATS and EXPLAIN documentation
for more details.
Sampling
A TABLESAMPLE clause may be added to COMPUTE STATS to limit the percentage of data to be processed. The final
statistics are obtained by extrapolating the statistics from the data sample over the entire table. The extrapolated
statistics are stored in the Hive Metastore, just as if no sampling was used. The following example runs COMPUTE
STATS over a 10 percent data sample: COMPUTE STATS test_table TABLESAMPLE SYSTEM(10)
We have found that a 10 percent sampling rate typically offers a good tradeoff between statistics accuracy and execution
cost. A sampling rate well below 10 percent has shown poor results and is not recommended.
Important: Sampling-based techniques sacrifice result accuracy for execution efficiency, so your
mileage may vary for different tables and columns depending on their data distribution. The
extrapolation procedure Impala uses for estimating the number of distinct values per column is
inherently non-deterministic, so your results may vary between runs of COMPUTE STATS
TABLESAMPLE, even if no data has changed.
COMPUTE INCREMENTAL STATS
In Impala 2.1.0 and higher, you can use the COMPUTE INCREMENTAL STATS and DROP INCREMENTAL STATS
commands. The INCREMENTAL clauses work with incremental statistics, a specialized feature for partitioned tables.
When you compute incremental statistics for a partitioned table, by default Impala only processes those partitions
that do not yet have incremental statistics. By processing only newly added partitions, you can keep statistics up to
date without incurring the overhead of reprocessing the entire table each time.
You can also compute or drop statistics for a specified subset of partitions by including a PARTITION clause in the
COMPUTE INCREMENTAL STATS or DROP INCREMENTAL STATS statement.
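For example, using the partitioned table from the examples earlier in this section; the exact partition specification is
illustrative:

-- Compute incremental stats only for one newly added partition.
compute incremental stats year_month_day partition (year=2013, month=12, day=5);

-- Force recomputation for a partition whose data changed.
drop incremental stats year_month_day partition (year=2013, month=12, day=5);
compute incremental stats year_month_day partition (year=2013, month=12, day=5);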
Important:
In Impala 3.0 and lower, approximately 400 bytes of metadata per column per partition are needed
for caching. Tables with a big number of partitions and many columns can add up to a significant
memory overhead as the metadata must be cached on the catalogd host and on every impalad
host that is eligible to be a coordinator. If this metadata for all tables exceeds 2 GB, you might
experience service downtime. In Impala 3.1 and higher, the issue was alleviated with an improved
handling of incremental stats.
When you run COMPUTE INCREMENTAL STATS on a table for the first time, the statistics are computed
again from scratch regardless of whether the table already has statistics. Therefore, expect a one-time
resource-intensive operation for scanning the entire table when running COMPUTE INCREMENTAL
STATS for the first time on a given table.
The metadata for incremental statistics is handled differently from the original style of statistics:
• Issuing a COMPUTE INCREMENTAL STATS without a partition clause causes Impala to compute incremental stats
for all partitions that do not already have incremental stats. This might be the entire table when running the
command for the first time, but subsequent runs should only update new partitions. You can force updating a
partition that already has incremental stats by issuing a DROP INCREMENTAL STATS before running COMPUTE
INCREMENTAL STATS.
• The SHOW TABLE STATS and SHOW PARTITIONS statements now include an additional column showing whether
incremental statistics are available for each partition. A partition could already be covered by the original type of
statistics based on a prior COMPUTE STATS statement, as indicated by a value other than -1 under the #Rows
column. Impala query planning uses either kind of statistics when available.
• COMPUTE INCREMENTAL STATS takes more time than COMPUTE STATS for the same volume of data. Therefore
it is most suitable for tables with large data volume where new partitions are added frequently, making it impractical
to run a full COMPUTE STATS operation for each new partition. For unpartitioned tables, or partitioned tables
that are loaded once and not updated with new partitions, use the original COMPUTE STATS syntax.
• COMPUTE INCREMENTAL STATS uses some memory in the catalogd process, proportional to the number of
partitions and number of columns in the applicable table. The memory overhead is approximately 400 bytes for
each column in each partition. This memory is reserved in the catalogd daemon, the statestored daemon,
and in each instance of the impalad daemon.
• In cases where new files are added to an existing partition, issue a REFRESH statement for the table, followed by
a DROP INCREMENTAL STATS and COMPUTE INCREMENTAL STATS sequence for the changed partition.
• The DROP INCREMENTAL STATS statement operates only on a single partition at a time. To remove statistics
(whether incremental or not) from all partitions of a table, issue a DROP STATS statement with no INCREMENTAL
or PARTITION clauses.
The following considerations apply to incremental statistics when the structure of an existing table is changed (known
as schema evolution):
• If you use an ALTER TABLE statement to drop a column, the existing statistics remain valid and COMPUTE
INCREMENTAL STATS does not rescan any partitions.
• If you use an ALTER TABLE statement to add a column, Impala rescans all partitions and fills in the appropriate
column-level values the next time you run COMPUTE INCREMENTAL STATS (see the sketch after this list).
• If you use an ALTER TABLE statement to change the data type of a column, Impala rescans all partitions and fills
in the appropriate column-level values the next time you run COMPUTE INCREMENTAL STATS.
• If you use an ALTER TABLE statement to change the file format of a table, the existing statistics remain valid and
a subsequent COMPUTE INCREMENTAL STATS does not rescan any partitions.
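For instance, the following sketch (with a hypothetical table name) shows a schema change that triggers a full rescan
on the next incremental stats run, in contrast to dropping a column, which does not:
ALTER TABLE sales_data ADD COLUMNS (channel STRING);
-- The next statement rescans all partitions to fill in stats for the new column.
COMPUTE INCREMENTAL STATS sales_data;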
See COMPUTE STATS Statement on page 219 and DROP STATS Statement on page 265 for syntax details.
Maximum Serialized Stats Size
In Impala 3.0 / CDH 5.15 and lower, when executing COMPUTE INCREMENTAL STATS on very large tables, use the
configuration setting --inc_stats_size_limit_bytes to prevent Impala from running out of memory while
updating table metadata. If this limit is reached, Impala will stop loading the table and return an error. The error serves
as an indication that COMPUTE INCREMENTAL STATS should not be used on the particular table. Consider splitting
the table and using regular COMPUTE STATS if possible.
The --inc_stats_size_limit_bytes limit is set as a safety check, to prevent Impala from hitting the maximum
limit for the table metadata. Note that this limit applies to only one part of the entire table's metadata, all of which
together must be below 2 GB.
The default value for --inc_stats_size_limit_bytes is 209715200 bytes (200 MB).
To change the --inc_stats_size_limit_bytes value, restart impalad and catalogd with the new value specified
in bytes, for example, 1048576000 for 1 GB. See Modifying Impala Startup Options for the steps to change the option
and restart Impala daemons.
In Impala 3.1 / CDH 5.16 / CDH 6.1 and higher, Impala improved how metadata is updated when executing COMPUTE
INCREMENTAL STATS, significantly reducing the need for --inc_stats_size_limit_bytes.
Detecting Missing Statistics
You can check whether a specific table has statistics using the SHOW TABLE STATS statement (for any table) or the
SHOW PARTITIONS statement (for a partitioned table). Both statements display the same information. If a table or a
partition does not have any statistics, the #Rows field contains -1. Once you compute statistics for the table or partition,
the #Rows field changes to an accurate value.
The following example shows a table that initially does not have any statistics. The SHOW TABLE STATS statement
displays different values for #Rows before and after the COMPUTE STATS operation.
[localhost:21000] > create table no_stats (x int);
[localhost:21000] > show table stats no_stats;
+-------+--------+------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+--------+------+--------------+--------+-------------------+
| -1 | 0 | 0B | NOT CACHED | TEXT | false |
+-------+--------+------+--------------+--------+-------------------+
[localhost:21000] > compute stats no_stats;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 1 column(s). |
+-----------------------------------------+
[localhost:21000] > show table stats no_stats;
+-------+--------+------+--------------+--------+-------------------+
| #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+--------+------+--------------+--------+-------------------+
| 0 | 0 | 0B | NOT CACHED | TEXT | false |
+-------+--------+------+--------------+--------+-------------------+
The following example shows a similar progression with a partitioned table. Initially, #Rows is -1. After a COMPUTE
STATS operation, #Rows changes to an accurate value. Any newly added partition starts with no statistics, meaning
that you must collect statistics after adding a new partition.
[localhost:21000] > create table no_stats_partitioned (x int) partitioned by (year
smallint);
[localhost:21000] > show table stats no_stats_partitioned;
+-------+-------+--------+------+--------------+--------+-------------------+
| year | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+-------+--------+------+--------------+--------+-------------------+
| Total | -1 | 0 | 0B | 0B | | |
+-------+-------+--------+------+--------------+--------+-------------------+
[localhost:21000] > show partitions no_stats_partitioned;
+-------+-------+--------+------+--------------+--------+-------------------+
| year | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+-------+--------+------+--------------+--------+-------------------+
| Total | -1 | 0 | 0B | 0B | | |
+-------+-------+--------+------+--------------+--------+-------------------+
[localhost:21000] > alter table no_stats_partitioned add partition (year=2013);
[localhost:21000] > compute stats no_stats_partitioned;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 1 partition(s) and 1 column(s). |
+-----------------------------------------+
[localhost:21000] > alter table no_stats_partitioned add partition (year=2014);
[localhost:21000] > show partitions no_stats_partitioned;
+-------+-------+--------+------+--------------+--------+-------------------+
| year | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats |
+-------+-------+--------+------+--------------+--------+-------------------+
| 2013 | 0 | 0 | 0B | NOT CACHED | TEXT | false |
| 2014 | -1 | 0 | 0B | NOT CACHED | TEXT | false |
| Total | 0 | 0 | 0B | 0B | | |
+-------+-------+--------+------+--------------+--------+-------------------+
Note: Because the default COMPUTE STATS statement creates and updates statistics for all partitions
in a table, if you expect to frequently add new partitions, use the COMPUTE INCREMENTAL STATS
syntax instead, which lets you compute stats for a single specified partition, or only for those partitions
that do not already have incremental stats.
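For example, a minimal sketch that computes statistics only for the newly added partition of the table shown above:
COMPUTE INCREMENTAL STATS no_stats_partitioned PARTITION (year=2014);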
If checking each individual table is impractical, due to a large number of tables or views that hide the underlying base
tables, you can also check for missing statistics for a particular query. Use the EXPLAIN statement to preview query
efficiency before actually running the query. Use the query profile output available through the PROFILE command
in impala-shell or the web UI to verify query execution and timing after running the query. Both the EXPLAIN plan
and the PROFILE output display a warning if any tables or partitions involved in the query do not have statistics.
[localhost:21000] > create table no_stats (x int);
[localhost:21000] > explain select count(*) from no_stats;
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=10.00MB VCores=1 |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| incremental_stats.no_stats |
| |
| 03:AGGREGATE [FINALIZE] |
| | output: count:merge(*) |
| | |
| 02:EXCHANGE [UNPARTITIONED] |
| | |
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HDFS [incremental_stats.no_stats] |
| partitions=1/1 files=0 size=0B |
+------------------------------------------------------------------------------------+
Because Impala uses the partition pruning technique when possible to only evaluate certain partitions, if you have a
partitioned table with statistics for some partitions and not others, whether or not the EXPLAIN statement shows the
warning depends on the actual partitions used by the query. For example, you might see warnings or not for different
queries against the same table:
-- No warning because all the partitions for the year 2012 have stats.
EXPLAIN SELECT ... FROM t1 WHERE year = 2012;
-- Missing stats warning because one or more partitions in this range
-- do not have stats.
EXPLAIN SELECT ... FROM t1 WHERE year BETWEEN 2006 AND 2009;
To confirm if any partitions at all in the table are missing statistics, you might explain a query that scans the entire
table, such as SELECT COUNT(*) FROM table_name.
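For example, a quick sketch of that check against the hypothetical partitioned table T1 from the previous example:
-- Scanning all partitions surfaces the missing-statistics warning if any partition lacks stats.
EXPLAIN SELECT COUNT(*) FROM t1;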
Manually Setting Table and Column Statistics with ALTER TABLE
Setting Table Statistics
The most crucial piece of data in all the statistics is the number of rows in the table (for an unpartitioned or partitioned
table) and for each partition (for a partitioned table). The COMPUTE STATS statement always gathers statistics about
all columns, as well as overall table statistics. If it is not practical to do a full COMPUTE STATS or COMPUTE INCREMENTAL
STATS operation after adding a partition or inserting data, or if you can see that Impala would produce a more efficient
plan if the number of rows was different, you can manually set the number of rows through an ALTER TABLE statement:
-- Set total number of rows. Applies to both unpartitioned and partitioned tables.
alter table table_name set tblproperties('numRows'='new_value',
'STATS_GENERATED_VIA_STATS_TASK'='true');
-- Set total number of rows for a specific partition. Applies to partitioned tables only.
-- You must specify all the partition key columns in the PARTITION clause.
alter table table_name partition (keycol1=val1,keycol2=val2...) set
tblproperties('numRows'='new_value', 'STATS_GENERATED_VIA_STATS_TASK'='true');
This statement avoids re-scanning any data files. (The requirement to include the STATS_GENERATED_VIA_STATS_TASK
property is relatively new, as a result of the issue HIVE-8648 for the Hive metastore.) For example:
create table analysis_data stored as parquet as select * from raw_data;
Inserted 1000000000 rows in 181.98s
compute stats analysis_data;
insert into analysis_data select * from smaller_table_we_forgot_before;
Inserted 1000000 rows in 15.32s
-- Now there are 1001000000 rows. We can update this single data point in the stats.
alter table analysis_data set tblproperties('numRows'='1001000000',
'STATS_GENERATED_VIA_STATS_TASK'='true');
For a partitioned table, update both the per-partition number of rows and the number of rows for the whole table:
-- If the table originally contained 1 million rows, and we add another partition with 30 thousand rows,
-- change the numRows property for the partition and the overall table.
alter table partitioned_data partition(year=2009, month=4) set tblproperties
('numRows'='30000', 'STATS_GENERATED_VIA_STATS_TASK'='true');
alter table partitioned_data set tblproperties ('numRows'='1030000',
'STATS_GENERATED_VIA_STATS_TASK'='true');
In practice, the COMPUTE STATS statement, or COMPUTE INCREMENTAL STATS for a partitioned table, should be
fast and convenient enough that this technique is only useful for the very largest partitioned tables. Because the column
statistics might be left in a stale state, do not use this technique as a replacement for COMPUTE STATS. Only use this
technique if all other means of collecting statistics are impractical, or as a low-overhead operation that you run in
between periodic COMPUTE STATS or COMPUTE INCREMENTAL STATS operations.
Setting Column Statistics
In CDH 5.8 / Impala 2.6 and higher, you can also use the SET COLUMN STATS clause of ALTER TABLE to manually set
or change column statistics. Only use this technique in cases where it is impractical to run COMPUTE STATS or COMPUTE
INCREMENTAL STATS frequently enough to keep up with data changes for a huge table.
You specify a case-insensitive symbolic name for the kind of statistics: numDVs, numNulls, avgSize, maxSize. The
key names and values are both quoted. This operation applies to an entire table, not a specific partition. For example:
create table t1 (x int, s string);
insert into t1 values (1, 'one'), (2, 'two'), (2, 'deux');
show column stats t1;
+--------+--------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+--------+------------------+--------+----------+----------+
| x | INT | -1 | -1 | 4 | 4 |
| s | STRING | -1 | -1 | -1 | -1 |
+--------+--------+------------------+--------+----------+----------+
alter table t1 set column stats x ('numDVs'='2','numNulls'='0');
alter table t1 set column stats s ('numdvs'='3','maxsize'='4');
show column stats t1;
+--------+--------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+--------+------------------+--------+----------+----------+
| x | INT | 2 | 0 | 4 | 4 |
| s | STRING | 3 | -1 | 4 | -1 |
+--------+--------+------------------+--------+----------+----------+
Examples of Using Table and Column Statistics with Impala
The following examples walk through a sequence of SHOW TABLE STATS, SHOW COLUMN STATS, ALTER TABLE, and
SELECT and INSERT statements to illustrate various aspects of how Impala uses statistics to help optimize queries.
This example shows table and column statistics for the STORE table used in the TPC-DS benchmarks for decision
support systems. It is a tiny table holding data for 12 stores. Initially, before any statistics are gathered by a COMPUTE
STATS statement, most of the numeric fields show placeholder values of -1, indicating that the figures are unknown.
The figures that are filled in are values that are easily countable or deducible at the physical level, such as the number
of files, total data size of the files, and the maximum and average sizes for data types that have a constant size such
as INT, FLOAT, and TIMESTAMP.
[localhost:21000] > show table stats store;
+-------+--------+--------+--------+
| #Rows | #Files | Size | Format |
+-------+--------+--------+--------+
| -1 | 1 | 3.08KB | TEXT |
+-------+--------+--------+--------+
Returned 1 row(s) in 0.03s
[localhost:21000] > show column stats store;
+--------------------+-----------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------------------+-----------+------------------+--------+----------+----------+
| s_store_sk | INT | -1 | -1 | 4 | 4 |
| s_store_id | STRING | -1 | -1 | -1 | -1 |
| s_rec_start_date | TIMESTAMP | -1 | -1 | 16 | 16 |
| s_rec_end_date | TIMESTAMP | -1 | -1 | 16 | 16 |
| s_closed_date_sk | INT | -1 | -1 | 4 | 4 |
| s_store_name | STRING | -1 | -1 | -1 | -1 |
| s_number_employees | INT | -1 | -1 | 4 | 4 |
| s_floor_space | INT | -1 | -1 | 4 | 4 |
| s_hours | STRING | -1 | -1 | -1 | -1 |
| s_manager | STRING | -1 | -1 | -1 | -1 |
| s_market_id | INT | -1 | -1 | 4 | 4 |
| s_geography_class | STRING | -1 | -1 | -1 | -1 |
| s_market_desc | STRING | -1 | -1 | -1 | -1 |
| s_market_manager | STRING | -1 | -1 | -1 | -1 |
| s_division_id | INT | -1 | -1 | 4 | 4 |
| s_division_name | STRING | -1 | -1 | -1 | -1 |
| s_company_id | INT | -1 | -1 | 4 | 4 |
| s_company_name | STRING | -1 | -1 | -1 | -1 |
| s_street_number | STRING | -1 | -1 | -1 | -1 |
| s_street_name | STRING | -1 | -1 | -1 | -1 |
| s_street_type | STRING | -1 | -1 | -1 | -1 |
| s_suite_number | STRING | -1 | -1 | -1 | -1 |
| s_city | STRING | -1 | -1 | -1 | -1 |
| s_county | STRING | -1 | -1 | -1 | -1 |
| s_state | STRING | -1 | -1 | -1 | -1 |
| s_zip | STRING | -1 | -1 | -1 | -1 |
| s_country | STRING | -1 | -1 | -1 | -1 |
| s_gmt_offset | FLOAT | -1 | -1 | 4 | 4 |
| s_tax_percentage | FLOAT | -1 | -1 | 4 | 4 |
+--------------------+-----------+------------------+--------+----------+----------+
Returned 29 row(s) in 0.04s
With the Hive ANALYZE TABLE statement for column statistics, you had to specify each column for which to gather
statistics. The Impala COMPUTE STATS statement automatically gathers statistics for all columns, because it reads
through the entire table relatively quickly and can efficiently compute the values for all the columns. This example
shows how after running the COMPUTE STATS statement, statistics are filled in for both the table and all its columns:
[localhost:21000] > compute stats store;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 1 partition(s) and 29 column(s). |
+------------------------------------------+
Returned 1 row(s) in 1.88s
[localhost:21000] > show table stats store;
+-------+--------+--------+--------+
| #Rows | #Files | Size | Format |
+-------+--------+--------+--------+
| 12 | 1 | 3.08KB | TEXT |
+-------+--------+--------+--------+
Returned 1 row(s) in 0.02s
[localhost:21000] > show column stats store;
+--------------------+-----------+------------------+--------+----------+-------------------+
| Column             | Type      | #Distinct Values | #Nulls | Max Size | Avg Size          |
+--------------------+-----------+------------------+--------+----------+-------------------+
| s_store_sk         | INT       | 12               | -1     | 4        | 4                 |
| s_store_id         | STRING    | 6                | -1     | 16       | 16                |
| s_rec_start_date   | TIMESTAMP | 4                | -1     | 16       | 16                |
| s_rec_end_date     | TIMESTAMP | 3                | -1     | 16       | 16                |
| s_closed_date_sk   | INT       | 3                | -1     | 4        | 4                 |
| s_store_name       | STRING    | 8                | -1     | 5        | 4.25              |
| s_number_employees | INT       | 9                | -1     | 4        | 4                 |
| s_floor_space      | INT       | 10               | -1     | 4        | 4                 |
| s_hours            | STRING    | 2                | -1     | 8        | 7.083300113677979 |
| s_manager          | STRING    | 7                | -1     | 15       | 12                |
| s_market_id        | INT       | 7                | -1     | 4        | 4                 |
| s_geography_class  | STRING    | 1                | -1     | 7        | 7                 |
| s_market_desc      | STRING    | 10               | -1     | 94       | 55.5              |
| s_market_manager   | STRING    | 7                | -1     | 16       | 14                |
| s_division_id      | INT       | 1                | -1     | 4        | 4                 |
| s_division_name    | STRING    | 1                | -1     | 7        | 7                 |
| s_company_id       | INT       | 1                | -1     | 4        | 4                 |
| s_company_name     | STRING    | 1                | -1     | 7        | 7                 |
| s_street_number    | STRING    | 9                | -1     | 3        | 2.833300113677979 |
| s_street_name      | STRING    | 12               | -1     | 11       | 6.583300113677979 |
| s_street_type      | STRING    | 8                | -1     | 9        | 4.833300113677979 |
| s_suite_number     | STRING    | 11               | -1     | 9        | 8.25              |
| s_city             | STRING    | 2                | -1     | 8        | 6.5               |
| s_county           | STRING    | 1                | -1     | 17       | 17                |
| s_state            | STRING    | 1                | -1     | 2        | 2                 |
| s_zip              | STRING    | 2                | -1     | 5        | 5                 |
| s_country          | STRING    | 1                | -1     | 13       | 13                |
| s_gmt_offset       | FLOAT     | 1                | -1     | 4        | 4                 |
| s_tax_percentage   | FLOAT     | 5                | -1     | 4        | 4                 |
+--------------------+-----------+------------------+--------+----------+-------------------+
Returned 29 row(s) in 0.04s
The following example shows how statistics are represented for a partitioned table. In this case, we have set up a table
to hold the world's most trivial census data, a single STRING field, partitioned by a YEAR column. The table statistics
include a separate entry for each partition, plus final totals for the numeric fields. The column statistics include some
easily deducible facts for the partitioning column, such as the number of distinct values (the number of partition
subdirectories).
[localhost:21000] > describe census;
+------+----------+---------+
| name | type | comment |
+------+----------+---------+
| name | string | |
| year | smallint | |
+------+----------+---------+
Returned 2 row(s) in 0.02s
[localhost:21000] > show table stats census;
+-------+-------+--------+------+---------+
| year | #Rows | #Files | Size | Format |
+-------+-------+--------+------+---------+
| 2000 | -1 | 0 | 0B | TEXT |
| 2004 | -1 | 0 | 0B | TEXT |
| 2008 | -1 | 0 | 0B | TEXT |
| 2010 | -1 | 0 | 0B | TEXT |
| 2011 | 0 | 1 | 22B | TEXT |
| 2012 | -1 | 1 | 22B | TEXT |
| 2013 | -1 | 1 | 231B | PARQUET |
| Total | 0 | 3 | 275B | |
+-------+-------+--------+------+---------+
Returned 8 row(s) in 0.02s
[localhost:21000] > show column stats census;
+--------+----------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+----------+------------------+--------+----------+----------+
| name | STRING | -1 | -1 | -1 | -1 |
| year | SMALLINT | 7 | -1 | 2 | 2 |
+--------+----------+------------------+--------+----------+----------+
Returned 2 row(s) in 0.02s
The following example shows how the statistics are filled in by a COMPUTE STATS statement in Impala.
[localhost:21000] > compute stats census;
+-----------------------------------------+
| summary |
+-----------------------------------------+
| Updated 3 partition(s) and 1 column(s). |
+-----------------------------------------+
Returned 1 row(s) in 2.16s
[localhost:21000] > show table stats census;
+-------+-------+--------+------+---------+
| year | #Rows | #Files | Size | Format |
+-------+-------+--------+------+---------+
| 2000 | -1 | 0 | 0B | TEXT |
| 2004 | -1 | 0 | 0B | TEXT |
| 2008 | -1 | 0 | 0B | TEXT |
| 2010 | -1 | 0 | 0B | TEXT |
| 2011 | 4 | 1 | 22B | TEXT |
| 2012 | 4 | 1 | 22B | TEXT |
| 2013 | 1 | 1 | 231B | PARQUET |
| Total | 9 | 3 | 275B | |
+-------+-------+--------+------+---------+
Returned 8 row(s) in 0.02s
[localhost:21000] > show column stats census;
+--------+----------+------------------+--------+----------+----------+
| Column | Type | #Distinct Values | #Nulls | Max Size | Avg Size |
+--------+----------+------------------+--------+----------+----------+
| name | STRING | 4 | -1 | 5 | 4.5 |
| year | SMALLINT | 7 | -1 | 2 | 2 |
+--------+----------+------------------+--------+----------+----------+
Returned 2 row(s) in 0.02s
For examples showing how some queries work differently when statistics are available, see Examples of Join Order
Optimization on page 570. You can see how Impala executes a query differently in each case by observing the EXPLAIN
output before and after collecting statistics. Measure the before and after query times, and examine the throughput
numbers in before and after SUMMARY or PROFILE output, to verify how much the improved plan speeds up performance.
Benchmarking Impala Queries
Because Impala, like other Hadoop components, is designed to handle large data volumes in a distributed environment,
conduct any performance tests using realistic data and cluster configurations. Use a multi-node cluster rather than a
single node; run queries against tables containing terabytes of data rather than tens of gigabytes. The parallel processing
techniques used by Impala are most appropriate for workloads that are beyond the capacity of a single server.
When you run queries returning large numbers of rows, the CPU time to pretty-print the output can be substantial,
giving an inaccurate measurement of the actual query time. Consider using the -B option on the impala-shell
command to turn off the pretty-printing, and optionally the -o option to store query results in a file rather than printing
to the screen. See impala-shell Configuration Options on page 714 for details.
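For instance, a benchmarking run might use a command line along the following lines; the query, table name, and
output file here are hypothetical:
impala-shell -B -o /tmp/benchmark_results.txt -q 'SELECT COUNT(*) FROM big_partitioned_table WHERE year = 2018'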
Controlling Impala Resource Usage
Sometimes, balancing raw query performance against scalability requires limiting the amount of resources, such as
memory or CPU, used by a single query or group of queries. Impala can use several mechanisms that help to smooth
out the load during heavy concurrent usage, resulting in faster overall query times and sharing of resources across
Impala queries, MapReduce jobs, and other kinds of workloads across a CDH cluster:
• The Impala admission control feature uses a fast, distributed mechanism to hold back queries that exceed limits
on the number of concurrent queries or the amount of memory used. The queries are queued, and executed as
other queries finish and resources become available. You can control the concurrency limits, and specify different
limits for different groups of users to divide cluster resources according to the priorities of different classes of
users. This feature is new in Impala 1.3. See Admission Control and Query Queuing on page 549 for details.
• You can restrict the amount of memory Impala reserves during query execution by specifying the -mem_limit
option for the impalad daemon. See Modifying Impala Startup Options for details. This limit applies only to the
memory that is directly consumed by queries; Impala reserves additional memory at startup, for example to hold
cached metadata.
• For production deployment, Cloudera recommends that you implement resource isolation using mechanisms such
as cgroups, which you can configure using Cloudera Manager. For details, see Static Resource Pools in the Cloudera
Manager documentation.
Runtime Filtering for Impala Queries (CDH 5.7 or higher only)
Runtime filtering is a wide-ranging optimization feature available in CDH 5.7 / Impala 2.5 and higher. When only a
fraction of the data in a table is needed for a query against a partitioned table or to evaluate a join condition, Impala
determines the appropriate conditions while the query is running, and broadcasts that information to all the impalad
nodes that are reading the table so that they can avoid unnecessary I/O to read partition data, and avoid unnecessary
network transmission by sending only the subset of rows that match the join keys across the network.
This feature is primarily used to optimize queries against large partitioned tables (under the name dynamic partition
pruning) and joins of large tables. The information in this section includes concepts, internals, and troubleshooting
information for the entire runtime filtering feature. For specific tuning steps for partitioned tables, see Dynamic Partition
Pruning on page 628.
Important:
When this feature made its debut in CDH 5.7, the default setting was RUNTIME_FILTER_MODE=LOCAL.
In CDH 5.8 / Impala 2.6 and higher, the default is RUNTIME_FILTER_MODE=GLOBAL, which enables
more wide-ranging and ambitious query optimization without requiring you to explicitly set any query
options.
Background Information for Runtime Filtering
To understand how runtime filtering works at a detailed level, you must be familiar with some terminology from the
field of distributed database technology:
• What a plan fragment is. Impala decomposes each query into smaller units of work that are distributed across
the cluster. Wherever possible, a data block is read, filtered, and aggregated by plan fragments executing on the
same host. For some operations, such as joins and combining intermediate results into a final result set, data is
transmitted across the network from one Impala daemon to another.
• What SCAN and HASH JOIN plan nodes are, and their role in computing query results:
In the Impala query plan, a scan node performs the I/O to read from the underlying data files. Although this is an
expensive operation from the traditional database perspective, Hadoop clusters and Impala are optimized to do
this kind of I/O in a highly parallel fashion. The major potential cost savings come from using the columnar Parquet
format (where Impala can avoid reading data for unneeded columns) and partitioned tables (where Impala can
avoid reading data for unneeded partitions).
Most Impala joins use the hash join mechanism. (It is only fairly recently that Impala started using the nested-loop
join technique, for certain kinds of non-equijoin queries.) In a hash join, when evaluating join conditions from two
tables, Impala constructs a hash table in memory with all the different column values from the table on one side
of the join. Then, for each row from the table on the other side of the join, Impala tests whether the relevant
column values are in this hash table or not.
A hash join node constructs such an in-memory hash table, then performs the comparisons to identify which rows
match the relevant join conditions and should be included in the result set (or at least sent on to the subsequent
intermediate stage of query processing). Because some of the input for a hash join might be transmitted across
the network from another host, it is especially important from a performance perspective to prune out ahead of
time any data that is known to be irrelevant.
The more distinct values are in the columns used as join keys, the larger the in-memory hash table and thus the
more memory required to process the query.
• The difference between a broadcast join and a shuffle join. (The Hadoop notion of a shuffle join is sometimes
referred to in Impala as a partitioned join.) In a broadcast join, the table from one side of the join (typically the
smaller table) is sent in its entirety to all the hosts involved in the query. Then each host can compare its portion
of the data from the other (larger) table against the full set of possible join keys. In a shuffle join, there is no
obvious “smaller” table, and so the contents of both tables are divided up, and corresponding portions of the data
are transmitted to each host involved in the query. See Optimizer Hints in Impala on page 387 for information
about how these different kinds of joins are processed.
• The notion of the build phase and probe phase when Impala processes a join query. The build phase is where the
rows containing the join key columns, typically for the smaller table, are transmitted across the network and built
into an in-memory hash table data structure on one or more destination nodes. The probe phase is where data
is read locally (typically from the larger table) and the join key columns are compared to the values in the in-memory
hash table. The corresponding input sources (tables, subqueries, and so on) for these phases are referred to as
the build side and the probe side.
• How to set Impala query options: interactively within an impala-shell session through the SET command, for
a JDBC or ODBC application through the SET statement, or globally for all impalad daemons through the
default_query_options configuration setting.
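For example, a minimal sketch of adjusting runtime filtering interactively for the current impala-shell session only
(the value shown is purely illustrative):
SET RUNTIME_FILTER_MODE=LOCAL;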
Runtime Filtering Internals
The filter that is transmitted between plan fragments is essentially a list of values for join key columns. When this list
of values is transmitted in time to a scan node, Impala can filter out non-matching values immediately after reading
them, rather than transmitting the raw data to another host to compare against the in-memory hash table on that
host.
For HDFS-based tables, this data structure is implemented as a Bloom filter, which uses a probability-based algorithm
to determine all possible matching values. (The probability-based aspect means that the filter might include some
non-matching values, but if so, that does not cause any inaccuracy in the final results.)
Another kind of filter is the “min-max” filter. It currently only applies to Kudu tables. The filter is a data structure
representing a minimum and maximum value. These filters are passed to Kudu to reduce the number of rows returned
to Impala when scanning the probe side of the join.
There are different kinds of filters to match the different kinds of joins (partitioned and broadcast). A broadcast filter
reflects the complete list of relevant values and can be immediately evaluated by a scan node. A partitioned filter
reflects only the values processed by one host in the cluster; all the partitioned filters must be combined into one (by
the coordinator node) before the scan nodes can use the results to accurately filter the data as it is read from storage.
Broadcast filters are also classified as local or global. With a local broadcast filter, the information in the filter is used
by a subsequent query fragment that is running on the same host that produced the filter. A non-local broadcast filter
must be transmitted across the network to a query fragment that is running on a different host. Impala designates 3
hosts to each produce non-local broadcast filters, to guard against the possibility of a single slow host taking too long.
Depending on the setting of the RUNTIME_FILTER_MODE query option (LOCAL or GLOBAL), Impala either uses a
conservative optimization strategy where filters are only consumed on the same host that produced them, or a more
aggressive strategy where filters are eligible to be transmitted across the network.
Note: In CDH 5.8 / Impala 2.6 and higher, the default for runtime filtering is the GLOBAL setting.
File Format Considerations for Runtime Filtering
Parquet tables get the most benefit from the runtime filtering optimizations. Runtime filtering can speed up join queries
against partitioned or unpartitioned Parquet tables, and single-table queries against partitioned Parquet tables. See
Using the Parquet File Format with Impala Tables on page 643 for information about using Parquet tables with Impala.
For other file formats (text, Avro, RCFile, and SequenceFile), runtime filtering speeds up queries against partitioned
tables only. Because partitioned tables can use a mixture of formats, Impala produces the filters in all cases, even if
they are not ultimately used to optimize the query.
Wait Intervals for Runtime Filters
Because it takes time to produce runtime filters, especially for partitioned filters that must be combined by the
coordinator node, there is a time interval above which it is more efficient for the scan nodes to go ahead and construct
their intermediate result sets, even if that intermediate data is larger than optimal. If it only takes a few seconds to
produce the filters, it is worth the extra time if pruning the unnecessary data can save minutes in the overall query
time. You can specify the maximum wait time in milliseconds using the RUNTIME_FILTER_WAIT_TIME_MS query
option.
By default, each scan node waits for up to 1 second (1000 milliseconds) for filters to arrive. If all filters have not arrived
within the specified interval, the scan node proceeds, using whatever filters did arrive to help avoid reading unnecessary
data. If a filter arrives after the scan node begins reading data, the scan node applies that filter to the data that is read
after the filter arrives, but not to the data that was already read.
If the cluster is relatively busy and your workload contains many resource-intensive or long-running queries, consider
increasing the wait time so that complicated queries do not miss opportunities for optimization. If the cluster is lightly
loaded and your workload contains many small queries taking only a few seconds, consider decreasing the wait time
to avoid the 1 second delay for each query.
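For example, on a busy cluster you might raise the wait interval for the current session; the value here is illustrative,
not a recommendation:
-- Allow scan nodes to wait up to 5 seconds for runtime filters to arrive.
SET RUNTIME_FILTER_WAIT_TIME_MS=5000;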
Query Options for Runtime Filtering
See the following sections for information about the query options that control runtime filtering:
• The first query option adjusts the “sensitivity” of this feature. By default, it is set to the highest level (GLOBAL).
(This default applies to CDH 5.8 / Impala 2.6 and higher. In previous releases, the default was LOCAL.)
– RUNTIME_FILTER_MODE Query Option (CDH 5.7 or higher only) on page 358
• The other query options are tuning knobs that you typically only adjust after doing performance testing, and that
you might want to change only for the duration of a single expensive query:
– MAX_NUM_RUNTIME_FILTERS Query Option (CDH 5.7 or higher only) on page 339
– DISABLE_ROW_RUNTIME_FILTERING Query Option (CDH 5.7 or higher only) on page 328
– RUNTIME_FILTER_MAX_SIZE Query Option (CDH 5.8 or higher only) on page 357
– RUNTIME_FILTER_MIN_SIZE Query Option (CDH 5.8 or higher only) on page 357
– RUNTIME_BLOOM_FILTER_SIZE Query Option (CDH 5.7 or higher only) on page 356; in CDH 5.8 / Impala 2.6
and higher, this setting acts as a fallback when statistics are not available, rather than as a directive.
Runtime Filtering and Query Plans
In the same way the query plan displayed by the EXPLAIN statement includes information about predicates used by
each plan fragment, it also includes annotations showing whether a plan fragment produces or consumes a runtime
filter. A plan fragment that produces a filter includes an annotation such as runtime filters: filter_id <-
table.column, while a plan fragment that consumes a filter includes an annotation such as runtime filters:
filter_id -> table.column. Setting the query option EXPLAIN_LEVEL=2 adds additional annotations showing
the type of the filter, either filter_id[bloom] (for HDFS-based tables) or filter_id[min_max] (for Kudu tables).
The following example shows a query that uses a single runtime filter, labeled RF000, to prune the partitions based
on evaluating the result set of a subquery at runtime:
CREATE TABLE yy (s STRING) PARTITIONED BY (year INT);
INSERT INTO yy PARTITION (year) VALUES ('1999', 1999), ('2000', 2000),
('2001', 2001), ('2010', 2010), ('2018', 2018);
COMPUTE STATS yy;
CREATE TABLE yy2 (s STRING, year INT);
INSERT INTO yy2 VALUES ('1999', 1999), ('2000', 2000), ('2001', 2001);
COMPUTE STATS yy2;
-- The following query reads an unknown number of partitions, whose key values
-- are only known at run time. The runtime filters line shows the
-- information used in query fragment 02 to decide which partitions to skip.
EXPLAIN SELECT s FROM yy WHERE year IN (SELECT year FROM yy2);
+--------------------------------------------------------------------------+
| PLAN-ROOT SINK |
| | |
| 04:EXCHANGE [UNPARTITIONED] |
| | |
| 02:HASH JOIN [LEFT SEMI JOIN, BROADCAST] |
| | hash predicates: year = year |
| | runtime filters: RF000 <- year |
| | |
| |--03:EXCHANGE [BROADCAST] |
| | | |
| | 01:SCAN HDFS [default.yy2] |
| | partitions=1/1 files=1 size=620B |
| | |
| 00:SCAN HDFS [default.yy] |
| partitions=5/5 files=5 size=1.71KB |
| runtime filters: RF000 -> year |
+--------------------------------------------------------------------------+
SELECT s FROM yy WHERE year IN (SELECT year FROM yy2); -- Returns 3 rows from yy
PROFILE;
The query profile (displayed by the PROFILE command in impala-shell) contains both the EXPLAIN plan and more
detailed information about the internal workings of the query. The profile output includes the Filter routing
table section with information about each filter based on its ID.
Examples of Queries that Benefit from Runtime Filtering
In this example, Impala would normally do extra work to interpret the columns C1, C2, C3, and ID for each row in
HUGE_T1, before checking the ID value against the in-memory hash table constructed from all the TINY_T2.ID values.
By producing a filter containing all the TINY_T2.ID values even before the query starts scanning the HUGE_T1 table,
Impala can skip the unnecessary work to parse the column info as soon as it determines that an ID value does not
match any of the values from the other table.
The example shows COMPUTE STATS statements for both the tables (even though that is a one-time operation after
loading data into those tables) because Impala relies on up-to-date statistics to determine which one has more distinct
ID values than the other. That information lets Impala make effective decisions about which table to use to construct
the in-memory hash table, and which table to read from disk and compare against the entries in the hash table.
COMPUTE STATS huge_t1;
COMPUTE STATS tiny_t2;
SELECT c1, c2, c3 FROM huge_t1 JOIN tiny_t2 WHERE huge_t1.id = tiny_t2.id;
In this example, T1 is a table partitioned by year. The subquery on T2 produces multiple values, and transmits those
values as a filter to the plan fragments that are reading from T1. Any non-matching partitions in T1 are skipped.
select c1 from t1 where year in (select distinct year from t2);
Now the WHERE clause contains an additional test that does not apply to the partition key column. A filter on a column
that is not a partition key is called a per-row filter. Because per-row filters only apply for Parquet, T1 must be a Parquet
table.
The subqueries result in two filters being transmitted to the scan nodes that read from T1. The filter on YEAR helps
the query eliminate entire partitions based on non-matching years. The filter on C2 lets Impala discard rows with
non-matching C2 values immediately after reading them. Without runtime filtering, Impala would have to keep the
non-matching values in memory, assemble C1, C2, and C3 into rows in the intermediate result set, and transmit all
the intermediate rows back to the coordinator node, where they would be eliminated only at the very end of the query.
select c1, c2, c3 from t1
where year in (select distinct year from t2)
and c2 in (select other_column from t3);
This example involves a broadcast join. The fact that the ON clause would return a small number of matching rows
(because there are not very many rows in TINY_T2) means that the corresponding filter is very selective. Therefore,
runtime filtering will probably be effective in optimizing this query.
select c1 from huge_t1 join [broadcast] tiny_t2
on huge_t1.id = tiny_t2.id
where huge_t1.year in (select distinct year from tiny_t2)
and c2 in (select other_column from t3);
This example involves a shuffle or partitioned join. Assume that most rows in HUGE_T1 have a corresponding row in
HUGE_T2. The fact that the ON clause could return a large number of matching rows means that the corresponding
filter would not be very selective. Therefore, runtime filtering might be less effective in optimizing this query.
select c1 from huge_t1 join [shuffle] huge_t2
on huge_t1.id = huge_t2.id
where huge_t1.year in (select distinct year from huge_t2)
and c2 in (select other_column from t3);
Tuning and Troubleshooting Queries that Use Runtime Filtering
These tuning and troubleshooting procedures apply to queries that are resource-intensive enough, long-running
enough, and frequent enough that you can devote special attention to optimizing them individually.
Use the EXPLAIN statement and examine the runtime filters: lines to determine whether runtime filters are
being applied to the WHERE predicates and join clauses that you expect. For example, runtime filtering does not apply
to queries that use the nested loop join mechanism due to non-equijoin operators.
Make sure statistics are up-to-date for all tables involved in the queries. Use the COMPUTE STATS statement after
loading data into non-partitioned tables, and COMPUTE INCREMENTAL STATS after adding new partitions to partitioned
tables.
If join queries involving large tables use unique columns as the join keys, for example joining a primary key column
with a foreign key column, the overhead of producing and transmitting the filter might outweigh the performance
benefit because not much data could be pruned during the early stages of the query. For such queries, consider setting
the query option RUNTIME_FILTER_MODE=OFF.
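For example, a sketch of disabling the feature around one such query; the table and column names are hypothetical:
SET RUNTIME_FILTER_MODE=OFF;
SELECT c1 FROM huge_t1 JOIN huge_t2 ON huge_t1.id = huge_t2.id;
SET RUNTIME_FILTER_MODE=GLOBAL; -- restore the default afterward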
Limitations and Restrictions for Runtime Filtering
The runtime filtering feature is most effective for the Parquet file formats. For other file formats, filtering only applies
for partitioned tables. See File Format Considerations for Runtime Filtering on page 590. For the ways in which runtime
filtering works for Kudu tables, see Impala Query Performance for Kudu Tables on page 683.
When the spill-to-disk mechanism is activated on a particular host during a query, that host does not produce any
filters while processing that query. This limitation does not affect the correctness of results; it only reduces the amount
of optimization that can be applied to the query.
Using HDFS Caching with Impala (CDH 5.3 or higher only)
HDFS caching provides performance and scalability benefits in production environments where Impala queries and
other Hadoop jobs operate on quantities of data much larger than the physical RAM on the DataNodes, making it
impractical to rely on the Linux OS cache, which only keeps the most recently used data in memory. Data read from
the HDFS cache avoids the overhead of checksumming and memory-to-memory copying involved when using data
from the Linux OS cache.
Note:
On a small or lightly loaded cluster, HDFS caching might not produce any speedup. It might even lead
to slower queries, if I/O read operations that were performed in parallel across the entire cluster are
replaced by in-memory operations operating on a smaller number of hosts. The hosts where the HDFS
blocks are cached can become bottlenecks because they experience high CPU load while processing
the cached data blocks, while other hosts remain idle. Therefore, always compare performance with
and without this feature enabled, using a realistic workload.
In CDH 5.4 / Impala 2.2 and higher, you can spread the CPU load more evenly by specifying the WITH
REPLICATION clause of the CREATE TABLE and ALTER TABLE statements. This clause lets you
control the replication factor for HDFS caching for a specific table or partition. By default, each cached
block is only present on a single host, which can lead to CPU contention if the same host processes
each cached block. Increasing the replication factor lets Impala choose different hosts to process
different cached blocks, to better distribute the CPU load. Always use a WITH REPLICATION setting
of at least 3, and adjust upward if necessary to match the replication factor for the underlying HDFS
data files.
In CDH 5.7 / Impala 2.5 and higher, Impala automatically randomizes which host processes a cached
HDFS block, to avoid CPU hotspots. For tables where HDFS caching is not applied, Impala designates
which host to process a data block using an algorithm that estimates the load on each host. If CPU
hotspots still arise during queries, you can enable additional randomization for the scheduling algorithm
for non-HDFS cached data by setting the SCHEDULE_RANDOM_REPLICA query option.
For background information about how to set up and manage HDFS caching for a CDH cluster, see the documentation
for HDFS caching.
Overview of HDFS Caching for Impala
With CDH 5.1 / Impala 1.4 and higher, Impala can use the HDFS caching feature to make more effective use of RAM,
so that repeated queries can take advantage of data “pinned” in memory regardless of how much data is processed
overall. The HDFS caching feature lets you designate a subset of frequently accessed data to be pinned permanently
in memory, remaining in the cache across multiple queries and never being evicted. This technique is suitable for tables
or partitions that are frequently accessed and are small enough to fit entirely within the HDFS memory cache. For
example, you might designate several dimension tables to be pinned in the cache, to speed up many different join
queries that reference them. Or in a partitioned table, you might pin a partition holding data from the most recent
time period because that data will be queried intensively; then when the next set of data arrives, you could unpin the
previous partition and pin the partition holding the new data.
Because this Impala performance feature relies on HDFS infrastructure, it only applies to Impala tables that use HDFS
data files. HDFS caching for Impala does not apply to HBase tables, S3 tables, Kudu tables, or Isilon tables.
Setting Up HDFS Caching for Impala
To use HDFS caching with Impala, first set up that feature for your CDH cluster:
• Decide how much memory to devote to the HDFS cache on each host. Remember that the total memory available
for cached data is the sum of the cache sizes on all the hosts. By default, any data block is only cached on one
host, although you can cache a block across multiple hosts by increasing the replication factor.
• Issue hdfs cacheadmin commands to set up one or more cache pools, owned by the same user as the impalad
daemon (typically impala). For example:
hdfs cacheadmin -addPool four_gig_pool -owner impala -limit 4000000000
For details about the hdfs cacheadmin command, see the documentation for HDFS caching.
Once HDFS caching is enabled and one or more pools are available, see Enabling HDFS Caching for Impala Tables and
Partitions on page 595 for how to choose which Impala data to load into the HDFS cache. On the Impala side, you specify
the cache pool name defined by the hdfs cacheadmin command in the Impala DDL statements that enable HDFS
caching for a table or partition, such as CREATE TABLE ... CACHED IN pool or ALTER TABLE ... SET CACHED
IN pool.
Enabling HDFS Caching for Impala Tables and Partitions
Begin by choosing which tables or partitions to cache. For example, these might be lookup tables that are accessed by
many different join queries, or partitions corresponding to the most recent time period that are analyzed by different
reports or ad hoc queries.
In your SQL statements, you specify logical divisions such as tables and partitions to be cached. Impala translates these
requests into HDFS-level directives that apply to particular directories and files. For example, given a partitioned table
CENSUS with a partition key column YEAR, you could choose to cache all or part of the data as follows:
In CDH 5.4 / Impala 2.2 and higher, the optional WITH REPLICATION clause for CREATE TABLE and ALTER TABLE
lets you specify a replication factor, the number of hosts on which to cache the same data blocks. When Impala
processes a cached data block, where the cache replication factor is greater than 1, Impala randomly selects a host
that has a cached copy of that data block. This optimization avoids excessive CPU usage on a single host when the
same cached data block is processed multiple times. Cloudera recommends specifying a value greater than or equal
to the HDFS block replication factor.
-- Cache the entire table (all partitions).
alter table census set cached in 'pool_name';
-- Remove the entire table from the cache.
alter table census set uncached;
-- Cache a portion of the table (a single partition).
-- If the table is partitioned by multiple columns (such as year, month, day),
-- the ALTER TABLE command must specify values for all those columns.
alter table census partition (year=1960) set cached in 'pool_name';
-- Cache the data from one partition on up to 4 hosts, to minimize CPU load on any
-- single host when the same data block is processed multiple times.
alter table census partition (year=1970)
set cached in 'pool_name' with replication = 4;
-- At each stage, check the volume of cached data.
-- For large tables or partitions, the background loading might take some time,
-- so you might have to wait and reissue the statement until all the data
-- has finished being loaded into the cache.
show table stats census;
+-------+-------+--------+------+--------------+--------+
| year | #Rows | #Files | Size | Bytes Cached | Format |
+-------+-------+--------+------+--------------+--------+
| 1900 | -1 | 1 | 11B | NOT CACHED | TEXT |
| 1940 | -1 | 1 | 11B | NOT CACHED | TEXT |
| 1960 | -1 | 1 | 11B | 11B | TEXT |
| 1970 | -1 | 1 | 11B | NOT CACHED | TEXT |
| Total | -1 | 4 | 44B | 11B | |
+-------+-------+--------+------+--------------+--------+
CREATE TABLE considerations:
The HDFS caching feature affects the Impala CREATE TABLE statement as follows:
• You can put a CACHED IN 'pool_name' clause and optionally a WITH REPLICATION = number_of_hosts
clause at the end of a CREATE TABLE statement to automatically cache the entire contents of the table, including
any partitions added later. The pool_name is a pool that you previously set up with the hdfs cacheadmin
command.
• Once a table is designated for HDFS caching through the CREATE TABLE statement, if new partitions are added
later through ALTER TABLE ... ADD PARTITION statements, the data in those new partitions is automatically
cached in the same pool.
• If you want to perform repetitive queries on a subset of data from a large table, and it is not practical to designate
the entire table or specific partitions for HDFS caching, you can create a new cached table with just a subset of
the data by using CREATE TABLE ... CACHED IN 'pool_name' AS SELECT ... WHERE .... When you
are finished with generating reports from this subset of data, drop the table and both the data files and the data
cached in RAM are automatically deleted.
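For example, sketches of these variations, reusing the four_gig_pool cache pool set up earlier (the table definitions
themselves are hypothetical):
-- Cache the entire table, including any partitions added later, on up to 3 hosts.
CREATE TABLE lookup_dim (id BIGINT, val STRING)
CACHED IN 'four_gig_pool' WITH REPLICATION = 3;
-- Cache only a frequently queried subset of a larger table.
CREATE TABLE recent_census CACHED IN 'four_gig_pool'
AS SELECT * FROM census WHERE year >= 2010;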
See CREATE TABLE Statement on page 234 for the full syntax.
Other memory considerations:
Certain DDL operations, such as ALTER TABLE ... SET LOCATION, are blocked while the underlying HDFS directories
contain cached files. You must uncache the files first, before changing the location, dropping the table, and so on.
When data is requested to be pinned in memory, that process happens in the background without blocking access to
the data while the caching is in progress. Loading the data from disk could take some time. Impala reads each HDFS
data block from memory if it has been pinned already, or from disk if it has not been pinned yet.
The amount of data that you can pin on each node through the HDFS caching mechanism is subject to a quota that is
enforced by the underlying HDFS service. Before requesting to pin an Impala table or partition in memory, check that
its size does not exceed this quota.
Note: Because the HDFS cache consists of combined memory from all the DataNodes in the cluster,
cached tables or partitions can be bigger than the amount of HDFS cache memory on any single host.
Loading and Removing Data with HDFS Caching Enabled
When HDFS caching is enabled, extra processing happens in the background when you add or remove data through
statements such as INSERT and DROP TABLE.
Inserting or loading data:
• When Impala performs an INSERT or LOAD DATA statement for a table or partition that is cached, the new data
files are automatically cached and Impala recognizes that fact automatically.
• If you perform an INSERT or LOAD DATA through Hive, as always, Impala only recognizes the new data files after
a REFRESH table_name statement in Impala.
• If the cache pool is entirely full, or becomes full before all the requested data can be cached, the Impala DDL
statement returns an error. This is to avoid situations where only some of the requested data could be cached.
• When HDFS caching is enabled for a table or partition, new data files are cached automatically when they are
added to the appropriate directory in HDFS, without the need for a REFRESH statement in Impala. Impala
automatically performs a REFRESH once the new data is loaded into the HDFS cache.
Dropping tables, partitions, or cache pools:
The HDFS caching feature interacts with the Impala DROP TABLE and ALTER TABLE ... DROP PARTITION statements
as follows:
• When you issue a DROP TABLE for a table that is entirely cached, or has some partitions cached, the DROP TABLE
succeeds and all the cache directives Impala submitted for that table are removed from the HDFS cache system.
• The same applies to ALTER TABLE ... DROP PARTITION. The operation succeeds and any cache directives
are removed.
• As always, the underlying data files are removed if the dropped table is an internal table, or the dropped partition
is in its default location underneath an internal table. The data files are left alone if the dropped table is an external
table, or if the dropped partition is in a non-default location.
• If you designated the data files as cached through the hdfs cacheadmin command, and the data files are left
behind as described in the previous item, the data files remain cached. Impala only removes the cache directives
submitted by Impala through the CREATE TABLE or ALTER TABLE statements. It is OK to have multiple redundant
cache directives pertaining to the same files; the directives all have unique IDs and owners so that the system can
tell them apart.
• If you drop an HDFS cache pool through the hdfs cacheadmin command, all the Impala data files are preserved,
just no longer cached. After a subsequent REFRESH, SHOW TABLE STATS reports 0 bytes cached for each associated
Impala table or partition.
Relocating a table or partition:
The HDFS caching feature interacts with the Impala ALTER TABLE ... SET LOCATION statement as follows:
• If you have designated a table or partition as cached through the CREATE TABLE or ALTER TABLE statements,
subsequent attempts to relocate the table or partition through an ALTER TABLE ... SET LOCATION statement
will fail. You must issue an ALTER TABLE ... SET UNCACHED statement for the table or partition first. Otherwise,
Impala would lose track of some cached data files and have no way to uncache them later.
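For example, a minimal sketch of that sequence for the census table used earlier; the new location path is hypothetical:
ALTER TABLE census SET UNCACHED;
ALTER TABLE census SET LOCATION '/user/impala/census_relocated';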
Administration for HDFS Caching with Impala
Here are the guidelines and steps to check or change the status of HDFS caching for Impala data:
hdfs cacheadmin command:
• If you drop a cache pool with the hdfs cacheadmin command, Impala queries against the associated data files
will still work, by falling back to reading the files from disk. After performing a REFRESH on the table, Impala
reports the number of bytes cached as 0 for all associated tables and partitions.
• You might use hdfs cacheadmin to get a list of existing cache pools, or detailed information about the pools,
as follows:
hdfs cacheadmin -listDirectives # Basic info
Found 122 entries
ID POOL REPL EXPIRY PATH
123 testPool 1 never /user/hive/warehouse/tpcds.store_sales
124 testPool 1 never /user/hive/warehouse/tpcds.store_sales/ss_date=1998-01-15
125 testPool 1 never /user/hive/warehouse/tpcds.store_sales/ss_date=1998-02-01
...
hdfs cacheadmin -listDirectives -stats # More details
Found 122 entries
 ID POOL      REPL EXPIRY PATH                                                       BYTES_NEEDED BYTES_CACHED FILES_NEEDED FILES_CACHED
123 testPool     1  never /user/hive/warehouse/tpcds.store_sales                                0            0            0            0
124 testPool     1  never /user/hive/warehouse/tpcds.store_sales/ss_date=1998-01-15        143169       143169            1            1
125 testPool     1  never /user/hive/warehouse/tpcds.store_sales/ss_date=1998-02-01        112447       112447            1            1
...
Impala SHOW statement:
• For each table or partition, the SHOW TABLE STATS or SHOW PARTITIONS statement displays the number of
bytes currently cached by the HDFS caching feature. If there are no cache directives in place for that table or
partition, the result set displays NOT CACHED. A value of 0, or a smaller number than the overall size of the table
or partition, indicates that the cache request has been submitted but the data has not been entirely loaded into
memory yet. See SHOW Statement on page 363 for details.
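For example, you might check the caching status with statements such as the following; the table name is hypothetical:
SHOW TABLE STATS census;    -- look at the cached-bytes column
SHOW PARTITIONS census;     -- per-partition caching details for a partitioned table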
Cloudera Manager:
• You can enable or disable HDFS caching through Cloudera Manager, using the configuration setting Maximum
Memory Used for Caching for the HDFS service. This control sets the HDFS configuration parameter
dfs_datanode_max_locked_memory, which specifies the upper limit of HDFS cache size on each node.
• All the other manipulation of the HDFS caching settings, such as what files are cached, is done through the command
line, either Impala DDL statements or the Linux hdfs cacheadmin command.
Impala memory limits:
The Impala HDFS caching feature interacts with the Impala memory limits as follows:
• The maximum size of each HDFS cache pool is specified externally to Impala, through the hdfs cacheadmin
command.
• All the memory used for HDFS caching is separate from the impalad daemon address space and does not count
towards the limits of the --mem_limit startup option, MEM_LIMIT query option, or further limits imposed
through YARN resource management or the Linux cgroups mechanism.
• Because accessing HDFS cached data avoids a memory-to-memory copy operation, queries involving cached data
require less memory on the Impala side than the equivalent queries on uncached data. In addition to any
performance benefits in a single-user environment, the reduced memory helps to improve scalability under
high-concurrency workloads.
Performance Considerations for HDFS Caching with Impala
In Impala 1.4.0 and higher, Impala supports efficient reads from data that is pinned in memory through HDFS caching.
Impala takes advantage of the HDFS API and reads the data from memory rather than from disk whether the data files
are pinned using Impala DDL statements, or using the command-line mechanism where you specify HDFS paths.
When you examine the output of the impala-shell SUMMARY command, or look in the metrics report for the impalad
daemon, you see how many bytes are read from the HDFS cache. For example, this excerpt from a query profile
illustrates that all the data read during a particular phase of the query came from the HDFS cache, because the
BytesRead and BytesReadDataNodeCache values are identical.
HDFS_SCAN_NODE (id=0):(Total: 11s114ms, non-child: 11s114ms, % non-child: 100.00%)
- AverageHdfsReadThreadConcurrency: 0.00
- AverageScannerThreadConcurrency: 32.75
- BytesRead: 10.47 GB (11240756479)
- BytesReadDataNodeCache: 10.47 GB (11240756479)
- BytesReadLocal: 10.47 GB (11240756479)
- BytesReadShortCircuit: 10.47 GB (11240756479)
- DecompressionTime: 27s572ms
For queries involving smaller amounts of data, or in single-user workloads, you might not notice a significant difference
in query response time with or without HDFS caching. Even with HDFS caching turned off, the data for the query might
still be in the Linux OS buffer cache. The benefits become clearer as data volume increases, and especially as the system
processes more concurrent queries. HDFS caching improves the scalability of the overall system. That is, it prevents
query performance from declining when the workload outstrips the capacity of the Linux OS cache.
Due to a limitation of HDFS, zero-copy reads are not supported with encryption. Cloudera recommends not using HDFS
caching for Impala data files in encryption zones. The queries fall back to the normal read path during query execution,
which might cause some performance overhead.
SELECT considerations:
The Impala HDFS caching feature interacts with the SELECT statement and query performance as follows:
• Impala automatically reads from memory any data that has been designated as cached and actually loaded into
the HDFS cache. (It could take some time after the initial request to fully populate the cache for a table with large
size or many partitions.) The speedup comes from two aspects: reading from RAM instead of disk, and accessing
the data straight from the cache area instead of copying from one RAM area to another. This second aspect yields
further performance improvement over the standard OS caching mechanism, which still results in
memory-to-memory copying of cached data.
• For small amounts of data, the query speedup might not be noticeable in terms of wall clock time. The performance
might be roughly the same with HDFS caching turned on or off, due to recently used data being held in the Linux
OS cache. The difference is more pronounced with:
– Data volumes (for all queries running concurrently) that exceed the size of the Linux OS cache.
– A busy cluster running many concurrent queries, where the reduction in memory-to-memory copying and
overall memory usage during queries results in greater scalability and throughput.
– Thus, to really exercise and benchmark this feature in a development environment, you might need to simulate
realistic workloads and concurrent queries that match your production environment.
– One way to simulate a heavy workload on a lightly loaded system is to flush the OS buffer cache (on each
DataNode) between iterations of queries against the same tables or partitions:
sync
echo 1 > /proc/sys/vm/drop_caches
• Impala queries take advantage of HDFS cached data regardless of whether the cache directive was issued by
Impala or externally through the hdfs cacheadmin command, for example for an external table where the
cached data files might be accessed by several different Hadoop components.
• If your query returns a large result set, the time reported for the query could be dominated by the time needed
to print the results on the screen. To measure the time for the underlying query processing, query the COUNT()
of the big result set, which does all the same processing but only prints a single line to the screen.
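For example, rather than timing a query that prints a large result set, you might wrap the same query in a COUNT();
the table and filter shown here are hypothetical:
-- Elapsed time dominated by printing millions of rows:
SELECT * FROM sales_history WHERE year = 1998;
-- Roughly the same scan and filter work, but prints a single line:
SELECT COUNT(*) FROM (SELECT * FROM sales_history WHERE year = 1998) t;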
Detecting and Correcting HDFS Block Skew Conditions
For best performance of Impala parallel queries, the work is divided equally across hosts in the cluster, and all hosts
take approximately equal time to finish their work. If one host takes substantially longer than others, the extra time
needed for the slow host can become the dominant factor in query performance. Therefore, one of the first steps in
performance tuning for Impala is to detect and correct such conditions.
The main cause of uneven performance that you can correct within Impala is skew in the number of HDFS data blocks
processed by each host, where some hosts process substantially more data blocks than others. This condition can
occur because of uneven distribution of the data values themselves, for example causing certain data files or partitions
to be large while others are very small. (Although it is possible to have unevenly distributed data without any problems
with the distribution of HDFS blocks.) Block skew could also be due to the underlying block allocation policies within
HDFS, the replication factor of the data files, and the way that Impala chooses the host to process each data block.
The most convenient way to detect block skew, or slow-host issues in general, is to examine the “executive summary”
information from the query profile after running a query:
• In impala-shell, issue the SUMMARY command immediately after the query is complete, to see just the summary
information. If you detect issues involving skew, you might switch to issuing the PROFILE command, which displays
the summary information followed by a detailed performance analysis.
• In the Cloudera Manager interface or the Impala debug web UI, click on the Profile link associated with the query
after it is complete. The executive summary information is displayed early in the profile output.
For each phase of the query, you see an Avg Time and a Max Time value, along with #Hosts indicating how many hosts
are involved in that query phase. For all the phases with #Hosts greater than one, look for cases where the maximum
time is substantially greater than the average time. Focus on the phases that took the longest, for example, those
taking multiple seconds rather than milliseconds or microseconds.
If you detect that some hosts take longer than others, first rule out non-Impala causes. One reason that some hosts
could be slower than others is if those hosts have less capacity than the others, or if they are substantially busier due
to unevenly distributed non-Impala workloads:
• For clusters running Impala, keep the relative capacities of all hosts roughly equal. Any cost savings from including
some underpowered hosts in the cluster will likely be outweighed by poor or uneven performance, and the time
spent diagnosing performance issues.
• If non-Impala workloads cause slowdowns on some hosts but not others, use the appropriate load-balancing
techniques for the non-Impala components to smooth out the load across the cluster.
If the hosts on your cluster are evenly powered and evenly loaded, examine the detailed profile output to determine
which host is taking longer than others for the query phase in question. Examine how many bytes are processed during
that phase on that host, how much memory is used, and how many bytes are transmitted across the network.
The most common symptom is a higher number of bytes read on one host than others, due to one host being requested
to process a higher number of HDFS data blocks. This condition is more likely to occur when the number of blocks
accessed by the query is relatively small. For example, if you have a 10-node cluster and the query processes 10 HDFS
blocks, each node might not process exactly one block. If one node sits idle while another node processes two blocks,
the query could take twice as long as if the data was perfectly distributed.
Possible solutions in this case include:
• If the query is artificially small, perhaps for benchmarking purposes, scale it up to process a larger data set. For
example, if some nodes read 10 HDFS data blocks while others read 11, the overall effect of the uneven distribution
is much lower than when some nodes did twice as much work as others. As a guideline, aim for a “sweet spot”
where each node reads 2 GB or more from HDFS per query. Queries that process lower volumes than that could
experience inconsistent performance that smooths out as queries become more data-intensive.
• If the query processes only a few large blocks, so that many nodes sit idle and cannot help to parallelize the query,
consider reducing the overall block size. For example, you might adjust the PARQUET_FILE_SIZE query option
before copying or converting data into a Parquet table. Or you might adjust the granularity of data files produced
earlier in the ETL pipeline by non-Impala components. In Impala 2.0 and later, the default Parquet block size is
256 MB, reduced from 1 GB, to improve parallelism for common cluster sizes and data volumes.
• Reduce the amount of compression applied to the data. For text data files, the highest degree of compression
(gzip) produces unsplittable files that are more difficult for Impala to process in parallel, and require extra memory
during processing to hold the compressed and uncompressed data simultaneously. For binary formats such as
Parquet and Avro, compression can result in fewer data blocks overall, but remember that when queries process
relatively few blocks, there is less opportunity for parallel execution and many nodes in the cluster might sit idle.
Note that when Impala writes Parquet data with the query option COMPRESSION_CODEC=NONE enabled, the data
is still typically compact due to the encoding schemes used by Parquet, independent of the final compression
step.
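For example, before writing a Parquet table you might adjust these query options in impala-shell to produce smaller,
more numerous data files; the table names are hypothetical and the sizes are only illustrative:
SET PARQUET_FILE_SIZE=128m;
SET COMPRESSION_CODEC=snappy;    -- or NONE; Parquet encoding keeps the files reasonably compact either way
INSERT OVERWRITE parquet_table SELECT * FROM text_staging_table;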
Data Cache for Remote Reads
When Impala compute nodes and its storage are not co-located, the network bandwidth requirement goes up as the
network traffic includes the data fetch as well as the shuffling exchange traffic of intermediate results.
Note: This is an experimental feature in Impala 3.3 / CDH 6.3 and is not generally supported.
To mitigate the pressure on the network, you can enable the compute nodes to cache the working set read from remote
filesystems such as a remote HDFS DataNode, S3, ABFS, or ADLS.
To enable remote data cache:
1. In Cloudera Manager, navigate to Clusters > Impala Service.
2. In the Configuration tab, select Impala Daemon in Scope, and select Advanced in Category.
3. In the Impala Daemon Command Line Argument Advanced Configuration Snippet (Safety Valve) field, set the
--data_cache Impala Daemon start-up flag as shown below:
--data_cache=dir1,dir2,dir3,...:quota
The flag is set to a comma-separated list of directories, followed by a colon (:) and a capacity quota that applies to each directory.
The directories must be specified as absolute paths.
4. Click Save Changes and restart the Impala service.
If set to an empty string, data caching is disabled.
Cached data is stored in the specified directories.
The specified directories must exist in the local filesystem of each Impala Daemon, or Impala will fail to start.
In addition, the filesystem on which each directory resides must support hole punching.
The cache can consume up to the quota bytes for each of the directories specified.
The default setting for --data_cache is an empty string.
For example, with the following setting, the data cache may use up to 2 TB, with 1 TB max in /data/0 and /data/1
respectively.
--data_cache=/data/0,/data/1:1TB
Testing Impala Performance
Test to ensure that Impala is configured for optimal performance. If you have installed Impala without Cloudera
Manager, complete the processes described in this topic to help ensure a proper configuration. Even if you installed
Impala with Cloudera Manager, which automatically applies appropriate configurations, these procedures can be used
to verify that Impala is set up correctly.
Checking Impala Configuration Values
You can inspect Impala configuration values by connecting to your Impala server using a browser.
To check Impala configuration values:
1. Use a browser to connect to one of the hosts running impalad in your environment. Connect using an address
of the form http://hostname:port/varz.
Note: In the preceding example, replace hostname and port with the name and port of your
Impala server. The default port is 25000.
2. Review the configured values.
For example, to check that your system is configured to use block locality tracking information, you would check
that the value for dfs.datanode.hdfs-blocks-metadata.enabled is true.
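For example, assuming the default web UI port of 25000 and a hypothetical host name, you could check the setting
from the command line:
curl -s http://impalad-host:25000/varz | grep dfs.datanode.hdfs-blocks-metadata.enabled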
To check data locality:
1. Execute a query on a dataset that is available across multiple nodes. For example, for a table named MyTable
that has a reasonable chance of being spread across multiple DataNodes:
[impalad-host:21000] > SELECT COUNT(*) FROM MyTable
2. After the query completes, review the contents of the Impala logs. You should find a recent message similar to
the following:
Total remote scan volume = 0
The presence of remote scans may indicate that impalad is not running on the correct nodes. This can happen because
some DataNodes are not running impalad, or because the impalad instance that initiates the query is unable to contact
one or more of the other impalad instances.
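A quick way to look for this log message from the command line might be the following; the log path shown is only a
typical default and can differ on your cluster:
grep "Total remote scan volume" /var/log/impalad/impalad.INFO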
To understand the causes of this issue:
1. Connect to the debugging web server. By default, this server runs on port 25000. This page lists all impalad
instances running in your cluster. If there are fewer instances than you expect, this often indicates some DataNodes
are not running impalad. Ensure impalad is started on all DataNodes.
2. If you are using multi-homed hosts, ensure that the Impala daemon's hostname resolves to the interface on which
impalad is running. The hostname Impala is using is displayed when impalad starts. To explicitly set the hostname,
use the --hostname flag.
3. Check that statestored is running as expected. Review the contents of the state store log to ensure all instances
of impalad are listed as having connected to the state store.
Reviewing Impala Logs
You can review the contents of the Impala logs for signs that short-circuit reads or block location tracking are not
functioning. Before checking logs, execute a simple query against a small HDFS dataset. Completing a query task
generates log messages using current settings. Information on executing queries can be found in Using the Impala
Shell (impala-shell Command) on page 713. Information on logging can be found in Using Impala Logging on page 709.
Log messages and their interpretations are as follows:
Log Message: Unknown disk id. This will negatively affect performance. Check your hdfs settings to enable block location metadata
Interpretation: Tracking block locality is not enabled.
Log Message: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Interpretation: Native checksumming is not enabled.
Understanding Impala Query Performance - EXPLAIN Plans and Query Profiles
To understand the high-level performance considerations for Impala queries, read the output of the EXPLAIN statement
for the query. You can get the EXPLAIN plan without actually running the query itself.
For an overview of the physical performance characteristics for a query, issue the SUMMARY statement in impala-shell
immediately after executing a query. This condensed information shows which phases of execution took the most
time, and how the estimates for memory usage and number of rows at each phase compare to the actual values.
To understand the detailed performance characteristics for a query, issue the PROFILE statement in impala-shell
immediately after executing a query. This low-level information includes physical details about memory, CPU, I/O, and
network usage, and thus is only available after the query is actually run.
Also, see Performance Considerations for the Impala-HBase Integration on page 685 and Understanding and Tuning
Impala Query Performance for S3 Data on page 697 for examples of interpreting EXPLAIN plans for queries against
HBase tables and data stored in the Amazon Simple Storage System (S3).
Using the EXPLAIN Plan for Performance Tuning
The EXPLAIN statement gives you an outline of the logical steps that a query will perform, such as how the work will
be distributed among the nodes and how intermediate results will be combined to produce the final result set. You
can see these details before actually running the query. You can use this information to check that the query will not
operate in some very unexpected or inefficient way.
[impalad-host:21000] > explain select count(*) from customer_address;
+----------------------------------------------------------+
| Explain String |
+----------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=42.00MB VCores=1 |
| |
| 03:AGGREGATE [MERGE FINALIZE] |
| | output: sum(count(*)) |
| | |
| 02:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HDFS [default.customer_address] |
| partitions=1/1 size=5.25MB |
+----------------------------------------------------------+
Read the EXPLAIN plan from bottom to top:
• The last part of the plan shows the low-level details such as the expected amount of data that will be read, where
you can judge the effectiveness of your partitioning strategy and estimate how long it will take to scan a table
based on total data size and the size of the cluster.
• As you work your way up, next you see the operations that will be parallelized and performed on each Impala
node.
• At the higher levels, you see how data flows when intermediate result sets are combined and transmitted from
one node to another.
• See EXPLAIN_LEVEL Query Option on page 331 for details about the EXPLAIN_LEVEL query option, which lets you
customize how much detail to show in the EXPLAIN plan depending on whether you are doing high-level or
low-level tuning, dealing with logical or physical aspects of the query.
The EXPLAIN plan is also printed at the beginning of the query profile report described in Using the Query Profile for
Performance Tuning on page 604, for convenience in examining both the logical and physical aspects of the query
side-by-side.
The amount of detail displayed in the EXPLAIN output is controlled by the EXPLAIN_LEVEL query option. You typically
increase this setting from standard to extended (or from 1 to 2) when double-checking the presence of table and
column statistics during performance tuning, or when estimating query resource usage in conjunction with the resource
management features in CDH 5.
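For example, you might raise the level before re-running EXPLAIN to verify that statistics are present; the query shown
is the same one used above:
SET EXPLAIN_LEVEL=extended;
EXPLAIN SELECT COUNT(*) FROM customer_address;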
Using the SUMMARY Report for Performance Tuning
The SUMMARY command within the impala-shell interpreter gives you an easy-to-digest overview of the timings
for the different phases of execution for a query. Like the EXPLAIN plan, it is easy to see potential performance
bottlenecks. Like the PROFILE output, it is available after the query is run and so displays actual timing numbers.
The SUMMARY report is also printed at the beginning of the query profile report described in Using the Query Profile
for Performance Tuning on page 604, for convenience in examining high-level and low-level aspects of the query
side-by-side.
For example, here is a query involving an aggregate function, on a single-node VM. The different stages of the query
and their timings are shown (rolled up for all nodes), along with estimated and actual values used in planning the query.
In this case, the AVG() function is computed for a subset of data on each node (stage 01) and then the aggregated
results from all nodes are combined at the end (stage 03). You can see which stages took the most time, and whether
any estimates were substantially different than the actual data distribution. (When examining the time values, be sure
to consider the suffixes such as us for microseconds and ms for milliseconds, rather than just looking for the largest
numbers.)
[localhost:21000] > select avg(ss_sales_price) from store_sales where ss_coupon_amt =
0;
+---------------------+
| avg(ss_sales_price) |
+---------------------+
| 37.80770926328327 |
+---------------------+
[localhost:21000] > summary;
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+
| Operator     | #Hosts | Avg Time | Max Time | #Rows | Est. #Rows | Peak Mem | Est. Peak Mem | Detail          |
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+
| 03:AGGREGATE | 1      | 1.03ms   | 1.03ms   | 1     | 1          | 48.00 KB | -1 B          | MERGE FINALIZE  |
| 02:EXCHANGE  | 1      | 0ns      | 0ns      | 1     | 1          | 0 B      | -1 B          | UNPARTITIONED   |
| 01:AGGREGATE | 1      | 30.79ms  | 30.79ms  | 1     | 1          | 80.00 KB | 10.00 MB      |                 |
| 00:SCAN HDFS | 1      | 5.45s    | 5.45s    | 2.21M | -1         | 64.05 MB | 432.00 MB     | tpc.store_sales |
+--------------+--------+----------+----------+-------+------------+----------+---------------+-----------------+
Notice how the longest initial phase of the query is measured in seconds (s), while later phases working on smaller
intermediate results are measured in milliseconds (ms) or even nanoseconds (ns).
Here is an example from a more complicated query, as it would appear in the PROFILE output:
Operator              #Hosts   Avg Time   Max Time    #Rows  Est. #Rows   Peak Mem  Est. Peak Mem  Detail
------------------------------------------------------------------------------------------------------------------------
09:MERGING-EXCHANGE        1   79.738us   79.738us        5           5          0        -1.00 B  UNPARTITIONED
05:TOP-N                   3   84.693us   88.810us        5           5   12.00 KB       120.00 B
04:AGGREGATE               3    5.263ms    6.432ms        5           5   44.00 KB       10.00 MB  MERGE FINALIZE
08:AGGREGATE               3   16.659ms   27.444ms   52.52K     600.12K    3.20 MB       15.11 MB  MERGE
07:EXCHANGE                3    2.644ms      5.1ms   52.52K     600.12K          0              0  HASH(o_orderpriority)
03:AGGREGATE               3  342.913ms  966.291ms   52.52K     600.12K   10.80 MB       15.11 MB
02:HASH JOIN               3    2s165ms    2s171ms  144.87K     600.12K   13.63 MB      941.01 KB  INNER JOIN, BROADCAST
|--06:EXCHANGE             3    8.296ms    8.692ms   57.22K      15.00K          0              0  BROADCAST
|  01:SCAN HDFS            2    1s412ms    1s978ms   57.22K      15.00K   24.21 MB      176.00 MB  tpch.orders o
00:SCAN HDFS               3    8s032ms    8s558ms    3.79M     600.12K   32.29 MB      264.00 MB  tpch.lineitem l
Using the Query Profile for Performance Tuning
The PROFILE command, available in the impala-shell interpreter, produces a detailed low-level report showing
how the most recent query was executed. Unlike the EXPLAIN plan described in Using the EXPLAIN Plan for Performance
Tuning on page 602, this information is only available after the query has finished. It shows physical details such as the
number of bytes read, maximum memory usage, and so on for each node. You can use this information to determine
if the query is I/O-bound or CPU-bound, whether some network condition is imposing a bottleneck, whether a slowdown
is affecting some nodes but not others, and to check that recommended configuration settings such as short-circuit
local reads are in effect.
By default, time values in the profile output reflect the wall-clock time taken by an operation. For values denoting
system time or user time, the measurement unit is reflected in the metric name, such as ScannerThreadsSysTime
or ScannerThreadsUserTime. For example, a multi-threaded I/O operation might show a small figure for wall-clock
time, while the corresponding system time is larger, representing the sum of the CPU time taken by each thread. Or
a wall-clock time figure might be larger because it counts time spent waiting, while the corresponding system and user
time figures only measure the time while the operation is actively using CPU cycles.
The EXPLAIN plan is also printed at the beginning of the query profile report, for convenience in examining both the
logical and physical aspects of the query side-by-side. The EXPLAIN_LEVEL query option also controls the verbosity of
the EXPLAIN output printed by the PROFILE command.
In CDH 6.2 / Impala 3.2, a new Per Node Profiles section was added to the profile output. The new section includes
the following metrics that can be controlled by the RESOURCE_TRACE_RATIO query option.
• CpuIoWaitPercentage
• CpuSysPercentage
• CpuUserPercentage
• HostDiskReadThroughput: All data read by the host as part of the execution of this query (spilling), by the
HDFS data node, and by other processes running on the same system.
• HostDiskWriteThroughput: All data written by the host as part of the execution of this query (spilling), by the
HDFS data node, and by other processes running on the same system.
• HostNetworkRx: All data received by the host as part of the execution of this query, other queries, and other
processes running on the same system.
• HostNetworkTx: All data transmitted by the host as part of the execution of this query, other queries, and other
processes running on the same system.
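For example, to sample these metrics for all queries in a session, you might set the ratio to its maximum value:
SET RESOURCE_TRACE_RATIO=1;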
See Query Details for the steps to download the query profile in the text format in Cloudera Manager.
Scalability Considerations for Impala
This section explains how the size of your cluster and the volume of data influences SQL performance and schema
design for Impala tables. Typically, adding more cluster capacity reduces problems due to memory limits or disk
throughput. On the other hand, larger clusters are more likely to have other kinds of scalability issues, such as a single
slow node that causes performance problems for queries.
A good source of tips related to scalability and performance tuning is the Impala Cookbook presentation. These slides
are updated periodically as new features come out and new benchmarks are performed.
Impact of Many Tables or Partitions on Impala Catalog Performance and Memory Usage
Because Hadoop I/O is optimized for reading and writing large files, Impala is optimized for tables containing relatively
few, large data files. Schemas containing thousands of tables, or tables containing thousands of partitions, can encounter
performance issues during startup or during DDL operations such as ALTER TABLE statements.
Important:
Because of a change in the default heap size for the catalogd daemon in CDH 5.7 / Impala 2.5 and
higher, the following procedure to increase the catalogd memory limit might be required following
an upgrade to CDH 5.7 / Impala 2.5 even if not needed previously.
For schemas with large numbers of tables, partitions, and data files, the catalogd daemon might encounter an
out-of-memory error. To prevent the error, increase the memory limit for the catalogd daemon:
1. Check current memory usage for the catalogd daemon by running the following commands on the host where
that daemon runs on your cluster:
jcmd catalogd_pid VM.flags
jmap -heap catalogd_pid
2. Decide on a large enough value for the catalogd heap.
• On systems managed by Cloudera Manager, include this value in the configuration field Java Heap Size of
Catalog Server in Bytes (Cloudera Manager 5.7 and higher), or Impala Catalog Server Environment Advanced
Configuration Snippet (Safety Valve) (prior to Cloudera Manager 5.7). Then restart the Impala service.
• On systems not managed by Cloudera Manager, put the JAVA_TOOL_OPTIONS environment variable setting
into the startup script for the catalogd daemon, then restart the catalogd daemon.
For example, the following environment variable setting specifies the maximum heap size of 8 GB.
JAVA_TOOL_OPTIONS="-Xmx8g"
3. Use the same jcmd and jmap commands as earlier to verify that the new settings are in effect.
Scalability Consideration for Large Clusters
When processing queries, Impala daemons (Impalads) frequently exchange large volumes of data with other Impala
daemons in the cluster, for example during a partitioned hash join. This communication between the Impala daemons
happens through remote procedure calls (RPCs). In CDH 5.14 / CDH 6.0 and lower, intercommunication was done
exclusively using the Apache Thrift library. With Thrift RPC, the number of network connections per host sharply
increases as the number of nodes goes up:
number of connections per host = (number of nodes) x (average number of query fragments per host)
For example, in a 100-node cluster with 32 concurrent queries, each of which had 50 fragments, there would be 100
x 32 x 50 = 160,000 connections and threads per host. Across the entire cluster, there would be 16 million connections.
As more nodes are added to a CDH cluster due to increase in data and workloads, the excessive number of RPC threads
and network connections can lead to instability and poor performance in Impala in CDH 5.14 / CDH 6.0 and lower.
In CDH 5.15.0 / CDH 6.1 and higher, Impala can use an alternate RPC option via KRPC that provides improvements in
throughput and reliability while reducing the resource usage significantly. Using KRPC, Impala can reliably handle
concurrent complex queries on large data sets. See this blog for detailed information on KRPC for communication
among Impala daemons.
If you are experiencing scalability issues with connections but are unable to upgrade to CDH 5.15 / CDH 6.1, consider
the following alternatives to alleviate the issues:
• Use dedicated coordinators.
• Isolate some workloads.
• Increase the status reporting interval by setting the startup flag --status_report_interval to a larger value.
By default, it is set to 5 seconds.
• Use Workload Experience Manager analysis to identify any other improvements.
Scalability Considerations for the Impala Statestore
Before CDH 5.5 / Impala 2.3, the statestore sent only one kind of message to its subscribers. This message contained
all updates for any topics that a subscriber had subscribed to. It also served to let subscribers know that the statestore
had not failed, and conversely the statestore used the success of sending a heartbeat to a subscriber to decide whether
or not the subscriber had failed.
Combining topic updates and failure detection in a single message led to bottlenecks in clusters with large numbers
of tables, partitions, and HDFS data blocks. When the statestore was overloaded with metadata updates to transmit,
heartbeat messages were sent less frequently, sometimes causing subscribers to time out their connection with the
statestore. Increasing the subscriber timeout and decreasing the frequency of statestore heartbeats worked around
the problem, but reduced responsiveness when the statestore failed or restarted.
As of CDH 5.5 / Impala 2.3, the statestore now sends topic updates and heartbeats in separate messages. This allows
the statestore to send and receive a steady stream of lightweight heartbeats, and removes the requirement to send
topic updates according to a fixed schedule, reducing statestore network overhead.
The statestore now has the following relevant configuration flags for the statestored daemon:
-statestore_num_update_threads
The number of threads inside the statestore dedicated to sending topic updates. You should not typically need to
change this value.
Default: 10
-statestore_update_frequency_ms
The frequency, in milliseconds, with which the statestore tries to send topic updates to each subscriber. This is a
best-effort value; if the statestore is unable to meet this frequency, it sends topic updates as fast as it can. You
should not typically need to change this value.
Default: 2000
-statestore_num_heartbeat_threads
The number of threads inside the statestore dedicated to sending heartbeats. You should not typically need to
change this value.
Default: 10
-statestore_heartbeat_frequency_ms
The frequency, in milliseconds, with which the statestore tries to send heartbeats to each subscriber. This value
should be good for large catalogs and clusters up to approximately 150 nodes. Beyond that, you might need to
increase this value to make the interval longer between heartbeat messages.
Default: 1000 (one heartbeat message every second)
As of CDH 5.5 / Impala 2.3, not all of these flags are present in the Cloudera Manager user interface. Some must be
set using the Advanced Configuration Snippet fields for the statestore component.
If it takes a very long time for a cluster to start up, and impala-shell consistently displays This Impala daemon
is not ready to accept user requests, the statestore might be taking too long to send the entire catalog
topic to the cluster. In this case, consider adding --load_catalog_in_background=false to your catalog service
configuration. This setting stops the statestore from loading the entire catalog into memory at cluster startup. Instead,
metadata for each table is loaded when the table is accessed for the first time.
Effect of Buffer Pool on Memory Usage (CDH 5.13 and higher)
The buffer pool feature, available in CDH 5.13 and higher, changes the way Impala allocates memory during a query.
Most of the memory needed is reserved at the beginning of the query, avoiding cases where a query might run for a
long time before failing with an out-of-memory error. The actual memory estimates and memory buffers are typically
smaller than before, so that more queries can run concurrently or process larger volumes of data than previously.
The buffer pool feature includes some query options that you can fine-tune: BUFFER_POOL_LIMIT Query Option on
page 324, DEFAULT_SPILLABLE_BUFFER_SIZE Query Option on page 327, MAX_ROW_SIZE Query Option on page 340,
and MIN_SPILLABLE_BUFFER_SIZE Query Option on page 345.
Most of the effects of the buffer pool are transparent to you as an Impala user. Memory use during spilling is now
steadier and more predictable, instead of increasing rapidly as more data is spilled to disk. The main change from a
user perspective is the need to increase the MAX_ROW_SIZE query option setting when querying tables with columns
containing long strings, many columns, or other combinations of factors that produce very large rows. If Impala
encounters rows that are too large to process with the default query option settings, the query fails with an error
message suggesting to increase the MAX_ROW_SIZE setting.
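For example, if a query against a table with very wide rows fails with that error, you might raise the limit for the
session; the value shown is only illustrative:
SET MAX_ROW_SIZE=4mb;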
SQL Operations that Spill to Disk
Certain memory-intensive operations write temporary data to disk (known as spilling to disk) when Impala is close to
exceeding its memory limit on a particular host.
Note:
In CDH 5.13 and higher, also see Effect of Buffer Pool on Memory Usage (CDH 5.13 and higher) on
page 607 for changes to Impala memory allocation that might change the details of which queries spill
to disk, and how much memory and disk space is involved in the spilling operation.
The result is a query that completes successfully, rather than failing with an out-of-memory error. The tradeoff is
decreased performance due to the extra disk I/O to write the temporary data and read it back in. The slowdown could
potentially be significant. Thus, while this feature improves reliability, you should optimize your queries, system
parameters, and hardware configuration to make this spilling a rare occurrence.
What kinds of queries might spill to disk:
Several SQL clauses and constructs require memory allocations that could activate the spilling mechanism:
• When a query uses a GROUP BY clause for columns with millions or billions of distinct values, Impala keeps a similar
number of temporary results in memory, to accumulate the aggregate results for each value in the group.
• When large tables are joined together, Impala keeps the values of the join columns from one table in memory,
to compare them to incoming values from the other table.
• When a large result set is sorted by the ORDER BY clause, each node sorts its portion of the result set in memory.
• The DISTINCT and UNION operators build in-memory data structures to represent all values found so far, to
eliminate duplicates as the query progresses.
When the spill-to-disk feature is activated for a join node within a query, Impala does not produce any runtime filters
for that join operation on that host. Other join nodes within the query are not affected.
How Impala handles scratch disk space for spilling:
By default, intermediate files used during large sort, join, aggregation, or analytic function operations are stored in
the directory /tmp/impala-scratch. These files are removed when the operation finishes. (Multiple concurrent
queries can perform operations that use the “spill to disk” technique, without any name conflicts for these temporary
files.) You can specify a different location by starting the impalad daemon with the
--scratch_dirs="path_to_directory" configuration option or the equivalent configuration option in the Cloudera
Manager user interface. You can specify a single directory, or a comma-separated list of directories. The scratch
directories must be on the local filesystem, not in HDFS. You might specify different directory paths for different hosts,
depending on the capacity and speed of the available storage devices. In CDH 5.5 / Impala 2.3 or higher, Impala
successfully starts (with a warning written to the log) if it cannot create or read and write files in one of the scratch
directories. If there is less than 1 GB free on the filesystem where that directory resides, Impala still runs, but writes a
warning message to its log. If Impala encounters an error reading or writing files in a scratch directory during a query,
Impala logs the error and the query fails.
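For example, a hypothetical startup flag that spreads scratch files across two local drives might look like this:
--scratch_dirs=/data0/impala/scratch,/data1/impala/scratch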
Memory usage for SQL operators:
In CDH 5.13 / Impala 2.10 and higher, the way SQL operators such as GROUP BY, DISTINCT, and joins, transition
between using additional memory or activating the spill-to-disk feature is changed. The memory required to spill to
disk is reserved up front, and you can examine it in the EXPLAIN plan when the EXPLAIN_LEVEL query option is set
to 2 or higher.
The infrastructure of the spilling feature affects the way the affected SQL operators, such as GROUP BY, DISTINCT,
and joins, use memory. On each host that participates in the query, each such operator in a query requires memory
to store rows of data and other data structures. Impala reserves a certain amount of memory up front for each operator
that supports spill-to-disk that is sufficient to execute the operator. If an operator accumulates more data than can fit
in the reserved memory, it can either reserve more memory to continue processing data in memory or start spilling
data to temporary scratch files on disk. Thus, operators with spill-to-disk support can adapt to different memory
constraints by using however much memory is available to speed up execution, yet tolerate low memory conditions
by spilling data to disk.
The amount of data depends on the portion of the data being handled by that host, and thus the operator may end
up consuming different amounts of memory on different hosts.
Added in: This feature was added to the ORDER BY clause in CDH 5.1 / Impala 1.4. This feature was extended to cover
join queries, aggregation functions, and analytic functions in CDH 5.2 / Impala 2.0. The size of the memory work area
required by each operator that spills was reduced from 512 megabytes to 256 megabytes in CDH 5.4 / Impala 2.2. The
spilling mechanism was reworked to take advantage of the Impala buffer pool feature and be more predictable and
stable in CDH 5.13 / Impala 2.10.
Avoiding queries that spill to disk:
Because the extra I/O can impose significant performance overhead on these types of queries, try to avoid this situation
by using the following steps:
1. Detect how often queries spill to disk, and how much temporary data is written. Refer to the following sources:
• The output of the PROFILE command in the impala-shell interpreter. This data shows the memory usage
for each host and in total across the cluster. The WriteIoBytes counter reports how much data was written
to disk for each operator during the query. (In CDH 5.12 / Impala 2.9, the counter was named
ScratchBytesWritten; in CDH 5.10 / Impala 2.8 and earlier, it was named BytesWritten.)
• The Impala Queries dialog in Cloudera Manager. You can see the peak memory usage for a query, combined
across all nodes in the cluster.
• The Queries tab in the Impala debug web user interface. Select the query to examine and click the
corresponding Profile link. This data breaks down the memory usage for a single host within the cluster, the
host whose web interface you are connected to.
2. Use one or more techniques to reduce the possibility of the queries spilling to disk:
• Increase the Impala memory limit if practical, for example, if you can increase the available memory by more
than the amount of temporary data written to disk on a particular node. Remember that in Impala 2.0 and
later, you can issue SET MEM_LIMIT as a SQL statement, which lets you fine-tune the memory usage for
queries from JDBC and ODBC applications.
• Increase the number of nodes in the cluster, to increase the aggregate memory available to Impala and reduce
the amount of memory required on each node.
• Add more memory to the hosts running Impala daemons.
• On a cluster with resources shared between Impala and other Hadoop components, use resource management
features to allocate more memory for Impala. See Resource Management on page 549 for details.
• If the memory pressure is due to running many concurrent queries rather than a few memory-intensive ones,
consider using the Impala admission control feature to lower the limit on the number of concurrent queries.
By spacing out the most resource-intensive queries, you can avoid spikes in memory usage and improve
overall response times. See Admission Control and Query Queuing on page 549 for details.
• Tune the queries with the highest memory requirements, using one or more of the following techniques:
– Run the COMPUTE STATS statement for all tables involved in large-scale joins and aggregation queries.
– Minimize your use of STRING columns in join columns. Prefer numeric values instead.
– Examine the EXPLAIN plan to understand the execution strategy being used for the most
resource-intensive queries. See Using the EXPLAIN Plan for Performance Tuning on page 602 for details.
– If Impala still chooses a suboptimal execution strategy even with statistics available, or if it is impractical
to keep the statistics up to date for huge or rapidly changing tables, add hints to the most
resource-intensive queries to select the right execution strategy. See Optimizer Hints in Impala on page
387 for details.
• If your queries experience substantial performance overhead due to spilling, enable the
DISABLE_UNSAFE_SPILLS query option. This option prevents queries whose memory usage is likely to be
exorbitant from spilling to disk. See DISABLE_UNSAFE_SPILLS Query Option (CDH 5.2 or higher only) on page
329 for details. As you tune problematic queries using the preceding steps, fewer and fewer will be cancelled
by this option setting.
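For example, a minimal impala-shell sketch that combines these safeguards; the values and table names are only
illustrative:
SET DISABLE_UNSAFE_SPILLS=true;  -- fail fast if missing statistics would lead to excessive spilling
SET MEM_LIMIT=2gb;               -- per-query memory cap for this session
SELECT COUNT(*) FROM big_table a JOIN big_table b USING (column_with_many_values);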
Testing performance implications of spilling to disk:
To artificially provoke spilling, to test this feature and understand the performance implications, use a test environment
with a memory limit of at least 2 GB. Issue the SET command with no arguments to check the current setting for the
MEM_LIMIT query option. Set the query option DISABLE_UNSAFE_SPILLS=true. This option limits the spill-to-disk
feature to prevent runaway disk usage from queries that are known in advance to be suboptimal. Within impala-shell,
run a query that you expect to be memory-intensive, based on the criteria explained earlier. A self-join of a large table
is a good candidate:
select count(*) from big_table a join big_table b using (column_with_many_values);
Issue the PROFILE command to get a detailed breakdown of the memory usage on each node during the query.
Set the MEM_LIMIT query option to a value that is somewhat smaller than the peak memory usage reported in the profile
output, but not drastically lower than that value. Now try the memory-intensive query again.
Check if the query fails with a message like the following:
WARNINGS: Spilling has been disabled for plans that do not have stats and are not hinted
to prevent potentially bad plans from using too many cluster resources. Compute stats on
these tables, hint the plan or disable this behavior via query options to enable spilling.
If so, the query could have consumed substantial temporary disk space, slowing down so much that it would not
complete in any reasonable time. Rather than rely on the spill-to-disk feature in this case, issue the COMPUTE STATS
statement for the table or tables in your sample query. Then run the query again, check the peak memory usage again
in the PROFILE output, and adjust the memory limit again if necessary to be lower than the peak memory usage.
At this point, you have a query that is memory-intensive, but Impala can optimize it efficiently so that the memory
usage is not exorbitant. You have set an artificial constraint through the MEM_LIMIT option so that the query would
normally fail with an out-of-memory error. But the automatic spill-to-disk feature means that the query should actually
succeed, at the expense of some extra disk I/O to read and write temporary work data.
Try the query again, and confirm that it succeeds. Examine the PROFILE output again. This time, look for lines of this
form:
- SpilledPartitions: N
If you see any such lines with N greater than 0, that indicates the query would have failed in Impala releases prior to
2.0, but now it succeeded because of the spill-to-disk feature. Examine the total time taken by the AGGREGATION_NODE
or other query fragments containing non-zero SpilledPartitions values. Compare the times to similar fragments
that did not spill, for example in the PROFILE output when the same query is run with a higher memory limit. This
gives you an idea of the performance penalty of the spill operation for a particular query with a particular memory
limit. If you make the memory limit just a little lower than the peak memory usage, the query only needs to write a
small amount of temporary data to disk. The lower you set the memory limit, the more temporary data is written and
the slower the query becomes.
Now repeat this procedure for actual queries used in your environment. Use the DISABLE_UNSAFE_SPILLS setting
to identify cases where queries used more memory than necessary due to lack of statistics on the relevant tables and
columns, and issue COMPUTE STATS where necessary.
When to use DISABLE_UNSAFE_SPILLS:
You might wonder, why not leave DISABLE_UNSAFE_SPILLS turned on all the time. Whether and how frequently to
use this option depends on your system environment and workload.
DISABLE_UNSAFE_SPILLS is suitable for an environment with ad hoc queries whose performance characteristics
and memory usage are not known in advance. It prevents “worst-case scenario” queries that use large amounts of
memory unnecessarily. Thus, you might turn this option on within a session while developing new SQL code, even
though it is turned off for existing applications.
Organizations where table and column statistics are generally up-to-date might leave this option turned on all the
time, again to avoid worst-case scenarios for untested queries or if a problem in the ETL pipeline results in a table with
no statistics. Turning on DISABLE_UNSAFE_SPILLS lets you “fail fast” in this case and immediately gather statistics
or tune the problematic queries.
Some organizations might leave this option turned off. For example, you might have tables large enough that the
COMPUTE STATS takes substantial time to run, making it impractical to re-run after loading new data. If you have
examined the EXPLAIN plans of your queries and know that they are operating efficiently, you might leave
DISABLE_UNSAFE_SPILLS turned off. In that case, you know that any queries that spill will not go overboard with
their memory consumption.
Limits on Query Size and Complexity
There are hardcoded limits on the maximum size and complexity of queries. Currently, the maximum number of
expressions in a query is 2000. You might exceed the limits with large or deeply nested queries produced by business
intelligence tools or other query generators.
If you have the ability to customize such queries or the query generation logic that produces them, replace sequences
of repetitive expressions with single operators such as IN or BETWEEN that can represent multiple values or ranges.
For example, instead of a large number of OR clauses:
WHERE val = 1 OR val = 2 OR val = 6 OR val = 100 ...
use a single IN clause:
WHERE val IN (1,2,6,100,...)
Scalability Considerations for Impala I/O
Impala parallelizes its I/O operations aggressively; therefore, the more disks you can attach to each host, the better.
Impala retrieves data from disk so quickly, using bulk read operations on large blocks, that most queries are CPU-bound
rather than I/O-bound.
Because the kind of sequential scanning typically done by Impala queries does not benefit much from the random-access
capabilities of SSDs, spinning disks typically provide the most cost-effective kind of storage for Impala data, with little
or no performance penalty as compared to SSDs.
Resource management features such as YARN, Llama, and admission control typically constrain the amount of memory,
CPU, or overall number of queries in a high-concurrency environment. Currently, there is no throttling mechanism for
Impala I/O.
Scalability Considerations for Table Layout
Due to the overhead of retrieving and updating table metadata in the metastore database, try to limit the number of
columns in a table to a maximum of approximately 2000. Although Impala can handle wider tables than this, the
metastore overhead can become significant, leading to query performance that is slower than expected based on the
actual data volume.
To minimize overhead related to the metastore database and Impala query planning, try to limit the number of partitions
for any partitioned table to a few tens of thousands.
If the volume of data within a table makes it impractical to run exploratory queries, consider using the TABLESAMPLE
clause to limit query processing to only a percentage of data within the table. This technique reduces the overhead
for query startup, I/O to read the data, and the amount of network, CPU, and memory needed to process intermediate
results during the query. See TABLESAMPLE Clause on page 315 for details.
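For example, an exploratory query might scan only about 10 percent of the data files; the table and column names are
hypothetical:
SELECT COUNT(DISTINCT customer_id) FROM huge_table TABLESAMPLE SYSTEM(10);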
Kerberos-Related Network Overhead for Large Clusters
When Impala starts up, or after each kinit refresh, Impala sends a number of simultaneous requests to the KDC. For
a cluster with 100 hosts, the KDC might be able to process all the requests within roughly 5 seconds. For a cluster with
1000 hosts, the time to process the requests would be roughly 500 seconds. Impala also makes a number of DNS
requests at the same time as these Kerberos-related requests.
While these authentication requests are being processed, any submitted Impala queries will fail. During this period,
the KDC and DNS may be slow to respond to requests from components other than Impala, so other secure services
might be affected temporarily.
In CDH 5.15 / Impala 2.12 or earlier, to reduce the frequency of the kinit renewal that initiates a new set of
authentication requests, increase the kerberos_reinit_interval configuration setting for the impalad daemons.
Currently, the default is 60 minutes. Consider using a higher value such as 360 (6 hours).
The kerberos_reinit_interval configuration setting is removed in CDH 6.0 / Impala 3.0, and the above step is
no longer needed.
Avoiding CPU Hotspots for HDFS Cached Data
You can use the HDFS caching feature, described in Using HDFS Caching with Impala (CDH 5.3 or higher only) on page
593, with Impala to reduce I/O and memory-to-memory copying for frequently accessed tables or partitions.
In the early days of this feature, you might have found that enabling HDFS caching resulted in little or no performance
improvement, because it could result in “hotspots”: instead of the I/O to read the table data being parallelized across
the cluster, the I/O was reduced but the CPU load to process the data blocks might be concentrated on a single host.
To avoid hotspots, include the WITH REPLICATION clause with the CREATE TABLE or ALTER TABLE statements for
tables that use HDFS caching. This clause allows more than one host to cache the relevant data blocks, so the CPU load
can be shared, reducing the load on any one host. See CREATE TABLE Statement on page 234 and ALTER TABLE Statement
on page 205 for details.
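As a sketch only, assuming a hypothetical table and an existing HDFS cache pool named 'testpool', the clause might be used as follows:
CREATE TABLE census_cached (name STRING) PARTITIONED BY (year SMALLINT)
  CACHED IN 'testpool' WITH REPLICATION = 3;
ALTER TABLE census_cached SET CACHED IN 'testpool' WITH REPLICATION = 4;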
Hotspots with high CPU load for HDFS cached data could still arise in some cases, due to the way that Impala schedules
the work of processing data blocks on different hosts. In CDH 5.7 / Impala 2.5 and higher, scheduling improvements
mean that the work for HDFS cached data is divided better among all the hosts that have cached replicas for a particular
data block. When more than one host has a cached replica for a data block, Impala assigns the work of processing that
block to whichever host has done the least work (in terms of number of bytes read) for the current query. If hotspots
persist even with this load-based scheduling algorithm, you can enable the query option
SCHEDULE_RANDOM_REPLICA=TRUE to further distribute the CPU load. This setting causes Impala to randomly pick
a host to process a cached data block if the scheduling algorithm encounters a tie when deciding which host has done
the least work.
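For example, the option can be enabled for a session in impala-shell before running the affected queries (the table name is hypothetical):
SET SCHEDULE_RANDOM_REPLICA=TRUE;
SELECT COUNT(*) FROM cached_table;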
Scalability Considerations for File Handle Caching
One scalability aspect that affects heavily loaded clusters is the load on the metadata layer from looking up the details
as each file is opened. On HDFS, that can lead to increased load on the NameNode, and on S3, this can lead to an
excessive number of S3 metadata requests. For example, a query that does a full table scan on a partitioned table may
need to read thousands of partitions, each partition containing multiple data files. Accessing each column of a Parquet
file also involves a separate “open” call, further increasing the load on the NameNode. High NameNode overhead can
add startup time (that is, increase latency) to Impala queries, and reduce overall throughput for non-Impala workloads
that also require accessing HDFS files.
You can reduce the number of calls made to your file system's metadata layer by enabling the file handle caching
feature. Data files that are accessed by different queries, or even multiple times within the same query, can be accessed
without a new “open” call and without fetching the file details multiple times.
Impala supports file handle caching for the following file systems:
• HDFS in CDH 5.13 / Impala 2.10 and higher
In Impala 3.2 and higher, file handle caching also applies to remote HDFS file handles. This is controlled by the
cache_remote_file_handles flag for an impalad. It is recommended that you use the default value of true
as this caching prevents your NameNode from overloading when your cluster has many remote HDFS reads.
• S3 in CDH 6.3 / Impala 3.3 and higher
The cache_s3_file_handles impalad flag controls the S3 file handle caching. The feature is enabled by default
with the flag set to true.
The feature is enabled by default with 20,000 file handles to be cached. To change the value, set the configuration
option max_cached_file_handles to a non-zero value for each impalad daemon. From the initial default value
of 20000, adjust upward if NameNode request load is still significant, or downward if it is more important to reduce
the extra memory usage on each host. Each cache entry consumes 6 KB, meaning that caching 20,000 file handles
requires up to 120 MB on each Impala executor. The exact memory usage varies depending on how many file handles
have actually been cached; memory is freed as file handles are evicted from the cache.
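For example, the setting might be raised as follows for each impalad daemon (the value shown is illustrative only):
--max_cached_file_handles=32768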
If a manual operation moves a file to the trashcan while the file handle is cached, Impala still accesses the contents of
that file. This is a change from prior behavior. Previously, accessing a file that was in the trashcan would cause an error.
This behavior only applies to non-Impala methods of removing files, not the Impala mechanisms such as TRUNCATE
TABLE or DROP TABLE.
If files are removed, replaced, or appended by operations outside of Impala, the way to bring the file information up
to date is to run the REFRESH statement on the table.
File handle cache entries are evicted as the cache fills up, or based on a timeout period when they have not been
accessed for some time.
To evaluate the effectiveness of file handle caching for a particular workload, issue the PROFILE statement in
impala-shell or examine query profiles in the Impala Web UI. Look for the ratio of CachedFileHandlesHitCount
(ideally, should be high) to CachedFileHandlesMissCount (ideally, should be low). Before starting any evaluation,
run several representative queries to “warm up” the cache because the first time each data file is accessed is always
recorded as a cache miss.
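A minimal sketch of such an evaluation in impala-shell, using a hypothetical table, might look like this:
[localhost:21000] > select count(*) from web_logs; -- first run warms up the cache
[localhost:21000] > select count(*) from web_logs;
[localhost:21000] > profile;
-- In the profile output, compare CachedFileHandlesHitCount and CachedFileHandlesMissCount.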
To see metrics about file handle caching for each impalad instance, examine the following fields on the /metrics page
in the Impala Web UI:
• impala-server.io.mgr.cached-file-handles-miss-count
• impala-server.io.mgr.num-cached-file-handles
Scaling Limits and Guidelines
This topic lists the scalability limits in Impala. It is recommended that you respect these limits to achieve optimal
scalability and performance. For example, while you might be able to create a table with 2000 columns, you will
experience performance problems when querying the table. This topic does not cover functional limitations in Impala.
Unless noted otherwise, the limits were tested and certified.
The limits noted as "generally safe" are not certified, but are recommended as generally safe. A safe range is not a hard
limit, as unforeseen errors or problems in your particular environment can affect the range.
Deployment Limits
• Number of Impalad Executors
– 80 nodes in CDH 5.14 and lower
– 150 nodes in CDH 5.15 and higher
• Number of Impalad Coordinators: 1 coordinator for at most every 50 executors
See Dedicated Coordinators for details.
• The number of Impala clusters per deployment
– 1 Impala cluster in Impala 3.1 and lower
– Multiple clusters in Impala 3.2 and higher is generally safe.
Data Storage Limits
There are no hard limits for the following, but you will experience gradual performance degradation as you increase
these numbers.
• Number of databases
• Number of tables - total, per database
• Number of partitions - total, per table
• Number of files - total, per table, per table per partition
• Number of views - total, per database
• Number of user-defined functions - total, per database
• Parquet
– Number of columns per row group
– Number of row groups per block
– Number of HDFS blocks per file
Schema Design Limits
• Number of columns
– 300 for Kudu tables
See Kudu Usage Limitations for more information.
– 1000 for other types of tables
Security Limits
• Number of roles: 10,000 for Sentry
Query Limits - Compile Time
• Maximum number of columns in a query, including in a SELECT list, an INSERT statement, or an expression: no limit
• Number of tables referenced: no limit
• Number of plan nodes: no limit
• Number of plan fragments: no limit
• Depth of expression tree: 1000 hard limit
• Width of expression tree: 10,000 hard limit
Query Limits - Runtime
• Codegen
– Very deeply nested expressions within queries can exceed internal Impala limits, leading to excessive memory
usage. Setting the query option disable_codegen=true may reduce the impact, at the cost of longer query
runtime, as shown in the example below.
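For example, a session encountering this situation might disable codegen only around the problematic statement:
SET DISABLE_CODEGEN=true;
-- run the deeply nested query here, then restore the default:
SET DISABLE_CODEGEN=false;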
How to Configure Impala with Dedicated Coordinators
Each host that runs the Impala Daemon acts as both a coordinator and as an executor, by default, managing metadata
caching, query compilation, and query execution. In this configuration, Impala clients can connect to any Impala daemon
and send query requests.
During highly concurrent workloads for large-scale queries, the dual roles can cause scalability issues because:
• The extra work required for a host to act as the coordinator could interfere with its capacity to perform other
work for the later phases of the query. For example, coordinators can experience significant network and CPU
overhead with queries containing a large number of query fragments. Each coordinator caches metadata for all
table partitions and data files, which requires coordinators to be configured with a large JVM heap. Executor-only
Impala daemons should be configured with the default JVM heaps, which leaves more memory available to process
joins, aggregations, and other operations performed by query executors.
• Having a large number of hosts act as coordinators can cause unnecessary network overhead, or even timeout
errors, as each of those hosts communicates with the Statestored daemon for metadata updates.
• The "soft limits" imposed by the admission control feature are more likely to be exceeded when there are a large
number of heavily loaded hosts acting as coordinators. Check IMPALA-3649 and IMPALA-6437 to see the status
of the enhancements to mitigate this issue.
The following factors can further exacerbate the above issues:
• High number of concurrent query fragments due to query concurrency and/or query complexity
• Large metadata topic size related to the number of partitions/files/blocks
• High number of coordinator nodes
• High number of coordinators used in the same resource pool
If such scalability bottlenecks occur, in CDH 5.12 / Impala 2.9 and higher, you can assign one dedicated role to each
Impala daemon host, either as a coordinator or as an executor, to address the issues.
• All explicit or load-balanced client connections must go to the coordinator hosts. These hosts perform the network
communication to keep metadata up-to-date and route query results to the appropriate clients. The dedicated
coordinator hosts do not participate in I/O-intensive operations such as scans, and CPU-intensive operations such
as aggregations.
• The executor hosts perform the intensive I/O, CPU, and memory operations that make up the bulk of the work
for each query. The executors do communicate with the Statestored daemon for membership status, but the
dedicated executor hosts do not process the final result sets for queries.
Using dedicated coordinators offers the following benefits:
• Reduces memory usage by limiting the number of Impala nodes that need to cache metadata.
• Provides better concurrency by avoiding coordinator bottleneck.
• Eliminates query over-admission.
• Reduces resource, especially network, utilization on the Statestored daemon by limiting metadata broadcast to
a subset of nodes.
• Improves reliability and performance for highly concurrent workloads by reducing workload stress on coordinators.
Dedicated coordinators require 50% or fewer connections and threads than hosts that act as both coordinator and executor.
• Reduces the number of explicit metadata refreshes required.
• Improves diagnosability: if a bottleneck or other performance issue arises on a specific host, you can narrow down
the cause more easily because each host is dedicated to specific operations within the overall Impala workload.
In this configuration with dedicated coordinators and executors, you cannot connect to the dedicated executor hosts
through clients such as impala-shell or business intelligence tools, because only the coordinator nodes support client
connections.
Determining the Optimal Number of Dedicated Coordinators
You should have the smallest number of coordinators that will still satisfy your workload requirements in a cluster. A
rough estimation is 1 coordinator for every 50 executors.
To maintain a healthy state and optimal performance, it is recommended that you keep the peak utilization of all
resources used by Impala, including CPU, the number of threads, the number of connections, and RPCs, under 80%.
Consider the following factors to determine the right number of coordinators in your cluster:
• What is the number of concurrent queries?
• What percentage of the workload is DDL?
• What is the average query resource usage at the various stages (merge, runtime filter, result set size, etc.)?
• How many Impala Daemons (impalad) are in the cluster?
• Is there a high availability requirement?
• Compute/storage capacity reduction factor
Start with the below set of steps to determine the initial number of coordinators:
1. If your cluster has fewer than 10 nodes, we recommend that you configure one dedicated coordinator. Deploy the
dedicated coordinator on a DataNode to avoid losing storage capacity. In most cases, one dedicated
coordinator is enough to support all workloads on a cluster.
2. Add more coordinators if the dedicated coordinator CPU or network peak utilization is 80% or higher. You might
need 1 coordinator for every 50 executors.
3. If the Impala service is shared by multiple workgroups with dynamic resource pools assigned, use one coordinator
per pool to avoid over-admission.
4. If high availability is required, double the number of coordinators: use one set as the active set and the other as a
backup set.
Advanced Tuning
Use the following guidelines to further tune the throughput and stability.
1. The concurrency of DML statements does not typically depend on the number of coordinators or size of the cluster.
Queries that return large result sets (10,000+ rows) consume more CPU and memory resources on the coordinator.
Add one or two coordinators if the workload has many such queries.
2. DDL queries, excluding COMPUTE STATS and CREATE TABLE AS SELECT, are executed only on coordinators.
If your workload contains many DDL queries running concurrently, you could add one coordinator.
3. The CPU contention on coordinators can slow down query executions when concurrency is high, especially for
very short queries (<10s). Add more coordinators to avoid CPU contention.
4. On a large cluster with 50+ nodes, the number of network connections from a coordinator to the executors can grow
quickly as query complexity increases. The growth is much greater on coordinators than on executors. Add a few
more coordinators to share the load if workloads are complex, that is, if (average number of fragments per query *
number of Impala daemons) > 500, even when memory and CPU usage is low. Watch IMPALA-4603 and IMPALA-7213
to track the progress on fixing this issue.
5. When using multiple coordinators for DML statements, divide the queries into groups (number of groups =
number of coordinators). Configure a separate dynamic resource pool for each group and direct each group of
query requests to a specific coordinator. This avoids query over-admission.
6. The front-end connection requirement is not a factor in determining the number of dedicated coordinators.
Consider setting up a connection pool at the client side instead of adding coordinators. For a short-term solution,
you could increase the value of fe_service_threads on coordinators to allow more client connections.
7. In general, you should have a very small number of coordinators so storage capacity reduction is not a concern.
On a very small cluster (less than 10 nodes), deploy a dedicated coordinator on a DataNode to avoid storage
capacity reduction.
Estimating Coordinator Resource Usage
Resource: Memory
Safe range: (Max JVM heap setting + query concurrency * query mem_limit) <= 80% of Impala process memory allocation
Notes / CM tsquery to monitor:
Memory usage:
SELECT mem_rss WHERE entityName = "Coordinator Instance ID" AND category = ROLE
JVM heap usage (metadata cache):
SELECT impala_jvm_heap_current_usage_bytes WHERE entityName = "Coordinator Instance ID" AND category = ROLE (only in release 5.15 and above)

Resource: TCP Connection
Safe range: Incoming + outgoing < 16K
Notes / CM tsquery to monitor:
Incoming connection usage:
SELECT thrift_server_backend_connections_in_use WHERE entityName = "Coordinator Instance ID" AND category = ROLE
Outgoing connection usage:
SELECT backends_client_cache_clients_in_use WHERE entityName = "Coordinator Instance ID" AND category = ROLE

Resource: Threads
Safe range: < 32K
Notes / CM tsquery to monitor:
SELECT thread_manager_running_threads WHERE entityName = "Coordinator Instance ID" AND category = ROLE

Resource: CPU
Safe range: Concurrency = non-DDL query concurrency <= number of virtual cores allocated to Impala per node
Notes / CM tsquery to monitor:
CPU usage estimation should be based on how many cores are allocated to Impala per node, not a sum of all cores of the cluster. It is recommended that concurrency should not be more than the number of virtual cores allocated to Impala per node.
Query concurrency:
SELECT total_impala_num_queries_registered_across_impalads WHERE entityName = "IMPALA-1" AND category = SERVICE
If usage of any of the above resources exceeds the safe range, add one more coordinator.
Monitoring Coordinator Resource Usage
Using Cloudera Manager, monitor the coordinator resource usage to understand your workload and adjust the number
of coordinators according to the guidelines above. The available options are:
• Impala Queries tab: Monitor such attributes as DDL queries and Rows produced. See Monitoring Impala Queries
for detail information.
• Custom charts: Monitor activities, such as query complexity, which is the average fragment count per query (total
fragments / total queries).
• tsquery: Build the custom charts to monitor and estimate the amount of resource the coordinator needs. See
tsquery Language for more information.
The following are sample queries for common resource usage monitoring. Replace entityName values with your
coordinator instance id.
Per coordinator tsquery
Memory usage:
SELECT impala_memory_total_used, mem_tracker_process_limit WHERE entityName = "Coordinator Instance ID" AND category = ROLE
JVM heap usage (metadata cache):
SELECT impala_jvm_heap_current_usage_bytes WHERE entityName = "Coordinator Instance ID" AND category = ROLE (only in release 5.15 and above)
CPU usage:
SELECT cpu_user_rate / getHostFact(numCores, 1) * 100, cpu_system_rate / getHostFact(numCores, 1) * 100 WHERE entityName = "Coordinator Instance ID"
Network usage (host level):
SELECT total_bytes_receive_rate_across_network_interfaces, total_bytes_transmit_rate_across_network_interfaces WHERE entityName = "Coordinator Instance ID"
Incoming connection usage:
SELECT thrift_server_backend_connections_in_use WHERE entityName = "Coordinator Instance ID" AND category = ROLE
Outgoing connection usage:
SELECT backends_client_cache_clients_in_use WHERE entityName = "Coordinator Instance ID" AND category = ROLE
Thread usage:
SELECT thread_manager_running_threads WHERE entityName = "Coordinator Instance ID" AND category = ROLE

Cluster wide tsquery
Front-end connection usage:
SELECT total_thrift_server_beeswax_frontend_connections_in_use_across_impalads, total_thrift_server_hiveserver2_frontend_connections_in_use_across_impalads
Query concurrency:
SELECT total_impala_num_queries_registered_across_impalads WHERE entityName = "IMPALA-1" AND category = SERVICE
Deploying Dedicated Coordinators and Executors in Cloudera Manager
This section describes the process to configure dedicated coordinator and dedicated executor roles for Impala.
• Dedicated coordinators:
– Should be on an edge node with no other services running on it.
– Do not need large local disks, but still need some disk space that can be used for spilling.
– Require a memory allocation at least as large as, or larger than, that of the executors.
• Dedicated executors:
– Should be colocated with DataNodes.
– The number of hosts with dedicated executors typically increases as the cluster grows larger and handles
more table partitions, data files, and concurrent queries.
To configure dedicated coordinators and executors:
1. Navigate to Clusters > Impala > Configuration > Role Groups.
2. Click Create to create two role groups with the following values.
a. Group for Coordinators
a. Group Name: Coordinators
b. Role Type: Impala Daemon
c. Copy from:
• Select Impala Daemon Default Group if you want the existing configuration to be carried over to
the Coordinators.
• Select None if you want to start with a blank configuration.
b. Group for Executors
a. Group Name: Executors
b. Role Type: Impala Daemon
c. Copy from:
• Select Impala Daemon Default Group if you want the existing configuration to be carried over to
the Executors.
• Select None if you want to start with a blank configuration.
3. In the Role Groups page, click Impala Daemon Default Group.
a. Select the set of nodes intended to be coordinators.
a. Click Action for Selected and select Move To Different Role Group….
b. Select the Coordinators.
b. Select the set of nodes intended to be Executors.
a. Click Action for Selected and select Move To Different Role Group….
b. Select Executors.
4. Click Configuration. In the search field, type Impala Daemon Specialization.
5. Click Edit Individual Values.
6. For Coordinators role group, select COORDINATOR_ONLY.
7. For Executors role group, select EXECUTOR_ONLY.
8. Click Save Changes and then restart the Impala service.
Deploying Dedicated Coordinators and Executors from Command Line
To configure dedicated coordinators and executors, specify one of the following startup flags for the impalad
daemon on each host, as shown in the sketch after this list:
• --is_executor=false for each host that does not act as an executor for Impala queries. These hosts act
exclusively as query coordinators. This setting typically applies to a relatively small number of hosts, because the
most common topology is to have nearly all DataNodes doing work for query execution.
• --is_coordinator=false for each host that does not act as a coordinator for Impala queries. These hosts act
exclusively as executors. The number of hosts with this setting typically increases as the cluster grows larger and
handles more table partitions, data files, and concurrent queries. As the overhead for query coordination increases,
it becomes more important to centralize that work on dedicated hosts.
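As a sketch (other startup flags required by your deployment are omitted):
Set the following for the impalad daemon on each dedicated coordinator host:
--is_executor=false
Set the following for the impalad daemon on each dedicated executor host:
--is_coordinator=false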
Metadata Management
This topic describes various knobs you can use to control how Impala manages its metadata in order to improve
performance and scalability.
On-demand Metadata
In previous versions of Impala, every coordinator kept a replica of the entire catalogd cache, consuming a large amount
of memory on each coordinator with no option to evict entries. Metadata always propagated through the statestored
and suffered from head-of-line blocking; for example, one user loading a big table could block another user loading a small table.
With this new feature, the coordinators pull metadata as needed from catalogd and cache it locally. The cached
metadata gets evicted automatically under memory pressure.
The granularity of on-demand metadata fetches is now at the partition level between the coordinator and catalogd.
Common use cases like add/drop partitions do not trigger unnecessary serialization/deserialization of large metadata.
This feature is disabled by default.
The feature can be used in either of the following modes.
Metadata on-demand mode
In this mode, all coordinators use the metadata on-demand.
Set the following on catalogd:
--catalog_topic_mode=minimal
Set the following on all impalad coordinators:
--use_local_catalog=true
Mixed mode
In this mode, only some coordinators are enabled to use the metadata on-demand.
We recommend that you use the mixed mode only for testing local catalog’s impact on heap usage.
Set the following on catalogd:
--catalog_topic_mode=mixed
Set the following on the impalad coordinators that use metadata on-demand:
--use_local_catalog=true
Limitation:
Global INVALIDATE METADATA statements are not supported when this feature is enabled. If your workload requires
global INVALIDATE METADATA, do not use this feature.
Automatic Invalidation of Metadata Cache
To keep the size of metadata bounded, catalogd periodically scans all the tables and invalidates those not recently
used. There are two types of configurations in catalogd.
Time-based cache invalidation
Catalogd invalidates tables that are not recently used in the specified time period (in seconds).
The --invalidate_tables_timeout_s flag needs to be applied to both impalad and catalogd.
Memory-based cache invalidation
When the memory pressure reaches 60% of JVM heap size after a Java garbage collection in catalogd, Impala
invalidates 10% of the least recently used tables.
The --invalidate_tables_on_memory_pressure flag needs to be applied to both impalad and catalogd.
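For example, a sketch of the two settings, applied to both impalad and catalogd, might look like this (the timeout value is illustrative only):
--invalidate_tables_timeout_s=86400
--invalidate_tables_on_memory_pressure=true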
Automatic invalidation of metadata provides more stability with lower chances of running out of memory, but the
feature could potentially cause performance issues and may require tuning.
Automatic Invalidation/Refresh of Metadata
When tools such as Hive and Spark are used to process the raw data ingested into Hive tables, new HMS metadata
(database, tables, partitions) and filesystem metadata (new files in existing partitions/tables) is generated. In previous
versions of Impala, in order to pick up this new information, Impala users needed to manually issue an INVALIDATE
METADATA or REFRESH command.
When automatic invalidate/refresh of metadata is enabled, catalogd polls Hive Metastore (HMS) notification events
at a configurable interval and processes the following changes:
Note: This is a preview feature in CDH 6.3 / Impala 3.3 and not generally available.
• Invalidates the tables when it receives the ALTER TABLE event.
• Refreshes the partitions when it receives the ALTER, ADD, or DROP partition events.
• Adds the tables or databases when it receives the CREATE TABLE or CREATE DATABASE events.
• Removes the tables from catalogd when it receives the DROP TABLE or DROP DATABASE events.
• Refreshes the table and partitions when it receives the INSERT events.
If the table is not loaded at the time of processing the INSERT event, the event processor does not need to refresh
the table and skips it.
This feature is controlled by the --hms_event_polling_interval_s flag. Start the catalogd with the
--hms_event_polling_interval_s flag set to a positive integer to enable the feature and set the polling frequency
in seconds. We recommend a value of less than 5 seconds.
The following use cases are not supported:
• When you bypass HMS and add or remove data in a table by adding or removing files directly on the filesystem, HMS does
not generate the INSERT event, and the event processor will not invalidate the corresponding table or refresh
the corresponding partition.
It is recommended that you use the LOAD DATA command to do the data load in such cases, so that the event processor
can act on the events generated by the LOAD command.
• The Spark API that saves data to a specified location does not generate events in HMS, and thus is not supported. For
example:
Seq((1, 2)).toDF("i",
"j").write.save("/user/hive/warehouse/spark_etl.db/customers/date=01012019")
This feature is turned off by default with the --hms_event_polling_interval_s flag set to 0.
Configure HMS for Event Based Automatic Metadata Sync
As the first step to use the HMS event based metadata sync, enable and configure HMS notifications in Cloudera
Manager.
1. Navigate to Clusters > Hive > Configuration > Filters > SCOPE > Hive Metastore Server.
2. Select Enable Stored Notifications in Database.
3. In Hive Metastore Server Advanced Configuration Snippet (Safety Valve) for hive-site.xml, click + to expand and
enter the following:
• Name: hive.metastore.notifications.add.thrift.objects
• Value: true
• Name: hive.metastore.alter.notifications.basic
• Value: false
4. Click Save Changes.
5. If you want INSERT events to be generated when Spark (and other non-Hive) applications insert data into
existing tables and partitions:
a. In Hive Client Advanced Configuration Snippet (Safety Valve) for hive-site.xml, click + to expand and enter
the following:
• Name: hive.metastore.dml.events
• Value: true
b. In Hive Service Advanced Configuration Snippet (Safety Valve) for hive-site.xml, click + to expand and enter
the following:
• Name: hive.metastore.dml.events
• Value: true
c. Click Save Changes.
6. Restart stale services.
Configure Catalog Server for Event-Based Automatic Metadata Sync
1. Navigate to Clusters > Impala > Configuration > Filters > SCOPE > Impala Catalog Server.
2. In Catalog Server Command Line Argument Advanced Configuration Snippet (Safety Valve) , click + to expand
and enter the following:
• Name: --hms_event_polling_interval_s
• Value: 4
3. Click Save Changes.
4. Restart the Catalog server when appropriate.
Disable Event Based Automatic Metadata Sync
When the --hms_event_polling_interval_s flag is set to a non-zero value for your catalogd, the event-based
automatic invalidation is enabled for all databases and tables. If you wish to have fine-grained control over which
tables or databases need to be synced using events, you can use the impala.disableHmsSync property to disable
the event processing at the table or database level.
When you add the DBPROPERTIES or TBLPROPERTIES with the impala.disableHmsSync key, the HMS event based
sync is turned on or off. The value of the impala.disableHmsSync property determines if the event processing
needs to be disabled for a particular table or database.
• If 'impala.disableHmsSync'='true', the events for that table or database are ignored and not synced with
HMS.
• If 'impala.disableHmsSync'='false' or if impala.disableHmsSync is not set, the automatic sync with
HMS is enabled if the --hms_event_polling_interval_s global flag is set to non-zero.
• To disable the event based HMS sync for a new database, set the impala.disableHmsSync database property
in Hive, because Impala does not currently support setting database properties:
CREATE DATABASE ... WITH DBPROPERTIES ('impala.disableHmsSync'='true');
• To enable or disable the event based HMS sync for a table:
CREATE TABLE ... TBLPROPERTIES ('impala.disableHmsSync'='true' | 'false');
• To change the event based HMS sync at the table level:
ALTER TABLE ... SET TBLPROPERTIES ('impala.disableHmsSync'='true' | 'false');
When both table and database level properties are set, the table level property takes precedence. If the table level
property is not set, then the database level property is used to evaluate if the event needs to be processed or not.
If the property is changed from true (meaning events are skipped) to false (meaning events are not skipped), you
need to issue a manual INVALIDATE METADATA command to reset the event processor, because it does not know how
many events have been skipped in the past and cannot know whether the object in the event is the latest. In such a case, the
status of the event processor changes to NEEDS_INVALIDATE.
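As a sketch, re-enabling the sync for a hypothetical table and then resetting the state might look like the following; depending on the situation, a global INVALIDATE METADATA (with no table name) might be required instead:
ALTER TABLE sales_data SET TBLPROPERTIES ('impala.disableHmsSync'='false');
INVALIDATE METADATA sales_data;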
Metrics for Event Based Automatic Metadata Sync
You can use the web UI of the catalogd to check the state of the automatic invalidate event processor.
By default, the debug web UI of catalogd is at http://impala-server-hostname:25020 (non-secure cluster)
or https://impala-server-hostname:25020 (secure cluster).
Under the web UI, there are two pages that present the metrics for the HMS event processor, which is responsible for the
event based automatic metadata sync.
• /metrics#events
• /events
The /events page provides a detailed view of the metrics of the event processor, including the min, max, mean, and
median of the durations and rate metrics for all the counters listed on the /metrics#events page.
/metrics#events Page
The /metrics#events page provides the following metrics about the HMS event processor.
events-processor.avg-events-fetch-duration
Average duration to fetch a batch of events and process it.

events-processor.avg-events-process-duration
Average time taken to process a batch of events received from the Metastore.

events-processor.events-received
Total number of the Metastore events received.

events-processor.events-received-15min-rate
Exponentially weighted moving average (EWMA) of the number of events received in the last 15 min.
This rate of events can be used to determine if there are spikes in event processor activity during certain hours of the day.

events-processor.events-received-1min-rate
Exponentially weighted moving average (EWMA) of the number of events received in the last 1 min.
This rate of events can be used to determine if there are spikes in event processor activity during certain hours of the day.

events-processor.events-received-5min-rate
Exponentially weighted moving average (EWMA) of the number of events received in the last 5 min.
This rate of events can be used to determine if there are spikes in event processor activity during certain hours of the day.

events-processor.events-skipped
Total number of the Metastore events skipped. Events can be skipped based on certain flags at the table and database
level. You can use this metric to make decisions, such as:
• If most of the events are being skipped, see if you might just turn off the event processing.
• If most of the events are not skipped, see if you need to add flags on certain databases.

events-processor.status
Metastore event processor status, used to see whether events are being received or not. Possible states are:
• PAUSED
The event processor is paused because the catalog is being reset concurrently.
• ACTIVE
The event processor is scheduled at a given frequency.
• ERROR
The event processor is in an error state and event processing has stopped.
• NEEDS_INVALIDATE
The event processor could not resolve certain events and needs a manual INVALIDATE command to reset the state.
• STOPPED
The event processing has been shut down. No events will be processed.
• DISABLED
The event processor is not configured to run.
Partitioning for Impala Tables
By default, all the data files for a table are located in a single directory. Partitioning is a technique for physically dividing
the data during loading, based on values from one or more columns, to speed up queries that test those columns. For
example, with a school_records table partitioned on a year column, there is a separate data directory for each
different year value, and all the data for that year is stored in a data file in that directory. A query that includes a WHERE
condition such as YEAR=1966, YEAR IN (1989,1999), or YEAR BETWEEN 1984 AND 1989 can examine only the
data files from the appropriate directory or directories, greatly reducing the amount of data to read and test.
See Attaching an External Partitioned Table to an HDFS Directory Structure on page 54 for an example that illustrates
the syntax for creating partitioned tables, the underlying directory structure in HDFS, and how to attach a partitioned
Impala external table to data files stored elsewhere in HDFS.
Parquet is a popular format for partitioned Impala tables because it is well suited to handle huge data volumes. See
Query Performance for Impala Parquet Tables on page 645 for performance considerations for partitioned Parquet
tables.
See NULL on page 170 for details about how NULL values are represented in partitioned tables.
See Using Impala with the Amazon S3 Filesystem on page 692 for details about setting up tables where some or all
partitions reside on the Amazon Simple Storage Service (S3).
When to Use Partitioned Tables
Partitioning is typically appropriate for:
• Tables that are very large, where reading the entire data set takes an impractical amount of time.
• Tables that are always or almost always queried with conditions on the partitioning columns. In our example of
a table partitioned by year, SELECT COUNT(*) FROM school_records WHERE year = 1985 is efficient,
only examining a small fraction of the data; but SELECT COUNT(*) FROM school_records has to process a
separate data file for each year, resulting in more overall work than in an unpartitioned table. You would probably
not partition this way if you frequently queried the table based on last name, student ID, and so on without testing
the year.
• Columns that have reasonable cardinality (number of different values). If a column only has a small number of
values, for example Male or Female, you do not gain much efficiency by eliminating only about 50% of the data
to read for each query. If a column has only a few rows matching each value, the number of directories to process
can become a limiting factor, and the data file in each directory could be too small to take advantage of the Hadoop
mechanism for transmitting data in multi-megabyte blocks. For example, you might partition census data by year,
store sales data by year and month, and web traffic data by year, month, and day. (Some users with high volumes
of incoming data might even partition down to the individual hour and minute.)
• Data that already passes through an extract, transform, and load (ETL) pipeline. The values of the partitioning
columns are stripped from the original data files and represented by directory names, so loading data into a
partitioned table involves some sort of transformation or preprocessing.
SQL Statements for Partitioned Tables
In terms of Impala SQL syntax, partitioning affects these statements:
• CREATE TABLE: you specify a PARTITIONED BY clause when creating the table to identify names and data types
of the partitioning columns. These columns are not included in the main list of columns for the table.
• In CDH 5.7 / Impala 2.5 and higher, you can also use the PARTITIONED BY clause in a CREATE TABLE AS SELECT
statement. This syntax lets you use a single statement to create a partitioned table, copy data into it, and create
new partitions based on the values in the inserted data.
• ALTER TABLE: you can add or drop partitions, to work with different portions of a huge data set. You can designate
the HDFS directory that holds the data files for a specific partition. With data partitioned by date values, you might
“age out” data that is no longer relevant.
Note: If you are creating a partition for the first time and specifying its location, for maximum
efficiency, use a single ALTER TABLE statement including both the ADD PARTITION and
LOCATION clauses, rather than separate statements with ADD PARTITION and SET LOCATION
clauses.
• INSERT: When you insert data into a partitioned table, you identify the partitioning columns. One or more values
from each inserted row are not stored in data files, but instead determine the directory where that row value is
stored. You can also specify which partition to load a set of data into, with INSERT OVERWRITE statements; you
can replace the contents of a specific partition but you cannot append data to a specific partition.
By default, if an INSERT statement creates any new subdirectories underneath a partitioned table, those
subdirectories are assigned default HDFS permissions for the impala user. To make each subdirectory have the
same permissions as its parent directory in HDFS, specify the --insert_inherit_permissions startup option
for the impalad daemon.
• Although the syntax of the SELECT statement is the same whether or not the table is partitioned, the way queries
interact with partitioned tables can have a dramatic impact on performance and scalability. The mechanism that
lets queries skip certain partitions during a query is known as partition pruning; see Partition Pruning for Queries
on page 627 for details.
• In Impala 1.4 and later, there is a SHOW PARTITIONS statement that displays information about each partition
in a table. See SHOW Statement on page 363 for details.
Static and Dynamic Partitioning Clauses
Specifying all the partition columns in a SQL statement is called static partitioning, because the statement affects a
single predictable partition. For example, you use static partitioning with an ALTER TABLE statement that affects only
one partition, or with an INSERT statement that inserts all values into the same partition:
insert into t1 partition(x=10, y='a') select c1 from some_other_table;
When you specify some partition key columns in an INSERT statement, but leave out the values, Impala determines
which partition to insert the data into. This technique is called dynamic partitioning:
insert into t1 partition(x, y='b') select c1, c2 from some_other_table;
-- Create new partition if necessary based on variable year, month, and day; insert a single value.
insert into weather partition (year, month, day) select 'cloudy',2014,4,21;
-- Create new partition if necessary for specified year and month but variable day; insert a single value.
insert into weather partition (year=2014, month=04, day) select 'sunny',22;
The more key columns you specify in the PARTITION clause, the fewer columns you need in the SELECT list. The
trailing columns in the SELECT list are substituted in order for the partition key columns with no specified value.
Refreshing a Single Partition
The REFRESH statement is typically used with partitioned tables when new data files are loaded into a partition by
some non-Impala mechanism, such as a Hive or Spark job. The REFRESH statement makes Impala aware of the new
data files so that they can be used in Impala queries. Because partitioned tables typically contain a high volume of
data, the REFRESH operation for a full partitioned table can take significant time.
In CDH 5.9 / Impala 2.7 and higher, you can include a PARTITION (partition_spec) clause in the REFRESH
statement so that only a single partition is refreshed. For example, REFRESH big_table PARTITION (year=2017,
month=9, day=30). The partition spec must include all the partition key columns. See REFRESH Statement on page
291 for more details and examples of REFRESH syntax and usage.
Permissions for Partition Subdirectories
By default, if an INSERT statement creates any new subdirectories underneath a partitioned table, those subdirectories
are assigned default HDFS permissions for the impala user. To make each subdirectory have the same permissions
as its parent directory in HDFS, specify the --insert_inherit_permissions startup option for the impalad
daemon.
Partition Pruning for Queries
Partition pruning refers to the mechanism where a query can skip reading the data files corresponding to one or more
partitions. If you can arrange for queries to prune large numbers of unnecessary partitions from the query execution
plan, the queries use fewer resources and are thus proportionally faster and more scalable.
For example, if a table is partitioned by columns YEAR, MONTH, and DAY, then WHERE clauses such as WHERE year =
2013, WHERE year < 2010, or WHERE year BETWEEN 1995 AND 1998 allow Impala to skip the data files in all
partitions outside the specified range. Likewise, WHERE year = 2013 AND month BETWEEN 1 AND 3 could prune
even more partitions, reading the data files for only a portion of one year.
Checking if Partition Pruning Happens for a Query
To check the effectiveness of partition pruning for a query, check the EXPLAIN output for the query before running
it. For example, this example shows a table with 3 partitions, where the query only reads 1 of them. The notation
#partitions=1/3 in the EXPLAIN plan confirms that Impala can do the appropriate partition pruning.
[localhost:21000] > insert into census partition (year=2010) values ('Smith'),('Jones');
[localhost:21000] > insert into census partition (year=2011) values
('Smith'),('Jones'),('Doe');
[localhost:21000] > insert into census partition (year=2012) values ('Smith'),('Doe');
[localhost:21000] > select name from census where year=2010;
+-------+
| name |
+-------+
| Smith |
| Jones |
+-------+
[localhost:21000] > explain select name from census where year=2010;
+------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 1:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 1 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=predicate_propagation.census #partitions=1/3 size=12B |
+------------------------------------------------------------------+
For a report of the volume of data that was actually read and processed at each stage of the query, check the output
of the SUMMARY command immediately after running the query. For a more detailed analysis, look at the output of
the PROFILE command; it includes this same summary report near the start of the profile output.
What SQL Constructs Work with Partition Pruning
Impala can even do partition pruning in cases where the partition key column is not directly compared to a constant,
by applying the transitive property to other parts of the WHERE clause. This technique is known as predicate propagation,
and is available in Impala 1.2.2 and later. In this example, the census table includes another column indicating when
the data was collected, which happens in 10-year intervals. Even though the query does not compare the partition key
column (YEAR) to a constant value, Impala can deduce that only the partition YEAR=2010 is required, and again only
reads 1 out of 3 partitions.
[localhost:21000] > drop table census;
[localhost:21000] > create table census (name string, census_year int) partitioned by
(year int);
[localhost:21000] > insert into census partition (year=2010) values
('Smith',2010),('Jones',2010);
[localhost:21000] > insert into census partition (year=2011) values
('Smith',2020),('Jones',2020),('Doe',2020);
[localhost:21000] > insert into census partition (year=2012) values
('Smith',2020),('Doe',2020);
[localhost:21000] > select name from census where year = census_year and census_year=2010;
+-------+
| name |
+-------+
| Smith |
| Jones |
+-------+
[localhost:21000] > explain select name from census where year = census_year and
census_year=2010;
+------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------+
| PLAN FRAGMENT 0 |
| PARTITION: UNPARTITIONED |
| |
| 1:EXCHANGE |
| |
| PLAN FRAGMENT 1 |
| PARTITION: RANDOM |
| |
| STREAM DATA SINK |
| EXCHANGE ID: 1 |
| UNPARTITIONED |
| |
| 0:SCAN HDFS |
| table=predicate_propagation.census #partitions=1/3 size=22B |
| predicates: census_year = 2010, year = census_year |
+------------------------------------------------------------------+
If a view applies to a partitioned table, any partition pruning considers the clauses on both the original query and any
additional WHERE predicates in the query that refers to the view. Prior to Impala 1.4, only the WHERE clauses on the
original query from the CREATE VIEW statement were used for partition pruning.
In queries involving both analytic functions and partitioned tables, partition pruning only occurs for columns named
in the PARTITION BY clause of the analytic function call. For example, if an analytic function query has a clause such
as WHERE year=2016, the way to make the query prune all other YEAR partitions is to include PARTITION BY year
in the analytic function call; for example, OVER (PARTITION BY year,other_columns
other_analytic_clauses).
Dynamic Partition Pruning
The original mechanism used to prune partitions is static partition pruning, in which the conditions in the WHERE clause
are analyzed to determine in advance which partitions can be safely skipped. In Impala 2.5 / CDH 5.7 and higher, Impala
can perform dynamic partition pruning, where information about the partitions is collected during the query, and
Impala prunes unnecessary partitions in ways that were impractical to predict in advance.
For example, if partition key columns are compared to literal values in a WHERE clause, Impala can perform static
partition pruning during the planning phase to only read the relevant partitions:
-- The query only needs to read 3 partitions whose key values are known ahead of time.
-- That's static partition pruning.
SELECT COUNT(*) FROM sales_table WHERE year IN (2005, 2010, 2015);
Dynamic partition pruning involves using information only available at run time, such as the result of a subquery. The
following example shows a simple dynamic partition pruning.
CREATE TABLE yy (s STRING) PARTITIONED BY (year INT);
INSERT INTO yy PARTITION (year) VALUES ('1999', 1999), ('2000', 2000),
('2001', 2001), ('2010', 2010), ('2018', 2018);
COMPUTE STATS yy;
CREATE TABLE yy2 (s STRING, year INT);
INSERT INTO yy2 VALUES ('1999', 1999), ('2000', 2000), ('2001', 2001);
COMPUTE STATS yy2;
-- The following query reads an unknown number of partitions, whose key values
-- are only known at run time. The runtime filters line shows the
-- information used in query fragment 02 to decide which partitions to skip.
EXPLAIN SELECT s FROM yy WHERE year IN (SELECT year FROM yy2);
+--------------------------------------------------------------------------+
| PLAN-ROOT SINK |
| | |
| 04:EXCHANGE [UNPARTITIONED] |
| | |
| 02:HASH JOIN [LEFT SEMI JOIN, BROADCAST] |
| | hash predicates: year = year |
| | runtime filters: RF000 <- year |
| | |
| |--03:EXCHANGE [BROADCAST] |
| | | |
| | 01:SCAN HDFS [default.yy2] |
| | partitions=1/1 files=1 size=620B |
| | |
| 00:SCAN HDFS [default.yy] |
| partitions=5/5 files=5 size=1.71KB |
| runtime filters: RF000 -> year |
+--------------------------------------------------------------------------+
SELECT s FROM yy WHERE year IN (SELECT year FROM yy2); -- Returns 3 rows from yy
PROFILE;
In the above example, Impala evaluates the subquery, sends the subquery results to all Impala nodes participating in
the query, and then each impalad daemon uses the dynamic partition pruning optimization to read only the partitions
with the relevant key values.
The output query plan from the EXPLAIN statement shows that runtime filters are enabled. The plan also shows that
it expects to read all 5 partitions of the yy table, indicating that static partition pruning will not happen.
The Filter summary in the PROFILE output shows that the scan node filtered out data based on the runtime filter used
for dynamic partition pruning.
Filter 0 (1.00 MB):
- Files processed: 3
- Files rejected: 1 (1)
- Files total: 3 (3)
Dynamic partition pruning is especially effective for queries involving joins of several large partitioned tables. Evaluating
the ON clauses of the join predicates might normally require reading data from all partitions of certain tables. If the
WHERE clauses of the query refer to the partition key columns, Impala can now often skip reading many of the partitions
while evaluating the ON clauses. The dynamic partition pruning optimization reduces the amount of I/O and the amount
of intermediate data stored and transmitted across the network during the query.
When the spill-to-disk feature is activated for a join node within a query, Impala does not produce any runtime filters
for that join operation on that host. Other join nodes within the query are not affected.
Dynamic partition pruning is part of the runtime filtering feature, which applies to other kinds of queries in addition
to queries against partitioned tables. See Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on page 588 for
full details about this feature.
Partition Key Columns
The columns you choose as the partition keys should be ones that are frequently used to filter query results in important,
large-scale queries. Popular examples are some combination of year, month, and day when the data has associated
time values, and geographic region when the data is associated with some place.
• For time-based data, split out the separate parts into their own columns, because Impala cannot partition based
on a TIMESTAMP column.
• The data type of the partition columns does not have a significant effect on the storage required, because the
values from those columns are not stored in the data files, rather they are represented as strings inside HDFS
directory names.
• In CDH 5.7 / Impala 2.5 and higher, you can enable the OPTIMIZE_PARTITION_KEY_SCANS query option to
speed up queries that only refer to partition key columns, such as SELECT MAX(year). This setting is not enabled
by default because the query behavior is slightly different if the table contains partition directories without actual
data inside. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (CDH 5.7 or higher only) on page 348 for details.
• Partitioned tables can contain complex type columns. All the partition key columns must be scalar types.
• Remember that when Impala queries data stored in HDFS, it is most efficient to use multi-megabyte files to take
advantage of the HDFS block size. For Parquet tables, the block size (and ideal size of the data files) is 256 MB in
Impala 2.0 and later. Therefore, avoid specifying too many partition key columns, which could result in individual
partitions containing only small amounts of data. For example, if you receive 1 GB of data per day, you might
partition by year, month, and day; while if you receive 5 GB of data per minute, you might partition by year, month,
day, hour, and minute. If you have data with a geographic component, you might partition based on postal code
if you have many megabytes of data for each postal code, but if not, you might partition by some larger region
such as city, state, or country.
If you frequently run aggregate functions such as MIN(), MAX(), and COUNT(DISTINCT) on partition key columns,
consider enabling the OPTIMIZE_PARTITION_KEY_SCANS query option, which optimizes such queries. This feature
is available in CDH 5.7 / Impala 2.5 and higher. See OPTIMIZE_PARTITION_KEY_SCANS Query Option (CDH 5.7 or higher
only) on page 348 for the kinds of queries that this option applies to, and slight differences in how partitions are evaluated
when this query option is enabled.
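For example, with a hypothetical table partitioned by year, the option might be enabled for a session like this:
SET OPTIMIZE_PARTITION_KEY_SCANS=1;
SELECT MIN(year), MAX(year), NDV(year) FROM census_data;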
Setting Different File Formats for Partitions
Partitioned tables have the flexibility to use different file formats for different partitions. (For background information
about the different file formats Impala supports, see How Impala Works with Hadoop File Formats on page 634.) For
example, if you originally received data in text format, then received new data in RCFile format, and eventually began
receiving data in Parquet format, all that data could reside in the same table for queries. You just need to ensure that
the table is structured so that the data files that use different file formats reside in separate partitions.
For example, here is how you might switch from text to Parquet data as you receive data for different years:
[localhost:21000] > create table census (name string) partitioned by (year smallint);
[localhost:21000] > alter table census add partition (year=2012); -- Text format;
[localhost:21000] > alter table census add partition (year=2013); -- Text format switches to Parquet before data loaded;
[localhost:21000] > alter table census partition (year=2013) set fileformat parquet;
[localhost:21000] > insert into census partition (year=2012) values
('Smith'),('Jones'),('Lee'),('Singh');
[localhost:21000] > insert into census partition (year=2013) values
('Flores'),('Bogomolov'),('Cooper'),('Appiah');
At this point, the HDFS directory for year=2012 contains a text-format data file, while the HDFS directory for year=2013
contains a Parquet data file. As always, when loading non-trivial data, you would use INSERT ... SELECT or LOAD
DATA to import data in large batches, rather than INSERT ... VALUES which produces small files that are inefficient
for real-world queries.
For other file types that Impala cannot create natively, you can switch into Hive and issue the ALTER TABLE ...
SET FILEFORMAT statements and INSERT or LOAD DATA statements there. After switching back to Impala, issue a
REFRESH table_name statement so that Impala recognizes any partitions or new data added through Hive.
Managing Partitions
You can add, drop, set the expected file format, or set the HDFS location of the data files for individual partitions within
an Impala table. See ALTER TABLE Statement on page 205 for syntax details, and Setting Different File Formats for
Partitions on page 630 for tips on managing tables containing partitions with different file formats.
Note: If you are creating a partition for the first time and specifying its location, for maximum efficiency,
use a single ALTER TABLE statement including both the ADD PARTITION and LOCATION clauses,
rather than separate statements with ADD PARTITION and SET LOCATION clauses.
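For example, continuing with the census table and a hypothetical HDFS path, a single statement adds the partition and sets its location at the same time:

alter table census add partition (year=2015)
  location '/user/impala/data/census/year=2015';
-- Dropping a partition later is a separate statement:
alter table census drop partition (year=2012);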
What happens to the data files when a partition is dropped depends on whether the partitioned table is designated
as internal or external. For an internal (managed) table, the data files are deleted. For example, if data in the partitioned
table is a copy of raw data files stored elsewhere, you might save disk space by dropping older partitions that are no
longer required for reporting, knowing that the original data is still available if needed later. For an external table, the
data files are left alone. For example, dropping a partition without deleting the associated files lets Impala consider a
smaller set of partitions, improving query efficiency and reducing overhead for DDL operations on the table; if the data
is needed again later, you can add the partition again. See Overview of Impala Tables on page 196 for details and
examples.
Using Partitioning with Kudu Tables
Kudu tables use a more fine-grained partitioning scheme than tables containing HDFS data files. You specify a PARTITION
BY clause with the CREATE TABLE statement to identify how to divide the values from the partition key columns.
See Partitioning for Kudu Tables on page 675 for details and examples of the partitioning techniques for Kudu tables.
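As an illustration only (the table and column names are hypothetical), a Kudu table definition might use hash partitioning like this:

create table kudu_events
(
  id bigint,
  event_time timestamp,
  details string,
  primary key (id)
)
partition by hash (id) partitions 16
stored as kudu;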
Keeping Statistics Up to Date for Partitioned Tables
Because the COMPUTE STATS statement can be resource-intensive to run on a partitioned table as new partitions are
added, Impala includes a variation of this statement that allows computing statistics on a per-partition basis such that
stats can be incrementally updated when new partitions are added.
Important:
For a particular table, use either COMPUTE STATS or COMPUTE INCREMENTAL STATS. The two kinds
of stats do not interoperate with each other at the table level. If you run COMPUTE INCREMENTAL
STATS on a table that already has full stats, the full stats are overwritten; if you run COMPUTE STATS
on a table that has incremental stats, all the incremental stats are dropped for consistency.
When you run COMPUTE INCREMENTAL STATS on a table for the first time, the statistics are computed
again from scratch regardless of whether the table already has statistics. Therefore, expect a one-time
resource-intensive operation for scanning the entire table when running COMPUTE INCREMENTAL
STATS for the first time on a given table.
In Impala 3.0 and lower, approximately 400 bytes of metadata per column per partition are needed
for caching. Tables with a big number of partitions and many columns can add up to a significant
memory overhead as the metadata must be cached on the catalogd host and on every impalad
host that is eligible to be a coordinator. If this metadata for all tables exceeds 2 GB, you might
experience service downtime. In Impala 3.1 and higher, the issue was alleviated with an improved
handling of incremental stats.
The COMPUTE INCREMENTAL STATS variation computes statistics only for partitions that were added or changed
since the last COMPUTE INCREMENTAL STATS statement, rather than the entire table. It is typically used for tables
where a full COMPUTE STATS operation takes too long to be practical each time a partition is added or dropped. See
Generating Table and Column Statistics on page 578 for full usage details.
-- Initially the table has no incremental stats, as indicated
-- 'false' under Incremental stats.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | -1 | 1 | 223.74KB | NOT CACHED | PARQUET | false
| Children | -1 | 1 | 230.05KB | NOT CACHED | PARQUET | false
| Electronics | -1 | 1 | 232.67KB | NOT CACHED | PARQUET | false
| Home | -1 | 1 | 232.56KB | NOT CACHED | PARQUET | false
| Jewelry | -1 | 1 | 223.72KB | NOT CACHED | PARQUET | false
| Men | -1 | 1 | 231.25KB | NOT CACHED | PARQUET | false
| Music | -1 | 1 | 237.90KB | NOT CACHED | PARQUET | false
| Shoes | -1 | 1 | 234.90KB | NOT CACHED | PARQUET | false
| Sports | -1 | 1 | 227.97KB | NOT CACHED | PARQUET | false
| Women | -1 | 1 | 226.27KB | NOT CACHED | PARQUET | false
| Total | -1 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After the first COMPUTE INCREMENTAL STATS,
-- all partitions have stats. The first
-- COMPUTE INCREMENTAL STATS scans the whole
-- table, discarding any previous stats from
-- a traditional COMPUTE STATS statement.
compute incremental stats item_partitioned;
+-------------------------------------------+
| summary |
+-------------------------------------------+
| Updated 10 partition(s) and 21 column(s). |
+-------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | 1783 | 1 | 227.97KB | NOT CACHED | PARQUET | true
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 10 | 2.25MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- Add a new partition...
alter table item_partitioned add partition (i_category='Camping');
-- Add or replace files in HDFS outside of Impala,
-- rendering the stats for a partition obsolete.
!import_data_into_sports_partition.sh
refresh item_partitioned;
drop incremental stats item_partitioned partition (i_category='Sports');
-- Now some partitions have incremental stats
-- and some do not.
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Camping | -1 | 1 | 408.02KB | NOT CACHED | PARQUET | false
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | -1 | 1 | 227.97KB | NOT CACHED | PARQUET | false
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 11 | 2.65MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
-- After another COMPUTE INCREMENTAL STATS,
-- all partitions have incremental stats, and only the 2
-- partitions without incremental stats were scanned.
compute incremental stats item_partitioned;
+------------------------------------------+
| summary |
+------------------------------------------+
| Updated 2 partition(s) and 21 column(s). |
+------------------------------------------+
show table stats item_partitioned;
+-------------+-------+--------+----------+--------------+---------+------------------
| i_category | #Rows | #Files | Size | Bytes Cached | Format | Incremental stats
+-------------+-------+--------+----------+--------------+---------+------------------
| Books | 1733 | 1 | 223.74KB | NOT CACHED | PARQUET | true
| Camping | 5328 | 1 | 408.02KB | NOT CACHED | PARQUET | true
| Children | 1786 | 1 | 230.05KB | NOT CACHED | PARQUET | true
| Electronics | 1812 | 1 | 232.67KB | NOT CACHED | PARQUET | true
| Home | 1807 | 1 | 232.56KB | NOT CACHED | PARQUET | true
| Jewelry | 1740 | 1 | 223.72KB | NOT CACHED | PARQUET | true
| Men | 1811 | 1 | 231.25KB | NOT CACHED | PARQUET | true
| Music | 1860 | 1 | 237.90KB | NOT CACHED | PARQUET | true
| Shoes | 1835 | 1 | 234.90KB | NOT CACHED | PARQUET | true
| Sports | 1783 | 1 | 227.97KB | NOT CACHED | PARQUET | true
| Women | 1790 | 1 | 226.27KB | NOT CACHED | PARQUET | true
| Total | 17957 | 11 | 2.65MB | 0B | |
+-------------+-------+--------+----------+--------------+---------+------------------
How Impala Works with Hadoop File Formats
Impala supports several familiar file formats used in Apache Hadoop. Impala can load and query data files produced
by other Hadoop components such as Spark, and data files produced by Impala can be used by other components also.
The following sections discuss the procedures, limitations, and performance considerations for using each file format
with Impala.
The file format used for an Impala table has significant performance consequences. Some file formats include
compression support that affects the size of data on the disk and, consequently, the amount of I/O and CPU resources
required to deserialize data. The amounts of I/O and CPU resources required can be a limiting factor in query performance
since querying often begins with moving and decompressing data. To reduce the potential impact of this part of the
process, data is often compressed. By compressing data, a smaller total number of bytes are transferred from disk to
memory. This reduces the amount of time taken to transfer the data, but a tradeoff occurs when the CPU decompresses
the content.
For the file formats that Impala cannot write to, create the table from within Impala whenever possible and insert data
using another component such as Hive or Spark. See the table below for specific file formats.
The following table lists the file formats that Impala supports.
Parquet
Format: Structured.
Compression codecs: Snappy, gzip, zstd; currently Snappy by default.
Impala can CREATE? Yes.
Impala can INSERT? Yes: CREATE TABLE, INSERT, LOAD DATA, and query.
ORC
Format: Structured.
Compression codecs: gzip, Snappy, LZO, LZ4; currently gzip by default.
Impala can CREATE? The ORC support is an experimental feature since CDH 6.1 / Impala 3.1 & Impala 2.12. To disable it, set --enable_orc_scanner to false when starting the cluster.
Impala can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
Text
Format: Unstructured.
Compression codecs: LZO, gzip, bzip2, Snappy.
Impala can CREATE? Yes. For CREATE TABLE with no STORED AS clause, the default file format is uncompressed text, with values separated by ASCII 0x01 characters (typically represented as Ctrl-A).
Impala can INSERT? Yes if uncompressed. No if compressed. If LZO compression is used, you must create the table and load data in Hive. If other kinds of compression are used, you must load data through LOAD DATA, Hive, or manually in HDFS.
Avro
Format: Structured.
Compression codecs: Snappy, gzip, deflate.
Impala can CREATE? Yes, in Impala 1.4.0 and higher. In lower versions, create the table using Hive.
Impala can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
RCFile
Format: Structured.
Compression codecs: Snappy, gzip, deflate, bzip2.
Impala can CREATE? Yes.
Impala can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
SequenceFile
Format: Structured.
Compression codecs: Snappy, gzip, deflate, bzip2.
Impala can CREATE? Yes.
Impala can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
Impala supports the following compression codecs:
Snappy
Recommended for its effective balance between compression ratio and decompression speed. Snappy compression
is very fast, but gzip provides greater space savings. Supported for text, RC, Sequence, and Avro files in Impala 2.0
and higher.
Gzip
Recommended when achieving the highest level of compression (and therefore greatest disk-space savings) is
desired. Supported for text, RC, Sequence and Avro files in Impala 2.0 and higher.
Deflate
Not supported for text files.
Bzip2
Supported for text, RC, and Sequence files in Impala 2.0 and higher.
LZO
For text files only. Impala can query LZO-compressed text tables, but currently cannot create them or insert data
into them. You need to perform these operations in Hive.
Zstd
For Parquet files only.
Choosing the File Format for a Table
Different file formats and compression codecs work better for different data sets. Choosing the proper format for your
data can yield performance improvements. Use the following considerations to decide which combination of file format
and compression to use for a particular table:
• If you are working with existing files that are already in a supported file format, use the same format for the Impala
table if performance is acceptable. If the original format does not yield acceptable query performance or resource
usage, consider creating a new Impala table with different file format or compression characteristics, and doing
a one-time conversion by rewriting the data to the new table.
• Text files are convenient to produce through many different tools, and are human-readable for ease of verification
and debugging. Those characteristics are why text is the default format for an Impala CREATE TABLE statement.
However, when performance and resource usage are the primary considerations, use one of the structured file
formats that include metadata and built-in compression.
A typical workflow might involve bringing data into an Impala table by copying CSV or TSV files into the appropriate
data directory, and then using the INSERT ... SELECT syntax to rewrite the data into a table using a different,
more compact file format.
Using Text Data Files with Impala Tables
Impala supports using text files as the storage format for input and output. Text files are a convenient format to use
for interchange with other applications or scripts that produce or read delimited text files, such as CSV or TSV with
commas or tabs for delimiters.
Text files are also very flexible in their column definitions. For example, a text file could have more fields than the
Impala table, and those extra fields are ignored during queries; or it could have fewer fields than the Impala table, and
those missing fields are treated as NULL values in queries. You could have fields that were treated as numbers or
timestamps in a table, then use ALTER TABLE ... REPLACE COLUMNS to switch them to strings, or the reverse.
Table 2: Text Format Support in Impala
Text
Format: Unstructured.
Compression codecs: LZO, gzip, bzip2, Snappy.
Impala can CREATE? Yes. For CREATE TABLE with no STORED AS clause, the default file format is uncompressed text, with values separated by ASCII 0x01 characters (typically represented as Ctrl-A).
Impala can INSERT? Yes if uncompressed. No if compressed. If LZO compression is used, you must create the table and load data in Hive. If other kinds of compression are used, you must load data through LOAD DATA, Hive, or manually in HDFS.
Query Performance for Impala Text Tables
Data stored in text format is relatively bulky, and not as efficient to query as binary formats such as Parquet. You
typically use text tables with Impala if that is the format in which you receive the data and you do not control that
process, or if you are a relatively new Hadoop user not yet familiar with techniques to generate files in other formats.
(Because the default format for CREATE TABLE is text, you might create your first Impala tables as text without giving
performance much thought.) Either way, look for opportunities to use more efficient file formats for the tables used
in your most performance-critical queries.
For frequently queried data, you might load the original text data files into one Impala table, then use an INSERT
statement to transfer the data to another table that uses the Parquet file format; the data is converted automatically
as it is stored in the destination table.
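For example, a minimal sketch of that conversion, assuming a text table named census_text already exists:

-- Create a Parquet copy of the text table; the data is converted as it is written.
create table census_parquet stored as parquet as select * from census_text;
-- Alternatively, INSERT ... SELECT into an already existing Parquet table works the same way.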
For more compact data, consider using LZO compression for the text files. LZO is the only compression codec that
Impala supports for text data, because the “splittable” nature of LZO data files lets different nodes work on different
parts of the same file in parallel. See Using LZO-Compressed Text Files on page 640 for details.
In Impala 2.0 and later, you can also use text data compressed in the gzip, bzip2, or Snappy formats. Because these
compressed formats are not “splittable” in the way that LZO is, there is less opportunity for Impala to parallelize queries
on them. Therefore, use these types of compressed data only for convenience if that is the format in which you receive
the data. Prefer to use LZO compression for text data if you have the choice, or convert the data to Parquet using an
INSERT ... SELECT statement to copy the original data into a Parquet table.
Note:
Impala supports bzip files created by the bzip2 command, but not bzip files with multiple streams
created by the pbzip2 command. Impala decodes only the data from the first part of such files,
leading to incomplete results.
The maximum size that Impala can accommodate for an individual bzip file is 1 GB (after
uncompression).
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
Creating Text Tables
To create a table using text data files:
If the exact format of the text data files (such as the delimiter character) is not significant, use the CREATE TABLE
statement with no extra clauses at the end to create a text-format table. For example:
create table my_table(id int, s string, n int, t timestamp, b boolean);
The data files created by any INSERT statements will use the Ctrl-A character (hex 01) as a separator between each
column value.
A common use case is to import existing text files into an Impala table. The syntax is more verbose; the significant part
is the FIELDS TERMINATED BY clause, which must be preceded by the ROW FORMAT DELIMITED clause. The statement
can end with a STORED AS TEXTFILE clause, but that clause is optional because text format tables are the default.
For example:
create table csv(id int, s string, n int, t timestamp, b boolean)
row format delimited
fields terminated by ',';
create table tsv(id int, s string, n int, t timestamp, b boolean)
row format delimited
fields terminated by '\t';
create table pipe_separated(id int, s string, n int, t timestamp, b boolean)
row format delimited
fields terminated by '|'
stored as textfile;
You can create tables with specific separator characters to import text files in familiar formats such as CSV, TSV, or
pipe-separated. You can also use these tables to produce output data files, by copying data into them through the
INSERT ... SELECT syntax and then extracting the data files from the Impala data directory.
In Impala 1.3.1 and higher, you can specify a delimiter character '\0' to use the ASCII 0 (nul) character for text tables:
create table nul_separated(id int, s string, n int, t timestamp, b boolean)
row format delimited
fields terminated by '\0'
stored as textfile;
Note:
Do not surround string values with quotation marks in text data files that you construct. If you need
to include the separator character inside a field value, for example to put a string value with a comma
inside a CSV-format data file, specify an escape character on the CREATE TABLE statement with the
ESCAPED BY clause, and insert that character immediately before any separator characters that need
escaping.
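For example, a hypothetical table definition that uses a backslash as the escape character:

create table csv_escaped (id int, s string)
row format delimited fields terminated by ','
escaped by '\\'
stored as textfile;
-- In a data file, the line:   1,Smith\, John
-- is read as id=1 and s='Smith, John'.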
Issue a DESCRIBE FORMATTED table_name statement to see the details of how each table is represented internally
in Impala.
Complex type considerations: Although you can create tables in this file format using the complex types (ARRAY,
STRUCT, and MAP) available in CDH 5.5 / Impala 2.3 and higher, currently, Impala can query these types only in Parquet
tables. The one exception to the preceding rule is COUNT(*) queries on RCFile tables that include complex types. Such
queries are allowed in CDH 5.8 / Impala 2.6 and higher.
Data Files for Text Tables
When Impala queries a table with data in text format, it consults all the data files in the data directory for that table,
with some exceptions:
• Impala ignores any hidden files, that is, files whose names start with a dot or an underscore.
• Impala queries ignore files with extensions commonly used for temporary work files by Hadoop tools. Any files
with extensions .tmp or .copying are not considered part of the Impala table. The suffix matching is
case-insensitive, so for example Impala ignores both .copying and .COPYING suffixes.
• Impala uses suffixes to recognize when text data files are compressed text. For Impala to recognize the compressed
text files, they must have the appropriate file extension corresponding to the compression codec, either .gz,
.bz2, or .snappy. The extensions can be in uppercase or lowercase.
• Otherwise, the file names are not significant. When you put files into an HDFS directory through ETL jobs, or point
Impala to an existing HDFS directory with the CREATE EXTERNAL TABLE statement, or move data files under
external control with the LOAD DATA statement, Impala preserves the original file names.
File names for data produced through Impala INSERT statements are given unique names to avoid file name conflicts.
An INSERT ... SELECT statement produces one data file from each node that processes the SELECT part of the
statement. An INSERT ... VALUES statement produces a separate data file for each statement; because Impala is
more efficient at querying a small number of huge files than a large number of tiny files, the INSERT ... VALUES syntax
is not recommended for loading a substantial volume of data. If you find yourself with a table that is inefficient due to
too many small data files, reorganize the data into a few large files by doing INSERT ... SELECT to transfer the data
to a new table.
Special values within text data files:
• Impala recognizes the literal strings inf for infinity and nan for “Not a Number”, for FLOAT and DOUBLE columns.
• Impala recognizes the literal string \N to represent NULL. When using Sqoop, specify the options
--null-non-string and --null-string to ensure all NULL values are represented correctly in the Sqoop
output files. \N needs to be escaped as in the below example:
--null-string '\\N' --null-non-string '\\N'
• By default, Sqoop writes NULL values using the string null, which causes a conversion error when such rows are
evaluated by Impala. A workaround for existing tables and data files is to change the table properties through
ALTER TABLE name SET TBLPROPERTIES("serialization.null.format"="null").
• In CDH 5.8 / Impala 2.6 and higher, Impala can optionally skip an arbitrary number of header lines from text input
files on HDFS based on the skip.header.line.count value in the TBLPROPERTIES field of the table metadata.
For example:
create table header_line(first_name string, age int)
row format delimited fields terminated by ',';
-- Back in the shell, load data into the table with commands such as:
-- cat >data.csv
-- Name,Age
-- Alice,25
-- Bob,19
-- hdfs dfs -put data.csv /user/hive/warehouse/header_line
refresh header_line;
-- Initially, the Name,Age header line is treated as a row of the table.
select * from header_line limit 10;
+------------+------+
| first_name | age |
+------------+------+
| Name | NULL |
| Alice | 25 |
| Bob | 19 |
+------------+------+
alter table header_line set tblproperties('skip.header.line.count'='1');
-- Once the table property is set, queries skip the specified number of lines
-- at the beginning of each text data file. Therefore, all the files in the table
-- should follow the same convention for header lines.
select * from header_line limit 10;
+------------+-----+
| first_name | age |
+------------+-----+
| Alice | 25 |
| Bob | 19 |
+------------+-----+
Loading Data into Impala Text Tables
To load an existing text file into an Impala text table, use the LOAD DATA statement and specify the path of the file in
HDFS. That file is moved into the appropriate Impala data directory.
To load multiple existing text files into an Impala text table, use the LOAD DATA statement and specify the HDFS path
of the directory containing the files. All non-hidden files are moved into the appropriate Impala data directory.
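For example, with hypothetical HDFS paths, the two forms look like this:

-- Load a single file into the table.
load data inpath '/user/etl/staging/data1.csv' into table csv;
-- Load all the non-hidden files from a staging directory.
load data inpath '/user/etl/staging/batch1' into table csv;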
To convert data to text from any other file format supported by Impala, use a SQL statement such as:
-- Text table with default delimiter, the hex 01 character.
CREATE TABLE text_table AS SELECT * FROM other_file_format_table;
-- Text table with user-specified delimiter. Currently, you cannot specify
-- the delimiter as part of CREATE TABLE LIKE or CREATE TABLE AS SELECT.
-- But you can change an existing text table to have a different delimiter.
CREATE TABLE csv LIKE other_file_format_table;
ALTER TABLE csv SET SERDEPROPERTIES ('serialization.format'=',', 'field.delim'=',');
INSERT INTO csv SELECT * FROM other_file_format_table;
This can be a useful technique to see how Impala represents special values within a text-format data file. Use the
DESCRIBE FORMATTED statement to see the HDFS directory where the data files are stored, then use Linux commands
such as hdfs dfs -ls hdfs_directory and hdfs dfs -cat hdfs_file to display the contents of an
Impala-created text file.
To create a few rows in a text table for test purposes, you can use the INSERT ... VALUES syntax:
INSERT INTO text_table VALUES ('string_literal',100,hex('hello world'));
Note: Because Impala and the HDFS infrastructure are optimized for multi-megabyte files, avoid the
INSERT ... VALUES notation when you are inserting many rows. Each INSERT ... VALUES
statement produces a new tiny file, leading to fragmentation and reduced performance. When creating
any substantial volume of new data, use one of the bulk loading techniques such as LOAD DATA or
INSERT ... SELECT. Or, use an HBase table for single-row INSERT operations, because HBase
tables are not subject to the same fragmentation issues as tables stored on HDFS.
When you create a text file for use with an Impala text table, specify \N to represent a NULL value. For the differences
between NULL and empty strings, see NULL on page 170.
If a text file has fewer fields than the columns in the corresponding Impala table, all the corresponding columns are
set to NULL when the data in that file is read by an Impala query.
If a text file has more fields than the columns in the corresponding Impala table, the extra fields are ignored when the
data in that file is read by an Impala query.
You can also use manual HDFS operations such as hdfs dfs -put or hdfs dfs -cp to put data files in the data
directory for an Impala table. When you copy or move new data files into the HDFS directory for the Impala table, issue
a REFRESH table_name statement in impala-shell before issuing the next query against that table, to make Impala
recognize the newly added files.
Using LZO-Compressed Text Files
Impala supports using text data files that employ LZO compression. Cloudera recommends compressing text data files
when practical. Impala queries are usually I/O-bound; reducing the amount of data read from disk typically speeds up
a query, despite the extra CPU work to uncompress the data in memory.
Impala can work with LZO-compressed text files. LZO-compressed files are preferable to text files compressed by other
codecs, because LZO-compressed files are “splittable”, meaning that different portions of a file can be uncompressed
and processed independently by different nodes.
Impala does not currently support writing LZO-compressed text files.
Because Impala can query LZO-compressed files but currently cannot write them, you use Hive to do the initial CREATE
TABLE and load the data, then switch back to Impala to run queries. For instructions on setting up LZO compression
for Hive CREATE TABLE and INSERT statements, see the LZO page on the Hive wiki. Once you have created an LZO
text table, you can also manually add LZO-compressed text files to it, produced by the lzop command or similar
method.
Preparing to Use LZO-Compressed Text Files
Before using LZO-compressed tables in Impala, do the following one-time setup for each machine in the cluster. Install
the necessary packages from either the Cloudera public repository, a private repository you establish, or manually
downloaded package files. You must do these steps manually, whether or not the cluster is managed by the Cloudera Manager product.
1. Prepare your systems to work with LZO by downloading and installing the appropriate libraries:
On systems managed by Cloudera Manager using parcels:
See the setup instructions for the LZO parcel in the Cloudera Manager documentation for Cloudera Manager.
2. Configure Impala to use LZO:
Use one of the following sets of commands to refresh your package management system's repository information,
install the base LZO support for Hadoop, and install the LZO support for Impala.
For RHEL/CentOS systems:
sudo yum update
sudo yum install hadoop-lzo
sudo yum install impala-lzo
For SUSE systems:
sudo zypper update
sudo zypper install hadoop-lzo
sudo zypper install impala-lzo
For Debian/Ubuntu systems:
sudo apt-get update
sudo apt-get install hadoop-lzo
sudo apt-get install impala-lzo
Note:
The level of the impala-lzo package is closely tied to the version of Impala you use. Any time
you upgrade Impala, re-do the installation command for impala-lzo on each applicable machine
to make sure you have the appropriate version of that package.
3. For core-site.xml on the client and server (that is, in the configuration directories for both Impala and Hadoop),
append com.hadoop.compression.lzo.LzopCodec to the comma-separated list of codecs. For example:
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,
org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.DeflateCodec,
org.apache.hadoop.io.compress.SnappyCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
Note:
If this is the first time you have edited the Hadoop core-site.xml file, note that the
/etc/hadoop/conf directory is typically a symbolic link, so the canonical core-site.xml
might reside in a different directory:
$ ls -l /etc/hadoop
total 8
lrwxrwxrwx. 1 root root 29 Feb 26 2013 conf ->
/etc/alternatives/hadoop-conf
lrwxrwxrwx. 1 root root 10 Feb 26 2013 conf.dist -> conf.empty
drwxr-xr-x. 2 root root 4096 Feb 26 2013 conf.empty
drwxr-xr-x. 2 root root 4096 Oct 28 15:46 conf.pseudo
If the io.compression.codecs property is missing from core-site.xml, only add
com.hadoop.compression.lzo.LzopCodec to the new property value, not all the names
from the preceding example.
4. Restart the MapReduce and Impala services.
Creating LZO Compressed Text Tables
A table containing LZO-compressed text files must be created in Hive with the following storage clause:
STORED AS
INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
Also, certain Hive settings need to be in effect. For example:
hive> SET mapreduce.output.fileoutputformat.compress=true;
hive> SET hive.exec.compress.output=true;
hive> SET
mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec;
hive> CREATE TABLE lzo_t (s string) STORED AS
> INPUTFORMAT 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat';
hive> INSERT INTO TABLE lzo_t SELECT col1, col2 FROM uncompressed_text_table;
Once you have created LZO-compressed text tables, you can convert data stored in other tables (regardless of file
format) by using the INSERT ... SELECT statement in Hive.
Files in an LZO-compressed table must use the .lzo extension. Examine the files in the HDFS data directory after doing
the INSERT in Hive, to make sure the files have the right extension. If the required settings are not in place, you end
up with regular uncompressed files, and Impala cannot access the table because it finds data files with the wrong
(uncompressed) format.
After loading data into an LZO-compressed text table, index the files so that they can be split. You index the files by
running a Java class, com.hadoop.compression.lzo.DistributedLzoIndexer, through the Linux command
line. This Java class is included in the hadoop-lzo package.
Run the indexer using a command like the following:
$ hadoop jar /usr/lib/hadoop/lib/hadoop-lzo-version-gplextras.jar
com.hadoop.compression.lzo.DistributedLzoIndexer /hdfs_location_of_table/
Note: If the path of the JAR file in the preceding example is not recognized, do a find command to
locate hadoop-lzo-*-gplextras.jar and use that path.
Indexed files have the same name as the file they index, with the .index extension. If the data files are not indexed,
Impala queries still work, but the queries read the data from remote DataNodes, which is very inefficient.
Once the LZO-compressed tables are created, and data is loaded and indexed, you can query them through Impala.
As always, the first time you start impala-shell after creating a table in Hive, issue an INVALIDATE METADATA
statement so that Impala recognizes the new table. (In Impala 1.2 and higher, you only have to run INVALIDATE
METADATA on one node, rather than on all the Impala nodes.)
Using gzip, bzip2, or Snappy-Compressed Text Files
In Impala 2.0 and later, Impala supports using text data files that employ gzip, bzip2, or Snappy compression. These
compression types are primarily for convenience within an existing ETL pipeline rather than maximum performance.
Although it requires less I/O to read compressed text than the equivalent uncompressed text, files compressed by
these codecs are not “splittable” and therefore cannot take full advantage of the Impala parallel query capability.
As each bzip2- or Snappy-compressed text file is processed, the node doing the work reads the entire file into memory
and then decompresses it. Therefore, the node must have enough memory to hold both the compressed and
uncompressed data from the text file. The memory required to hold the uncompressed data is difficult to estimate in
advance, potentially causing problems on systems with low memory limits or with resource management enabled. In
Impala 2.1 and higher, this memory overhead is reduced for gzip-compressed text files. The gzipped data is decompressed
as it is read, rather than all at once.
To create a table to hold gzip, bzip2, or Snappy-compressed text, create a text table with no special compression
options. Specify the delimiter and escape character if required, using the ROW FORMAT clause.
Because Impala can query compressed text files but currently cannot write them, produce the compressed text files
outside Impala and use the LOAD DATA statement or manual HDFS commands to move them to the appropriate Impala
data directory. (Or, you can use CREATE EXTERNAL TABLE and point the LOCATION attribute at a directory containing
existing compressed text files.)
For Impala to recognize the compressed text files, they must have the appropriate file extension corresponding to the
compression codec, either .gz, .bz2, or .snappy. The extensions can be in uppercase or lowercase.
The following example shows how you can create a regular text table, put different kinds of compressed and
uncompressed files into it, and have Impala automatically recognize and decompress each one based on its file
extension:
create table csv_compressed (a string, b string, c string)
row format delimited fields terminated by ",";
insert into csv_compressed values
('one - uncompressed', 'two - uncompressed', 'three - uncompressed'),
('abc - uncompressed', 'xyz - uncompressed', '123 - uncompressed');
...make equivalent .gz, .bz2, and .snappy files and load them into same table directory...
select * from csv_compressed;
+--------------------+--------------------+----------------------+
| a | b | c |
+--------------------+--------------------+----------------------+
| one - snappy | two - snappy | three - snappy |
| one - uncompressed | two - uncompressed | three - uncompressed |
| abc - uncompressed | xyz - uncompressed | 123 - uncompressed |
| one - bz2 | two - bz2 | three - bz2 |
| abc - bz2 | xyz - bz2 | 123 - bz2 |
| one - gzip | two - gzip | three - gzip |
| abc - gzip | xyz - gzip | 123 - gzip |
+--------------------+--------------------+----------------------+
$ hdfs dfs -ls
'hdfs://127.0.0.1:8020/user/hive/warehouse/file_formats.db/csv_compressed/';
...truncated for readability...
75
hdfs://127.0.0.1:8020/user/hive/warehouse/file_formats.db/csv_compressed/csv_compressed.snappy
79
hdfs://127.0.0.1:8020/user/hive/warehouse/file_formats.db/csv_compressed/csv_compressed_bz2.csv.bz2
80
hdfs://127.0.0.1:8020/user/hive/warehouse/file_formats.db/csv_compressed/csv_compressed_gzip.csv.gz
116
hdfs://127.0.0.1:8020/user/hive/warehouse/file_formats.db/csv_compressed/dd414df64d67d49b_data.0.
Using the Parquet File Format with Impala Tables
Impala allows you to create, manage, and query Parquet tables. Parquet is a column-oriented binary file format intended
to be highly efficient for the types of large-scale queries that Impala is best at. Parquet is especially good for queries
scanning particular columns within a table, for example, to query “wide” tables with many columns, or to perform
aggregation operations such as SUM() and AVG() that need to process most or all of the values from a column. Each
Parquet data file written by Impala contains the values for a set of rows (referred to as the “row group”). Within a data
file, the values from each column are organized so that they are all adjacent, enabling good compression for the values
from that column. Queries against a Parquet table can retrieve and analyze these values from any column quickly and
with minimal I/O.
See How Impala Works with Hadoop File Formats on page 634 for the summary of Parquet format support.
Creating Parquet Tables in Impala
To create a table named PARQUET_TABLE that uses the Parquet format, you would use a command like the following,
substituting your own table name, column names, and data types:
[impala-host:21000] > create table parquet_table_name (x INT, y STRING) STORED AS PARQUET;
Or, to clone the column names and data types of an existing table:
[impala-host:21000] > create table parquet_table_name LIKE other_table_name STORED AS
PARQUET;
In Impala 1.4.0 and higher, you can derive column definitions from a raw Parquet data file, even without an existing
Impala table. For example, you can create an external table pointing to an HDFS directory, and base the column
definitions on one of the files in that directory:
CREATE EXTERNAL TABLE ingest_existing_files LIKE PARQUET
'/user/etl/destination/datafile1.dat'
STORED AS PARQUET
LOCATION '/user/etl/destination';
Or, you can refer to an existing data file and create a new empty table with suitable column definitions. Then you can
use INSERT to create new data files or LOAD DATA to transfer existing data files into the new table.
CREATE TABLE columns_from_data_file LIKE PARQUET '/user/etl/destination/datafile1.dat'
STORED AS PARQUET;
The default properties of the newly created table are the same as for any other CREATE TABLE statement. For example,
the default file format is text; if you want the new table to use the Parquet file format, include the STORED AS PARQUET
clause also.
In this example, the new table is partitioned by year, month, and day. These partition key columns are not part of the
data file, so you specify them in the CREATE TABLE statement:
CREATE TABLE columns_from_data_file LIKE PARQUET '/user/etl/destination/datafile1.dat'
PARTITIONED BY (year INT, month TINYINT, day TINYINT)
STORED AS PARQUET;
See CREATE TABLE Statement on page 234 for more details about the CREATE TABLE LIKE PARQUET syntax.
Once you have created a table, to insert data into that table, use a command similar to the following, again with your
own table names:
[impala-host:21000] > insert overwrite table parquet_table_name select * from
other_table_name;
If the Parquet table has a different number of columns or different column names than the other table, specify the
names of columns from the other table rather than * in the SELECT statement.
Loading Data into Parquet Tables
Choose from the following techniques for loading data into Parquet tables, depending on whether the original data is
already in an Impala table, or exists as raw data files outside Impala.
If you already have data in an Impala or Hive table, perhaps in a different file format or partitioning scheme, you can
transfer the data to a Parquet table using the Impala INSERT...SELECT syntax. You can convert, filter, repartition,
and do other things to the data as part of this same INSERT statement. See Compressions for Parquet Data Files on
page 647 for some examples showing how to insert data into Parquet tables.
When inserting into partitioned tables, especially using the Parquet file format, you can include a hint in the INSERT
statement to fine-tune the overall performance of the operation and its resource usage. See Optimizer Hints in Impala
for using hints in the INSERT statements.
Any INSERT statement for a Parquet table requires enough free space in the HDFS filesystem to write one block.
Because Parquet data files use a large block size (256 MB by default in Impala 2.0 and later), an INSERT might fail (even for a very small amount of
data) if your HDFS is running low on space.
Avoid the INSERT...VALUES syntax for Parquet tables, because INSERT...VALUES produces a separate tiny data
file for each INSERT...VALUES statement, and the strength of Parquet is in its handling of data (compressing,
parallelizing, and so on) in large chunks.
If you have one or more Parquet data files produced outside of Impala, you can quickly make the data queryable
through Impala by one of the following methods:
• The LOAD DATA statement moves a single data file or a directory full of data files into the data directory for an
Impala table. It does no validation or conversion of the data. The original data files must be somewhere in HDFS,
not the local filesystem.
• The CREATE TABLE statement with the LOCATION clause creates a table where the data continues to reside
outside the Impala data directory. The original data files must be somewhere in HDFS, not the local filesystem.
For extra safety, if the data is intended to be long-lived and reused by other applications, you can use the CREATE
EXTERNAL TABLE syntax so that the data files are not deleted by an Impala DROP TABLE statement.
• If the Parquet table already exists, you can copy Parquet data files directly into it, then use the REFRESH statement
to make Impala recognize the newly added data. Remember to preserve the block size of the Parquet data files
by using the hadoop distcp -pb command rather than a -put or -cp operation on the Parquet files. See
Example of Copying Parquet Data Files on page 648 for an example of this kind of operation.
Note:
Currently, Impala always decodes the column data in Parquet files based on the ordinal position of
the columns, not by looking up the position of each column based on its name. Parquet files produced
outside of Impala must write column data in the same order as the columns are declared in the Impala
table. Any optional columns that are omitted from the data files must be the rightmost columns in
the Impala table definition.
If you created compressed Parquet files through some tool other than Impala, make sure that any
compression codecs are supported in Parquet by Impala. For example, Impala does not currently
support LZO compression in Parquet files. Also double-check that you used any recommended
compatibility settings in the other tool, such as spark.sql.parquet.binaryAsString when
writing Parquet files through Spark.
Recent versions of Sqoop can produce Parquet output files using the --as-parquetfile option.
If the data exists outside Impala and is in some other format, combine both of the preceding techniques. First, use a
LOAD DATA or CREATE EXTERNAL TABLE ... LOCATION statement to bring the data into an Impala table that
uses the appropriate file format. Then, use an INSERT...SELECT statement to copy the data to the Parquet table,
converting to Parquet format as part of the process.
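A minimal sketch of that two-step approach, using hypothetical table names and paths:

-- Step 1: expose the existing raw files through a table in their original format.
create external table staging_csv (id int, name string, val double)
row format delimited fields terminated by ','
location '/user/etl/landing/csv_data';
-- Step 2: copy into a Parquet table, converting the data as part of the operation.
create table converted_parquet stored as parquet as select * from staging_csv;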
Loading data into Parquet tables is a memory-intensive operation, because the incoming data is buffered until it reaches
one data block in size, then that chunk of data is organized and compressed in memory before being written out. The
memory consumption can be larger when inserting data into partitioned Parquet tables, because a separate data file
is written for each combination of partition key column values, potentially requiring several large chunks to be
manipulated in memory at once.
When inserting into a partitioned Parquet table, Impala redistributes the data among the nodes to reduce memory
consumption. You might still need to temporarily increase the memory dedicated to Impala during the insert operation,
or break up the load operation into several INSERT statements, or both.
Note: All the preceding techniques assume that the data you are loading matches the structure of
the destination table, including column order, column names, and partition layout. To transform or
reorganize the data, start by loading the data into a Parquet table that matches the underlying structure
of the data, then use one of the table-copying techniques such as CREATE TABLE AS SELECT or
INSERT ... SELECT to reorder or rename columns, divide the data among multiple partitions, and
so on. For example to take a single comprehensive Parquet data file and load it into a partitioned
table, you would use an INSERT ... SELECT statement with dynamic partitioning to let Impala
create separate data files with the appropriate partition values; for an example, see INSERT Statement
on page 277.
Query Performance for Impala Parquet Tables
Query performance for Parquet tables depends on the number of columns needed to process the SELECT list and
WHERE clauses of the query, the way data is divided into large data files with block size equal to file size, the reduction
in I/O by reading the data for each column in compressed format, which data files can be skipped (for partitioned
tables), and the CPU overhead of decompressing the data for each column.
For example, the following is an efficient query for a Parquet table:
select avg(income) from census_data where state = 'CA';
The query processes only 2 columns out of a large number of total columns. If the table is partitioned by the STATE
column, it is even more efficient because the query only has to read and decode 1 column from each data file, and it
can read only the data files in the partition directory for the state 'CA', skipping the data files for all the other states,
which will be physically located in other directories.
The following is a relatively inefficient query for a Parquet table:
select * from census_data;
Impala would have to read the entire contents of each large data file, and decompress the contents of each column
for each row group, negating the I/O optimizations of the column-oriented format. This query might still be faster for
a Parquet table than a table with some other file format, but it does not take advantage of the unique strengths of
Parquet data files.
Impala can optimize queries on Parquet tables, especially join queries, better when statistics are available for all the
tables. Issue the COMPUTE STATS statement for each table after substantial amounts of data are loaded into or
appended to it. See COMPUTE STATS Statement on page 219 for details.
The runtime filtering feature, available in CDH 5.7 / Impala 2.5 and higher, works best with Parquet tables. The per-row
filtering aspect only applies to Parquet tables. See Runtime Filtering for Impala Queries (CDH 5.7 or higher only) on
page 588 for details.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
In CDH 5.12 and higher, Parquet files written by Impala include embedded metadata specifying the minimum and
maximum values for each column, within each row group and each data page within the row group. Impala-written
Parquet files typically contain a single row group; a row group can contain many data pages. Impala uses this information
(currently, only the metadata for each row group) when reading each Parquet data file during a query, to quickly
determine whether each row group within the file potentially includes any rows that match the conditions in the WHERE
clause. For example, if the column X within a particular Parquet file has a minimum value of 1 and a maximum value
of 100, then a query including the clause WHERE x > 200 can quickly determine that it is safe to skip that particular
file, instead of scanning all the associated column values. This optimization technique is especially effective for tables
that use the SORT BY clause for the columns most frequently checked in WHERE clauses, because any INSERT operation
on such tables produces Parquet data files with relatively narrow ranges of column values within each file.
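For example, a sketch (with hypothetical table and column names) of a table laid out to take advantage of this row-group skipping:

-- SORT BY keeps the range of X values within each data file relatively narrow.
create table events_sorted (x bigint, details string)
sort by (x)
stored as parquet;
insert into events_sorted select x, details from events_raw;
-- Row groups whose maximum X value is at most 200 can be skipped entirely.
select count(*) from events_sorted where x > 200;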
Partitioning for Parquet Tables
As explained in Partitioning for Impala Tables on page 625, partitioning is an important performance technique for
Impala generally. This section explains some of the performance considerations for partitioned Parquet tables.
The Parquet file format is ideal for tables containing many columns, where most queries only refer to a small subset
of the columns. As explained in How Parquet Data Files Are Organized on page 652, the physical layout of Parquet data
files lets Impala read only a small fraction of the data for many queries. The performance benefits of this approach are
amplified when you use Parquet tables in combination with partitioning. Impala can skip the data files for certain
partitions entirely, based on the comparisons in the WHERE clause that refer to the partition key columns. For example,
queries on partitioned tables often analyze data for time intervals based on columns such as YEAR, MONTH, and/or
DAY, or for geographic regions. Remember that Parquet data files use a large block size, so when deciding how finely
to partition the data, try to find a granularity where each partition contains 256 MB or more of data, rather than
creating a large number of smaller files split among many partitions.
Inserting into a partitioned Parquet table can be a resource-intensive operation, because each Impala node could
potentially be writing a separate data file to HDFS for each combination of different values for the partition key columns.
The large number of simultaneous open files could exceed the HDFS “transceivers” limit. To avoid exceeding this limit,
consider the following techniques:
• Load different subsets of data using separate INSERT statements with specific values for the PARTITION clause,
such as PARTITION (year=2010).
• Increase the “transceivers” value for HDFS, sometimes spelled “xcievers” (sic). The property value in the
hdfs-site.xml configuration file is dfs.datanode.max.transfer.threads. For example, if you were loading
12 years of data partitioned by year, month, and day, even a value of 4096 might not be high enough. This blog
post explores the considerations for setting this value higher or lower, using HBase examples for illustration.
• Use the COMPUTE STATS statement to collect column statistics on the source table from which data is being
copied, so that the Impala query can estimate the number of different values in the partition key columns and
distribute the work accordingly.
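A minimal sketch combining the first and last techniques in the list above, with hypothetical table names:

-- Collect column statistics on the source table before copying its data.
compute stats raw_sales;
-- Load one partition's worth of data per INSERT to limit the number of open files.
insert into sales_parquet partition (year=2010) select id, amount from raw_sales where year = 2010;
insert into sales_parquet partition (year=2011) select id, amount from raw_sales where year = 2011;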
Compressions for Parquet Data Files
When Impala writes Parquet data files using the INSERT statement, the underlying compression is controlled by the
COMPRESSION_CODEC query option. (Prior to Impala 2.0, the query option name was PARQUET_COMPRESSION_CODEC.)
The allowed values for this query option are snappy (the default), gzip, zstd, and none. The option value is not
case-sensitive. If the option is set to an unrecognized value, all kinds of queries will fail due to the invalid option setting,
not just queries involving Parquet tables.
Example of Parquet Table with Snappy Compression
By default, the underlying data files for a Parquet table are compressed with Snappy. The combination of fast compression
and decompression makes it a good choice for many data sets. To ensure Snappy compression is used, for example
after experimenting with other compression codecs, set the COMPRESSION_CODEC query option to snappy before
inserting the data:
[localhost:21000] > create database parquet_compression;
[localhost:21000] > use parquet_compression;
[localhost:21000] > create table parquet_snappy like raw_text_data;
[localhost:21000] > set COMPRESSION_CODEC=snappy;
[localhost:21000] > insert into parquet_snappy select * from raw_text_data;
Inserted 1000000000 rows in 181.98s
Example of Parquet Table with GZip Compression
If you need more intensive compression (at the expense of more CPU cycles for uncompressing during queries), set
the COMPRESSION_CODEC query option to gzip before inserting the data:
[localhost:21000] > create table parquet_gzip like raw_text_data;
[localhost:21000] > set COMPRESSION_CODEC=gzip;
[localhost:21000] > insert into parquet_gzip select * from raw_text_data;
Inserted 1000000000 rows in 1418.24s
Example of Uncompressed Parquet Table
If your data compresses very poorly, or you want to avoid the CPU overhead of compression and decompression
entirely, set the COMPRESSION_CODEC query option to none before inserting the data:
[localhost:21000] > create table parquet_none like raw_text_data;
[localhost:21000] > set COMPRESSION_CODEC=none;
[localhost:21000] > insert into parquet_none select * from raw_text_data;
Inserted 1000000000 rows in 146.90s
Examples of Sizes and Speeds for Compressed Parquet Tables
Here are some examples showing differences in data sizes and query speeds for 1 billion rows of synthetic data,
compressed with each kind of codec. As always, run similar tests with realistic data sets of your own. The actual
compression ratios, and relative insert and query speeds, will vary depending on the characteristics of the actual data.
In this case, switching from Snappy to GZip compression shrinks the data by an additional 40% or so, while switching
from Snappy compression to no compression expands the data by about 40%:
$ hdfs dfs -du -h /user/hive/warehouse/parquet_compression.db
23.1 G /user/hive/warehouse/parquet_compression.db/parquet_snappy
13.5 G /user/hive/warehouse/parquet_compression.db/parquet_gzip
32.8 G /user/hive/warehouse/parquet_compression.db/parquet_none
Because Parquet data files are typically large, each directory will have a different number of data files and the row
groups will be arranged differently.
At the same time, the less aggressive the compression, the faster the data can be decompressed. In this case using a
table with a billion rows, a query that evaluates all the values for a particular column runs faster with no compression
than with Snappy compression, and faster with Snappy compression than with Gzip compression. Query performance
depends on several other factors, so as always, run your own benchmarks with your own data to determine the ideal
tradeoff between data size, CPU efficiency, and speed of insert and query operations.
[localhost:21000] > desc parquet_snappy;
Query finished, fetching results ...
+-----------+---------+---------+
| name | type | comment |
+-----------+---------+---------+
| id | int | |
| val | int | |
| zfill | string | |
| name | string | |
| assertion | boolean | |
+-----------+---------+---------+
Returned 5 row(s) in 0.14s
[localhost:21000] > select avg(val) from parquet_snappy;
Query finished, fetching results ...
+-----------------+
| _c0 |
+-----------------+
| 250000.93577915 |
+-----------------+
Returned 1 row(s) in 4.29s
[localhost:21000] > select avg(val) from parquet_gzip;
Query finished, fetching results ...
+-----------------+
| _c0 |
+-----------------+
| 250000.93577915 |
+-----------------+
Returned 1 row(s) in 6.97s
[localhost:21000] > select avg(val) from parquet_none;
Query finished, fetching results ...
+-----------------+
| _c0 |
+-----------------+
| 250000.93577915 |
+-----------------+
Returned 1 row(s) in 3.67s
Example of Copying Parquet Data Files
Here is a final example, to illustrate how the data files using the various compression codecs are all compatible with
each other for read operations. The metadata about the compression format is written into each data file, and can be
decoded during queries regardless of the COMPRESSION_CODEC setting in effect at the time. In this example, we copy
data files from the PARQUET_SNAPPY, PARQUET_GZIP, and PARQUET_NONE tables used in the previous examples,
each containing 1 billion rows, all to the data directory of a new table PARQUET_EVERYTHING. A couple of sample
queries demonstrate that the new table now contains 3 billion rows featuring a variety of compression codecs for the
data files.
First, we create the table in Impala so that there is a destination directory in HDFS to put the data files:
[localhost:21000] > create table parquet_everything like parquet_snappy;
Query: create table parquet_everything like parquet_snappy
Then in the shell, we copy the relevant data files into the data directory for this new table. Rather than using hdfs
dfs -cp as with typical files, we use hadoop distcp -pb to ensure that the special block size of the Parquet data
files is preserved.
$ hadoop distcp -pb /user/hive/warehouse/parquet_compression.db/parquet_snappy \
/user/hive/warehouse/parquet_compression.db/parquet_everything
...MapReduce output...
$ hadoop distcp -pb /user/hive/warehouse/parquet_compression.db/parquet_gzip \
/user/hive/warehouse/parquet_compression.db/parquet_everything
...MapReduce output...
$ hadoop distcp -pb /user/hive/warehouse/parquet_compression.db/parquet_none \
/user/hive/warehouse/parquet_compression.db/parquet_everything
...MapReduce output...
Back in the impala-shell interpreter, we use the REFRESH statement to alert the Impala server to the new data
files for this table, then we can run queries demonstrating that the data files represent 3 billion rows, and the values
for one of the numeric columns match what was in the original smaller tables:
[localhost:21000] > refresh parquet_everything;
Query finished, fetching results ...
Returned 0 row(s) in 0.32s
[localhost:21000] > select count(*) from parquet_everything;
Query finished, fetching results ...
+------------+
| _c0 |
+------------+
| 3000000000 |
+------------+
Returned 1 row(s) in 8.18s
[localhost:21000] > select avg(val) from parquet_everything;
Query finished, fetching results ...
+-----------------+
| _c0 |
+-----------------+
| 250000.93577915 |
+-----------------+
Returned 1 row(s) in 13.35s
Parquet Tables for Impala Complex Types
In CDH 5.5 / Impala 2.3 and higher, Impala supports the complex types ARRAY, STRUCT, and MAP. See Complex Types
(CDH 5.5 or higher only) on page 139 for details. Because these data types are currently supported only for the Parquet
file format, if you plan to use them, become familiar with the performance and storage aspects of Parquet first.
Exchanging Parquet Data Files with Other Hadoop Components
You can read and write Parquet data files from other CDH components.
Originally, it was not possible to create Parquet data through Impala and reuse that table within Hive. Now that Parquet
support is available for Hive, reusing existing Impala Parquet data files in Hive requires updating the table metadata.
Use the following command if you are already running Impala 1.1.1 or higher:
ALTER TABLE table_name SET FILEFORMAT PARQUET;
If you are running a level of Impala that is older than 1.1.1, do the metadata update through Hive:
ALTER TABLE table_name SET SERDE 'parquet.hive.serde.ParquetHiveSerDe';
ALTER TABLE table_name SET FILEFORMAT
INPUTFORMAT "parquet.hive.DeprecatedParquetInputFormat"
OUTPUTFORMAT "parquet.hive.DeprecatedParquetOutputFormat";
Impala 1.1.1 and higher can reuse Parquet data files created by Hive, without any action required.
Impala supports the scalar data types that you can encode in a Parquet data file, but not composite or nested types
such as maps or arrays. In CDH 5.4 / Impala 2.2 and higher, Impala can query Parquet data files that include composite
or nested types, as long as the query only refers to columns with scalar types.
If you copy Parquet data files between nodes, or even between different directories on the same node, make sure to
preserve the block size by using the command hadoop distcp -pb. To verify that the block size was preserved, issue
the command hdfs fsck -blocks HDFS_path_of_impala_table_dir and check that the average block size is
at or near 256 MB (or whatever other size is defined by the PARQUET_FILE_SIZE query option). (The hadoop distcp
operation typically leaves some directories behind, with names matching _distcp_logs_*, that you can delete from
the destination directory afterward.) Run the hadoop distcp command with no arguments to see usage information for its syntax.
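For example, to verify the block size of the table directory populated in the earlier copy example:

$ # Check the average block size reported in the fsck summary.
$ hdfs fsck -blocks /user/hive/warehouse/parquet_compression.db/parquet_everything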
Impala can query Parquet files that use the PLAIN, PLAIN_DICTIONARY, BIT_PACKED, and RLE encodings. Currently,
Impala does not support RLE_DICTIONARY encoding. When creating files outside of Impala for use by Impala, make
sure to use one of the supported encodings. In particular, for MapReduce jobs, do not define parquet.writer.version
(especially not as PARQUET_2_0) in the job configuration. Data written with version 2.0 of the Parquet writer might
not be consumable by Impala, due to its use of the RLE_DICTIONARY encoding. Use the default version of the Parquet
writer and refrain from overriding the default writer version by setting the parquet.writer.version property or
via WriterVersion.PARQUET_2_0 in the Parquet API.
To examine the internal structure and data of Parquet files, you can use the parquet-tools command that comes
with CDH. Make sure this command is in your $PATH. (Typically, it is symlinked from /usr/bin; sometimes, depending
on your installation setup, you might need to locate it under a CDH-specific bin directory.) The arguments to this
command let you perform operations such as:
• cat: Print a file's contents to standard out. In CDH 5.5 and higher, you can use the -j option to output JSON.
• head: Print the first few records of a file to standard output.
• schema: Print the Parquet schema for the file.
• meta: Print the file footer metadata, including key-value properties (like Avro schema), compression ratios,
encodings, compression used, and row group information.
• dump: Print all data and metadata.
Use parquet-tools -h to see usage information for all the arguments. Here are some examples showing
parquet-tools usage:
$ # Be careful doing this for a big file! Use parquet-tools head to be safe.
$ parquet-tools cat sample.parq
year = 1992
month = 1
day = 2
dayofweek = 4
dep_time = 748
crs_dep_time = 750
arr_time = 851
crs_arr_time = 846
carrier = US
flight_num = 53
actual_elapsed_time = 63
crs_elapsed_time = 56
arrdelay = 5
depdelay = -2
origin = CMH
dest = IND
distance = 182
cancelled = 0
diverted = 0
year = 1992
month = 1
day = 3
...
$ parquet-tools head -n 2 sample.parq
year = 1992
month = 1
day = 2
dayofweek = 4
dep_time = 748
crs_dep_time = 750
arr_time = 851
crs_arr_time = 846
carrier = US
flight_num = 53
actual_elapsed_time = 63
crs_elapsed_time = 56
arrdelay = 5
depdelay = -2
origin = CMH
dest = IND
distance = 182
cancelled = 0
diverted = 0
year = 1992
month = 1
day = 3
...
$ parquet-tools schema sample.parq
message schema {
optional int32 year;
optional int32 month;
optional int32 day;
optional int32 dayofweek;
optional int32 dep_time;
optional int32 crs_dep_time;
optional int32 arr_time;
optional int32 crs_arr_time;
optional binary carrier;
optional int32 flight_num;
...
$ parquet-tools meta sample.parq
creator: impala version 2.2.0-cdh5.4.3 (build
517bb0f71cd604a00369254ac6d88394df83e0f6)
file schema: schema
-------------------------------------------------------------------
year: OPTIONAL INT32 R:0 D:1
month: OPTIONAL INT32 R:0 D:1
day: OPTIONAL INT32 R:0 D:1
dayofweek: OPTIONAL INT32 R:0 D:1
dep_time: OPTIONAL INT32 R:0 D:1
crs_dep_time: OPTIONAL INT32 R:0 D:1
arr_time: OPTIONAL INT32 R:0 D:1
crs_arr_time: OPTIONAL INT32 R:0 D:1
carrier: OPTIONAL BINARY R:0 D:1
flight_num: OPTIONAL INT32 R:0 D:1
...
row group 1: RC:20636601 TS:265103674
-------------------------------------------------------------------
year: INT32 SNAPPY DO:4 FPO:35 SZ:10103/49723/4.92 VC:20636601
ENC:PLAIN_DICTIONARY,RLE,PLAIN
month: INT32 SNAPPY DO:10147 FPO:10210 SZ:11380/35732/3.14 VC:20636601
ENC:PLAIN_DICTIONARY,RLE,PLAIN
day: INT32 SNAPPY DO:21572 FPO:21714 SZ:3071658/9868452/3.21 VC:20636601
ENC:PLAIN_DICTIONARY,RLE,PLAIN
dayofweek: INT32 SNAPPY DO:3093276 FPO:3093319 SZ:2274375/5941876/2.61
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
dep_time: INT32 SNAPPY DO:5367705 FPO:5373967 SZ:28281281/28573175/1.01
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
crs_dep_time: INT32 SNAPPY DO:33649039 FPO:33654262 SZ:10220839/11574964/1.13
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
arr_time: INT32 SNAPPY DO:43869935 FPO:43876489 SZ:28562410/28797767/1.01
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
crs_arr_time: INT32 SNAPPY DO:72432398 FPO:72438151 SZ:10908972/12164626/1.12
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
carrier: BINARY SNAPPY DO:83341427 FPO:83341558 SZ:114916/128611/1.12
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
flight_num: INT32 SNAPPY DO:83456393 FPO:83488603 SZ:10216514/11474301/1.12
VC:20636601 ENC:PLAIN_DICTIONARY,RLE,PLAIN
...
How Parquet Data Files Are Organized
Although Parquet is a column-oriented file format, do not expect to find one data file for each column. Parquet keeps
all the data for a row within the same data file, to ensure that the columns for a row are always available on the same
node for processing. What Parquet does is to set a large HDFS block size and a matching maximum data file size, to
ensure that I/O and network transfer requests apply to large batches of data.
Within that data file, the data for a set of rows is rearranged so that all the values from the first column are organized
in one contiguous block, then all the values from the second column, and so on. Putting the values from the same
column next to each other lets Impala use effective compression techniques on the values in that column.
Note:
Impala INSERT statements write Parquet data files using an HDFS block size that matches the data
file size, to ensure that each data file is represented by a single HDFS block, and the entire file can be
processed on a single node without requiring any remote reads.
If you create Parquet data files outside of Impala, such as through a MapReduce or Pig job, ensure
that the HDFS block size is greater than or equal to the file size, so that the “one file per block”
relationship is maintained. Set the dfs.block.size or the dfs.blocksize property large enough
that each file fits within a single HDFS block, even if that size is larger than the normal HDFS block size.
If the block size is reset to a lower value during a file copy, you will see lower performance for queries
involving those files, and the PROFILE statement will reveal that some I/O is being done suboptimally,
through remote reads. See Example of Copying Parquet Data Files on page 648 for an example showing
how to preserve the block size when copying Parquet data files.
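As a minimal sketch, assuming your Hadoop version accepts the generic -D option with hdfs dfs, you can set the block
size explicitly when copying a Parquet data file into HDFS from outside Impala (the file and destination paths are
hypothetical; 268435456 bytes is 256 MB):

$ # Write the file with a 256 MB HDFS block size so it stays within a single block.
$ hdfs dfs -D dfs.blocksize=268435456 -put sales_data.parq /user/hive/warehouse/sales_parquet/

For copies within HDFS, hadoop distcp -pb preserves the original block size, as described above.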
When Impala retrieves or tests the data for a particular column, it opens all the data files, but only reads the portion
of each file containing the values for that column. The column values are stored consecutively, minimizing the I/O
required to process the values within a single column. If other columns are named in the SELECT list or WHERE clauses,
the data for all columns in the same row is available within that same data file.
If an INSERT statement brings in less than one Parquet block's worth of data, the resulting data file is smaller than
ideal. Thus, if you do split up an ETL job to use multiple INSERT statements, try to keep the volume of data for each
INSERT statement to approximately 256 MB, or a multiple of 256 MB.
RLE and Dictionary Encoding for Parquet Data Files
Parquet uses some automatic compression techniques, such as run-length encoding (RLE) and dictionary encoding,
based on analysis of the actual data values. Once the data values are encoded in a compact form, the encoded data
can optionally be further compressed using a compression algorithm. Parquet data files created by Impala can use
Snappy, GZip, or no compression; the Parquet spec also allows LZO compression, but currently Impala does not support
LZO-compressed Parquet files.
RLE and dictionary encoding are compression techniques that Impala applies automatically to groups of Parquet data
values, in addition to any Snappy or GZip compression applied to the entire data files. These automatic optimizations
can save you time and planning that are normally needed for a traditional data warehouse. For example, dictionary
encoding reduces the need to create numeric IDs as abbreviations for longer string values.
Run-length encoding condenses sequences of repeated data values. For example, if many consecutive rows all contain
the same value for a country code, those repeating values can be represented by the value followed by a count of how
many times it appears consecutively.
Dictionary encoding takes the different values present in a column, and represents each one in compact 2-byte form
rather than the original value, which could be several bytes. (Additional compression is applied to the compacted
values, for extra space savings.) This type of encoding applies when the number of different values for a column is less
than 2**16 (65,536). It does not apply to columns of data type BOOLEAN, which are already very short. TIMESTAMP
columns sometimes have a unique value for each row, in which case they can quickly exceed the 2**16 limit on distinct
values. The 2**16 limit on different values within a column is reset for each data file, so if several different data files
each contained 10,000 different city names, the city name column in each data file could still be condensed using
dictionary encoding.
Compacting Data Files for Parquet Tables
If you reuse existing table structures or ETL processes for Parquet tables, you might encounter a “many small files”
situation, which is suboptimal for query efficiency. For example, statements like these might produce inefficiently
organized data files:
-- In an N-node cluster, each node produces a data file
-- for the INSERT operation. If you have less than
-- N GB of data to copy, some files are likely to be
-- much smaller than the default Parquet block size.
insert into parquet_table select * from text_table;
-- Even if this operation involves an overall large amount of data,
-- when split up by year/month/day, each partition might only
-- receive a small amount of data. Then the data files for
-- the partition might be divided between the N nodes in the cluster.
-- A multi-gigabyte copy operation might produce files of only
-- a few MB each.
insert into partitioned_parquet_table partition (year, month, day)
select year, month, day, url, referer, user_agent, http_code, response_time
from web_stats;
Here are techniques to help you produce large data files in Parquet INSERT operations, and to compact existing
too-small data files:
• When inserting into a partitioned Parquet table, use statically partitioned INSERT statements where the partition
key values are specified as constant values. Ideally, use a separate INSERT statement for each partition.
• You might set the NUM_NODES option to 1 briefly, during INSERT or CREATE TABLE AS SELECT statements.
Normally, those statements produce one or more data files per data node. If the write operation involves small
amounts of data, a Parquet table, and/or a partitioned table, the default behavior could produce many small files
when intuitively you might expect only a single output file. SET NUM_NODES=1 turns off the “distributed” aspect
of the write operation, making it more likely to produce only one or a few data files. (See the sketch after this list.)
• Be prepared to reduce the number of partition key columns from what you are used to with traditional analytic
database systems.
• Do not expect Impala-written Parquet files to fill up the entire Parquet block size. Impala estimates on the
conservative side when figuring out how much data to write to each Parquet file. Typically, the amount of uncompressed
data in memory is substantially reduced on disk by the compression and encoding techniques in the Parquet file
format. The final data file size varies depending on the compressibility of the data. Therefore, it is not an indication
of a problem if 256 MB of text data is turned into 2 Parquet data files, each less than 256 MB.
• If you accidentally end up with a table with many small data files, consider using one or more of the preceding
techniques and copying all the data into a new Parquet table, either through CREATE TABLE AS SELECT or
INSERT ... SELECT statements.
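For example, here is a minimal sketch combining the NUM_NODES technique from the list above with copying the data
into a new table (the table names match the view example that follows):

-- Temporarily restrict the write to a single node so the copy produces
-- one or a few large Parquet files rather than many small ones.
set num_nodes=1;
create table table_with_few_big_files stored as parquet
  as select * from table_with_many_small_files;
set num_nodes=0;  -- Restore the default distributed write behavior.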
To avoid rewriting queries to change table names, you can adopt a convention of always running important queries
against a view. Changing the view definition immediately switches any subsequent queries to use the new underlying
tables:
create view production_table as select * from table_with_many_small_files;
-- CTAS or INSERT...SELECT all the data into a more efficient layout...
alter view production_table as select * from table_with_few_big_files;
select * from production_table where c1 = 100 and c2 < 50 and ...;
Schema Evolution for Parquet Tables
Schema evolution refers to using the statement ALTER TABLE ... REPLACE COLUMNS to change the names, data
type, or number of columns in a table. You can perform schema evolution for Parquet tables as follows:
• The Impala ALTER TABLE statement never changes any data files in the tables. From the Impala side, schema
evolution involves interpreting the same data files in terms of a new table definition. Some types of schema
changes make sense and are represented correctly. Other types of changes cannot be represented in a sensible
way, and produce special result values or conversion errors during queries.
• The INSERT statement always creates data using the latest table definition. You might end up with data files with
different numbers of columns or internal data representations if you do a sequence of INSERT and ALTER TABLE
... REPLACE COLUMNS statements.
• If you use ALTER TABLE ... REPLACE COLUMNS to define additional columns at the end, when the original
data files are used in a query, these final columns are considered to be all NULL values.
• If you use ALTER TABLE ... REPLACE COLUMNS to define fewer columns than before, when the original data
files are used in a query, the unused columns still present in the data file are ignored.
• Parquet represents the TINYINT, SMALLINT, and INT types the same internally, all stored in 32-bit integers.
– That means it is easy to promote a TINYINT column to SMALLINT or INT, or a SMALLINT column to INT.
The numbers are represented exactly the same in the data file, and the columns being promoted would not
contain any out-of-range values. (See the sketch after this list.)
– If you change any of these column types to a smaller type, any values that are out-of-range for the new type
are returned incorrectly, typically as negative numbers.
– You cannot change a TINYINT, SMALLINT, or INT column to BIGINT, or the other way around. Although
the ALTER TABLE succeeds, any attempt to query those columns results in conversion errors.
– Any other type conversion for columns produces a conversion error during queries. For example, INT to
STRING, FLOAT to DOUBLE, TIMESTAMP to STRING, DECIMAL(9,0) to DECIMAL(5,2), and so on.
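A minimal sketch of the promotion case described in the list above, using a hypothetical table:

-- Original definition; existing Parquet data files store CODE as a 32-bit integer.
create table parquet_promote_demo (id int, code tinyint) stored as parquet;
-- Widen CODE from TINYINT to INT. The existing data files are reinterpreted, not rewritten.
alter table parquet_promote_demo replace columns (id int, code int);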
You might find that you have Parquet files where the columns do not line up in the same order as in your Impala table.
For example, you might have a Parquet file that was part of a table with columns C1,C2,C3,C4, and now you want
to reuse the same Parquet file in a table with columns C4,C2. By default, Impala expects the columns in the data file
to appear in the same order as the columns defined for the table, making it impractical to do some kinds of file reuse
or schema evolution. In CDH 5.8 / Impala 2.6 and higher, the query option
PARQUET_FALLBACK_SCHEMA_RESOLUTION=name lets Impala resolve columns by name, and therefore handle
out-of-order or extra columns in the data file. For example:
create database schema_evolution;
use schema_evolution;
create table t1 (c1 int, c2 boolean, c3 string, c4 timestamp)
stored as parquet;
insert into t1 values
(1, true, 'yes', now()),
(2, false, 'no', now() + interval 1 day);
select * from t1;
+----+-------+-----+-------------------------------+
| c1 | c2 | c3 | c4 |
+----+-------+-----+-------------------------------+
| 1 | true | yes | 2016-06-28 14:53:26.554369000 |
| 2 | false | no | 2016-06-29 14:53:26.554369000 |
+----+-------+-----+-------------------------------+
desc formatted t1;
...
| Location: | /user/hive/warehouse/schema_evolution.db/t1 |
...
-- Make T2 have the same data file as in T1, including 2
-- unused columns and column order different than T2 expects.
-- (T2's definition is inferred from the queries below: only columns C4 and C2, in that order.)
create table t2 (c4 timestamp, c2 boolean) stored as parquet;
load data inpath '/user/hive/warehouse/schema_evolution.db/t1'
into table t2;
+----------------------------------------------------------+
| summary |
+----------------------------------------------------------+
| Loaded 1 file(s). Total files in destination location: 1 |
+----------------------------------------------------------+
-- 'position' is the default setting.
-- Impala cannot read the Parquet file if the column order does not match.
set PARQUET_FALLBACK_SCHEMA_RESOLUTION=position;
PARQUET_FALLBACK_SCHEMA_RESOLUTION set to position
select * from t2;
WARNINGS:
File 'schema_evolution.db/t2/45331705_data.0.parq'
has an incompatible Parquet schema for column 'schema_evolution.t2.c4'.
Column type: TIMESTAMP, Parquet schema: optional int32 c1 [i:0 d:1 r:0]
File 'schema_evolution.db/t2/45331705_data.0.parq'
has an incompatible Parquet schema for column 'schema_evolution.t2.c4'.
Column type: TIMESTAMP, Parquet schema: optional int32 c1 [i:0 d:1 r:0]
-- With the 'name' setting, Impala can read the Parquet data files
-- despite mismatching column order.
set PARQUET_FALLBACK_SCHEMA_RESOLUTION=name;
PARQUET_FALLBACK_SCHEMA_RESOLUTION set to name
select * from t2;
+-------------------------------+-------+
| c4 | c2 |
+-------------------------------+-------+
| 2016-06-28 14:53:26.554369000 | true |
| 2016-06-29 14:53:26.554369000 | false |
+-------------------------------+-------+
See PARQUET_FALLBACK_SCHEMA_RESOLUTION Query Option (CDH 5.8 or higher only) on page 353 for more details.
Data Type Considerations for Parquet Tables
The Parquet format defines a set of data types whose names differ from the names of the corresponding Impala data
types. If you are preparing Parquet files using other Hadoop components such as Pig or MapReduce, you might need
to work with the type names defined by Parquet. The following tables list the Parquet-defined types and the equivalent
types in Impala.
Primitive types:

Parquet type -> Impala type
BINARY -> STRING
BOOLEAN -> BOOLEAN
DOUBLE -> DOUBLE
FLOAT -> FLOAT
INT32 -> INT
INT64 -> BIGINT
INT96 -> TIMESTAMP

Logical types
Parquet uses type annotations to extend the types that it can store, by specifying how the primitive types should be
interpreted.
Parquet primitive type and annotation -> Impala type
BINARY annotated with the UTF8 OriginalType -> STRING
BINARY annotated with the STRING LogicalType -> STRING
BINARY annotated with the ENUM OriginalType -> STRING
BINARY annotated with the DECIMAL OriginalType -> DECIMAL
INT64 annotated with the TIMESTAMP_MILLIS OriginalType -> TIMESTAMP (in CDH 6.2 or higher) or BIGINT (for backward compatibility)
INT64 annotated with the TIMESTAMP_MICROS OriginalType -> TIMESTAMP (in CDH 6.2 or higher) or BIGINT (for backward compatibility)
INT64 annotated with the TIMESTAMP LogicalType -> TIMESTAMP (in CDH 6.2 or higher) or BIGINT (for backward compatibility)
Complex types:
For the complex types (ARRAY, MAP, and STRUCT) available in CDH 5.5 / Impala 2.3 and higher, Impala only supports
queries against those types in Parquet tables.
Using the ORC File Format with Impala Tables
Impala can read ORC data files as an experimental feature since Impala 3.1.
To enable the feature, set --enable_orc_scanner to true when starting the cluster.
Note: Impala's ORC support is not yet production quality. Parquet remains the preferred file format
for use with Impala and offers significantly better performance and a more complete set of features.
Table 3: ORC Format Support in Impala

File Type: ORC
Format: Structured
Compression Codecs: gzip, Snappy, LZO, LZ4; currently gzip by default
Impala Can CREATE? The ORC support is an experimental feature since CDH 6.1 / Impala 3.1 & Impala 2.12. To disable it, set --enable_orc_scanner to false when starting the cluster.
Impala Can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
Creating ORC Tables and Loading Data
If you do not have an existing data file to use, begin by creating one in the appropriate format.
To create an ORC table:
In the impala-shell interpreter, issue a command similar to:
CREATE TABLE orc_table (column_specs) STORED AS ORC;
Because Impala can query some kinds of tables that it cannot currently write to, after creating tables of certain file
formats, you might use the Hive shell to load the data. See How Impala Works with Hadoop File Formats on page 634
for details. After loading data into a table through Hive or other mechanism outside of Impala, issue a REFRESH
table_name statement the next time you connect to the Impala node, before querying the table, to make Impala
recognize the new data.
For example, here is how you might create some ORC tables in Impala (by specifying the columns explicitly, or cloning
the structure of another table), load data through Hive, and query them through Impala:
$ impala-shell -i localhost
[localhost:21000] default> CREATE TABLE orc_table (x INT) STORED AS ORC;
[localhost:21000] default> CREATE TABLE orc_clone LIKE some_other_table STORED AS ORC;
[localhost:21000] default> quit;
$ hive
hive> INSERT INTO TABLE orc_table SELECT x FROM some_other_table;
3 Rows loaded to orc_table
Time taken: 4.169 seconds
hive> quit;
$ impala-shell -i localhost
[localhost:21000] default> SELECT * FROM orc_table;
Fetched 0 row(s) in 0.11s
[localhost:21000] default> -- Make Impala recognize the data loaded through Hive;
[localhost:21000] default> REFRESH orc_table;
[localhost:21000] default> SELECT * FROM orc_table;
+---+
| x |
+---+
| 1 |
| 2 |
| 3 |
+---+
Fetched 3 row(s) in 0.11s
Enabling Compression for ORC Tables
By default, ORC tables are compressed with zlib (shown as Deflate in Impala). You might want to use Snappy or LZO
compression on existing tables instead, for a different balance between compression ratio and decompression speed.
In Hive 1.1.0, the supported compression codecs for ORC tables are NONE, ZLIB, SNAPPY, and LZO. For example, to enable
Snappy compression, you would specify the following additional settings when loading data through the Hive shell:
hive> SET hive.exec.compress.output=true;
hive> SET orc.compress=SNAPPY;
hive> INSERT OVERWRITE TABLE new_table SELECT * FROM old_table;
If you are converting partitioned tables, you must complete additional steps. In such a case, specify additional settings
similar to the following:
hive> CREATE TABLE new_table (your_cols) PARTITIONED BY (partition_cols) STORED AS
new_format;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE new_table PARTITION(comma_separated_partition_cols) SELECT
* FROM old_table;
Remember that Hive does not require that you specify a source format for it. Consider the case of converting a table
with two partition columns called year and month to a Snappy compressed ORC table. Combining the components
outlined previously to complete this table conversion, you would specify settings similar to the following:
hive> CREATE TABLE tbl_orc (int_col INT, string_col STRING) STORED AS ORC;
hive> SET hive.exec.compress.output=true;
hive> SET orc.compress=SNAPPY;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_orc SELECT * FROM tbl;
To complete a similar process for a table that includes partitions, you would specify settings similar to the following:
hive> CREATE TABLE tbl_orc (int_col INT, string_col STRING) PARTITIONED BY (year INT)
STORED AS ORC;
hive> SET hive.exec.compress.output=true;
hive> SET orc.compress=SNAPPY;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_orc PARTITION(year) SELECT * FROM tbl;
Note:
The compression type is specified in the following command:
SET orc.compress=SNAPPY;
You could elect to specify alternative codecs such as NONE, GZIP, or LZO here.
Query Performance for Impala ORC Tables
In general, expect query performance with ORC tables to be faster than with tables using text data, but slower than
with Parquet tables, because Impala has many optimizations that apply specifically to Parquet. See Using the Parquet File Format with Impala
Tables on page 643 for information about using the Parquet file format for high-performance analytic queries.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
Data Type Considerations for ORC Tables
The ORC format defines a set of data types whose names differ from the names of the corresponding Impala data
types. If you are preparing ORC files using other Hadoop components such as Pig or MapReduce, you might need to
work with the type names defined by ORC. The following figure lists the ORC-defined types and the equivalent types
in Impala.
Primitive types:
BINARY -> STRING
BOOLEAN -> BOOLEAN
DOUBLE -> DOUBLE
FLOAT -> FLOAT
TINYINT -> TINYINT
SMALLINT -> SMALLINT
INT -> INT
BIGINT -> BIGINT
TIMESTAMP -> TIMESTAMP
DATE (not supported)
Complex types:
Complex types are currently not supported on ORC. However, queries materializing only scalar type columns are
allowed:
$ hive
hive> CREATE TABLE orc_nested_table (id INT, a ARRAY<INT>) STORED AS ORC;
hive> INSERT INTO TABLE orc_nested_table SELECT 1, ARRAY(1,2,3);
OK
Time taken: 2.629 seconds
hive> quit;
$ impala-shell -i localhost
[localhost:21000] default> INVALIDATE METADATA orc_nested_table;
[localhost:21000] default> SELECT 1 FROM orc_nested_table t, t.a;
ERROR: NotImplementedException: Scan of table 't' in format 'ORC' is not supported
because the table has a column 'a' with a complex type 'ARRAY'.
Complex types are supported for these file formats: PARQUET.
[localhost:21000] default> SELECT COUNT(*) FROM orc_nested_table;
+----------+
| count(*) |
+----------+
| 1 |
+----------+
Fetched 1 row(s) in 0.12s
[localhost:21000] default> SELECT id FROM orc_nested_table;
+----+
| id |
+----+
| 1 |
+----+
Fetched 1 row(s) in 0.12s
Using the Avro File Format with Impala Tables
Impala supports using tables whose data files use the Avro file format. Impala can query Avro tables. In Impala 1.4.0
and higher, Impala can create Avro tables, but cannot insert data into them. For insert operations, use Hive, then switch
back to Impala to run queries.
Table 4: Avro Format Support in Impala

File Type: Avro
Format: Structured
Compression Codecs: Snappy, gzip, deflate
Impala Can CREATE? Yes, in Impala 1.4.0 and higher. In lower versions, create the table using Hive.
Impala Can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
Creating Avro Tables
To create a new table using the Avro file format, issue the CREATE TABLE statement through Impala with the STORED
AS AVRO clause, or through Hive. If you create the table through Impala, you must include column definitions that
match the fields specified in the Avro schema. With Hive, you can omit the columns and just specify the Avro schema.
In CDH 5.5 / Impala 2.3 and higher, the CREATE TABLE for Avro tables can include SQL-style column definitions rather
than specifying Avro notation through the TBLPROPERTIES clause. Impala issues warning messages if there are any
mismatches between the types specified in the SQL column definitions and the underlying types; for example, any
TINYINT or SMALLINT columns are treated as INT in the underlying Avro files, and therefore are displayed as INT in
any DESCRIBE or SHOW CREATE TABLE output.
Note:
Currently, Avro tables cannot contain TIMESTAMP columns. If you need to store date and time values
in Avro tables, as a workaround you can use a STRING representation of the values, convert the values
to BIGINT with the UNIX_TIMESTAMP() function, or create separate numeric columns for individual
date and time fields using the EXTRACT() function.
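As a minimal sketch of these workarounds (the table and column names are hypothetical):

-- Store the date and time value as a STRING in the Avro table...
CREATE TABLE avro_events (event_id INT, event_time STRING) STORED AS AVRO;

-- ...then derive numeric representations at query time
-- (assuming the default 'yyyy-MM-dd HH:mm:ss' string format).
SELECT event_id,
       UNIX_TIMESTAMP(event_time) AS event_epoch,
       EXTRACT(YEAR FROM CAST(event_time AS TIMESTAMP)) AS event_year
FROM avro_events;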
The following examples demonstrate creating an Avro table in Impala, using either an inline column specification or
one taken from a JSON file stored in HDFS:
[localhost:21000] > CREATE TABLE avro_only_sql_columns
> (
> id INT,
> bool_col BOOLEAN,
> tinyint_col TINYINT, /* Gets promoted to INT */
> smallint_col SMALLINT, /* Gets promoted to INT */
> int_col INT,
> bigint_col BIGINT,
> float_col FLOAT,
> double_col DOUBLE,
> date_string_col STRING,
> string_col STRING
> )
> STORED AS AVRO;
[localhost:21000] > CREATE TABLE impala_avro_table
> (bool_col BOOLEAN, int_col INT, long_col BIGINT, float_col FLOAT,
double_col DOUBLE, string_col STRING, nullable_int INT)
> STORED AS AVRO
> TBLPROPERTIES ('avro.schema.literal'='{
> "name": "my_record",
> "type": "record",
> "fields": [
> {"name":"bool_col", "type":"boolean"},
> {"name":"int_col", "type":"int"},
> {"name":"long_col", "type":"long"},
> {"name":"float_col", "type":"float"},
> {"name":"double_col", "type":"double"},
> {"name":"string_col", "type":"string"},
> {"name": "nullable_int", "type": ["null", "int"]}]}');
[localhost:21000] > CREATE TABLE avro_examples_of_all_types (
> id INT,
> bool_col BOOLEAN,
> tinyint_col TINYINT,
> smallint_col SMALLINT,
> int_col INT,
> bigint_col BIGINT,
> float_col FLOAT,
> double_col DOUBLE,
> date_string_col STRING,
> string_col STRING
> )
> STORED AS AVRO
> TBLPROPERTIES
('avro.schema.url'='hdfs://localhost:8020/avro_schemas/alltypes.json');
The following example demonstrates creating an Avro table in Hive:
hive> CREATE TABLE hive_avro_table
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
> STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
> OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
> TBLPROPERTIES ('avro.schema.literal'='{
> "name": "my_record",
> "type": "record",
> "fields": [
> {"name":"bool_col", "type":"boolean"},
> {"name":"int_col", "type":"int"},
> {"name":"long_col", "type":"long"},
> {"name":"float_col", "type":"float"},
> {"name":"double_col", "type":"double"},
> {"name":"string_col", "type":"string"},
> {"name": "nullable_int", "type": ["null", "int"]}]}');
Each field of the record becomes a column of the table. Note that any other information, such as the record name, is
ignored.
Note: For nullable Avro columns, make sure to put the "null" entry before the actual type name.
In Impala, all columns are nullable; Impala currently does not have a NOT NULL clause. Any non-nullable
property is only enforced on the Avro side.
Most column types map directly from Avro to Impala under the same names. These are the exceptions and special
cases to consider:
• The DECIMAL type is defined in Avro as a BYTES type with the logicalType property set to "decimal" and a
specified precision and scale.
• The Avro long type maps to BIGINT in Impala.
If you create the table through Hive, switch back to impala-shell and issue an INVALIDATE METADATA table_name
statement. Then you can run queries for that table through impala-shell.
In rare instances, a mismatch could occur between the Avro schema and the column definitions in the metastore
database. In CDH 5.5 / Impala 2.3 and higher, Impala checks for such inconsistencies during a CREATE TABLE statement
and each time it loads the metadata for a table (for example, after INVALIDATE METADATA). Impala uses the following
rules to determine how to treat mismatching columns, a process known as schema reconciliation:
• If there is a mismatch in the number of columns, Impala uses the column definitions from the Avro schema.
• If there is a mismatch in column name or type, Impala uses the column definition from the Avro schema. Because
a CHAR or VARCHAR column in Impala maps to an Avro STRING, this case is not considered a mismatch and the
column is preserved as CHAR or VARCHAR in the reconciled schema. Prior to CDH 5.9 / Impala 2.7 the column name
and comment for such CHAR and VARCHAR columns was also taken from the SQL column definition. In CDH 5.9 /
Impala 2.7 and higher, the column name and comment from the Avro schema file take precedence for such
columns, and only the CHAR or VARCHAR type is preserved from the SQL column definition.
• An Impala TIMESTAMP column definition maps to an Avro STRING and is presented as a STRING in the reconciled
schema, because Avro has no binary TIMESTAMP representation. As a result, no Avro table can have a TIMESTAMP
column; this restriction is the same as in earlier Impala releases.
Complex type considerations: Although you can create tables in this file format using the complex types (ARRAY,
STRUCT, and MAP) available in CDH 5.5 / Impala 2.3 and higher, currently, Impala can query these types only in Parquet
tables. The one exception to the preceding rule is COUNT(*) queries on RCFile tables that include complex types. Such
queries are allowed in CDH 5.8 / Impala 2.6 and higher.
Using a Hive-Created Avro Table in Impala
If you have an Avro table created through Hive, you can use it in Impala as long as it contains only Impala-compatible
data types. It cannot contain:
• Complex types: array, map, record, struct, union other than [supported_type,null] or
[null,supported_type]
• The Avro-specific types enum, bytes, and fixed
• Any scalar type other than those listed in Data Types on page 101
Because Impala and Hive share the same metastore database, Impala can directly access the table definitions and data
for tables that were created in Hive.
If you create an Avro table in Hive, issue an INVALIDATE METADATA the next time you connect to Impala through
impala-shell. This is a one-time operation to make Impala aware of the new table. You can issue the statement
while connected to any Impala node, and the catalog service broadcasts the change to all other Impala nodes.
If you load new data into an Avro table through Hive, either through a Hive LOAD DATA or INSERT statement, or by
manually copying or moving files into the data directory for the table, issue a REFRESH table_name statement the
next time you connect to Impala through impala-shell. You can issue the statement while connected to any Impala
node, and the catalog service broadcasts the change to all other Impala nodes. If you issue the LOAD DATA statement
through Impala, you do not need a REFRESH afterward.
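For example, using the hive_avro_table created earlier in this section:

-- One-time step after the table is first created through Hive:
INVALIDATE METADATA hive_avro_table;
-- After loading more data files into the table through Hive or by copying files manually:
REFRESH hive_avro_table;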
Impala only supports fields of type boolean, int, long, float, double, and string, or unions of these types with
null; for example, ["string", "null"]. Unions with null essentially create a nullable type.
Specifying the Avro Schema through JSON
While you can embed a schema directly in your CREATE TABLE statement, as shown above, column width restrictions
in the Hive metastore limit the length of schema you can specify. If you encounter problems with long schema literals,
try storing your schema as a JSON file in HDFS instead. Specify your schema in HDFS using table properties similar to
the following:
tblproperties ('avro.schema.url'='hdfs://your-name-node:port/path/to/schema.json');
Loading Data into an Avro Table
Currently, Impala cannot write Avro data files. Therefore, an Avro table cannot be used as the destination of an Impala
INSERT statement or CREATE TABLE AS SELECT.
To copy data from another table, issue any INSERT statements through Hive. For information about loading data into
Avro tables through Hive, see the Avro page on the Hive wiki.
If you already have data files in Avro format, you can also issue LOAD DATA in either Impala or Hive. Impala can move
existing Avro data files into an Avro table; it just cannot create new Avro data files.
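For example, a minimal sketch that moves existing Avro data files into the avro_only_sql_columns table created
earlier (the HDFS path is hypothetical):

LOAD DATA INPATH '/user/etl/avro_staging' INTO TABLE avro_only_sql_columns;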
Enabling Compression for Avro Tables
To enable compression for Avro tables, specify settings in the Hive shell to enable compression and to specify a codec,
then issue a CREATE TABLE statement as in the preceding examples. Impala supports the snappy and deflate
codecs for Avro tables.
For example:
hive> set hive.exec.compress.output=true;
hive> set avro.output.codec=snappy;
How Impala Handles Avro Schema Evolution
Starting in Impala 1.1, Impala can deal with Avro data files that employ schema evolution, where different data files
within the same table use slightly different type definitions. (You would perform the schema evolution operation by
issuing an ALTER TABLE statement in the Hive shell.) The old and new types for any changed columns must be
compatible, for example a column might start as an int and later change to a bigint or float.
As with any other tables where the definitions are changed or data is added outside of the current impalad node,
ensure that Impala loads the latest metadata for the table if the Avro schema is modified through Hive. Issue a REFRESH
table_name or INVALIDATE METADATA table_name statement. REFRESH reloads the metadata immediately,
while INVALIDATE METADATA reloads the metadata the next time the table is accessed.
When Avro data files or columns are not consulted during a query, Impala does not check for consistency. Thus, if you
issue SELECT c1, c2 FROM t1, Impala does not return any error if the column c3 changed in an incompatible way.
If a query retrieves data from some partitions but not others, Impala does not check the data files for the unused
partitions.
In the Hive DDL statements, you can specify an avro.schema.literal table property (if the schema definition is
short) or an avro.schema.url property (if the schema definition is long, or to allow convenient editing for the
definition).
For example, running the following SQL code in the Hive shell creates a table using the Avro file format and puts some
sample data into it:
CREATE TABLE avro_table (a string, b string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES (
'avro.schema.literal'='{
"type": "record",
"name": "my_record",
"fields": [
{"name": "a", "type": "int"},
{"name": "b", "type": "string"}
]}');
INSERT OVERWRITE TABLE avro_table SELECT 1, "avro" FROM functional.alltypes LIMIT 1;
Once the Avro table is created and contains data, you can query it through the impala-shell command:
[localhost:21000] > select * from avro_table;
+---+------+
| a | b |
+---+------+
| 1 | avro |
+---+------+
Now in the Hive shell, you change the type of a column and add a new column with a default value:
-- Promote column "a" from INT to FLOAT (no need to update Avro schema)
ALTER TABLE avro_table CHANGE A A FLOAT;
-- Add column "c" with default
ALTER TABLE avro_table ADD COLUMNS (c int);
ALTER TABLE avro_table SET TBLPROPERTIES (
'avro.schema.literal'='{
"type": "record",
"name": "my_record",
"fields": [
{"name": "a", "type": "int"},
{"name": "b", "type": "string"},
{"name": "c", "type": "int", "default": 10}
]}');
Once again in impala-shell, you can query the Avro table based on its latest schema definition. Because the table
metadata was changed outside of Impala, you issue a REFRESH statement first so that Impala has up-to-date metadata
for the table.
[localhost:21000] > refresh avro_table;
[localhost:21000] > select * from avro_table;
+---+------+----+
| a | b | c |
+---+------+----+
| 1 | avro | 10 |
+---+------+----+
Data Type Considerations for Avro Tables
The Avro format defines a set of data types whose names differ from the names of the corresponding Impala data
types. If you are preparing Avro files using other Hadoop components such as Pig or MapReduce, you might need to
work with the type names defined by Avro. The following figure lists the Avro-defined types and the equivalent types
in Impala.
Primitive Types (Avro -> Impala)
--------------------------------
STRING -> STRING
STRING -> CHAR
STRING -> VARCHAR
INT -> INT
BOOLEAN -> BOOLEAN
LONG -> BIGINT
FLOAT -> FLOAT
DOUBLE -> DOUBLE
Logical Types
-------------
BYTES + logicalType = "decimal" -> DECIMAL
Avro Types with No Impala Equivalent
------------------------------------
RECORD, MAP, ARRAY, UNION, ENUM, FIXED, NULL
Impala Types with No Avro Equivalent
------------------------------------
TIMESTAMP
The Avro specification allows string values up to 2**64 bytes in length. Impala queries for Avro tables use 32-bit integers
to hold string lengths. In CDH 5.7 / Impala 2.5 and higher, Impala truncates CHAR and VARCHAR values in Avro tables
to (2**31)-1 bytes. If a query encounters a STRING value longer than (2**31)-1 bytes in an Avro table, the query fails.
In earlier releases, encountering such long values in an Avro table could cause a crash.
Query Performance for Impala Avro Tables
In general, expect query performance with Avro tables to be faster than with tables using text data, but slower than
with Parquet tables. See Using the Parquet File Format with Impala Tables on page 643 for information about using the
Parquet file format for high-performance analytic queries.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
Using the RCFile File Format with Impala Tables
Impala supports using RCFile data files.
Table 5: RCFile Format Support in Impala

File Type: RCFile
Format: Structured
Compression Codecs: Snappy, gzip, deflate, bzip2
Impala Can CREATE? Yes.
Impala Can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in Hive followed by REFRESH table_name in Impala.
Creating RCFile Tables and Loading Data
If you do not have an existing data file to use, begin by creating one in the appropriate format.
To create an RCFile table:
In the impala-shell interpreter, issue a command similar to:
create table rcfile_table (column_specs) stored as rcfile;
Because Impala can query some kinds of tables that it cannot currently write to, after creating tables of certain file
formats, you might use the Hive shell to load the data. See How Impala Works with Hadoop File Formats on page 634
for details. After loading data into a table through Hive or other mechanism outside of Impala, issue a REFRESH
table_name statement the next time you connect to the Impala node, before querying the table, to make Impala
recognize the new data.
For example, here is how you might create some RCFile tables in Impala (by specifying the columns explicitly, or cloning
the structure of another table), load data through Hive, and query them through Impala:
$ impala-shell -i localhost
[localhost:21000] > create table rcfile_table (x int) stored as rcfile;
[localhost:21000] > create table rcfile_clone like some_other_table stored as rcfile;
[localhost:21000] > quit;
$ hive
hive> insert into table rcfile_table select x from some_other_table;
3 Rows loaded to rcfile_table
Time taken: 19.015 seconds
hive> quit;
$ impala-shell -i localhost
[localhost:21000] > select * from rcfile_table;
Returned 0 row(s) in 0.23s
[localhost:21000] > -- Make Impala recognize the data loaded through Hive;
[localhost:21000] > refresh rcfile_table;
[localhost:21000] > select * from rcfile_table;
+---+
| x |
+---+
| 1 |
| 2 |
| 3 |
+---+
Returned 3 row(s) in 0.23s
Complex type considerations: Although you can create tables in this file format using the complex types (ARRAY,
STRUCT, and MAP) available in CDH 5.5 / Impala 2.3 and higher, currently, Impala can query these types only in Parquet
tables. The one exception to the preceding rule is COUNT(*) queries on RCFile tables that include complex types. Such
queries are allowed in CDH 5.8 / Impala 2.6 and higher.
Enabling Compression for RCFile Tables
You may want to enable compression on existing tables. Enabling compression provides performance gains in most
cases and is supported for RCFile tables. For example, to enable Snappy compression, you would specify the following
additional settings when loading data through the Hive shell:
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> INSERT OVERWRITE TABLE new_table SELECT * FROM old_table;
If you are converting partitioned tables, you must complete additional steps. In such a case, specify additional settings
similar to the following:
hive> CREATE TABLE new_table (your_cols) PARTITIONED BY (partition_cols) STORED AS
new_format;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE new_table PARTITION(comma_separated_partition_cols) SELECT
* FROM old_table;
Remember that Hive does not require that you specify a source format for it. Consider the case of converting a table
with two partition columns called year and month to a Snappy compressed RCFile. Combining the components outlined
previously to complete this table conversion, you would specify settings similar to the following:
hive> CREATE TABLE tbl_rc (int_col INT, string_col STRING) STORED AS RCFILE;
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_rc SELECT * FROM tbl;
To complete a similar process for a table that includes partitions, you would specify settings similar to the following:
hive> CREATE TABLE tbl_rc (int_col INT, string_col STRING) PARTITIONED BY (year INT)
STORED AS RCFILE;
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_rc PARTITION(year) SELECT * FROM tbl;
Note:
The compression type is specified in the following command:
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
You could elect to specify alternative codecs such as GzipCodec here.
Query Performance for Impala RCFile Tables
In general, expect query performance with RCFile tables to be faster than with tables using text data, but slower than
with Parquet tables. See Using the Parquet File Format with Impala Tables on page 643 for information about using the
Parquet file format for high-performance analytic queries.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
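For example, a minimal core-site.xml entry to raise the block size to 128 MB might look like the following sketch; the
value is illustrative, and you would pick it to match the row group size of your Parquet files:
<property>
  <!-- Block size, in bytes, that Impala assumes when parallelizing S3 reads. -->
  <name>fs.s3a.block.size</name>
  <value>134217728</value>
</property>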
Using the SequenceFile File Format with Impala Tables
Impala supports using SequenceFile data files.
Table 6: SequenceFile Format Support in Impala
File Type: SequenceFile
Format: Structured
Compression Codecs: Snappy, gzip, deflate, bzip2
Impala Can CREATE? Yes.
Impala Can INSERT? No. Import data by using LOAD DATA on data files already in the right format, or use INSERT in
Hive followed by REFRESH table_name in Impala.
Creating SequenceFile Tables and Loading Data
If you do not have an existing data file to use, begin by creating one in the appropriate format.
To create a SequenceFile table:
In the impala-shell interpreter, issue a command similar to:
create table sequencefile_table (column_specs) stored as sequencefile;
Because Impala can query some kinds of tables that it cannot currently write to, after creating tables of certain file
formats, you might use the Hive shell to load the data. See How Impala Works with Hadoop File Formats on page 634
for details. After loading data into a table through Hive or other mechanism outside of Impala, issue a REFRESH
table_name statement the next time you connect to the Impala node, before querying the table, to make Impala
recognize the new data.
For example, here is how you might create some SequenceFile tables in Impala (by specifying the columns explicitly,
or cloning the structure of another table), load data through Hive, and query them through Impala:
$ impala-shell -i localhost
[localhost:21000] > create table seqfile_table (x int) stored as sequencefile;
[localhost:21000] > create table seqfile_clone like some_other_table stored as
sequencefile;
[localhost:21000] > quit;
$ hive
hive> insert into table seqfile_table select x from some_other_table;
3 Rows loaded to seqfile_table
Time taken: 19.047 seconds
hive> quit;
$ impala-shell -i localhost
[localhost:21000] > select * from seqfile_table;
Returned 0 row(s) in 0.23s
[localhost:21000] > -- Make Impala recognize the data loaded through Hive;
[localhost:21000] > refresh seqfile_table;
[localhost:21000] > select * from seqfile_table;
+---+
| x |
+---+
| 1 |
| 2 |
| 3 |
+---+
Returned 3 row(s) in 0.23s
Complex type considerations: Although you can create tables in this file format using the complex types (ARRAY,
STRUCT, and MAP) available in CDH 5.5 / Impala 2.3 and higher, currently, Impala can query these types only in Parquet
tables. The one exception to the preceding rule is COUNT(*) queries on SequenceFile tables that include complex types. Such
queries are allowed in CDH 5.8 / Impala 2.6 and higher.
Enabling Compression for SequenceFile Tables
You may want to enable compression on existing tables. Enabling compression provides performance gains in most
cases and is supported for SequenceFile tables. For example, to enable Snappy compression, you would specify the
following additional settings when loading data through the Hive shell:
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> insert overwrite table new_table select * from old_table;
If you are converting partitioned tables, you must complete additional steps. In such a case, specify additional settings
similar to the following:
hive> create table new_table (your_cols) partitioned by (partition_cols) stored as
new_format;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> insert overwrite table new_table partition(comma_separated_partition_cols) select
* from old_table;
Remember that Hive does not require that you specify the source table's format. Consider the case of converting a table
to a Snappy-compressed SequenceFile. Combining the components outlined previously, you would specify settings similar
to the following for an unpartitioned table:
hive> create table TBL_SEQ (int_col int, string_col string) STORED AS SEQUENCEFILE;
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_seq SELECT * FROM tbl;
To complete a similar process for a table that includes partitions, you would specify settings similar to the following:
hive> CREATE TABLE tbl_seq (int_col INT, string_col STRING) PARTITIONED BY (year INT)
STORED AS SEQUENCEFILE;
hive> SET hive.exec.compress.output=true;
hive> SET mapred.max.split.size=256000000;
hive> SET mapred.output.compression.type=BLOCK;
hive> SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
hive> SET hive.exec.dynamic.partition.mode=nonstrict;
hive> SET hive.exec.dynamic.partition=true;
hive> INSERT OVERWRITE TABLE tbl_seq PARTITION(year) SELECT * FROM tbl;
Note:
The compression type is specified in the following command:
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
You could elect to specify alternative codecs such as GzipCodec here.
Query Performance for Impala SequenceFile Tables
In general, expect query performance with SequenceFile tables to be faster than with tables using text data, but slower
than with Parquet tables. See Using the Parquet File Format with Impala Tables on page 643 for information about
using the Parquet file format for high-performance analytic queries.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
Using Impala to Query Kudu Tables
You can use Impala to query tables stored by Apache Kudu. This capability allows convenient access to a storage system
that is tuned for different kinds of workloads than the default with Impala.
By default, Impala tables are stored on HDFS using data files with various file formats. HDFS files are ideal for bulk
loads (append operations) and queries using full-table scans, but do not support in-place updates or deletes. Kudu is
an alternative storage engine used by Impala which can do both in-place updates (for mixed read/write workloads)
and fast scans (for data-warehouse/analytic operations). Using Kudu tables with Impala can simplify the ETL pipeline
by avoiding extra steps to segregate and reorganize newly arrived data.
Certain Impala SQL statements and clauses, such as DELETE, UPDATE, UPSERT, and PRIMARY KEY work only with
Kudu tables. Other statements and clauses, such as LOAD DATA, TRUNCATE TABLE, and INSERT OVERWRITE, are
not applicable to Kudu tables.
Benefits of Using Kudu Tables with Impala
The combination of Kudu and Impala works best for tables where scan performance is important, but data arrives
continuously, in small batches, or needs to be updated without being completely replaced. HDFS-backed tables can
require substantial overhead to replace or reorganize data files as new data arrives. Impala can perform efficient
lookups and scans within Kudu tables, and Impala can also perform update or delete operations efficiently. You can
also use the Kudu Java, C++, and Python APIs to do ingestion or transformation operations outside of Impala, and
Impala can query the current data at any time.
Configuring Impala for Use with Kudu
The -kudu_master_hosts configuration property must be set correctly for the impalad daemon, for CREATE TABLE
... STORED AS KUDU statements to connect to the appropriate Kudu server. Typically, the required value for this
setting is kudu_host:7051. In a high-availability Kudu deployment, specify the names of multiple Kudu hosts separated
by commas.
If the -kudu_master_hosts configuration property is not set, you can still associate the appropriate value for each
table by specifying a TBLPROPERTIES('kudu.master_addresses') clause in the CREATE TABLE statement or
changing the TBLPROPERTIES('kudu.master_addresses') value with an ALTER TABLE statement.
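For example, the following sketch sets the property at table creation time and changes it later; the table name and Kudu
master host names are hypothetical placeholders:
-- Point this table at a specific set of Kudu masters.
CREATE TABLE kudu_with_masters (id BIGINT PRIMARY KEY, s STRING)
  PARTITION BY HASH(id) PARTITIONS 2
  STORED AS KUDU
  TBLPROPERTIES ('kudu.master_addresses' = 'kudu-master-1:7051,kudu-master-2:7051,kudu-master-3:7051');
-- Later, repoint the table at a different Kudu master.
ALTER TABLE kudu_with_masters
  SET TBLPROPERTIES ('kudu.master_addresses' = 'new-kudu-master:7051');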
If you are using Cloudera Manager, for each Impala service, navigate to the Configuration tab and specify the Kudu
service you want to use in the Kudu Service field.
Cluster Topology for Kudu Tables
With HDFS-backed tables, you are typically concerned with the number of DataNodes in the cluster, how many and
how large HDFS data files are read during a query, and therefore the amount of work performed by each DataNode
and the network communication to combine intermediate results and produce the final result set.
With Kudu tables, the topology considerations are different, because:
• The underlying storage is managed and organized by Kudu, not represented as HDFS data files.
• Kudu handles some of the underlying mechanics of partitioning the data. You can specify the partitioning scheme
with combinations of hash and range partitioning, so that you can decide how much effort to expend to manage
the partitions as new data arrives. For example, you can construct partitions that apply to date ranges rather than
a separate partition for each day or each hour.
• Data is physically divided based on units of storage called tablets. Tablets are stored by tablet servers. Each tablet
server can store multiple tablets, and each tablet is replicated across multiple tablet servers, managed automatically
by Kudu. Where practical, co-locate the tablet servers on the same hosts as the Impala daemons, although that
is not required.
Kudu Replication Factor
By default, Kudu tables created through Impala use a tablet replication factor of 3. To change the replication factor
for a Kudu table, specify the replication factor using a TBLPROPERTIES clause in the CREATE TABLE statement, as shown
below, where n is the replication factor you want to use:
TBLPROPERTIES ('kudu.num_tablet_replicas' = 'n')
The number of replicas for a Kudu table must be odd.
Altering the kudu.num_tablet_replicas property after table creation currently has no effect.
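For example, a sketch of creating a table with a replication factor of 5 (the table and column names are illustrative);
because the property cannot be changed effectively later, set it at creation time:
CREATE TABLE highly_replicated (id BIGINT PRIMARY KEY, s STRING)
  PARTITION BY HASH(id) PARTITIONS 2
  STORED AS KUDU
  TBLPROPERTIES ('kudu.num_tablet_replicas' = '5');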
Impala DDL Enhancements for Kudu Tables (CREATE TABLE and ALTER TABLE)
You can use the Impala CREATE TABLE and ALTER TABLE statements to create and fine-tune the characteristics of
Kudu tables. Because Kudu tables have features and properties that do not apply to other kinds of Impala tables,
familiarize yourself with Kudu-related concepts and syntax first. For the general syntax of the CREATE TABLE statement
for Kudu tables, see CREATE TABLE Statement on page 234.
Primary Key Columns for Kudu Tables
Kudu tables introduce the notion of primary keys to Impala for the first time. The primary key is made up of one or
more columns, whose values are combined and used as a lookup key during queries. The tuple represented by these
columns must be unique and cannot contain any NULL values, and can never be updated once inserted. For a Kudu
table, all the partition key columns must come from the set of primary key columns.
The primary key has both physical and logical aspects:
• On the physical side, it is used to map the data values to particular tablets for fast retrieval. Because the tuples
formed by the primary key values are unique, the primary key columns are typically highly selective.
• On the logical side, the uniqueness constraint allows you to avoid duplicate data in a table. For example, if an
INSERT operation fails partway through, only some of the new rows might be present in the table. You can re-run
the same INSERT, and only the missing rows will be added. Or if data in the table is stale, you can run an UPSERT
statement that brings the data up to date, without the possibility of creating duplicate copies of existing rows.
Note:
Impala only allows PRIMARY KEY clauses and NOT NULL constraints on columns for Kudu tables.
These constraints are enforced on the Kudu side.
Kudu-Specific Column Attributes for CREATE TABLE
For the general syntax of the CREATE TABLE statement for Kudu tables, see CREATE TABLE Statement on page 234.
The following sections provide more detail for some of the Kudu-specific keywords you can use in column definitions.
The column list in a CREATE TABLE statement can include the following attributes, which only apply to Kudu tables:
PRIMARY KEY
| [NOT] NULL
| ENCODING codec
| COMPRESSION algorithm
| DEFAULT constant_expression
| BLOCK_SIZE number
See the following sections for details about each column attribute.
PRIMARY KEY Attribute
The primary key for a Kudu table is a column, or set of columns, that uniquely identifies every row. The primary key
value also is used as the natural sort order for the values from the table. The primary key value for each row is based
on the combination of values for the columns.
Because all of the primary key columns must have non-null values, specifying a column in the PRIMARY KEY clause
implicitly adds the NOT NULL attribute to that column.
The primary key columns must be the first ones specified in the CREATE TABLE statement. For a single-column primary
key, you can include a PRIMARY KEY attribute inline with the column definition. For a multi-column primary key, you
include a PRIMARY KEY (c1, c2, ...) clause as a separate entry at the end of the column list.
You can specify the PRIMARY KEY attribute either inline in a single column definition, or as a separate clause at the
end of the column list:
CREATE TABLE pk_inline
(
col1 BIGINT PRIMARY KEY,
col2 STRING,
col3 BOOLEAN
) PARTITION BY HASH(col1) PARTITIONS 2 STORED AS KUDU;
CREATE TABLE pk_at_end
(
col1 BIGINT,
col2 STRING,
col3 BOOLEAN,
PRIMARY KEY (col1)
) PARTITION BY HASH(col1) PARTITIONS 2 STORED AS KUDU;
When the primary key is a single column, these two forms are equivalent. If the primary key consists of more than one
column, you must specify the primary key using a separate entry in the column list:
CREATE TABLE pk_multiple_columns
(
col1 BIGINT,
col2 STRING,
col3 BOOLEAN,
PRIMARY KEY (col1, col2)
) PARTITION BY HASH(col2) PARTITIONS 2 STORED AS KUDU;
The SHOW CREATE TABLE statement always represents the PRIMARY KEY specification as a separate item in the
column list:
CREATE TABLE inline_pk_rewritten (id BIGINT PRIMARY KEY, s STRING)
PARTITION BY HASH(id) PARTITIONS 2 STORED AS KUDU;
SHOW CREATE TABLE inline_pk_rewritten;
+------------------------------------------------------------------------------+
| result |
+------------------------------------------------------------------------------+
| CREATE TABLE user.inline_pk_rewritten ( |
| id BIGINT NOT NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| s STRING NULL ENCODING AUTO_ENCODING COMPRESSION DEFAULT_COMPRESSION, |
| PRIMARY KEY (id) |
| ) |
| PARTITION BY HASH (id) PARTITIONS 2 |
| STORED AS KUDU |
| TBLPROPERTIES ('kudu.master_addresses'='host.example.com') |
+------------------------------------------------------------------------------+
The notion of primary key only applies to Kudu tables. Every Kudu table requires a primary key. The primary key consists
of one or more columns. You must specify any primary key columns first in the column list.
The contents of the primary key columns cannot be changed by an UPDATE or UPSERT statement. Including too many
columns in the primary key (more than 5 or 6) can also reduce the performance of write operations. Therefore, pick
the most selective and most frequently tested non-null columns for the primary key specification. If a column must
always have a value, but that value might change later, leave it out of the primary key and use a NOT NULL clause for
that column instead. If an existing row has an incorrect or outdated key column value, delete the old row and insert
an entirely new row with the correct primary key.
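For example, a minimal sketch of correcting a key value in the pk_inline table shown earlier; the literal values are
hypothetical:
-- Remove the row stored under the wrong key, then re-insert the data under the correct key.
DELETE FROM pk_inline WHERE col1 = 100;
INSERT INTO pk_inline VALUES (101, 'replacement row', true);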
NULL | NOT NULL Attribute
For Kudu tables, you can specify which columns can contain nulls or not. This constraint offers an extra level of
consistency enforcement for Kudu tables. If an application requires a field to always be specified, include a NOT NULL
clause in the corresponding column definition, and Kudu prevents rows from being inserted with a NULL in that column.
For example, a table containing geographic information might require the latitude and longitude coordinates to always
be specified. Other attributes might be allowed to be NULL. For example, a location might not have a designated place
name, its altitude might be unimportant, and its population might be initially unknown, to be filled in later.
Because all of the primary key columns must have non-null values, specifying a column in the PRIMARY KEY clause
implicitly adds the NOT NULL attribute to that column.
For non-Kudu tables, Impala allows any column to contain NULL values, because it is not practical to enforce a “not
null” constraint on HDFS data files that could be prepared using external tools and ETL processes.
CREATE TABLE required_columns
(
id BIGINT PRIMARY KEY,
latitude DOUBLE NOT NULL,
longitude DOUBLE NOT NULL,
place_name STRING,
altitude DOUBLE,
population BIGINT
) PARTITION BY HASH(id) PARTITIONS 2 STORED AS KUDU;
During performance optimization, Kudu can use the knowledge that nulls are not allowed to skip certain checks on
each input row, speeding up queries and join operations. Therefore, specify NOT NULL constraints when appropriate.
The NULL clause is the default condition for all columns that are not part of the primary key. You can omit it, or specify
it to clarify that you have made a conscious design decision to allow nulls in a column.
Because primary key columns cannot contain any NULL values, the NOT NULL clause is not required for the primary
key columns, but you might still specify it to make your code self-describing.
DEFAULT Attribute
You can specify a default value for columns in Kudu tables. The default value can be any constant expression, for
example, a combination of literal values, arithmetic and string operations. It cannot contain references to columns or
non-deterministic function calls.
The following example shows different kinds of expressions for the DEFAULT clause. The requirement to use a constant
expression means that you can fill in a placeholder value such as NULL, an empty string, 0, -1, 'N/A', and so on, but you
cannot reference other columns or non-deterministic functions. Therefore, you cannot use DEFAULT to do things such as
automatically making an uppercase copy of another string column, storing Boolean values based on tests of other columns,
or adding or subtracting one from a column representing a sequence number.
CREATE TABLE default_vals
(
id BIGINT PRIMARY KEY,
name STRING NOT NULL DEFAULT 'unknown',
address STRING DEFAULT upper('no fixed address'),
age INT DEFAULT -1,
earthling BOOLEAN DEFAULT TRUE,
planet_of_origin STRING DEFAULT 'Earth',
optional_col STRING DEFAULT NULL
) PARTITION BY HASH(id) PARTITIONS 2 STORED AS KUDU;
Note:
When designing an entirely new schema, prefer to use NULL as the placeholder for any unknown or
missing values, because that is the universal convention among database systems. Null values can be
stored efficiently, and easily checked with the IS NULL or IS NOT NULL operators. The DEFAULT
attribute is appropriate when ingesting data that already has an established convention for representing
unknown or missing values, or where the vast majority of rows have some common non-null value.
ENCODING Attribute
Each column in a Kudu table can optionally use an encoding, a low-overhead form of compression that reduces the
size on disk, but requires additional CPU cycles to reconstruct the original values during queries. Typically, highly
compressible data benefits from the reduced I/O to read the data back from disk.
The encoding keywords that Impala recognizes are:
• AUTO_ENCODING: use the default encoding based on the column type, which is bitshuffle for numeric columns and
dictionary for string columns.
• PLAIN_ENCODING: leave the value in its original binary format.
• RLE: compress repeated values (when sorted in primary key order) by including a count.
• DICT_ENCODING: when the number of different string values is low, replace the original string with a numeric ID.
• BIT_SHUFFLE: rearrange the bits of the values to efficiently compress sequences of values that are identical or
vary only slightly based on primary key order. The resulting encoded data is also compressed with LZ4.
• PREFIX_ENCODING: compress common prefixes in string values; mainly for use internally within Kudu.
The following example shows the Impala keywords representing the encoding types. (The Impala keywords match the
symbolic names used within Kudu.) For usage guidelines on the different kinds of encoding, see the Kudu documentation.
The DESCRIBE output shows how the encoding is reported after the table is created, and that omitting the encoding
(in this case, for the id column) is the same as specifying AUTO_ENCODING, the default.
CREATE TABLE various_encodings
(
id BIGINT PRIMARY KEY,
c1 BIGINT ENCODING PLAIN_ENCODING,
c2 BIGINT ENCODING AUTO_ENCODING,
c3 TINYINT ENCODING BIT_SHUFFLE,
c4 DOUBLE ENCODING BIT_SHUFFLE,
c5 BOOLEAN ENCODING RLE,
c6 STRING ENCODING DICT_ENCODING,
c7 STRING ENCODING PREFIX_ENCODING
) PARTITION BY HASH(id) PARTITIONS 2 STORED AS KUDU;
-- Some columns are omitted from the output for readability.
describe various_encodings;
+------+---------+-------------+----------+-----------------+
| name | type | primary_key | nullable | encoding |
+------+---------+-------------+----------+-----------------+
| id | bigint | true | false | AUTO_ENCODING |
| c1 | bigint | false | true | PLAIN_ENCODING |
| c2 | bigint | false | true | AUTO_ENCODING |
| c3 | tinyint | false | true | BIT_SHUFFLE |
| c4 | double | false | true | BIT_SHUFFLE |
| c5 | boolean | false | true | RLE |
| c6 | string | false | true | DICT_ENCODING |
| c7 | string | false | true | PREFIX_ENCODING |
+------+---------+-------------+----------+-----------------+
COMPRESSION Attribute
You can specify a compression algorithm to use for each column in a Kudu table. This attribute imposes more CPU
overhead when retrieving the values than the ENCODING attribute does. Therefore, use it primarily for columns with
long strings that do not benefit much from the less-expensive ENCODING attribute.
The choices for COMPRESSION are LZ4, SNAPPY, and ZLIB.
Note:
Columns that use the BIT_SHUFFLE encoding are already compressed using LZ4, and so typically do
not need any additional COMPRESSION attribute.
The following example shows design considerations for several STRING columns with different distribution characteristics,
leading to choices for both the ENCODING and COMPRESSION attributes. The country values come from a specific set
of strings, therefore this column is a good candidate for dictionary encoding. The post_id column contains an ascending
sequence of integers, where several leading bits are likely to be all zeroes, therefore this column is a good candidate
for bitshuffle encoding. The body column and the corresponding columns for translated versions tend to be long unique
strings that are not practical to use with any of the encoding schemes, therefore they employ the COMPRESSION
attribute instead. The ideal compression codec in each case would require some experimentation to determine how
much space savings it provided and how much CPU overhead it added, based on real-world data.
CREATE TABLE blog_posts
(
user_id STRING ENCODING DICT_ENCODING,
post_id BIGINT ENCODING BIT_SHUFFLE,
subject STRING ENCODING PLAIN_ENCODING,
body STRING COMPRESSION LZ4,
spanish_translation STRING COMPRESSION SNAPPY,
esperanto_translation STRING COMPRESSION ZLIB,
PRIMARY KEY (user_id, post_id)
) PARTITION BY HASH(user_id, post_id) PARTITIONS 2 STORED AS KUDU;
BLOCK_SIZE Attribute
Although Kudu does not use HDFS files internally, and thus is not affected by the HDFS block size, it does have an
underlying unit of I/O called the block size. The BLOCK_SIZE attribute lets you set the block size for any column.
Note: The block size attribute is a relatively advanced feature. This is an unsupported feature and is
considered experimental.
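Purely as an illustrative sketch (keeping in mind that the attribute is unsupported and experimental), BLOCK_SIZE appears
in a column definition like the other Kudu column attributes; the table name and block size value here are hypothetical:
CREATE TABLE block_size_demo
(
  id BIGINT PRIMARY KEY,
  payload STRING BLOCK_SIZE 1048576
) PARTITION BY HASH(id) PARTITIONS 2 STORED AS KUDU;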
Partitioning for Kudu Tables
Kudu tables use special mechanisms to distribute data among the underlying tablet servers. Although we refer to such
tables as partitioned tables, they are distinguished from traditional Impala partitioned tables by use of different clauses
on the CREATE TABLE statement. Kudu tables use PARTITION BY, HASH, RANGE, and range specification clauses
rather than the PARTITIONED BY clause for HDFS-backed tables, which specifies only a column name and creates a
new partition for each different value.
For background information and architectural details about the Kudu partitioning mechanism, see the Kudu white
paper, section 3.2.
Note:
The Impala DDL syntax for Kudu tables is different than in early Kudu versions, which used an
experimental fork of the Impala code. For example, the DISTRIBUTE BY clause is now PARTITION
BY, the INTO n BUCKETS clause is now PARTITIONS n and the range partitioning syntax is reworked
to replace the SPLIT ROWS clause with more expressive syntax involving comparison operators.
Hash Partitioning
Hash partitioning is the simplest type of partitioning for Kudu tables. For hash-partitioned Kudu tables, inserted rows
are divided up between a fixed number of “buckets” by applying a hash function to the values of the columns specified
in the HASH clause. Hashing ensures that rows with similar values are evenly distributed, instead of clumping together
all in the same bucket. Spreading new rows across the buckets this way lets insertion operations work in parallel across
multiple tablet servers. Separating the hashed values can impose additional overhead on queries, where queries with
range-based predicates might have to read multiple tablets to retrieve all the relevant values.
-- 1M rows with 50 hash partitions = approximately 20,000 rows per partition.
-- The values in each partition are not sequential, but rather based on a hash function.
-- Rows 1, 99999, and 123456 might be in the same partition.
CREATE TABLE million_rows (id string primary key, s string)
PARTITION BY HASH(id) PARTITIONS 50
STORED AS KUDU;
-- Because the ID values are unique, we expect the rows to be roughly
-- evenly distributed between the buckets in the destination table.
INSERT INTO million_rows SELECT * FROM billion_rows ORDER BY id LIMIT 1e6;
Note:
The largest number of buckets that you can create with a PARTITIONS clause varies depending on
the number of tablet servers in the cluster, while the smallest is 2. For simplicity, some of the simple
CREATE TABLE statements throughout this section use PARTITIONS 2 to illustrate the minimum
requirements for a Kudu table. For large tables, prefer to use roughly 10 partitions per server in the
cluster.
Range Partitioning
Range partitioning lets you specify partitioning precisely, based on single values or ranges of values within one or more
columns. You add one or more RANGE clauses to the CREATE TABLE statement, following the PARTITION BY clause.
Range-partitioned Kudu tables use one or more range clauses, which include a combination of constant expressions,
VALUE or VALUES keywords, and comparison operators. (This syntax replaces the SPLIT ROWS clause used with early
Kudu versions.) For the full syntax, see CREATE TABLE Statement on page 234.
-- 50 buckets, all for IDs beginning with a lowercase letter.
-- Having only a single range enforces the allowed range of values
-- but does not add any extra parallelism.
create table million_rows_one_range (id string primary key, s string)
partition by hash(id) partitions 50,
range (partition 'a' <= values < '{')
stored as kudu;
-- 50 buckets for IDs beginning with a lowercase letter
-- plus 50 buckets for IDs beginning with an uppercase letter.
-- Total number of buckets = number in the PARTITIONS clause x number of ranges.
-- We are still enforcing constraints on the primary key values
-- allowed in the table, and the 2 ranges provide better parallelism
-- as rows are inserted or the table is scanned.
create table million_rows_two_ranges (id string primary key, s string)
partition by hash(id) partitions 50,
range (partition 'a' <= values < '{', partition 'A' <= values < '[')
stored as kudu;
-- Same as previous table, with an extra range covering the single key value '00000'.
create table million_rows_three_ranges (id string primary key, s string)
partition by hash(id) partitions 50,
range (partition 'a' <= values < '{', partition 'A' <= values < '[', partition value
= '00000')
stored as kudu;
-- The range partitioning can be displayed with a SHOW command in impala-shell.
show range partitions million_rows_three_ranges;
+---------------------+
| RANGE (id) |
+---------------------+
| VALUE = "00000" |
| "A" <= VALUES < "[" |
| "a" <= VALUES < "{" |
+---------------------+
Note:
When defining ranges, be careful to avoid “fencepost errors” where values at the extreme ends might
be included or omitted by accident. For example, in the tables defined in the preceding code listings,
the range "a" <= VALUES < "{" ensures that any values starting with z, such as za or zzz or
zzz-ZZZ, are all included, by using a less-than operator for the smallest value after all the values
starting with z.
For range-partitioned Kudu tables, an appropriate range must exist before a data value can be created in the table.
Any INSERT, UPDATE, or UPSERT statements fail if they try to create column values that fall outside the specified
ranges. The error checking for ranges is performed on the Kudu side; Impala passes the specified range information
to Kudu, and passes back any error or warning if the ranges are not valid. (A nonsensical range specification causes an
error for a DDL statement, but only a warning for a DML statement.)
Ranges can be non-contiguous:
partition by range (year) (partition 1885 <= values <= 1889, partition 1893 <= values
<= 1897)
partition by range (letter_grade) (partition value = 'A', partition value = 'B',
partition value = 'C', partition value = 'D', partition value = 'F')
The ALTER TABLE statement with the ADD RANGE PARTITION or DROP RANGE PARTITION clauses can be used to add or remove
ranges from an existing Kudu table.
ALTER TABLE foo ADD RANGE PARTITION 30 <= VALUES < 50;
ALTER TABLE foo DROP RANGE PARTITION 1 <= VALUES < 5;
When a range is added, the new range must not overlap with any of the previous ranges; that is, it can only fill in gaps
within the previous ranges.
alter table test_scores add range partition value = 'E';
alter table year_ranges add range partition 1890 <= values < 1893;
When a range is removed, all the associated rows in the table are deleted. (This is true whether the table is internal
or external.)
alter table test_scores drop range partition value = 'E';
alter table year_ranges drop range partition 1890 <= values < 1893;
Kudu tables can also use a combination of hash and range partitioning.
partition by hash (school) partitions 10,
range (letter_grade) (partition value = 'A', partition value = 'B',
partition value = 'C', partition value = 'D', partition value = 'F')
Working with Partitioning in Kudu Tables
To see the current partitioning scheme for a Kudu table, you can use the SHOW CREATE TABLE statement or the SHOW
PARTITIONS statement. The CREATE TABLE syntax displayed by this statement includes all the hash, range, or both
clauses that reflect the original table structure plus any subsequent ALTER TABLE statements that changed the table
structure.
To see the underlying buckets and partitions for a Kudu table, use the SHOW TABLE STATS or SHOW PARTITIONS
statement.
Handling Date, Time, or Timestamp Data with Kudu
In CDH 5.12 / Impala 2.9 and higher, you can include TIMESTAMP columns in Kudu tables, instead of representing the
date and time as a BIGINT value. The behavior of TIMESTAMP for Kudu tables has some special considerations:
• Any nanoseconds in the original 96-bit value produced by Impala are not stored, because Kudu represents date/time
columns using 64-bit values. The nanosecond portion of the value is rounded, not truncated. Therefore, a
TIMESTAMP value that you store in a Kudu table might not be bit-for-bit identical to the value returned by a query.
• The conversion between the Impala 96-bit representation and the Kudu 64-bit representation introduces some
performance overhead when reading or writing TIMESTAMP columns. You can minimize the overhead during
writes by performing inserts through the Kudu API. Because the overhead during reads applies to each query, you
might continue to use a BIGINT column to represent date/time values in performance-critical applications.
• The Impala TIMESTAMP type has a narrower range for years than the underlying Kudu data type. Impala can
represent years 1400-9999. If year values outside this range are written to a Kudu table by a non-Impala client,
Impala returns NULL by default when reading those TIMESTAMP values during a query. Or, if the ABORT_ON_ERROR
query option is enabled, the query fails when it encounters a value with an out-of-range year.
-- Make a table representing a date/time value as TIMESTAMP.
-- The strings representing the partition bounds are automatically
-- cast to TIMESTAMP values.
create table native_timestamp(id bigint, when_exactly timestamp, event string, primary
key (id, when_exactly))
partition by hash (id) partitions 20,
range (when_exactly)
(
partition '2015-01-01' <= values < '2016-01-01',
partition '2016-01-01' <= values < '2017-01-01',
partition '2017-01-01' <= values < '2018-01-01'
)
stored as kudu;
insert into native_timestamp values (12345, now(), 'Working on doc examples');
select * from native_timestamp;
+-------+-------------------------------+-------------------------+
| id | when_exactly | event |
+-------+-------------------------------+-------------------------+
| 12345 | 2017-05-31 16:27:42.667542000 | Working on doc examples |
+-------+-------------------------------+-------------------------+
Because Kudu tables have some performance overhead to convert TIMESTAMP columns to the Impala 96-bit internal
representation, for performance-critical applications you might store date/time information as the number of seconds,
milliseconds, or microseconds since the Unix epoch date of January 1, 1970. Specify the column as BIGINT in the
Impala CREATE TABLE statement, corresponding to an 8-byte integer (an int64) in the underlying Kudu table. Then
use Impala date/time conversion functions as necessary to produce a numeric, TIMESTAMP, or STRING value depending
on the context.
For example, the unix_timestamp() function returns an integer result representing the number of seconds past the
epoch. The now() function produces a TIMESTAMP representing the current date and time, which can be passed as
an argument to unix_timestamp(). And string literals representing dates and date/times can be cast to TIMESTAMP,
and from there converted to numeric values. The following examples show how you might store a date/time column
as BIGINT in a Kudu table, but still use string literals and TIMESTAMP values for convenience.
-- now() returns a TIMESTAMP and shows the format for string literals you can cast to
TIMESTAMP.
select now();
+-------------------------------+
| now() |
+-------------------------------+
| 2017-01-25 23:50:10.132385000 |
+-------------------------------+
-- unix_timestamp() accepts either a TIMESTAMP or an equivalent string literal.
select unix_timestamp(now());
+-----------------------+
| unix_timestamp(now()) |
+-----------------------+
| 1485386670            |
+-----------------------+
select unix_timestamp('2017-01-01');
+------------------------------+
| unix_timestamp('2017-01-01') |
+------------------------------+
| 1483228800 |
+------------------------------+
-- Make a table representing a date/time value as BIGINT.
-- Construct 1 range partition and 20 associated hash partitions for each year.
-- Use date/time conversion functions to express the ranges as human-readable dates.
create table time_series(id bigint, when_exactly bigint, event string, primary key (id,
when_exactly))
partition by hash (id) partitions 20,
range (when_exactly)
(
partition unix_timestamp('2015-01-01') <= values < unix_timestamp('2016-01-01'),
partition unix_timestamp('2016-01-01') <= values < unix_timestamp('2017-01-01'),
partition unix_timestamp('2017-01-01') <= values < unix_timestamp('2018-01-01')
)
stored as kudu;
-- On insert, we can transform a human-readable date/time into a numeric value.
insert into time_series values (12345, unix_timestamp('2017-01-25 23:24:56'), 'Working
on doc examples');
-- On retrieval, we can examine the numeric date/time value or turn it back into a string
for readability.
select id, when_exactly, from_unixtime(when_exactly) as 'human-readable date/time',
event
from time_series order by when_exactly limit 100;
+-------+--------------+--------------------------+-------------------------+
| id | when_exactly | human-readable date/time | event |
+-------+--------------+--------------------------+-------------------------+
| 12345 | 1485386696 | 2017-01-25 23:24:56 | Working on doc examples |
+-------+--------------+--------------------------+-------------------------+
Note:
If you do high-precision arithmetic involving numeric date/time values, when dividing millisecond
values by 1000, or microsecond values by 1 million, always cast the integer numerator to a DECIMAL
with sufficient precision and scale to avoid any rounding or loss of precision.
-- 1 million and 1 microseconds = 1.000001 seconds.
select microseconds,
cast (microseconds as decimal(20,7)) / 1e6 as fractional_seconds
from table_with_microsecond_column;
+--------------+----------------------+
| microseconds | fractional_seconds |
+--------------+----------------------+
| 1000001 | 1.000001000000000000 |
+--------------+----------------------+
How Impala Handles Kudu Metadata
Note: This section only applies to Kudu services that are not integrated with the Hive Metastore (HMS).
By default, much of the metadata for Kudu tables is handled by the underlying storage layer. Kudu tables have less
reliance on the Metastore database, and require less metadata caching on the Impala side. For example, information
about partitions in Kudu tables is managed by Kudu, and Impala does not cache any block locality metadata for Kudu
tables. If the Kudu service is not integrated with the Hive Metastore, Impala will manage Kudu table metadata in the
Hive Metastore.
The REFRESH and INVALIDATE METADATA statements are needed less frequently for Kudu tables than for HDFS-backed
tables. Neither statement is needed when data is added to, removed, or updated in a Kudu table, even if the changes
are made directly to Kudu through a client program using the Kudu API. Run REFRESH table_name or INVALIDATE
METADATA table_name for a Kudu table only after making a change to the Kudu table schema, such as adding or
dropping a column.
Because Kudu manages the metadata for its own tables separately from the metastore database, there is a table name
stored in the metastore database for Impala to use, and a table name on the Kudu side, and these names can be
modified independently through ALTER TABLE statements.
To avoid potential name conflicts, the prefix impala:: and the Impala database name are encoded into the underlying
Kudu table name:
create database some_database;
use some_database;
create table table_name_demo (x int primary key, y int)
partition by hash (x) partitions 2 stored as kudu;
describe formatted table_name_demo;
...
kudu.table_name | impala::some_database.table_name_demo
See Kudu Tables on page 198 for examples of how to change the name of the Impala table in the metastore database,
the name of the underlying Kudu table, or both.
Working with Kudu Integrated with Hive Metastore
Starting from Kudu 1.10 and Impala 3.3 in CDH 6.3, Impala supports Kudu services integrated with the Hive Metastore
(HMS). See the HMS integration documentation for more details on Kudu’s Hive Metastore integration.
The following are some of the changes you need to consider when working with Kudu services integrated with the
HMS.
• When Kudu is integrated with the Hive Metastore, Impala must be configured to use the same HMS as Kudu.
• Since there may be no one-to-one mapping between Kudu tables and external tables, only internal tables are
automatically synchronized.
• When you create a table in Kudu, Kudu will create an HMS entry for that table with the internal table type.
• When the Kudu service is integrated with the HMS, internal table entries will be created automatically in the HMS
when tables are created in Kudu without Impala. To access these tables through Impala, run the INVALIDATE METADATA
statement so that Impala picks up the latest metadata.
Loading Data into Kudu Tables
Kudu tables are well-suited to use cases where data arrives continuously, in small or moderate volumes. To bring data
into Kudu tables, use the Impala INSERT and UPSERT statements. The LOAD DATA statement does not apply to Kudu
tables.
Because Kudu manages its own storage layer that is optimized for smaller block sizes than HDFS, and performs its own
housekeeping to keep data evenly distributed, it is not subject to the “many small files” issue and does not need explicit
reorganization and compaction as the data grows over time. The partitions within a Kudu table can be specified to
cover a variety of possible data distributions, instead of hardcoding a new partition for each new day, hour, and so on,
which can lead to inefficient, hard-to-scale, and hard-to-manage partition schemes with HDFS tables.
Your strategy for performing ETL or bulk updates on Kudu tables should take into account the limitations on consistency
for DML operations.
Make INSERT, UPDATE, and UPSERT operations idempotent: that is, able to be applied multiple times and still produce
an identical result.
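For example, a sketch of an idempotent load using UPSERT; the table and column names are hypothetical. Re-running the
statement writes the same primary key with the same values, so the end state of the table does not change:
-- Assumes a Kudu table metrics_kudu (host, metric, ts, val) with (host, metric, ts) as the primary key.
UPSERT INTO metrics_kudu (host, metric, ts, val)
VALUES ('host-01', 'cpu_util', 1485386670, 42.5);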
If a bulk operation is in danger of exceeding capacity limits due to timeouts or high memory usage, split it into a series
of smaller operations.
Avoid running concurrent ETL operations where the end results depend on precise ordering. In particular, do not rely
on an INSERT ... SELECT statement that selects from the same table into which it is inserting, unless you include
extra conditions in the WHERE clause to avoid reading the newly inserted rows within the same statement.
Because relationships between tables cannot be enforced by Impala and Kudu, and cannot be committed or rolled
back together, do not expect transactional semantics for multi-table operations.
Impala DML Support for Kudu Tables (INSERT, UPDATE, DELETE, UPSERT)
Impala supports certain DML statements for Kudu tables only. The UPDATE and DELETE statements let you modify
data within Kudu tables without rewriting substantial amounts of table data. The UPSERT statement acts as a combination
of INSERT and UPDATE, inserting rows where the primary key does not already exist, and updating the non-primary
key columns where the primary key does already exist in the table.
The INSERT statement for Kudu tables honors the unique and NOT NULL requirements for the primary key columns.
Because Impala and Kudu do not support transactions, the effects of any INSERT, UPDATE, or DELETE statement are
immediately visible. For example, you cannot do a sequence of UPDATE statements and only make the changes visible
after all the statements are finished. Also, if a DML statement fails partway through, any rows that were already
inserted, deleted, or changed remain in the table; there is no rollback mechanism to undo the changes.
In particular, an INSERT ... SELECT statement that refers to the table being inserted into might insert more rows
than expected, because the SELECT part of the statement sees some of the new rows being inserted and processes
them again.
Note:
The LOAD DATA statement, which involves manipulation of HDFS data files, does not apply to Kudu
tables.
Starting from CDH 5.12 / Impala 2.9, the INSERT or UPSERT operations into Kudu tables automatically add an exchange
and a sort node to the plan that partitions and sorts the rows according to the partitioning/primary key scheme of the
target table (unless the number of rows to be inserted is small enough to trigger single node execution). Since Kudu
partitions and sorts rows on write, pre-partitioning and sorting takes some of the load off of Kudu and helps large
INSERT operations to complete without timing out. However, this default behavior may slow down the end-to-end
performance of the INSERT or UPSERT operations. Starting from CDH 5.13 / Impala 2.10, you can use the /*
+NOCLUSTERED */ and /* +NOSHUFFLE */ hints together to disable partitioning and sorting before the rows are
sent to Kudu. Additionally, since sorting may consume a large amount of memory, consider setting the MEM_LIMIT
query option for those queries.
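For example, a sketch of a large load into a Kudu table; the table names are hypothetical, and you should verify the hint
behavior against your Impala version:
-- Disable the automatic partition/sort step for this load.
INSERT INTO kudu_target /* +NOCLUSTERED,NOSHUFFLE */ SELECT * FROM staging_table;
-- Alternatively, keep the default partition/sort behavior but cap the memory available to the sort.
SET MEM_LIMIT=2g;
INSERT INTO kudu_target SELECT * FROM staging_table;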
Consistency Considerations for Kudu Tables
Kudu tables have consistency characteristics such as uniqueness, controlled by the primary key columns, and non-nullable
columns. The emphasis for consistency is on preventing duplicate or incomplete data from being stored in a table.
Currently, Kudu does not enforce strong consistency for order of operations, total success or total failure of a multi-row
statement, or data that is read while a write operation is in progress. Changes are applied atomically to each row, but
not applied as a single unit to all rows affected by a multi-row DML statement. That is, Kudu does not currently have
atomic multi-row statements or isolation between statements.
If some rows are rejected during a DML operation because of duplicate primary key values, NOT NULL constraint violations,
and so on, the statement succeeds with a warning. Impala still inserts, deletes, or updates the other rows
that are not affected by the constraint violation.
Consequently, the number of rows affected by a DML operation on a Kudu table might be different than you expect.
Because there is no strong consistency guarantee for information being inserted into, deleted from, or updated across
multiple tables simultaneously, consider denormalizing the data where practical. That is, if you run separate INSERT
statements to insert related rows into two different tables, one INSERT might fail while the other succeeds, leaving
the data in an inconsistent state. Even if both inserts succeed, a join query might happen during the interval between
the completion of the first and second statements, and the query would encounter incomplete inconsistent data.
Denormalizing the data into a single wide table can reduce the possibility of inconsistency due to multi-table operations.
Information about the number of rows affected by a DML operation is reported in impala-shell output, and in the
PROFILE output, but is not currently reported to HiveServer2 clients such as JDBC or ODBC applications.
Security Considerations for Kudu Tables
Security for Kudu tables involves:
• Sentry authorization.
Access to Kudu tables must be granted to and revoked from roles with the following considerations (see the GRANT
sketch after this list):
• Only users with the ALL privilege on SERVER can create external Kudu tables.
• The ALL privilege on SERVER is required to specify the kudu.master_addresses property in the CREATE TABLE
statements for managed tables as well as external tables.
• Access to Kudu tables is enforced at the table level and at the column level.
• The SELECT- and INSERT-specific permissions are supported.
• The DELETE, UPDATE, and UPSERT operations require the ALL privilege.
Because non-SQL APIs can access Kudu data without going through Sentry authorization, currently the Sentry
support is considered preliminary and subject to change.
• Kerberos authentication.
• TLS encryption.
• Lineage tracking.
• Auditing.
• Redaction of sensitive information from log files.
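The following GRANT statements sketch the table-level privileges mentioned above; the role, database, and table names
are hypothetical, and the exact object syntax for server-level grants can vary by version, so treat the last statement
as an assumption to verify:
-- Allow a role to query and load a specific Kudu table.
GRANT SELECT ON TABLE analytics.events_kudu TO ROLE analyst_role;
GRANT INSERT ON TABLE analytics.events_kudu TO ROLE loader_role;
-- DELETE, UPDATE, UPSERT, and external Kudu table creation require the ALL privilege at the server scope.
GRANT ALL ON SERVER TO ROLE kudu_admin_role;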
Impala Query Performance for Kudu Tables
For queries involving Kudu tables, Impala can delegate much of the work of filtering the result set to Kudu, avoiding
some of the I/O involved in full table scans of tables containing HDFS data files. This type of optimization is especially
effective for partitioned Kudu tables, where the Impala query WHERE clause refers to one or more primary key columns
that are also used as partition key columns. For example, if a partitioned Kudu table uses a HASH clause for col1 and
a RANGE clause for col2, a query using a clause such as WHERE col1 IN (1,2,3) AND col2 > 100 can determine
exactly which tablet servers contain relevant data, and therefore parallelize the query very efficiently.
In CDH 5.14 / Impala 2.11 and higher, Impala can push down additional information to optimize join queries involving
Kudu tables. If the join clause contains predicates of the form column = expression, after Impala constructs a hash
table of possible matching values for the join columns from the bigger table (either an HDFS table or a Kudu table),
Impala can “push down” the minimum and maximum matching column values to Kudu, so that Kudu can more efficiently
locate matching rows in the second (smaller) table. These min/max filters are affected by the RUNTIME_FILTER_MODE,
RUNTIME_FILTER_WAIT_TIME_MS, and DISABLE_ROW_RUNTIME_FILTERING query options; the min/max filters
are not affected by the RUNTIME_BLOOM_FILTER_SIZE, RUNTIME_FILTER_MIN_SIZE, RUNTIME_FILTER_MAX_SIZE,
and MAX_NUM_RUNTIME_FILTERS query options.
See EXPLAIN Statement on page 271 for examples of evaluating the effectiveness of the predicate pushdown for a
specific query against a Kudu table.
The TABLESAMPLE clause of the SELECT statement does not apply to a table reference derived from a view, a subquery,
or anything other than a real base table. This clause only works for tables backed by HDFS or HDFS-like data files,
therefore it does not apply to Kudu or HBase tables.
Using Impala to Query HBase Tables
You can use Impala to query HBase tables. This is useful for accessing any of your existing HBase tables via SQL and
performing analytics over them. HDFS and Kudu tables are preferred over HBase for analytic workloads and offer
superior performance. Kudu supports efficient inserts, updates and deletes of small numbers of rows and can replace
HBase for most analytics-oriented use cases. See Using Impala to Query Kudu Tables on page 670 for information on
using Impala with Kudu.
From the perspective of an Impala user, coming from an RDBMS background, HBase is a kind of key-value store where
the value consists of multiple fields. The key is mapped to one column in the Impala table, and the various fields of
the value are mapped to the other columns in the Impala table.
For background information on HBase, see the Apache HBase documentation. This is a snapshot of the Apache HBase
site (including documentation) for the level of HBase that comes with CDH.
Overview of Using HBase with Impala
When you use Impala with HBase:
• You create the tables on the Impala side using the Hive shell, because the Impala CREATE TABLE statement
currently does not support custom SerDes and some other syntax needed for these tables:
– You designate it as an HBase table using the STORED BY
'org.apache.hadoop.hive.hbase.HBaseStorageHandler' clause on the Hive CREATE TABLE
statement.
– You map these specially created tables to corresponding tables that exist in HBase, with the clause
TBLPROPERTIES("hbase.table.name" = "table_name_in_hbase") on the Hive CREATE TABLE
statement.
– See Examples of Querying HBase Tables from Impala on page 690 for a full example, and the sketch after this
list for the general shape of such a statement.
• You define the column corresponding to the HBase row key as a string with the #string keyword, or map it to
a STRING column.
• Because Impala and Hive share the same metastore database, once you create the table in Hive, you can query
or insert into it through Impala. (After creating a new table through Hive, issue the INVALIDATE METADATA
statement in impala-shell to make Impala aware of the new table.)
• You issue queries against the Impala tables. For efficient queries, use WHERE clause to find a single key value or a
range of key values wherever practical, by testing the Impala column corresponding to the HBase row key. Avoid
queries that do full-table scans, which are efficient for regular Impala tables but inefficient in HBase.
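As a rough sketch of the shape of such a Hive DDL statement (the table, column family, and column names are hypothetical;
the hbase.columns.mapping SerDe property is the standard Hive mechanism for mapping HBase columns, although it is not
called out in the list above):
-- Issued in the Hive shell, not impala-shell.
CREATE EXTERNAL TABLE hbase_mapped_table (
  row_key STRING,
  c1 STRING,
  c2 STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:c1,cf1:c2")
TBLPROPERTIES ("hbase.table.name" = "table_name_in_hbase");
After running a statement like this in Hive, issue INVALIDATE METADATA hbase_mapped_table; in impala-shell so that
Impala can see the new table.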
To work with an HBase table from Impala, ensure that the impala user has read/write privileges for the HBase table,
using the GRANT command in the HBase shell. For details about HBase security, see the Security chapter in the Apache
HBase documentation.
Configuring HBase for Use with Impala
HBase works out of the box with Impala. There is no mandatory configuration needed to use these two components
together.
To avoid delays if HBase is unavailable during Impala startup or after an INVALIDATE METADATA statement, set
timeout values similar to the following in /etc/impala/conf/hbase-site.xml (for environments not managed
by Cloudera Manager):
<property>
  <name>hbase.client.retries.number</name>
  <value>3</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>3000</value>
</property>
Currently, Cloudera Manager does not have an Impala-only override for HBase settings, so any HBase configuration
change you make through Cloudera Manager takes effect for all HBase applications. Therefore, this change is
not recommended on systems managed by Cloudera Manager.
Supported Data Types for HBase Columns
To understand how Impala column data types are mapped to fields in HBase, you should have some background
knowledge about HBase first. You set up the mapping by running the CREATE TABLE statement in the Hive shell. See
the Hive wiki for a starting point, and Examples of Querying HBase Tables from Impala on page 690 for examples.
HBase works as a kind of “bit bucket”, in the sense that HBase does not enforce any typing for the key or value fields.
All the type enforcement is done on the Impala side.
For best performance of Impala queries against HBase tables, most queries will perform comparisons in the WHERE
clause against the column that corresponds to the HBase row key. When creating the table through the Hive shell, use
the STRING data type for the column that corresponds to the HBase row key. Impala can translate predicates (through
operators such as =, <, and BETWEEN) against this column into fast lookups in HBase, but this optimization (“predicate
pushdown”) only works when that column is defined as STRING.
Starting in Impala 1.1, Impala also supports reading and writing to columns that are defined in the Hive CREATE TABLE
statement using binary data types, represented in the Hive table definition using the #binary keyword, often
abbreviated as #b. Defining numeric columns as binary can reduce the overall data volume in the HBase tables. You
should still define the column that corresponds to the HBase row key as a STRING, to allow fast lookups using those
columns.
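For example, a minimal sketch of a Hive table definition that maps one numeric column to a binary-encoded HBase column while keeping the row key as a STRING; the table, column, and column family names here are hypothetical:
CREATE EXTERNAL TABLE hbase_binary_example (
  id string,
  amount bigint)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,valuesCF:amount#b")
TBLPROPERTIES("hbase.table.name" = "hbase_binary_example");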
Performance Considerations for the Impala-HBase Integration
To understand the performance characteristics of SQL queries against data stored in HBase, you should have some
background knowledge about how HBase interacts with SQL-oriented systems first. See the Hive wiki for a starting
point; because Impala shares the same metastore database as Hive, the information about mapping columns from
Hive tables to HBase tables is generally applicable to Impala too.
Impala uses the HBase client API via Java Native Interface (JNI) to query data stored in HBase. This querying does not
read HFiles directly. The extra communication overhead makes it important to choose what data to store in HBase or
in HDFS, and construct efficient queries that can retrieve the HBase data efficiently:
• Use HBase tables for queries that return a single row or a small range of rows, not queries that perform a full table
scan of an entire table. (If a query involves an HBase table and has no WHERE clause referencing that table, that is
a strong indicator that it is an inefficient query for an HBase table.)
• HBase may offer acceptable performance for storing small dimension tables where the table is small enough that
executing a full table scan for every query is efficient enough. However, Kudu is almost always a superior alternative
for storing dimension tables. HDFS tables are also appropriate for dimension tables that do not need to support
update queries, delete queries or insert queries with small numbers of rows.
Query predicates are applied to row keys as start and stop keys, thereby limiting the scope of a particular lookup. If
the row key is not mapped to a STRING column, the ordering is typically incorrect and comparison operations do not
work; for example, evaluations for greater than (>) or less than (<) against the row key cannot be completed.
Predicates on non-key columns can be sent to HBase to scan as SingleColumnValueFilters, providing some
performance gains. In such a case, HBase returns fewer rows than if those same predicates were applied using Impala.
While there is some improvement, it is not as great as when start and stop rows are used. This is because the number
of rows that HBase must examine is not limited as it is when start and stop rows are used. As long as the row key predicate
only applies to a single row, HBase will locate and return that row. Conversely, if a non-key predicate is used, even if
it only applies to a single row, HBase must still scan the entire table to find the correct result.
Interpreting EXPLAIN Output for HBase Queries
For example, here are some queries against the following Impala table, which is mapped to an HBase table. The
examples show excerpts from the output of the EXPLAIN statement, demonstrating what things to look for to indicate
an efficient or inefficient query against an HBase table.
The first column (cust_id) was specified as the key column in the CREATE EXTERNAL TABLE statement; for
performance, it is important to declare this column as STRING. Other columns, such as BIRTH_YEAR and
NEVER_LOGGED_ON, are also declared as STRING, rather than their “natural” types of INT or BOOLEAN, because Impala
can optimize those types more effectively in HBase tables. For comparison, we leave one column, YEAR_REGISTERED,
as INT to show that filtering on this column is inefficient.
describe hbase_table;
Query: describe hbase_table
+-----------------------+--------+---------+
| name | type | comment |
+-----------------------+--------+---------+
| cust_id | string | |
| birth_year | string | |
| never_logged_on | string | |
| private_email_address | string | |
| year_registered | int | |
+-----------------------+--------+---------+
The best case for performance involves a single row lookup using an equality comparison on the column defined as
the row key:
explain select count(*) from hbase_table where cust_id = 'some_user@example.com';
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=1.01GB VCores=1 |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| hbase.hbase_table |
| |
| 03:AGGREGATE [MERGE FINALIZE] |
| | output: sum(count(*)) |
| | |
| 02:EXCHANGE [PARTITION=UNPARTITIONED] |
| | |
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HBASE [hbase.hbase_table] |
| start key: some_user@example.com |
| stop key: some_user@example.com\0 |
+------------------------------------------------------------------------------------+
Another type of efficient query involves a range lookup on the row key column, using SQL operators such as greater
than (or equal), less than (or equal), or BETWEEN. This example also includes an equality test on a non-key column;
because that column is a STRING, Impala can let HBase perform that test, indicated by the hbase filters: line in
the EXPLAIN output. Doing the filtering within HBase is more efficient than transmitting all the data to Impala and
doing the filtering on the Impala side.
explain select count(*) from hbase_table where cust_id between 'a' and 'b'
and never_logged_on = 'true';
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
...
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HBASE [hbase.hbase_table] |
| start key: a |
| stop key: b\0 |
| hbase filters: cols:never_logged_on EQUAL 'true' |
+------------------------------------------------------------------------------------+
The query is less efficient if Impala has to evaluate any of the predicates, because Impala must scan the entire HBase
table. Impala can only push down predicates to HBase for columns declared as STRING. This example tests a column
declared as INT, and the predicates: line in the EXPLAIN output indicates that the test is performed after the data
is transmitted to Impala.
explain select count(*) from hbase_table where year_registered = 2010;
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
...
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HBASE [hbase.hbase_table] |
| predicates: year_registered = 2010 |
+------------------------------------------------------------------------------------+
The same inefficiency applies if the key column is compared to any non-constant value. Here, even though the key
column is a STRING, and is tested using an equality operator, Impala must scan the entire HBase table because the
key column is compared to another column value rather than a constant.
explain select count(*) from hbase_table where cust_id = private_email_address;
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
...
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HBASE [hbase.hbase_table] |
| predicates: cust_id = private_email_address |
+------------------------------------------------------------------------------------+
Currently, tests on the row key using OR or IN clauses are not optimized into direct lookups either. Such limitations
might be lifted in the future, so always check the EXPLAIN output to be sure whether a particular SQL construct results
in an efficient query or not for HBase tables.
explain select count(*) from hbase_table where
cust_id = 'some_user@example.com' or cust_id = 'other_user@example.com';
+----------------------------------------------------------------------------------------+
| Explain String                                                                          |
+----------------------------------------------------------------------------------------+
...
| 01:AGGREGATE                                                                            |
| | output: count(*)                                                                      |
| |                                                                                       |
| 00:SCAN HBASE [hbase.hbase_table]                                                       |
|    predicates: cust_id = 'some_user@example.com' OR cust_id = 'other_user@example.com'  |
+----------------------------------------------------------------------------------------+
explain select count(*) from hbase_table where
cust_id in ('some_user@example.com', 'other_user@example.com');
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
...
| 01:AGGREGATE |
| | output: count(*) |
| | |
| 00:SCAN HBASE [hbase.hbase_table] |
| predicates: cust_id IN ('some_user@example.com', 'other_user@example.com') |
+------------------------------------------------------------------------------------+
Either rewrite into separate queries for each value and combine the results in the application, or combine the single-row
queries using UNION ALL:
select count(*) from hbase_table where cust_id = 'some_user@example.com';
select count(*) from hbase_table where cust_id = 'other_user@example.com';
explain
select count(*) from hbase_table where cust_id = 'some_user@example.com'
union all
select count(*) from hbase_table where cust_id = 'other_user@example.com';
+------------------------------------------------------------------------------------+
| Explain String |
+------------------------------------------------------------------------------------+
...
| | 04:AGGREGATE |
| | | output: count(*) |
| | | |
| | 03:SCAN HBASE [hbase.hbase_table] |
| | start key: other_user@example.com |
| | stop key: other_user@example.com\0 |
| | |
| 10:MERGE |
...
| 02:AGGREGATE |
| | output: count(*) |
| | |
| 01:SCAN HBASE [hbase.hbase_table] |
| start key: some_user@example.com |
| stop key: some_user@example.com\0 |
+------------------------------------------------------------------------------------+
Configuration Options for Java HBase Applications
If you have an HBase Java application that calls the setCacheBlocks or setCaching methods of the class
org.apache.hadoop.hbase.client.Scan, you can set these same caching behaviors through Impala query options, to
control the memory pressure on the HBase RegionServer. For example, when doing queries in HBase that result in
full-table scans (which by default are inefficient for HBase), you can reduce memory usage and speed up the queries
by turning off the HBASE_CACHE_BLOCKS setting and specifying a large number for the HBASE_CACHING setting.
To set these options, issue commands like the following in impala-shell:
-- Same as calling setCacheBlocks(true) or setCacheBlocks(false).
set hbase_cache_blocks=true;
set hbase_cache_blocks=false;
-- Same as calling setCaching(rows).
set hbase_caching=1000;
Or update the impalad defaults file /etc/default/impala and include settings for HBASE_CACHE_BLOCKS and/or
HBASE_CACHING in the -default_query_options setting for IMPALA_SERVER_ARGS. See Modifying Impala Startup
Options for details.
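For example, a minimal sketch of the relevant line in /etc/default/impala, assuming the comma-separated option=value format for -default_query_options; any other startup flags already present in your IMPALA_SERVER_ARGS value are omitted here:
IMPALA_SERVER_ARGS=" \
  -default_query_options='hbase_cache_blocks=false,hbase_caching=1000'"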
Note: In Impala 2.0 and later, these options are settable through the JDBC or ODBC interfaces using
the SET statement.
Use Cases for Querying HBase through Impala
The following are representative use cases for using Impala to query HBase tables:
• Using HBase to store rapidly incrementing counters, such as how many times a web page has been viewed, or on
a social network, how many connections a user has or how many votes a post received. HBase is efficient for
capturing such changeable data: the append-only storage mechanism is efficient for writing each change to disk,
and a query always returns the latest value. An application could query specific totals like these from HBase, and
combine the results with a broader set of data queried from Impala.
• Storing very wide tables in HBase. Wide tables have many columns, possibly thousands, typically recording many
attributes for an important subject such as a user of an online service. These tables are also often sparse, that is,
most of the column values are NULL, 0, false, empty string, or some other blank or placeholder value. (For example,
any particular web site user might never have used some site feature, filled in a certain field in their profile, or visited
a particular part of the site.) A typical query against this kind of table is to look up a single row to retrieve
all the information about a specific subject, rather than summing, averaging, or filtering millions of rows as in
typical Impala-managed tables.
Loading Data into an HBase Table
The Impala INSERT statement works for HBase tables. The INSERT ... VALUES syntax is ideally suited to HBase
tables, because inserting a single row is an efficient operation for an HBase table. (For regular Impala tables, with data
files in HDFS, the tiny data files produced by INSERT ... VALUES are extremely inefficient, so you would not use
that technique with tables containing any significant data volume.)
When you use the INSERT ... SELECT syntax, the result in the HBase table could be fewer rows than you expect.
HBase only stores the most recent version of each unique row key, so if an INSERT ... SELECT statement copies
over multiple rows containing the same value for the key column, subsequent queries will only return one row with
each key column value:
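The following sketch illustrates this behavior; the table names and key values are hypothetical:
-- Suppose staging_data contains two rows whose key column value is '0001'.
insert into hbase_mapped_table select * from staging_data;
-- Only one row with key '0001' is visible afterwards; the other is overwritten.
select count(*) from hbase_mapped_table where cust_id = '0001';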
Although Impala does not have an UPDATE statement, you can achieve the same effect by doing successive INSERT
statements using the same value for the key column each time:
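The following sketch uses a hypothetical two-column table whose row key column is id:
insert into hbase_example values ('0001', 'original value');
insert into hbase_example values ('0001', 'updated value');
-- The query returns only the most recent version of the row.
select * from hbase_example where id = '0001';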
Limitations and Restrictions of the Impala and HBase Integration
The Impala integration with HBase has the following limitations and restrictions, some inherited from the integration
between HBase and Hive, and some unique to Impala:
• If you issue a DROP TABLE for an internal (Impala-managed) table that is mapped to an HBase table, the underlying
table is not removed in HBase. In contrast, the Hive DROP TABLE statement does remove the underlying HBase table in this case.
• The INSERT OVERWRITE statement is not available for HBase tables. You can insert new data, or modify an
existing row by inserting a new row with the same key value, but not replace the entire contents of the table. You
can do an INSERT OVERWRITE in Hive if you need this capability.
• If you issue a CREATE TABLE LIKE statement for a table mapped to an HBase table, the new table is also an
HBase table, but inherits the same underlying HBase table name as the original. The new table is effectively an
alias for the old one, not a new table with identical column structure. Avoid using CREATE TABLE LIKE for HBase
tables, to avoid any confusion.
• Copying data into an HBase table using the Impala INSERT ... SELECT syntax might produce fewer new rows
than are in the query result set. If the result set contains multiple rows with the same value for the key column,
each row supersedes any previous rows with the same key value. Because the order of the inserted rows is
unpredictable, you cannot rely on this technique to preserve the “latest” version of a particular key value.
• Because the complex data types (ARRAY, STRUCT, and MAP) available in CDH 5.5 / Impala 2.3 and higher are
currently only supported in Parquet tables, you cannot use these types in HBase tables that are queried through
Impala.
• The LOAD DATA statement cannot be used with HBase tables.
• The TABLESAMPLE clause of the SELECT statement does not apply to a table reference derived from a view, a
subquery, or anything other than a real base table. This clause only works for tables backed by HDFS or HDFS-like
data files, therefore it does not apply to Kudu or HBase tables.
Examples of Querying HBase Tables from Impala
The following examples create an HBase table with four column families, create a corresponding table through Hive,
then insert and query the table through Impala.
In HBase shell, the table name is quoted in CREATE and DROP statements. Tables created in HBase begin in “enabled”
state; before dropping them through the HBase shell, you must issue a disable 'table_name' statement.
$ hbase shell
15/02/10 16:07:45
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 0.94.2-cdh4.2.0, rUnknown, Fri Feb 15 11:51:18 PST 2013
hbase(main):001:0> create 'hbasealltypessmall', 'boolsCF', 'intsCF', 'floatsCF',
'stringsCF'
0 row(s) in 4.6520 seconds
=> Hbase::Table - hbasealltypessmall
hbase(main):006:0> quit
Issue the following CREATE TABLE statement in the Hive shell. (The Impala CREATE TABLE statement currently does
not support the STORED BY clause, so you switch into Hive to create the table, then back to Impala and the
impala-shell interpreter to issue the queries.)
This example creates an external table mapped to the HBase table, usable by both Impala and Hive. It is defined as an
external table so that when dropped by Impala or Hive, the original HBase table is not touched at all.
The WITH SERDEPROPERTIES clause specifies that the first column (ID) represents the row key, and maps the
remaining columns of the SQL table to HBase column families. The mapping relies on the ordinal order of the columns
in the table, not the column names in the CREATE TABLE statement. The first column is defined to be the lookup key;
the STRING data type produces the fastest key-based lookups for HBase tables.
Note: For Impala with HBase tables, the most important aspect to ensure good performance is to
use a STRING column as the row key, as shown in this example.
$ hive
...
hive> use hbase;
OK
Time taken: 4.095 seconds
hive> CREATE EXTERNAL TABLE hbasestringids (
> id string,
> bool_col boolean,
> tinyint_col tinyint,
> smallint_col smallint,
> int_col int,
> bigint_col bigint,
> float_col float,
> double_col double,
> date_string_col string,
> string_col string,
> timestamp_col timestamp)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" =
>
":key,boolsCF:bool_col,intsCF:tinyint_col,intsCF:smallint_col,intsCF:int_col,intsCF:\
> bigint_col,floatsCF:float_col,floatsCF:double_col,stringsCF:date_string_col,\
> stringsCF:string_col,stringsCF:timestamp_col"
> )
> TBLPROPERTIES("hbase.table.name" = "hbasealltypessmall");
OK
Time taken: 2.879 seconds
hive> quit;
Once you have established the mapping to an HBase table, you can issue DML statements and queries from Impala.
The following example shows a series of INSERT statements followed by a query. The ideal kind of query from a
performance standpoint retrieves a row from the table based on a row key mapped to a string column. An initial
INVALIDATE METADATA table_name statement makes the table created through Hive visible to Impala.
$ impala-shell -i localhost -d hbase
Starting Impala Shell without Kerberos authentication
Connected to localhost:21000
...
Query: use `hbase`
[localhost:21000] > invalidate metadata hbasestringids;
Fetched 0 row(s) in 0.09s
[localhost:21000] > desc hbasestringids;
+-----------------+-----------+---------+
| name | type | comment |
+-----------------+-----------+---------+
| id | string | |
| bool_col | boolean | |
| double_col | double | |
| float_col | float | |
| bigint_col | bigint | |
| int_col | int | |
| smallint_col | smallint | |
| tinyint_col | tinyint | |
| date_string_col | string | |
| string_col | string | |
| timestamp_col | timestamp | |
+-----------------+-----------+---------+
Fetched 11 row(s) in 0.02s
[localhost:21000] > insert into hbasestringids values
('0001',true,3.141,9.94,1234567,32768,4000,76,'2014-12-31','Hello world',now());
Inserted 1 row(s) in 0.26s
[localhost:21000] > insert into hbasestringids values
('0002',false,2.004,6.196,1500,8000,129,127,'2014-01-01','Foo bar',now());
Inserted 1 row(s) in 0.12s
[localhost:21000] > select * from hbasestringids where id = '0001';
+------+----------+------------+-------------------+------------+---------+--------------+-------------+-----------------+-------------+-------------------------------+
| id | bool_col | double_col | float_col | bigint_col | int_col | smallint_col
| tinyint_col | date_string_col | string_col | timestamp_col |
+------+----------+------------+-------------------+------------+---------+--------------+-------------+-----------------+-------------+-------------------------------+
| 0001 | true | 3.141 | 9.939999580383301 | 1234567 | 32768 | 4000
| 76 | 2014-12-31 | Hello world | 2015-02-10 16:36:59.764838000 |
+------+----------+------------+-------------------+------------+---------+--------------+-------------+-----------------+-------------+-------------------------------+
Fetched 1 row(s) in 0.54s
Note: After you create a table in Hive, such as the HBase mapping table in this example, issue an
INVALIDATE METADATA table_name statement the next time you connect to Impala, to make Impala
aware of the new table. (Prior to Impala 1.2.4, you could not specify the table name if Impala was not
aware of the table yet; in Impala 1.2.4 and higher, specifying the table name avoids reloading the
metadata for other tables that are not changed.)
Using Impala with the Amazon S3 Filesystem
Important:
In CDH 5.8 / Impala 2.6 and higher, Impala supports both queries (SELECT) and DML (INSERT, LOAD
DATA, CREATE TABLE AS SELECT) for data residing on Amazon S3. With the inclusion of write
support, the Impala support for S3 is now considered ready for production use.
You can use Impala to query data residing on the Amazon S3 filesystem. This capability allows convenient access to a
storage system that is remotely managed, accessible from anywhere, and integrated with various cloud-based services.
Impala can query files in any supported file format from S3. The S3 storage location can be for an entire table, or
individual partitions in a partitioned table.
The default Impala tables use data files stored on HDFS, which are ideal for bulk loads and queries using full-table
scans. In contrast, queries against S3 data are less performant, making S3 suitable for holding “cold” data that is only
queried occasionally, while more frequently accessed “hot” data resides in HDFS. In a partitioned table, you can set
the LOCATION attribute for individual partitions to put some partitions on HDFS and others on S3, typically depending
on the age of the data.
See Specifying Impala Credentials to Access Data in S3 for information about configuring Impala to use Amazon S3
filesystem.
How Impala SQL Statements Work with S3
Impala SQL statements work with data on S3 as follows:
• The CREATE TABLE Statement on page 234 or ALTER TABLE Statement on page 205 statements can specify that a
table resides on the S3 filesystem by encoding an s3a:// prefix for the LOCATION property. ALTER TABLE can
also set the LOCATION property for an individual partition, so that some data in a table resides on S3 and other
data in the same table resides on HDFS.
• Once a table or partition is designated as residing on S3, the SELECT Statement on page 295 statement transparently
accesses the data files from the appropriate storage layer.
• If the S3 table is an internal table, the DROP TABLE Statement on page 268 statement removes the corresponding
data files from S3 when the table is dropped.
• The TRUNCATE TABLE Statement (CDH 5.5 or higher only) on page 381 statement always removes the corresponding
data files from S3 when the table is truncated.
• The LOAD DATA Statement on page 288 can move data files residing in HDFS into an S3 table.
• The INSERT Statement on page 277 statement, or the CREATE TABLE AS SELECT form of the CREATE TABLE
statement, can copy data from an HDFS table or another S3 table into an S3 table. The S3_SKIP_INSERT_STAGING
Query Option (CDH 5.8 or higher only) on page 358 query option chooses whether or not to use a fast code path
for these write operations to S3, with the tradeoff of potential inconsistency in the case of a failure during the
statement.
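For example, a minimal sketch combining several of the statements above; the bucket, table, and partition values are hypothetical:
create table sales_s3 (id bigint, amount double) partitioned by (year int)
  location 's3a://impala-demo-bucket/sales';
alter table sales_s3 add partition (year=2015)
  location 's3a://impala-demo-bucket/sales/year=2015';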
For usage information about Impala SQL statements with S3 tables, see Creating Impala Databases, Tables, and Partitions
for Data Stored on S3 on page 694 and Using Impala DML Statements for S3 Data on page 693.
Loading Data into S3 for Impala Queries
If your ETL pipeline involves moving data into S3 and then querying through Impala, you can either use Impala DML
statements to create, move, or copy the data, or use the same data loading techniques as you would for non-Impala
data.
Using Impala DML Statements for S3 Data
In CDH 5.8 / Impala 2.6 and higher, the Impala DML statements (INSERT, LOAD DATA, and CREATE TABLE AS
SELECT) can write data into a table or partition that resides in the Amazon Simple Storage Service (S3). The syntax of
the DML statements is the same as for any other tables, because the S3 location for tables and partitions is specified
by an s3a:// prefix in the LOCATION attribute of CREATE TABLE or ALTER TABLE statements. If you bring data into
S3 using the normal S3 transfer mechanisms instead of Impala DML statements, issue a REFRESH statement for the
table before using Impala to query the S3 data.
Because of differences between S3 and traditional filesystems, DML operations for S3 tables can take longer than for
tables on HDFS. For example, both the LOAD DATA statement and the final stage of the INSERT and CREATE TABLE
AS SELECT statements involve moving files from one directory to another. (In the case of INSERT and CREATE TABLE
AS SELECT, the files are moved from a temporary staging directory to the final destination directory.) Because S3
does not support a “rename” operation for existing objects, in these cases Impala actually copies the data files from
one location to another and then removes the original files. In CDH 5.8 / Impala 2.6, the S3_SKIP_INSERT_STAGING
query option provides a way to speed up INSERT statements for S3 tables and partitions, with the tradeoff that a
problem during statement execution could leave data in an inconsistent state. It does not apply to INSERT OVERWRITE
or LOAD DATA statements. See S3_SKIP_INSERT_STAGING Query Option (CDH 5.8 or higher only) on page 358 for
details.
Manually Loading Data into Impala Tables on S3
As an alternative, or on earlier Impala releases without DML support for S3, you can use the Amazon-provided methods
to bring data files into S3 for querying through Impala. See the Amazon S3 web site for details.
Important:
For best compatibility with the S3 write support in CDH 5.8 / Impala 2.6 and higher:
• Use native Hadoop techniques to create data files in S3 for querying through Impala.
• Use the PURGE clause of DROP TABLE when dropping internal (managed) tables.
By default, when you drop an internal (managed) table, the data files are moved to the HDFS trashcan.
This operation is expensive for tables that reside on the Amazon S3 filesystem. Therefore, for S3 tables,
prefer to use DROP TABLE table_name PURGE rather than the default DROP TABLE statement.
The PURGE clause makes Impala delete the data files immediately, skipping the HDFS trashcan. For
the PURGE clause to work effectively, you must originally create the data files on S3 using one of the
tools from the Hadoop ecosystem, such as hadoop fs -cp, or INSERT in Impala or Hive.
Alternative file creation techniques (less compatible with the PURGE clause) include:
• The Amazon AWS / S3 web interface to upload from a web browser.
• The Amazon AWS CLI to manipulate files from the command line.
• Other S3-enabled software, such as the S3Tools client software.
After you upload data files to a location already mapped to an Impala table or partition, or if you delete files in S3 from
such a location, issue the REFRESH table_name statement to make Impala aware of the new set of data files.
Creating Impala Databases, Tables, and Partitions for Data Stored on S3
Impala reads data for a table or partition from S3 based on the LOCATION attribute for the table or partition. Specify
the S3 details in the LOCATION clause of a CREATE TABLE or ALTER TABLE statement. The notation for the LOCATION
clause is s3a://bucket_name/path/to/file. The filesystem prefix is always s3a:// because Impala does not
support the s3:// or s3n:// prefixes.
For a partitioned table, either specify a separate LOCATION clause for each new partition, or specify a base LOCATION
for the table and set up a directory structure in S3 to mirror the way Impala partitioned tables are structured in HDFS.
Although, strictly speaking, S3 filenames do not have directory paths, Impala treats S3 filenames with / characters the
same as HDFS pathnames that include directories.
You point a nonpartitioned table or an individual partition at S3 by specifying a single directory path in S3, which could
be any arbitrary directory. To replicate the structure of an entire Impala partitioned table or database in S3 requires
more care, with directories and subdirectories nested and named to match the equivalent directory tree in HDFS.
Consider setting up an empty staging area if necessary in HDFS, and recording the complete directory structure so that
you can replicate it in S3.
For convenience when working with multiple tables with data files stored in S3, you can create a database with a
LOCATION attribute pointing to an S3 path. Specify a URL of the form s3a://bucket/root/path/for/database
for the LOCATION attribute of the database. Any tables created inside that database automatically create directories
underneath the one specified by the database LOCATION attribute.
For example, the following session creates a partitioned table where only a single partition resides on S3. The partitions
for years 2013 and 2014 are located on HDFS. The partition for year 2015 includes a LOCATION attribute with an
s3a:// URL, and so refers to data residing on S3, under a specific path underneath the bucket impala-demo.
[localhost:21000] > create database db_on_hdfs;
[localhost:21000] > use db_on_hdfs;
[localhost:21000] > create table mostly_on_hdfs (x int) partitioned by (year int);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2013);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2014);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2015)
> location 's3a://impala-demo/dir1/dir2/dir3/t1';
The following session creates a database and two partitioned tables residing entirely on S3, one partitioned by a single
column and the other partitioned by multiple columns. Because a LOCATION attribute with an s3a:// URL is specified
for the database, the tables inside that database are automatically created on S3 underneath the database directory.
To see the names of the associated subdirectories, including the partition key values, we use an S3 client tool to examine
how the directory structure is organized on S3. For example, Impala partition directories such as month=1 do not
include leading zeroes, which sometimes appear in partition directories created through Hive.
[localhost:21000] > create database db_on_s3 location 's3a://impala-demo/dir1/dir2/dir3';
[localhost:21000] > use db_on_s3;
[localhost:21000] > create table partitioned_on_s3 (x int) partitioned by (year int);
[localhost:21000] > alter table partitioned_on_s3 add partition (year=2013);
[localhost:21000] > alter table partitioned_on_s3 add partition (year=2014);
[localhost:21000] > alter table partitioned_on_s3 add partition (year=2015);
[localhost:21000] > !aws s3 ls s3://impala-demo/dir1/dir2/dir3 --recursive;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_s3/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_s3/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_s3/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_s3/year=2015/
[localhost:21000] > create table partitioned_multiple_keys (x int)
> partitioned by (year smallint, month tinyint, day tinyint);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=1);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=31);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=2,day=28);
[localhost:21000] > !aws s3 ls s3://impala-demo/dir1/dir2/dir3 --recursive;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:47:13 0 dir1/dir2/dir3/partitioned_multiple_keys/
2015-03-17 16:47:44 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=1/
2015-03-17 16:47:50 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=31/
2015-03-17 16:47:57 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=2/day=28/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_s3/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_s3/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_s3/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_s3/year=2015/
The CREATE DATABASE and CREATE TABLE statements create the associated directory paths if they do not already
exist. You can specify multiple levels of directories, and the CREATE statement creates all appropriate levels, similar
to using mkdir -p.
Use the standard S3 file upload methods to actually put the data files into the right locations. You can also put the
directory paths and data files in place before creating the associated Impala databases or tables, and Impala automatically
uses the data from the appropriate location after the associated databases and tables are created.
You can switch whether an existing table or partition points to data in HDFS or S3. For example, if you have an Impala
table or partition pointing to data files in HDFS or S3, and you later transfer those data files to the other filesystem,
use an ALTER TABLE statement to adjust the LOCATION attribute of the corresponding table or partition to reflect
that change. Because Impala does not have an ALTER DATABASE statement, this location-switching technique is not
practical for entire databases that have a custom LOCATION attribute.
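For example, a minimal sketch of adjusting the LOCATION attribute after moving data; the table, partition, and paths are hypothetical:
-- Point one partition at S3 after copying its files there.
alter table sales partition (year=2014) set location 's3a://impala-demo-bucket/sales/year=2014';
-- Point the whole table back at an HDFS path.
alter table sales set location '/user/impala/warehouse/sales';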
Internal and External Tables Located on S3
Just as with tables located on HDFS storage, you can designate S3-based tables as either internal (managed by Impala)
or external, by using the syntax CREATE TABLE or CREATE EXTERNAL TABLE respectively. When you drop an internal
table, the files associated with the table are removed, even if they are on S3 storage. When you drop an external table,
the files associated with the table are left alone, and are still available for access by other tools or components. See
Overview of Impala Tables on page 196 for details.
If the data on S3 is intended to be long-lived and accessed by other tools in addition to Impala, create any associated
S3 tables with the CREATE EXTERNAL TABLE syntax, so that the files are not deleted from S3 when the table is
dropped.
If the data on S3 is only needed for querying by Impala and can be safely discarded once the Impala workflow is
complete, create the associated S3 tables using the CREATE TABLE syntax, so that dropping the table also deletes the
corresponding data files on S3.
For example, this session creates a table in S3 with the same column layout as a table in HDFS, then examines the S3
table and queries some data from it. The table in S3 works the same as a table in HDFS as far as the expected file format
of the data, table and column statistics, and other table properties. The only indication that it is not an HDFS table is
the s3a:// URL in the LOCATION property. Many data files can reside in the S3 directory, and their combined contents
form the table data. Because the data in this example is uploaded after the table is created, a REFRESH statement
prompts Impala to update its cached information about the data files.
[localhost:21000] > create table usa_cities_s3 like usa_cities location
's3a://impala-demo/usa_cities';
[localhost:21000] > desc usa_cities_s3;
+-------+----------+---------+
| name | type | comment |
+-------+----------+---------+
| id | smallint | |
| city | string | |
| state | string | |
+-------+----------+---------+
-- Now from a web browser, upload the same data file(s) to S3 as in the HDFS table,
-- under the relevant bucket and path. If you already have the data in S3, you would
-- point the table LOCATION at an existing path.
[localhost:21000] > refresh usa_cities_s3;
[localhost:21000] > select count(*) from usa_cities_s3;
+----------+
| count(*) |
+----------+
| 289 |
+----------+
[localhost:21000] > select distinct state from usa_cities_s3 limit 5;
+----------------------+
| state |
+----------------------+
| Louisiana |
| Minnesota |
| Georgia |
| Alaska |
| Ohio |
+----------------------+
[localhost:21000] > desc formatted usa_cities_s3;
+------------------------------+------------------------------+---------+
| name | type | comment |
+------------------------------+------------------------------+---------+
| # col_name | data_type | comment |
| | NULL | NULL |
| id | smallint | NULL |
| city | string | NULL |
| state | string | NULL |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | s3_testing | NULL |
| Owner: | jrussell | NULL |
| CreateTime: | Mon Mar 16 11:36:25 PDT 2015 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Protect Mode: | None | NULL |
| Retention: | 0 | NULL |
| Location: | s3a://impala-demo/usa_cities | NULL |
| Table Type: | MANAGED_TABLE | NULL |
...
+------------------------------+------------------------------+---------+
In this case, we have already uploaded a Parquet file with a million rows of data to the sample_data directory
underneath the impala-demo bucket on S3. This session creates a table with matching column settings pointing to
the corresponding location in S3, then queries the table. Because the data is already in place on S3 when the table is
created, no REFRESH statement is required.
[localhost:21000] > create table sample_data_s3
> (id bigint, val int, zerofill string,
> name string, assertion boolean, city string, state string)
> stored as parquet location 's3a://impala-demo/sample_data';
[localhost:21000] > select count(*) from sample_data_s3;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
[localhost:21000] > select count(*) howmany, assertion from sample_data_s3 group by
assertion;
+---------+-----------+
| howmany | assertion |
+---------+-----------+
| 667149 | true |
| 332851 | false |
+---------+-----------+
Running and Tuning Impala Queries for Data Stored on S3
Once the appropriate LOCATION attributes are set up at the table or partition level, you query data stored in S3 exactly
the same as data stored on HDFS or in HBase:
• Queries against S3 data support all the same file formats as for HDFS data.
• Tables can be unpartitioned or partitioned. For partitioned tables, either manually construct paths in S3
corresponding to the HDFS directories representing partition key values, or use ALTER TABLE ... ADD
PARTITION to set up the appropriate paths in S3.
• HDFS and HBase tables can be joined to S3 tables, or S3 tables can be joined with each other.
• Authorization using the Sentry framework to control access to databases, tables, or columns works the same
whether the data is in HDFS or in S3.
• The catalogd daemon caches metadata for both HDFS and S3 tables. Use REFRESH and INVALIDATE METADATA
for S3 tables in the same situations where you would issue those statements for HDFS tables.
• Queries against S3 tables are subject to the same kinds of admission control and resource management as HDFS
tables.
• Metadata about S3 tables is stored in the same metastore database as for HDFS tables.
• You can set up views referring to S3 tables, the same as for HDFS tables.
• The COMPUTE STATS, SHOW TABLE STATS, and SHOW COLUMN STATS statements work for S3 tables also.
Understanding and Tuning Impala Query Performance for S3 Data
Although Impala queries for data stored in S3 might be less performant than queries against the equivalent data stored
in HDFS, you can still do some tuning. Here are techniques you can use to interpret explain plans and profiles for queries
against S3 data, and tips to achieve the best performance possible for such queries.
All else being equal, performance is expected to be lower for queries running against data on S3 rather than HDFS.
The actual mechanics of the SELECT statement are somewhat different when the data is in S3. Although the work is
still distributed across the datanodes of the cluster, Impala might parallelize the work for a distributed query differently
for data on HDFS and S3. S3 does not have the same block notion as HDFS, so Impala uses heuristics to determine how
to split up large S3 files for processing in parallel. Because all hosts can access any S3 data file with equal efficiency,
the distribution of work might be different than for HDFS data, where the data blocks are physically read using
short-circuit local reads by hosts that contain the appropriate block replicas. Although the I/O to read the S3 data might
be spread evenly across the hosts of the cluster, the fact that all data is initially retrieved across the network means
that the overall query performance is likely to be lower for S3 data than for HDFS data.
In CDH 5.8 / Impala 2.6 and higher, Impala queries are optimized for files stored in Amazon S3. For Impala tables that
use the file formats Parquet, ORC, RCFile, SequenceFile, Avro, and uncompressed text, the setting fs.s3a.block.size
in the core-site.xml configuration file determines how Impala divides the I/O work of reading the data files. This
configuration setting is specified in bytes. By default, this value is 33554432 (32 MB), meaning that Impala parallelizes
S3 read operations on the files as if they were made up of 32 MB blocks. For example, if your S3 queries primarily
access Parquet files written by MapReduce or Hive, increase fs.s3a.block.size to 134217728 (128 MB) to match
the row group size of those files. If most S3 queries involve Parquet files written by Impala, increase
fs.s3a.block.size to 268435456 (256 MB) to match the row group size produced by Impala.
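For example, a core-site.xml snippet that sets the 128 MB value described above:
<property>
  <name>fs.s3a.block.size</name>
  <value>134217728</value>
</property>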
Because of differences between S3 and traditional filesystems, DML operations for S3 tables can take longer than for
tables on HDFS. For example, both the LOAD DATA statement and the final stage of the INSERT and CREATE TABLE
AS SELECT statements involve moving files from one directory to another. (In the case of INSERT and CREATE TABLE
AS SELECT, the files are moved from a temporary staging directory to the final destination directory.) Because S3
does not support a “rename” operation for existing objects, in these cases Impala actually copies the data files from
one location to another and then removes the original files. In CDH 5.8 / Impala 2.6, the S3_SKIP_INSERT_STAGING
query option provides a way to speed up INSERT statements for S3 tables and partitions, with the tradeoff that a
problem during statement execution could leave data in an inconsistent state. It does not apply to INSERT OVERWRITE
or LOAD DATA statements. See S3_SKIP_INSERT_STAGING Query Option (CDH 5.8 or higher only) on page 358 for
details.
When optimizing aspects of complex queries such as the join order, Impala treats tables on HDFS and S3 the same
way. Therefore, follow all the same tuning recommendations for S3 tables as for HDFS ones, such as using the COMPUTE
STATS statement to help Impala construct accurate estimates of row counts and cardinality. See Tuning Impala for
Performance on page 565 for details.
In query profile reports, the numbers for BytesReadLocal, BytesReadShortCircuit, BytesReadDataNodeCached,
and BytesReadRemoteUnexpected are blank because those metrics come from HDFS. If you do see any indications
that a query against an S3 table performed “remote read” operations, do not be alarmed. That is expected because,
by definition, all the I/O for S3 tables involves remote reads.
Restrictions on Impala Support for S3
Impala requires that the default filesystem for the cluster be HDFS. You cannot use S3 as the only filesystem in the
cluster.
Prior to CDH 5.8 / Impala 2.6, Impala could not perform DML operations (INSERT, LOAD DATA, or CREATE TABLE AS
SELECT) where the destination is a table or partition located on an S3 filesystem. This restriction is lifted in CDH 5.8 /
Impala 2.6 and higher.
Impala does not support the old s3:// block-based and s3n:// filesystem schemes, only s3a://.
Although S3 is often used to store JSON-formatted data, the current Impala support for S3 does not include directly
querying JSON data. For Impala queries, use data files in one of the file formats listed in How Impala Works with Hadoop
File Formats on page 634. If you have data in JSON format, you can prepare a flattened version of that data for querying
by Impala as part of your ETL cycle.
You cannot use the ALTER TABLE ... SET CACHED statement for tables or partitions that are located in S3.
Best Practices for Using Impala with S3
The following guidelines represent best practices derived from testing and field experience with Impala on S3:
• Any reference to an S3 location must be fully qualified. (This rule applies when S3 is not designated as the default
filesystem.)
• Set the safety valve fs.s3a.connection.maximum to 1500 for impalad.
• Set safety valve fs.s3a.block.size to 134217728 (128 MB in bytes) if most Parquet files queried by Impala
were written by Hive or ParquetMR jobs. Set the block size to 268435456 (256 MB in bytes) if most Parquet files
queried by Impala were written by Impala.
• DROP TABLE .. PURGE is much faster than the default DROP TABLE. The same applies to ALTER TABLE ...
DROP PARTITION PURGE versus the default DROP PARTITION operation. However, due to the eventually
consistent nature of S3, the files for that table or partition could remain for some unbounded time when using
PURGE. The default DROP TABLE/PARTITION is slow because Impala copies the files to the HDFS trash folder,
and Impala waits until all the data is moved. DROP TABLE/PARTITION .. PURGE is a fast delete operation, and
the Impala statement finishes quickly even though the change might not have propagated fully throughout S3.
• INSERT statements are faster than INSERT OVERWRITE for S3. The query option S3_SKIP_INSERT_STAGING,
which is set to true by default, skips the staging step for regular INSERT (but not INSERT OVERWRITE). This
makes the operation much faster, but consistency is not guaranteed: if a node fails during execution, the table
could end up with inconsistent data. Set this option to false if stronger consistency is required; however, this
setting makes the INSERT operations slower.
• Too many files in a table can make metadata loading and updating slow on S3. If too many requests are made to
S3, S3 has a back-off mechanism and responds slower than usual. You might have many small files because of:
– Too many partitions due to over-granular partitioning. Prefer partitions with many megabytes of data, so
that even a query against a single partition can be parallelized effectively.
– Many small INSERT queries. Prefer bulk INSERTs so that more data is written to fewer files.
Specifying Impala Credentials to Access Data in S3 with Cloudera Manager
Cloudera recommends that you use Cloudera Manager to specify Impala credentials to access data in Amazon S3. If
you are not using Cloudera Manager, see Specifying Impala Credentials to Access Data in S3 from the command line.
To configure access to data stored in S3 for Impala with Cloudera Manager, use one of the following authentication
types:
• IAM Role-based Authentication
Amazon Identity & Access Management (IAM). You must set up IAM role-based authentication in Amazon. See
Amazon documentation. This authentication method is best suited for environments where there is a single user,
or where all cluster users can have the same privileges to data in S3. See How to Configure AWS Credentials for
information about using IAM role-based authentication with Cloudera Manager.
• Access Key Authentication
For environments where you have multiple users or multi-tenancy, use an AWS access key and an AWS secret key
that you obtain from Amazon. See Amazon documentation. For this scenario, you must enable the Sentry service
and Kerberos to use the S3 Connector service. Cloudera Manager stores your AWS credentials securely and does
not store them in world-readable locations. If you can use the Sentry service and Kerberos, see the following
sections to add your AWS credentials to Cloudera Manager and to manage them:
– Adding AWS Credentials
– Managing AWS Credentials
Note: If you cannot use the Sentry service or Kerberos in your environment, see the next section,
Specifying Impala Credentials on Clusters Not Secured by Sentry or Kerberos on page 699.
Specifying Impala Credentials on Clusters Not Secured by Sentry or Kerberos
If you cannot use the Sentry service or Kerberos in your environment, specify Impala credentials in the Cluster-wide
Advanced Configuration Snippet (Safety Valve) for core-site.xml. For example:
<property>
  <name>fs.s3a.access.key</name>
  <value>your_access_key</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>your_secret_key</value>
</property>
Specifying your credentials in this safety valve does not require Kerberos or the Sentry service, but it is not as secure.
After specifying the credentials, restart both the Impala and Hive services. Restarting Hive is required because operations
such as Impala queries and CREATE TABLE statements go through the Hive metastore.
Specifying Impala Credentials to Access Data in S3
To allow Impala to access data in S3, specify values for the following configuration settings in your core-site.xml
file:
<property>
  <name>fs.s3a.access.key</name>
  <value>your_access_key</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>your_secret_key</value>
</property>
After specifying the credentials, restart both the Impala and Hive services. Restarting Hive is required because operations
such as Impala queries and CREATE TABLE statements go through the Hive metastore.
Important:
Although you can specify the access key ID and secret key as part of the s3a:// URL in the LOCATION
attribute, doing so makes this sensitive information visible in many places, such as DESCRIBE
FORMATTED output and Impala log files. Therefore, specify this information centrally in the
core-site.xml file, and restrict read access to that file to only trusted users.
Using Impala with the Azure Data Lake Store (ADLS)
You can use Impala to query data residing on the Azure Data Lake Store (ADLS) filesystem. This capability allows
convenient access to a storage system that is remotely managed, accessible from anywhere, and integrated with
various cloud-based services. Impala can query files in any supported file format from ADLS. The ADLS storage location
can be for an entire table or individual partitions in a partitioned table.
The default Impala tables use data files stored on HDFS, which are ideal for bulk loads and queries using full-table
scans. In contrast, queries against ADLS data are less performant, making ADLS suitable for holding “cold” data that is
only queried occasionally, while more frequently accessed “hot” data resides in HDFS. In a partitioned table, you can
set the LOCATION attribute for individual partitions to put some partitions on HDFS and others on ADLS, typically
depending on the age of the data.
Starting in CDH 6.1, Impala supports the ADLS Gen2 filesystem, Azure Blob File System (ABFS).
Prerequisites
These procedures presume that you have already set up an Azure account, configured an ADLS store, and configured
your Hadoop cluster with appropriate credentials to be able to access ADLS. See the following resources for information:
• Get started with Azure Data Lake Store using the Azure Portal
• Azure Data Lake Storage Gen2
• Hadoop Azure Data Lake Support
How Impala SQL Statements Work with ADLS
Impala SQL statements work with data on ADLS as follows.
• The CREATE TABLE Statement on page 234 or ALTER TABLE Statement on page 205 statements can specify that a
table resides on the ADLS filesystem by using one of the following ADLS prefixes in the LOCATION property.
• For ADLS Gen1: adl://
• For ADLS Gen2: abfs:// or abfss://
ALTER TABLE can also set the LOCATION property for an individual partition, so that some data in a table resides
on ADLS and other data in the same table resides on HDFS.
See Creating Impala Databases, Tables, and Partitions for Data Stored on ADLS on page 703 for usage information.
• Once a table or partition is designated as residing on ADLS, the SELECT Statement on page 295 statement
transparently accesses the data files from the appropriate storage layer.
• If the ADLS table is an internal table, the DROP TABLE Statement on page 268 statement removes the corresponding
data files from ADLS when the table is dropped.
• The TRUNCATE TABLE Statement (CDH 5.5 or higher only) on page 381 statement always removes the corresponding
data files from ADLS when the table is truncated.
• The LOAD DATA Statement on page 288 can move data files residing in HDFS into an ADLS table.
• The INSERT Statement on page 277, or the CREATE TABLE AS SELECT form of the CREATE TABLE statement,
can copy data from an HDFS table or another ADLS table into an ADLS table.
For usage information about Impala SQL statements with ADLS tables, see Using Impala DML Statements for ADLS
Data on page 703.
Specifying Impala Credentials to Access Data in ADLS
You can configure credentials to access ADLS in Cloudera Manager or in plain text.
When you configure credentials using Cloudera Manager, it provides a more secure way to access ADLS using credentials
that are not stored in plain-text files. See Configuring ADLS Access Using Cloudera Manager for the steps to configure
ADLS credentials using Cloudera Manager.
Important: Cloudera recommends that you only use the plain text method for accessing ADLS in
development environments or other environments where security is not a concern.
To allow Impala to access data in ADLS using credentials in plain text, specify values for the following configuration
settings in your core-site.xml file:
For ADLS Gen1:
<property>
  <name>dfs.adls.oauth2.access.token.provider.type</name>
  <value>ClientCredential</value>
</property>
<property>
  <name>dfs.adls.oauth2.client.id</name>
  <value>your_client_id</value>
</property>
<property>
  <name>dfs.adls.oauth2.credential</name>
  <value>your_client_secret</value>
</property>
<property>
  <name>dfs.adls.oauth2.refresh.url</name>
  <value>https://login.windows.net/your_azure_tenant_id/oauth2/token</value>
</property>
For ADLS Gen2:
<property>
  <name>fs.azure.account.auth.type</name>
  <value>OAuth</value>
</property>
<property>
  <name>fs.azure.account.oauth.provider.type</name>
  <value>org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.id</name>
  <value>your_client_id</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.secret</name>
  <value>your_client_secret</value>
</property>
<property>
  <name>fs.azure.account.oauth2.client.endpoint</name>
  <value>https://login.microsoftonline.com/your_azure_tenant_id/oauth2/token</value>
</property>
Note:
Check if your Hadoop distribution or cluster management tool includes support for filling in and
distributing credentials across the cluster in an automated way.
After specifying the credentials, restart both the Impala and Hive services. Restarting Hive is required because Impala
DDL statements, such as the CREATE TABLE statements, go through the Hive metastore.
Loading Data into ADLS for Impala Queries
If your ETL pipeline involves moving data into ADLS and then querying through Impala, you can either use Impala DML
statements to create, move, or copy the data, or use the same data loading techniques as you would for non-Impala
data.
Using Impala DML Statements for ADLS Data
In CDH 5.12 / Impala 2.9 and higher, the Impala DML statements (INSERT, LOAD DATA, and CREATE TABLE AS
SELECT) can write data into a table or partition that resides in the Azure Data Lake Store (ADLS). ADLS Gen2 is supported
in CDH 6.1 and higher.
In the CREATE TABLE or ALTER TABLE statements, specify the ADLS location for tables and partitions with the adl://
prefix for ADLS Gen1, and the abfs:// or abfss:// prefix for ADLS Gen2, in the LOCATION attribute.
If you bring data into ADLS using the normal ADLS transfer mechanisms instead of Impala DML statements, issue a
REFRESH statement for the table before using Impala to query the ADLS data.
Manually Loading Data into Impala Tables on ADLS
As an alternative, you can use the Microsoft-provided methods to bring data files into ADLS for querying through
Impala. See the Microsoft ADLS documentation for details.
After you upload data files to a location already mapped to an Impala table or partition, or if you delete files in ADLS
from such a location, issue the REFRESH table_name statement to make Impala aware of the new set of data files.
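For example, assuming a table named web_logs_adls whose LOCATION already points at an ADLS directory, a sketch
of the sequence is:

-- After uploading or deleting files (for example with hadoop fs -put or an Azure upload
-- tool) in the directory mapped to this table, make Impala aware of the change:
REFRESH web_logs_adls;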
Creating Impala Databases, Tables, and Partitions for Data Stored on ADLS
Impala reads data for a table or partition from ADLS based on the LOCATION attribute for the table or partition. Specify
the ADLS details in the LOCATION clause of a CREATE TABLE or ALTER TABLE statement. The syntax for the LOCATION
clause is:
• For ADLS Gen1:
adl://account.azuredatalakestore.net/path/file
• For ADLS Gen2:
abfs://container@account.dfs.core.windows.net/path/file
or
abfss://container@account.dfs.core.windows.net/path/file
container denotes the parent location that holds the files and folders, corresponding to a container in the Azure Storage
Blobs service.
account is the name of your storage account.
Note:
By default, TLS is enabled with both abfs:// and abfss://.
When you set the fs.azure.always.use.https=false property, TLS is disabled with abfs://,
but remains enabled with abfss://.
For a partitioned table, either specify a separate LOCATION clause for each new partition, or specify a base LOCATION
for the table and set up a directory structure in ADLS to mirror the way Impala partitioned tables are structured in
HDFS. Although, strictly speaking, ADLS filenames do not have directory paths, Impala treats ADLS filenames with /
characters the same as HDFS pathnames that include directories.
To point a nonpartitioned table or an individual partition at ADLS, specify a single directory path in ADLS, which could
be any arbitrary directory. Replicating the structure of an entire Impala partitioned table or database in ADLS requires
more care, with directories and subdirectories nested and named to match the equivalent directory tree in HDFS.
Consider setting up an empty staging area if necessary in HDFS, and recording the complete directory structure so that
you can replicate it in ADLS.
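One way to stage such a directory tree is sketched below with Hadoop commands, assuming the adl:// connector is
configured on the cluster; the store name and paths are placeholders.

hadoop fs -mkdir -p 'adl://impalademo.azuredatalakestore.net/sales_base/year=2015/month=1'
hadoop fs -mkdir -p 'adl://impalademo.azuredatalakestore.net/sales_base/year=2015/month=2'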
For example, the following session creates a partitioned table where only a single partition resides on ADLS. The
partitions for years 2013 and 2014 are located on HDFS. The partition for year 2015 includes a LOCATION attribute
with an adl:// URL, and so refers to data residing on ADLS, under a specific path underneath the store impalademo.
[localhost:21000] > create database db_on_hdfs;
[localhost:21000] > use db_on_hdfs;
[localhost:21000] > create table mostly_on_hdfs (x int) partitioned by (year int);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2013);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2014);
[localhost:21000] > alter table mostly_on_hdfs add partition (year=2015)
> location
'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3/t1';
For convenience when working with multiple tables with data files stored in ADLS, you can create a database with a
LOCATION attribute pointing to an ADLS path. Specify a URL of the form as shown above. Any tables created inside
that database automatically create directories underneath the one specified by the database LOCATION attribute.
The following session creates a database and two partitioned tables residing entirely on ADLS, one partitioned by a
single column and the other partitioned by multiple columns. Because a LOCATION attribute with an adl:// URL is
specified for the database, the tables inside that database are automatically created on ADLS underneath the database
directory. To see the names of the associated subdirectories, including the partition key values, we use an ADLS client
tool to examine how the directory structure is organized on ADLS. For example, Impala partition directories such as
month=1 do not include leading zeroes, which sometimes appear in partition directories created through Hive.
[localhost:21000] > create database db_on_adls location
'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3';
[localhost:21000] > use db_on_adls;
[localhost:21000] > create table partitioned_on_adls (x int) partitioned by (year int);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2013);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2014);
[localhost:21000] > alter table partitioned_on_adls add partition (year=2015);
[localhost:21000] > ! hadoop fs -ls adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3
--recursive;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_adls/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_adls/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_adls/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_adls/year=2015/
[localhost:21000] > create table partitioned_multiple_keys (x int)
> partitioned by (year smallint, month tinyint, day tinyint);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=1);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=1,day=31);
[localhost:21000] > alter table partitioned_multiple_keys
> add partition (year=2015,month=2,day=28);
[localhost:21000] > ! hadoop fs -ls adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3
--recursive;
2015-03-17 13:56:34 0 dir1/dir2/dir3/
2015-03-17 16:47:13 0 dir1/dir2/dir3/partitioned_multiple_keys/
2015-03-17 16:47:44 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=1/
2015-03-17 16:47:50 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=1/day=31/
2015-03-17 16:47:57 0
dir1/dir2/dir3/partitioned_multiple_keys/year=2015/month=2/day=28/
2015-03-17 16:43:28 0 dir1/dir2/dir3/partitioned_on_adls/
2015-03-17 16:43:49 0 dir1/dir2/dir3/partitioned_on_adls/year=2013/
2015-03-17 16:43:53 0 dir1/dir2/dir3/partitioned_on_adls/year=2014/
2015-03-17 16:43:58 0 dir1/dir2/dir3/partitioned_on_adls/year=2015/
The CREATE DATABASE and CREATE TABLE statements create the associated directory paths if they do not already
exist. You can specify multiple levels of directories, and the CREATE statement creates all appropriate levels, similar
to using mkdir -p.
Use the standard ADLS file upload methods to actually put the data files into the right locations. You can also put the
directory paths and data files in place before creating the associated Impala databases or tables, and Impala automatically
uses the data from the appropriate location after the associated databases and tables are created.
You can switch whether an existing table or partition points to data in HDFS or ADLS. For example, if you have an Impala
table or partition pointing to data files in HDFS or ADLS, and you later transfer those data files to the other filesystem,
use an ALTER TABLE statement to adjust the LOCATION attribute of the corresponding table or partition to reflect
that change. This location-switching technique is not practical for entire databases that have a custom LOCATION
attribute.
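For example, continuing with the mostly_on_hdfs table shown above, if the data files for the year=2014 partition were
later copied to ADLS, a sketch of the switch is as follows; the destination path is a placeholder.

ALTER TABLE mostly_on_hdfs PARTITION (year=2014)
  SET LOCATION 'adl://impalademo.azuredatalakestore.net/dir1/dir2/dir3/t1_2014';
-- Pick up the data files at the new location.
REFRESH mostly_on_hdfs;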
Internal and External Tables Located on ADLS
Just as with tables located on HDFS storage, you can designate ADLS-based tables as either internal (managed by
Impala) or external, by using the syntax CREATE TABLE or CREATE EXTERNAL TABLE respectively. When you drop
an internal table, the files associated with the table are removed, even if they are on ADLS storage. When you drop
an external table, the files associated with the table are left alone, and are still available for access by other tools or
components. See Overview of Impala Tables on page 196 for details.
If the data on ADLS is intended to be long-lived and accessed by other tools in addition to Impala, create any associated
ADLS tables with the CREATE EXTERNAL TABLE syntax, so that the files are not deleted from ADLS when the table
is dropped.
If the data on ADLS is only needed for querying by Impala and can be safely discarded once the Impala workflow is
complete, create the associated ADLS tables using the CREATE TABLE syntax, so that dropping the table also deletes
the corresponding data files on ADLS.
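The following sketch contrasts the two forms; the table names and ADLS paths are placeholders.

-- External: dropping the table leaves the files in ADLS for other tools.
CREATE EXTERNAL TABLE sensor_data_adls (sensor_id INT, reading DOUBLE)
  STORED AS PARQUET
  LOCATION 'adl://impalademo.azuredatalakestore.net/sensor_data';

-- Internal: dropping the table also removes the data files from ADLS.
CREATE TABLE scratch_data_adls (x INT)
  LOCATION 'adl://impalademo.azuredatalakestore.net/scratch_data';
DROP TABLE scratch_data_adls;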
For example, this session creates a table in ADLS with the same column layout as a table in HDFS, then examines the
ADLS table and queries some data from it. The table in ADLS works the same as a table in HDFS as far as the expected
file format of the data, table and column statistics, and other table properties. The only indication that it is not an HDFS
table is the adl:// URL in the LOCATION property. Many data files can reside in the ADLS directory, and their combined
contents form the table data. Because the data in this example is uploaded after the table is created, a REFRESH
statement prompts Impala to update its cached information about the data files.
[localhost:21000] > create table usa_cities_adls like usa_cities location
'adl://impalademo.azuredatalakestore.net/usa_cities';
[localhost:21000] > desc usa_cities_adls;
+-------+----------+---------+
| name | type | comment |
+-------+----------+---------+
| id | smallint | |
| city | string | |
| state | string | |
+-------+----------+---------+
-- Now from a web browser, upload the same data file(s) to ADLS as in the HDFS table,
-- under the relevant store and path. If you already have the data in ADLS, you would
-- point the table LOCATION at an existing path.
[localhost:21000] > refresh usa_cities_adls;
[localhost:21000] > select count(*) from usa_cities_adls;
+----------+
| count(*) |
+----------+
| 289 |
+----------+
[localhost:21000] > select distinct state from usa_cities_adls limit 5;
+----------------------+
| state |
+----------------------+
| Louisiana |
| Minnesota |
| Georgia |
| Alaska |
| Ohio |
+----------------------+
[localhost:21000] > desc formatted usa_cities_adls;
+------------------------------+----------------------------------------------------+---------+
| name                         | type                                               | comment |
+------------------------------+----------------------------------------------------+---------+
| # col_name                   | data_type                                          | comment |
|                              | NULL                                               | NULL    |
| id                           | smallint                                           | NULL    |
| city                         | string                                             | NULL    |
| state                        | string                                             | NULL    |
|                              | NULL                                               | NULL    |
| # Detailed Table Information | NULL                                               | NULL    |
| Database:                    | adls_testing                                       | NULL    |
| Owner:                       | jrussell                                           | NULL    |
| CreateTime:                  | Mon Mar 16 11:36:25 PDT 2017                       | NULL    |
| LastAccessTime:              | UNKNOWN                                            | NULL    |
| Protect Mode:                | None                                               | NULL    |
| Retention:                   | 0                                                  | NULL    |
| Location:                    | adl://impalademo.azuredatalakestore.net/usa_cities | NULL    |
| Table Type:                  | MANAGED_TABLE                                      | NULL    |
...
+------------------------------+----------------------------------------------------+---------+
In this case, we have already uploaded a Parquet file with a million rows of data to the sample_data directory
underneath the impalademo store on ADLS. This session creates a table with matching column settings pointing to
the corresponding location in ADLS, then queries the table. Because the data is already in place on ADLS when the
table is created, no REFRESH statement is required.
[localhost:21000] > create table sample_data_adls
> (id bigint, val int, zerofill string,
> name string, assertion boolean, city string, state string)
> stored as parquet location
'adl://impalademo.azuredatalakestore.net/sample_data';
[localhost:21000] > select count(*) from sample_data_adls;
+----------+
| count(*) |
+----------+
| 1000000 |
+----------+
[localhost:21000] > select count(*) howmany, assertion from sample_data_adls group by assertion;
+---------+-----------+
| howmany | assertion |
+---------+-----------+
| 667149 | true |
| 332851 | false |
+---------+-----------+
Running and Tuning Impala Queries for Data Stored on ADLS
Once the appropriate LOCATION attributes are set up at the table or partition level, you query data stored in ADLS
exactly the same as data stored on HDFS or in HBase:
• Queries against ADLS data support all the same file formats as for HDFS data.
• Tables can be unpartitioned or partitioned. For partitioned tables, either manually construct paths in ADLS
corresponding to the HDFS directories representing partition key values, or use ALTER TABLE ... ADD
PARTITION to set up the appropriate paths in ADLS.
• HDFS, Kudu, and HBase tables can be joined to ADLS tables, or ADLS tables can be joined with each other.
• Authorization using the Sentry framework to control access to databases, tables, or columns works the same
whether the data is in HDFS or in ADLS.
• The catalogd daemon caches metadata for both HDFS and ADLS tables. Use REFRESH and INVALIDATE
METADATA for ADLS tables in the same situations where you would issue those statements for HDFS tables.
• Queries against ADLS tables are subject to the same kinds of admission control and resource management as
HDFS tables.
• Metadata about ADLS tables is stored in the same metastore database as for HDFS tables.
• You can set up views referring to ADLS tables, the same as for HDFS tables.
• The COMPUTE STATS, SHOW TABLE STATS, and SHOW COLUMN STATS statements work for ADLS tables also.
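For example, an ADLS-backed table can be analyzed and joined just like an HDFS-backed one; in this sketch,
usa_cities_adls is the table created earlier and orders_hdfs is a hypothetical HDFS table.

COMPUTE STATS usa_cities_adls;
SELECT c.state, COUNT(*) AS num_orders
FROM usa_cities_adls c JOIN orders_hdfs o ON c.id = o.city_id
GROUP BY c.state;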
Understanding and Tuning Impala Query Performance for ADLS Data
Although Impala queries for data stored in ADLS might be less performant than queries against the equivalent data
stored in HDFS, you can still do some tuning. Here are techniques you can use to interpret explain plans and profiles
for queries against ADLS data, and tips to achieve the best performance possible for such queries.
All else being equal, performance is expected to be lower for queries running against data on ADLS rather than HDFS.
The actual mechanics of the SELECT statement are somewhat different when the data is in ADLS. Although the work
is still distributed across the datanodes of the cluster, Impala might parallelize the work for a distributed query differently
for data on HDFS and ADLS. ADLS does not have the same block notion as HDFS, so Impala uses heuristics to determine
how to split up large ADLS files for processing in parallel. Because all hosts can access any ADLS data file with equal
efficiency, the distribution of work might be different than for HDFS data, where the data blocks are physically read
using short-circuit local reads by hosts that contain the appropriate block replicas. Although the I/O to read the ADLS
data might be spread evenly across the hosts of the cluster, the fact that all data is initially retrieved across the network
means that the overall query performance is likely to be lower for ADLS data than for HDFS data.
Because data files written to ADLS do not have a default block size the way HDFS data files do, any Impala INSERT or
CREATE TABLE AS SELECT statements use the PARQUET_FILE_SIZE query option setting to define the size of
Parquet data files. (Using a large block size is more important for Parquet tables than for tables that use other file
formats.)
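For example, a sketch of an INSERT that writes larger Parquet files into the sample_data_adls table shown earlier; the
256m value and the source table sample_data are illustrative only.

set PARQUET_FILE_SIZE=256m;
insert overwrite sample_data_adls select * from sample_data;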
When optimizing aspects of complex queries such as the join order, Impala treats tables on HDFS and ADLS the
same way. Therefore, follow all the same tuning recommendations for ADLS tables as for HDFS ones, such as using the
COMPUTE STATS statement to help Impala construct accurate estimates of row counts and cardinality. See Tuning
Impala for Performance on page 565 for details.
In query profile reports, the numbers for BytesReadLocal, BytesReadShortCircuit, BytesReadDataNodeCached,
and BytesReadRemoteUnexpected are blank because those metrics come from HDFS. If you do see any indications
that a query against an ADLS table performed “remote read” operations, do not be alarmed. That is expected because,
by definition, all the I/O for ADLS tables involves remote reads.
Restrictions on Impala Support for ADLS
Impala requires that the default filesystem for the cluster be HDFS. You cannot use ADLS as the only filesystem in the
cluster.
Although ADLS is often used to store JSON-formatted data, the current Impala support for ADLS does not include
directly querying JSON data. For Impala queries, use data files in one of the file formats listed in How Impala Works
with Hadoop File Formats on page 634. If you have data in JSON format, you can prepare a flattened version of that
data for querying by Impala as part of your ETL cycle.
You cannot use the ALTER TABLE ... SET CACHED statement for tables or partitions that are located in ADLS.
Best Practices for Using Impala with ADLS
The following guidelines represent best practices derived from testing and real-world experience with Impala on ADLS:
• Any reference to an ADLS location must be fully qualified. (This rule applies when ADLS is not designated as the
default filesystem.)
• Set any appropriate configuration settings for impalad.
Using Impala Logging
The Impala logs record information about:
• Any errors Impala encountered. If Impala experienced a serious error during startup, you must diagnose and
troubleshoot that problem before you can do anything further with Impala.
• How Impala is configured.
• Jobs Impala has completed.
Note:
Formerly, the logs contained the query profile for each query, showing low-level details of how the
work is distributed among nodes and how intermediate and final results are transmitted across the
network. To save space, those query profiles are now stored in zlib-compressed files in
/var/log/impala/profiles. You can access them through the Impala web user interface. For
example, at http://impalad-node-hostname:25000/queries, each query is followed by a
Profile link leading to a page showing extensive analytical data for the query execution.
The auditing feature introduced in Impala 1.1.1 produces a separate set of audit log files when enabled.
See Auditing Impala Operations on page 78 for details.
In CDH 5.12 / Impala 2.9 and higher, you can control how many audit event log files are kept on each
host through the --max_audit_event_log_files startup option for the impalad daemon, similar
to the --max_log_files option for regular log files.
The lineage feature introduced in Impala 2.2.0 produces a separate lineage log file when enabled. See
Viewing Lineage Information for Impala Data on page 80 for details.
Locations and Names of Impala Log Files
• By default, the log files are under the directory /var/log/impala. To change log file locations, edit the log file
properties in the Impala service in Cloudera Manager (Impala service > Configuration).
• The significant files for the impalad process are impalad.INFO, impalad.WARNING, and impalad.ERROR. You
might also see a file impalad.FATAL, although this is only present in rare conditions.
• The significant files for the statestored process are statestored.INFO, statestored.WARNING, and
statestored.ERROR. You might also see a file statestored.FATAL, although this is only present in rare
conditions.
• The significant files for the catalogd process are catalogd.INFO, catalogd.WARNING, and catalogd.ERROR.
You might also see a file catalogd.FATAL, although this is only present in rare conditions.
• Examine the .INFO files to see configuration settings for the processes.
• Examine the .WARNING files to see all kinds of problem information, including such things as suboptimal settings
and also serious runtime errors.
• Examine the .ERROR and/or .FATAL files to see only the most serious errors, if the processes crash, or queries
fail to complete. These messages are also in the .WARNING file.
• A new set of log files is produced each time the associated daemon is restarted. These log files have long names
including a timestamp. The .INFO, .WARNING, and .ERROR files are physically represented as symbolic links to
the latest applicable log files.
• The init script for the impala-server service also produces a consolidated log file
/var/log/impalad/impala-server.log, with all the same information as the corresponding.INFO, .WARNING,
and .ERROR files.
• The init script for the impala-state-store service also produces a consolidated log file
/var/log/impalad/impala-state-store.log, with all the same information as the corresponding.INFO,
.WARNING, and .ERROR files.
Impala stores information using the glog logging system. You will see some messages referring to C++ file names.
Logging is affected by:
• The GLOG_v environment variable specifies which types of messages are logged. See Setting Logging Levels on
page 711 for details.
• The --logbuflevel startup flag for the impalad daemon specifies how often the log information is written to
disk. The default is 0, meaning that the log is immediately flushed to disk when Impala outputs an important
message such as a warning or an error, but less important messages such as informational ones are buffered in
memory rather than being flushed to disk immediately.
• Cloudera Manager has an Impala configuration setting that sets the -logbuflevel startup option.
Managing Impala Logs through Cloudera Manager or Manually
Cloudera recommends installing Impala through the Cloudera Manager administration interface. To assist with
troubleshooting, Cloudera Manager collects front-end and back-end logs together into a single view, and lets you do a
search across log data for all the managed nodes rather than examining the logs on each node separately. If you installed
Impala using Cloudera Manager, refer to the topics on Monitoring Services or Logs.
If you are using Impala in an environment not managed by Cloudera Manager, review Impala log files on each host,
when you have traced an issue back to a specific system.
Rotating Impala Logs
Impala periodically rotates logs. It switches the physical files representing the current log files and removes older log
files that are no longer needed.
In Impala 2.2 and higher, the --max_log_files configuration option specifies how many log files to keep at each
severity level (INFO, WARNING, ERROR, and FATAL). You can specify an appropriate setting for each Impala-related
daemon (impalad, statestored, and catalogd).
• A value of 0 preserves all log files, in which case you would set up manual log rotation using your Linux tool
or technique of choice.
• A value of 1 preserves only the very latest log file.
• The default value is 10.
For some log levels, Impala logs are first temporarily buffered in memory and only written to disk periodically. The
--logbufsecs setting controls the maximum time that log messages are buffered for. For example, with the default
value of 5 seconds, there may be up to a 5 second delay before a logged message shows up in the log file.
It is not recommended that you set --logbufsecs to 0, because this setting causes the Impala daemon to spin in the
thread that tries to delete old log files.
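For example, the relevant startup flags might look like the following sketch, whether entered in the Cloudera Manager
safety valve field or in the startup options on an unmanaged cluster; the values shown are illustrative only.

-max_log_files=10 -logbufsecs=5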
To set up log rotation on a system managed by Cloudera Manager:
1. In the Impala Configuration tab, type max_log_files.
2. Set the appropriate value for the Maximum Log Files field for each Impala configuration category, Impala, Catalog
Server, and StateStore.
3. Restart the Impala service.
In earlier Cloudera Manager releases, specify the -max_log_files=maximum option in the Command Line Argument
Advanced Configuration Snippet (Safety Valve) field for each Impala configuration category.
Reviewing Impala Logs
By default, the Impala log is stored at /var/log/impalad/. The most comprehensive log, showing informational,
warning, and error messages, is in the file name impalad.INFO. View log file contents by using the web interface or
by examining the contents of the log file. (When you examine the logs through the file system, you can troubleshoot
problems by reading the impalad.WARNING and/or impalad.ERROR files, which contain the subsets of messages
indicating potential problems.)
On a machine named impala.example.com with default settings, you could view the Impala logs on that machine
by using a browser to access http://impala.example.com:25000/logs.
Note:
The web interface limits the amount of logging information displayed. To view every log entry, access
the log files directly through the file system.
You can view the contents of the impalad.INFO log file in the file system. With the default configuration settings,
the start of the log file appears as follows:
[user@example impalad]$ pwd
/var/log/impalad
[user@example impalad]$ more impalad.INFO
Log file created at: 2013/01/07 08:42:12
Running on machine: impala.example.com
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0107 08:42:12.292155 14876 daemon.cc:34] impalad version 0.4 RELEASE (build
9d7fadca0461ab40b9e9df8cdb47107ec6b27cff)
Built on Fri, 21 Dec 2012 12:55:19 PST
I0107 08:42:12.292484 14876 daemon.cc:35] Using hostname: impala.example.com
I0107 08:42:12.292706 14876 logging.cc:76] Flags (see also /varz are on debug webserver):
--dump_ir=false
--module_output=
--be_port=22000
--classpath=
--hostname=impala.example.com
Note: The preceding example shows only a small part of the log file. Impala log files are often several
megabytes in size.
Understanding Impala Log Contents
The logs store information about Impala startup options. This information appears once for each time Impala is started
and may include:
• Machine name.
• Impala version number.
• Flags used to start Impala.
• CPU information.
• The number of available disks.
There is information about each job Impala has run. Because each Impala job creates an additional set of data about
queries, the amount of job-specific data may be very large. Logs may contain detailed information on jobs. These
detailed log entries may include:
• The composition of the query.
• The degree of data locality.
• Statistics on data throughput and response times.
Setting Logging Levels
Impala uses the GLOG system, which supports three logging levels. You can adjust the logging levels using the Cloudera
Manager Admin Console. You can adjust logging levels without going through the Cloudera Manager Admin Console
by exporting variable settings. To change logging settings manually, use a command similar to the following on each
node before starting impalad:
export GLOG_v=1
Note: For performance reasons, Cloudera highly recommends not enabling the most verbose logging
level of 3.
For more information on how to configure GLOG, including how to set variable logging levels for different system
components, see documentation for the glog project on github.
Understanding What is Logged at Different Logging Levels
As logging levels increase, the categories of information logged are cumulative. For example, GLOG_v=2 records
everything GLOG_v=1 records, as well as additional information.
Increasing logging levels imposes performance overhead and increases log size. Cloudera recommends using GLOG_v=1
for most cases: this level has minimal performance impact but still captures useful troubleshooting information.
Additional information logged at each level is as follows:
• GLOG_v=1 - The default level. Logs information about each connection and query that is initiated to an impalad
instance, including runtime profiles.
• GLOG_v=2 - Everything from the previous level plus information for each RPC initiated. This level also records
query execution progress information, including details on each file that is read.
• GLOG_v=3 - Everything from the previous level plus logging of every row that is read. This level is only applicable
for the most serious troubleshooting and tuning scenarios, because it can produce exceptionally large and detailed
log files, potentially leading to its own set of performance and capacity problems.
Redacting Sensitive Information from Impala Log Files
Log redaction is a security feature that prevents sensitive information from being displayed in locations used by
administrators for monitoring and troubleshooting, such as log files, the Cloudera Manager user interface, and the
Impala debug web user interface. You configure regular expressions that match sensitive types of information processed
by your system, such as credit card numbers or tax IDs, and literals matching these patterns are obfuscated wherever
they would normally be recorded in log files or displayed in administration or debugging user interfaces.
In a security context, the log redaction feature is complementary to the Sentry authorization framework. Sentry prevents
unauthorized users from being able to directly access table data. Redaction prevents administrators or support personnel
from seeing the smaller amounts of sensitive or personally identifying information (PII) that might appear in queries
issued by those authorized users.
See the CDH Security Guide for details about how to enable this feature and set up the regular expressions to detect
and redact sensitive information within SQL statement text.
Impala Client Access
Application developers have a number of options to interface with Impala. The core development language with Impala
is SQL. You can also use Java or other languages to interact with Impala through the standard JDBC and ODBC interfaces
used by many business intelligence tools. For specialized kinds of analysis, you can supplement the Impala built-in
functions by writing user-defined functions in C++ or Java.
You can connect and submit requests to Impala through:
• The impala-shell interactive command interpreter
• The Hue web-based user interface
• JDBC
• ODBC
Each impalad daemon process, running on separate nodes in a cluster, listens to several ports for incoming requests:
• Requests from impala-shell and Hue are routed to the impalad daemons through the same port.
• The impalad daemons listen on separate ports for JDBC and ODBC requests.
Impala Startup Options for Client Connections
The following options control client connections to Impala.
--fe_service_threads
Specifies the maximum number of concurrent client connections allowed. The default value is 64, which allows 64
queries to run simultaneously.
If more clients try to connect to Impala than the value of this setting, the later-arriving clients have to
wait for the duration specified by --accepted_client_cnxn_timeout. You can increase this value to allow
more client connections. However, a large value means more threads to maintain even if most of the
connections are idle, and it could negatively impact query latency. Client applications should use a connection
pool to avoid the need for a large number of sessions.
--accepted_client_cnxn_timeout
Controls how Impala treats new connection requests if it has run out of the number of threads configured by
--fe_service_threads
If --accepted_client_cnxn_timeout > 0, new connection requests are rejected if Impala cannot get a server
thread within the specified timeout (in seconds).
If --accepted_client_cnxn_timeout=0, i.e. no timeout, clients wait indefinitely to open the new session until
more threads are available.
The default timeout is 5 minutes.
The timeout applies only to client-facing Thrift servers, that is, the HS2 and Beeswax servers.
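For example, a coordinator that must handle many short-lived client connections might be started with flags such as
the following sketch; the values are illustrative only.

-fe_service_threads=256 -accepted_client_cnxn_timeout=300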
Using the Impala Shell (impala-shell Command)
You can use the Impala shell tool (impala-shell) to set up databases and tables, insert data, and issue queries. For
ad hoc queries and exploration, you can submit SQL statements in an interactive session. To automate your work, you
can specify command-line options to process a single statement or a script file. The impala-shell interpreter accepts
all the same SQL statements listed in Impala SQL Statements on page 202, plus some shell-only commands that you
can use for tuning performance and diagnosing problems.
The impala-shell command fits into the familiar Unix toolchain:
• The -q option lets you issue a single query from the command line, without starting the interactive interpreter.
You could use this option to run impala-shell from inside a shell script or with the command invocation syntax
from a Python, Perl, or other kind of script.
• The -f option lets you process a file containing multiple SQL statements, such as a set of reports or DDL statements
to create a group of tables and views.
• The --var option lets you pass substitution variables to the statements that are executed by that impala-shell
session, for example the statements in a script file processed by the -f option. You encode the substitution variable
on the command line using the notation --var=variable_name=value. Within a SQL statement, you substitute
the value by using the notation ${var:variable_name}. This feature is available in CDH 5.7 / Impala 2.5 and
higher.
• The -o option lets you save query output to a file.
• The -B option turns off pretty-printing, so that you can produce comma-separated, tab-separated, or other
delimited text files as output. (Use the --output_delimiter option to choose the delimiter character; the
default is the tab character.) See the example after this list.
• In non-interactive mode, query output is printed to stdout or to the file specified by the -o option, while incidental
output is printed to stderr, so that you can process just the query output as part of a Unix pipeline.
• In interactive mode, impala-shell uses the readline facility to recall and edit previous commands.
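For example, the following sketch combines several of these options to produce a CSV file from a single query; the
database, table, and output path are placeholders.

$ impala-shell -i localhost -d sales -B --output_delimiter=',' \
  -q 'SELECT region, COUNT(*) FROM orders GROUP BY region' \
  -o /tmp/orders_by_region.csv 2>/dev/null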
Cloudera Manager installs impala-shell automatically. You might install impala-shell manually on other systems
not managed by Cloudera Manager, so that you can issue queries from client systems that are not also running the
Impala daemon or other Apache Hadoop components.
For information about establishing a connection to a coordinator Impala daemon through the impala-shell command,
see Connecting to impalad through impala-shell on page 718.
For a list of the impala-shell command-line options, see impala-shell Configuration Options on page 714. For reference
information about the impala-shell interactive commands, see impala-shell Command Reference on page 721.
impala-shell Configuration Options
You can specify the following options when starting the impala-shell command to change how shell commands
are executed. The table shows the format to use when specifying each option on the command line, or through the
$HOME/.impalarc configuration file.
Note:
These options are different than the configuration options for the impalad daemon itself. For the
impalad options, see Modifying Impala Startup Options.
Summary of impala-shell Configuration Options
The following list shows the name and allowed arguments for each impala-shell configuration option, the
corresponding configuration file setting, and an explanation. You can specify options on the command line, or in a
configuration file as described in impala-shell Configuration File on page 717.

Command-line option: -B or --delimited
Configuration file setting: write_delimited=true
Explanation: Causes all query results to be printed in plain format as a delimited text file. Useful for producing data
files to be used with other Hadoop components. Also useful for avoiding the performance overhead of pretty-printing
all output, especially when running benchmark tests using queries returning large result sets. Specify the delimiter
character with the --output_delimiter option. To store all query results in a file rather than printing to the screen,
combine with the -o option. Added in Impala 1.0.1.

Command-line option: -b or --kerberos_host_fqdn
Configuration file setting: kerberos_host_fqdn=load-balancer-hostname
Explanation: If set, the setting overrides the expected hostname of the Impala daemon's Kerberos service principal.
impala-shell will check that the server's principal matches this hostname. This may be used when impalad is
configured to be accessed via a load balancer, but it is desired for impala-shell to talk to a specific impalad directly.

Command-line option: --print_header
Configuration file setting: print_header=true
Explanation: Prints a header line with the column names when query results are printed in delimited format by the
-B option.

Command-line option: -o filename or --output_file filename
Configuration file setting: output_file=filename
Explanation: Stores all query results in the specified file. Typically used to store the results of a single query issued
from the command line with the -q option. Also works for interactive sessions; you see the messages such as number
of rows fetched, but not the actual result set. To suppress these incidental messages when combining the -q and -o
options, redirect stderr to /dev/null. Added in Impala 1.0.1.

Command-line option: --output_delimiter=character
Configuration file setting: output_delimiter=character
Explanation: Specifies the character to use as a delimiter between fields when query results are printed in plain format
by the -B option. Defaults to tab ('\t'). If an output value contains the delimiter character, that field is quoted, escaped
by doubling quotation marks, or both. Added in Impala 1.0.1.

Command-line option: -p or --show_profiles
Configuration file setting: show_profiles=true
Explanation: Displays the query execution plan (same output as the EXPLAIN statement) and a more detailed low-level
breakdown of execution steps, for every query executed by the shell.

Command-line option: -h or --help
Configuration file setting: N/A
Explanation: Displays help information.

Command-line option: N/A
Configuration file setting: history_max=1000
Explanation: Sets the maximum number of queries to store in the history file.

Command-line option: -i hostname or --impalad=hostname[:portnum]
Configuration file setting: impalad=hostname[:portnum]
Explanation: Connects to the impalad daemon on the specified host. The default port of 21000 is assumed unless
you provide another value. You can connect to any host in your cluster that is running impalad. If you connect to an
instance of impalad that was started with an alternate port specified by the --fe_port flag, provide that alternative
port.

Command-line option: -q query or --query=query
Configuration file setting: query=query
Explanation: Passes a query or other impala-shell command from the command line. The impala-shell
interpreter immediately exits after processing the statement. It is limited to a single statement, which could be a
SELECT, CREATE TABLE, SHOW TABLES, or any other statement recognized in impala-shell. Because you cannot
pass a USE statement and another query, fully qualify the names for any tables outside the default database. (Or use
the -f option to pass a file with a USE statement followed by other queries.)

Command-line option: -f query_file or --query_file=query_file
Configuration file setting: query_file=path_to_query_file
Explanation: Passes a SQL query from a file. Multiple statements must be semicolon (;) delimited. In CDH 5.5 / Impala
2.3 and higher, you can specify a filename of - to represent standard input. This feature makes it convenient to use
impala-shell as part of a Unix pipeline where SQL statements are generated dynamically by other tools.

Command-line option: -k or --kerberos
Configuration file setting: use_kerberos=true
Explanation: Kerberos authentication is used when the shell connects to impalad. If Kerberos is not enabled on the
instance of impalad to which you are connecting, errors are displayed. See Enabling Kerberos Authentication for
Impala for the steps to set up and use Kerberos authentication in Impala.

Command-line option: --query_option="option=value" or -Q "option=value"
Configuration file setting: Header line [impala.query_options], followed on subsequent lines by option=value,
one option per line.
Explanation: Sets default query options for an invocation of the impala-shell command. To set multiple query
options at once, use more than one instance of this command-line option. The query option names are not
case-sensitive.

Command-line option: -s kerberos_service_name or --kerberos_service_name=name
Configuration file setting: kerberos_service_name=name
Explanation: Instructs impala-shell to authenticate to a particular impalad service principal. If a
kerberos_service_name is not specified, impala is used by default. If this option is used in conjunction with a
connection in which Kerberos is not supported, errors are returned.

Command-line option: -V or --verbose
Configuration file setting: verbose=true
Explanation: Enables verbose output.

Command-line option: --quiet
Configuration file setting: verbose=false
Explanation: Disables verbose output.

Command-line option: -v or --version
Configuration file setting: version=true
Explanation: Displays version information.

Command-line option: -c
Configuration file setting: ignore_query_failure=true
Explanation: Continues on query failure.

Command-line option: -d default_db or --database=default_db
Configuration file setting: default_db=default_db
Explanation: Specifies the database to be used on startup. Same as running the USE statement after connecting. If
not specified, a database named DEFAULT is used.

Command-line option: --ssl
Configuration file setting: ssl=true
Explanation: Enables TLS/SSL for impala-shell.

Command-line option: --ca_cert=path_to_certificate
Configuration file setting: ca_cert=path_to_certificate
Explanation: The local pathname pointing to the third-party CA certificate, or to a copy of the server certificate for
self-signed server certificates. If --ca_cert is not set, impala-shell enables TLS/SSL, but does not validate the
server certificate. This is useful for connecting to a known-good Impala that is only running over TLS/SSL, when a copy
of the certificate is not available (such as when debugging customer installations).

Command-line option: -l
Configuration file setting: use_ldap=true
Explanation: Enables LDAP authentication.

Command-line option: -u
Configuration file setting: user=user_name
Explanation: Supplies the username, when LDAP authentication is enabled by the -l option. (Specify the short
username, not the full LDAP distinguished name.) The shell then prompts interactively for the password.

Command-line option: --ldap_password_cmd=command
Configuration file setting: N/A
Explanation: Specifies a command to run to retrieve the LDAP password, when LDAP authentication is enabled by the
-l option. If the command includes space-separated arguments, enclose the command and its arguments in quotation
marks.

Command-line option: --config_file=path_to_config_file
Configuration file setting: N/A
Explanation: Specifies the path of the file containing impala-shell configuration settings. The default is
$HOME/.impalarc. This setting can only be specified on the command line.

Command-line option: --live_progress
Configuration file setting: N/A
Explanation: Prints a progress bar showing roughly the percentage complete for each query. The information is updated
interactively as the query progresses. See LIVE_PROGRESS Query Option (CDH 5.5 or higher only) on page 336.

Command-line option: --live_summary
Configuration file setting: N/A
Explanation: Prints a detailed report, similar to the SUMMARY command, showing progress details for each phase of
query execution. The information is updated interactively as the query progresses. See LIVE_SUMMARY Query Option
(CDH 5.5 or higher only) on page 337.

Command-line option: --var=variable_name=value
Configuration file setting: N/A
Explanation: Defines a substitution variable that can be used within the impala-shell session. The variable can be
substituted into statements processed by the -q or -f options, or in an interactive shell session. Within a SQL statement,
you substitute the value by using the notation ${var:variable_name}. This feature is available in CDH 5.7 / Impala
2.5 and higher.

Command-line option: --auth_creds_ok_in_clear
Configuration file setting: N/A
Explanation: Allows LDAP authentication to be used with an insecure connection to the shell. WARNING: This will
allow authentication credentials to be sent unencrypted, and hence may be vulnerable to an attack.

impala-shell Configuration File
You can define a set of default options for your impala-shell environment, stored in the file $HOME/.impalarc.
This file consists of key-value pairs, one option per line. Everything after a # character on a line is treated as a comment
and ignored.
The configuration file must contain a header label [impala], followed by the options specific to impala-shell. (This
standard convention for configuration files lets you use a single file to hold configuration options for multiple
applications.)
To specify a different filename or path for the configuration file, specify the argument
--config_file=path_to_config_file on the impala-shell command line.
The names of the options in the configuration file are similar (although not necessarily identical) to the long-form
command-line arguments to the impala-shell command. For the names to use, see Summary of impala-shell
Configuration Options on page 714.
Any options you specify on the impala-shell command line override any corresponding options within the
configuration file.
The following example shows a configuration file that you might use during benchmarking tests. It sets verbose mode,
so that the output from each SQL query is followed by timing information. impala-shell starts inside the database
containing the tables with the benchmark data, avoiding the need to issue a USE statement or use fully qualified table
names.
In this example, the query output is formatted as delimited text rather than enclosed in ASCII art boxes, and is stored
in a file rather than printed to the screen. Those options are appropriate for benchmark situations, so that the overhead
of impala-shell formatting and printing the result set does not factor into the timing measurements. It also enables
the show_profiles option. That option prints detailed performance information after each query, which might be
valuable in understanding the performance of benchmark queries.
[impala]
verbose=true
default_db=tpc_benchmarking
write_delimited=true
output_delimiter=,
output_file=/home/tester1/benchmark_results.csv
show_profiles=true
The following example shows a configuration file that connects to a specific remote Impala node, runs a single query
within a particular database, then exits. Any query options predefined under the [impala.query_options] section
in the configuration file take effect during the session.
You would typically use this kind of single-purpose configuration setting with the impala-shell command-line option
--config_file=path_to_config_file, to easily select between many predefined queries that could be run
against different databases, hosts, or even different clusters. To run a sequence of statements instead of a single query,
specify the configuration option query_file=path_to_query_file instead.
[impala]
impalad=impala-test-node1.example.com
default_db=site_stats
# Issue a predefined query and immediately exit.
query=select count(*) from web_traffic where event_date = trunc(now(),'dd')
[impala.query_options]
mem_limit=32g
Connecting to impalad through impala-shell
Within an impala-shell session, you can only issue queries while connected to an instance of the impalad daemon.
You can specify the connection information:
• Through command-line options when you run the impala-shell command.
• Through a configuration file that is read when you run the impala-shell command.
• During an impala-shell session, by issuing a CONNECT command.
See impala-shell Configuration Options on page 714 for the command-line and configuration file options you can use.
You can connect to any Impala daemon (impalad), and that daemon coordinates the execution of all queries sent to
it.
For simplicity during development, you might always connect to the same host, perhaps running impala-shell on
the same host as impalad and specifying the hostname as localhost.
In a production environment, you might enable load balancing, in which you connect to specific host/port combination
but queries are forwarded to arbitrary hosts. This technique spreads the overhead of acting as the coordinator node
among all the Impala daemons in the cluster. See Using Impala through a Proxy for High Availability on page 72 for
details.
To connect the Impala shell during shell startup:
1. Locate the hostname that is running an instance of the impalad daemon. If that impalad uses a non-default
port (something other than port 21000) for impala-shell connections, find out the port number also.
2. Use the -i option to the impala-shell interpreter to specify the connection information for that instance of
impalad:
# When you are logged into the same machine running impalad.
# The prompt will reflect the current hostname.
$ impala-shell
# When you are logged into the same machine running impalad.
# The host will reflect the hostname 'localhost'.
$ impala-shell -i localhost
# When you are logged onto a different host, perhaps a client machine
# outside the Hadoop cluster.
$ impala-shell -i some.other.hostname
# When you are logged onto a different host, and impalad is listening
# on a non-default port. Perhaps a load balancer is forwarding requests
# to a different host/port combination behind the scenes.
$ impala-shell -i some.other.hostname:port_number
To connect the Impala shell after shell startup:
1. Start the Impala shell with no connection:
impala-shell
You should see a prompt like the following:
Welcome to the Impala shell. Press TAB twice to see a list of available commands.
Copyright (c) year Cloudera, Inc. All rights reserved.
(Shell build version: Impala Shell v3.3.x (hash) built on date)
[Not connected] >
2. Locate the hostname that is running the impalad daemon. If that impalad uses a non-default port (something
other than port 21000) for impala-shell connections, find out the port number also.
3. Use the connect command to connect to an Impala instance. Enter a command of the form:
[Not connected] > connect impalad-host
[impalad-host:21000] >
Note: Replace impalad-host with the hostname you have configured to run Impala in your
environment. The changed prompt indicates a successful connection.
To start impala-shell in a specific database:
You can use all the same connection options as in previous examples. For simplicity, these examples assume that you
are logged into one of the Impala daemons.
1. Find the name of the database containing the relevant tables, views, and so on that you want to operate on.
2. Use the -d option to the impala-shell interpreter to connect and immediately switch to the specified database,
without the need for a USE statement or fully qualified names:
# Subsequent queries with unqualified names operate on
# tables, views, and so on inside the database named 'staging'.
$ impala-shell -i localhost -d staging
# It is common during development, ETL, benchmarking, and so on
# to have different databases containing the same table names
# but with different contents or layouts.
$ impala-shell -i localhost -d parquet_snappy_compression
$ impala-shell -i localhost -d parquet_gzip_compression
To run one or several statements in non-interactive mode:
You can use all the same connection options as in previous examples. For simplicity, these examples assume that you
are logged into one of the Impala daemons.
1. Construct a statement, or a file containing a sequence of statements, that you want to run in an automated way,
without typing or copying and pasting each time.
2. Invoke impala-shell with the -q option to run a single statement, or the -f option to run a sequence of
statements from a file. The impala-shell command returns immediately, without going into the interactive
interpreter.
# A utility command that you might run while developing shell scripts
# to manipulate HDFS files.
$ impala-shell -i localhost -d database_of_interest -q 'show tables'
# A sequence of CREATE TABLE, CREATE VIEW, and similar DDL statements
# can go into a file to make the setup process repeatable.
$ impala-shell -i localhost -d database_of_interest -f recreate_tables.sql
Running Commands and SQL Statements in impala-shell
The following are a few of the key syntax and usage rules for running commands and SQL statements in impala-shell.
• To see the full set of available commands, press TAB twice.
• To cycle through and edit previous commands, press the up-arrow and down-arrow keys.
• Use the standard set of keyboard shortcuts in GNU Readline library for editing and cursor movement, such as
Ctrl-A for the beginning of line and Ctrl-E for the end of line.
• Commands and SQL statements must be terminated by a semi-colon.
• Commands and SQL statements can span multiple lines.
• Use -- to denote a single-line comment and /* */ to denote a multi-line comment.
A comment is considered part of the statement it precedes, so when you enter a -- or /* */ comment, you get
a continuation prompt until you finish entering a statement ending with a semicolon. For example:
[impala] > -- This is a test comment
> SHOW TABLES LIKE 't*';
• If a comment contains the ${variable_name} notation and it is not intended as a variable substitution, the $
character must be escaped, for example -- \${hello}.
For information on available impala-shell commands, see impala-shell Command Reference on page 721.
Variable Substitution in impala-shell
In CDH 5.7 / Impala 2.5 and higher, you can define substitution variables to be used within SQL statements processed
by impala-shell.
1. You specify the variable and its value as below.
• On the command line, you specify the option --var=variable_name=value
• Within an interactive session or a script file processed by the -f option, use the SET
VAR:variable_name=value command.
2. Use the above variable in SQL statements in the impala-shell session using the notation:
${VAR:variable_name}.
Note: Because this feature is part of impala-shell rather than the impalad backend, make sure
the client system you are connecting from has the most recent impala-shell. You can use this
feature with a new impala-shell connecting to an older impalad, but not the reverse.
For example, here are some impala-shell commands that define substitution variables and then use them in SQL
statements executed through the -q and -f options. Notice how the -q argument strings are single-quoted to prevent
shell expansion of the ${var:value} notation, and any string literals within the queries are enclosed by double
quotation marks.
$ impala-shell --var=tname=table1 --var=colname=x --var=coltype=string -q 'CREATE TABLE
${var:tname} (${var:colname} ${var:coltype}) STORED AS PARQUET'
Query: CREATE TABLE table1 (x STRING) STORED AS PARQUET
The example below shows a substitution variable passed in by the --var option, and then referenced by statements
issued interactively. Then the variable is reset with the SET command.
$ impala-shell --quiet --var=tname=table1
[impala] > SELECT COUNT(*) FROM ${var:tname};
[impala] > SET VAR:tname=table2;
[impala] > SELECT COUNT(*) FROM ${var:tname};
impala-shell Command Reference
Use the following commands within impala-shell to pass requests to the impalad daemon that the shell is connected
to. You can enter a command interactively at the prompt, or pass it as the argument to the -q option of impala-shell.
Most of these commands are passed to the Impala daemon as SQL statements; refer to the corresponding SQL language
reference sections for full syntax details.
alter
    Changes the underlying structure or settings of an Impala table, or a table shared between Impala and Hive. See ALTER TABLE Statement on page 205 and ALTER VIEW Statement on page 218 for details.

compute stats
    Gathers important performance-related information for a table, used by Impala to optimize queries. See COMPUTE STATS Statement on page 219 for details.

connect
    Connects to the specified instance of impalad. The default port of 21000 is assumed unless you provide another value. You can connect to any host in your cluster that is running impalad. If you connect to an instance of impalad that was started with an alternate port specified by the --fe_port flag, you must provide that alternate port. See Connecting to impalad through impala-shell on page 718 for examples.
    The SET statement has no effect until the impala-shell interpreter is connected to an Impala server. Once you are connected, any query options you set remain in effect as you issue a subsequent CONNECT command to connect to a different Impala host.

describe
    Shows the columns, column data types, and any column comments for a specified table. DESCRIBE FORMATTED shows additional information such as the HDFS data directory, partitions, and internal properties for the table. See DESCRIBE Statement on page 251 for details about the basic DESCRIBE output and the DESCRIBE FORMATTED variant. You can use DESC as shorthand for the DESCRIBE command.

drop
    Removes a schema object, and in some cases its associated data files. See DROP TABLE Statement on page 268, DROP VIEW Statement on page 270, DROP DATABASE Statement on page 262, and DROP FUNCTION Statement on page 263 for details.

explain
    Provides the execution plan for a query. EXPLAIN represents a query as a series of steps. For example, these steps might be map/reduce stages, metastore operations, or file system operations such as move or rename. See EXPLAIN Statement on page 271 and Using the EXPLAIN Plan for Performance Tuning on page 602 for details.

help
    Provides a list of all available commands and options.

history
    Maintains an enumerated cross-session command history. This history is stored in the ~/.impalahistory file.

insert
    Writes the results of a query to a specified table. This either overwrites table data or appends data to the existing table content. See INSERT Statement on page 277 for details.

invalidate metadata
    Updates impalad metadata for table existence and structure. Use this command after creating, dropping, or altering databases, tables, or partitions in Hive. See INVALIDATE METADATA Statement on page 286 for details.

profile
    Displays low-level information about the most recent query. Used for performance diagnosis and tuning. The report starts with the same information as produced by the EXPLAIN statement and the SUMMARY command. See Using the Query Profile for Performance Tuning on page 604 for details.

quit
    Exits the shell. Remember to include the final semicolon so that the shell recognizes the end of the command.

refresh
    Refreshes impalad metadata for the locations of HDFS blocks corresponding to Impala data files. Use this command after loading new data files into an Impala table through Hive or through HDFS commands. See REFRESH Statement on page 291 for details.

rerun or @
    Executes a previous impala-shell command again, from the list of commands displayed by the history command. These could be SQL statements, or commands specific to impala-shell such as quit or profile. Specify an integer argument: a positive integer N represents the command labelled N in the output of the HISTORY command, and a negative integer -N represents the Nth command from the end of the list, such as -1 for the most recent command. Commands that are executed again do not produce new entries in the HISTORY output list.

select
    Specifies the data set on which to complete some action. All information returned from select can be sent to some output such as the console or a file, or can be used to complete some other element of a query. See SELECT Statement on page 295 for details.

set
    Manages query options for an impala-shell session. The available options are the ones listed in Query Options for the SET Statement on page 322. These options are used for query tuning and troubleshooting. Issue SET with no arguments to see the current query options, either based on the impalad defaults, as specified by you at impalad startup, or based on earlier SET statements in the same session. To modify option values, issue commands with the syntax set option=value. To restore an option to its default, use the unset command.
    The SET statement has no effect until the impala-shell interpreter is connected to an Impala server. Once you are connected, any query options you set remain in effect as you issue a subsequent CONNECT command to connect to a different Impala host.
    In Impala 2.0 and later, SET is available as a SQL statement for any kind of application as well as in impala-shell. See SET Statement on page 321 for details.

shell
    Executes the specified command in the operating system shell without exiting impala-shell. You can use the ! character as shorthand for the shell command.
    Note: Quote any instances of the -- or /* tokens to avoid them being interpreted as the start of a comment. To embed comments within source or ! commands, use the shell comment character # before the comment portion of the line.

show
    Displays metastore data for schema objects created and accessed through Impala, Hive, or both. show can be used to gather information about objects such as databases, tables, and functions. See SHOW Statement on page 363 for details.

source or src
    Executes one or more statements residing in a specified file from the local filesystem. Allows you to perform the same kinds of batch operations as with the -f option, but interactively within the interpreter. The file can contain SQL statements and other impala-shell commands, including additional SOURCE commands to perform a flexible sequence of actions. Each command or statement, except the last one in the file, must end with a semicolon. See Running Commands and SQL Statements in impala-shell on page 720 for examples.

summary
    Summarizes the work performed in various stages of a query. It provides a higher-level view of the information displayed by the EXPLAIN command. Added in Impala 1.4.0. See Using the SUMMARY Report for Performance Tuning on page 603 for details about the report format and how to interpret it.
    The time, memory usage, and so on reported by SUMMARY only include the portions of the statement that read data, not when data is written. Therefore, the PROFILE command is better for checking the performance and scalability of INSERT statements.
    In CDH 5.5 / Impala 2.3 and higher, you can see a continuously updated report of the summary information while a query is in progress. See LIVE_SUMMARY Query Option (CDH 5.5 or higher only) on page 337 for details.

unset
    Removes any user-specified value for a query option and returns the option to its default value. See Query Options for the SET Statement on page 322 for the available query options. In CDH 5.7 / Impala 2.5 and higher, it can also remove user-specified substitution variables using the notation UNSET VAR:variable_name.

use
    Indicates the database against which to execute subsequent commands. Lets you avoid using fully qualified names when referring to tables in databases other than default. See USE Statement on page 385 for details. Not effective with the -q option, because that option only allows a single statement in the argument.

version
    Returns Impala version information.
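To give a feel for a few of these interactive commands, here is a brief hypothetical session. The table name and file path are made up for illustration and the query output is omitted; the command syntax follows the descriptions above.
[impala] > SELECT COUNT(*) FROM web_logs;
[impala] > summary;
[impala] > profile;
[impala] > history;
[impala] > @1;
[impala] > rerun -1;
[impala] > shell pwd;
[impala] > source /tmp/setup.sql;
[impala] > quit;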
Configuring Impala to Work with ODBC
Third-party products can be designed to integrate with Impala using ODBC. For the best experience, ensure any
third-party product you intend to use is supported. Verifying support includes checking that the versions of Impala,
ODBC, the operating system, and the third-party product have all been approved for use together. Before configuring
your systems to use ODBC, download a connector. You may need to sign in and accept license agreements before
accessing the pages required for downloading ODBC connectors.
Downloading the ODBC Driver
Important: As of late 2015, most business intelligence applications are certified with the 2.x ODBC
drivers. Although the instructions on this page cover both the 2.x and 1.x drivers, expect to use the
2.x drivers exclusively for most ODBC applications connecting to Impala. CDH 6.0 has been tested with
the Impala ODBC driver version 2.5.42, and Cloudera recommends that you use this version when you
start using CDH 6.0.
See the database drivers section on the Cloudera downloads web page to download and install the driver.
Configuring the ODBC Port
Versions 2.5 and 2.0 of the Cloudera ODBC Connector, currently certified for some but not all BI applications, use the
HiveServer2 protocol, corresponding to Impala port 21050. Impala supports Kerberos authentication with all the
supported versions of the driver, and requires ODBC 2.05.13 for Impala or higher for LDAP username/password
authentication.
Version 1.x of the Cloudera ODBC Connector uses the original HiveServer1 protocol, corresponding to Impala port
21000.
Example of Setting Up an ODBC Application for Impala
To illustrate the outline of the setup process, here is a transcript of a session to set up all required drivers and a business
intelligence application that uses the ODBC driver, under Mac OS X. Each .dmg file runs a GUI-based installer, first for
the underlying IODBC driver needed for non-Windows systems, then for the Cloudera ODBC Connector, and finally for
the BI tool itself.
$ ls -1
Cloudera-ODBC-Driver-for-Impala-Install-Guide.pdf
BI_Tool_Installer.dmg
iodbc-sdk-3.52.7-macosx-10.5.dmg
ClouderaImpalaODBC.dmg
$ open iodbc-sdk-3.52.7-macosx-10.5.dmg
Install the IODBC driver using its installer
$ open ClouderaImpalaODBC.dmg
Install the Cloudera ODBC Connector using its installer
$ installer_dir=$(pwd)
$ cd /opt/cloudera/impalaodbc
$ ls -1
Cloudera ODBC Driver for Impala Install Guide.pdf
Readme.txt
Setup
lib
ErrorMessages
Release Notes.txt
Tools
$ cd Setup
$ ls
odbc.ini odbcinst.ini
$ cp odbc.ini ~/.odbc.ini
$ vi ~/.odbc.ini
$ cat ~/.odbc.ini
[ODBC]
# Specify any global ODBC configuration here such as ODBC tracing.
[ODBC Data Sources]
Sample Cloudera Impala DSN=Cloudera ODBC Driver for Impala
[Sample Cloudera Impala DSN]
# Description: DSN Description.
# This key is not necessary and is only to give a description of the data source.
Description=Cloudera ODBC Driver for Impala DSN
# Driver: The location where the ODBC driver is installed to.
Driver=/opt/cloudera/impalaodbc/lib/universal/libclouderaimpalaodbc.dylib
# The DriverUnicodeEncoding setting is only used for SimbaDM
# When set to 1, SimbaDM runs in UTF-16 mode.
# When set to 2, SimbaDM runs in UTF-8 mode.
#DriverUnicodeEncoding=2
# Values for HOST, PORT, KrbFQDN, and KrbServiceName should be set here.
# They can also be specified on the connection string.
HOST=hostname.sample.example.com
PORT=21050
Schema=default
# The authentication mechanism.
# 0 - No authentication (NOSASL)
# 1 - Kerberos authentication (SASL)
# 2 - Username authentication (SASL)
# 3 - Username/password authentication (SASL)
# 4 - Username/password authentication with SSL (SASL)
# 5 - No authentication with SSL (NOSASL)
# 6 - Username/password authentication (NOSASL)
AuthMech=0
# Kerberos related settings.
KrbFQDN=
KrbRealm=
KrbServiceName=
# Username/password authentication with SSL settings.
UID=
PWD=
CAIssuedCertNamesMismatch=1
TrustedCerts=/opt/cloudera/impalaodbc/lib/universal/cacerts.pem
# Specify the proxy user ID to use.
#DelegationUID=
# General settings
TSaslTransportBufSize=1000
RowsFetchedPerBlock=10000
SocketTimeout=0
StringColumnLength=32767
UseNativeQuery=0
$ pwd
/opt/cloudera/impalaodbc/Setup
$ cd $installer_dir
$ open BI_Tool_Installer.dmg
Install the BI tool using its installer
$ ls /Applications | grep BI_Tool
BI_Tool.app
$ open -a BI_Tool.app
In the BI tool, connect to a data source using port 21050
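You can also do a quick command-line check of the new DSN before involving the BI tool. This is a hedged sketch that assumes the iODBC SDK installed above placed its iodbctest utility on your PATH; the DSN name must match the entry in ~/.odbc.ini.
$ iodbctest "DSN=Sample Cloudera Impala DSN"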
Notes about JDBC and ODBC Interaction with Impala SQL Features
Most Impala SQL features work equivalently through the impala-shell interpreter or through the JDBC or ODBC APIs. The
following are some exceptions to keep in mind when switching between the interactive shell and applications using
the APIs:
Note: If your JDBC or ODBC application connects to Impala through a load balancer such as haproxy,
be cautious about reusing the connections. If the load balancer has set up connection timeout values,
either check the connection frequently so that it never sits idle longer than the load balancer timeout
value, or check the connection validity before using it and create a new one if the connection has
been closed.
Configuring Impala to Work with JDBC
Impala supports the standard JDBC interface, allowing access from commercial Business Intelligence tools and custom
software written in Java or other programming languages. The JDBC driver allows you to access Impala from a Java
program that you write, or a Business Intelligence or similar tool that uses JDBC to communicate with various database
products.
Setting up a JDBC connection to Impala involves the following steps:
• Verifying the communication port where the Impala daemons in your cluster are listening for incoming JDBC
requests.
• Installing the JDBC driver on every system that runs the JDBC-enabled application.
• Specifying a connection string for the JDBC application to access one of the servers running the impalad daemon,
with the appropriate security settings.
Configuring the JDBC Port
The default port used by JDBC 2.0 and later (as well as ODBC 2.x) is 21050, and the Impala server accepts JDBC connections through this port by default. Make sure this port is available for communication with other hosts on your
network, for example, that it is not blocked by firewall software. If your JDBC client software connects to a different
port, specify that alternative port number with the --hs2_port option when starting impalad. See Impala Startup
Options for details about Impala startup options. See Ports Used by Impala on page 743 for information about all ports
used for communication between Impala and clients or between Impala components.
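For example, the following sketch (with a made-up port number) shows the kind of line you would add to the impalad startup options, either on the command line or in the flag file referenced by --flagfile, so that JDBC clients configured for port 28000 can connect:
--hs2_port=28000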
Choosing the JDBC Driver
In Impala 2.0 and later, you have the choice between the Cloudera JDBC Connector and the Hive 0.13 or higher JDBC
driver. Cloudera recommends using the Cloudera JDBC Connector where practical.
If you are already using JDBC applications with an earlier Impala release, you must update your JDBC driver to one of
these choices, because the Hive 0.12 driver that was formerly the only choice is not compatible with Impala 2.0 and
later.
Both the Cloudera JDBC Connector and the Hive JDBC driver provide a substantial speed increase for JDBC applications
with Impala 2.0 and higher, for queries that return large result sets.
Enabling Impala JDBC Support on Client Systems
Using the Cloudera JDBC Connector (recommended)
Download and install the Cloudera JDBC connector on any Linux, Windows, or Mac system where you intend to run
JDBC-enabled applications. From the Cloudera Downloads page, navigate to the Database Drivers section of the page
and choose the appropriate protocol (JDBC or ODBC) and target product (Impala or Hive). The ease of downloading
and installing on a wide variety of systems makes this connector a convenient choice for organizations with
heterogeneous environments. This is the download page for the Impala JDBC Connector.
Using the Hive JDBC Driver
Install the Hive JDBC driver (hive-jdbc package) through the Linux package manager, on hosts within the CDH cluster.
The driver consists of several JAR files. The same driver can be used by Impala and Hive.
To get the JAR files, install the Hive JDBC driver on each host in the cluster that will run JDBC applications. Follow the
instructions for Installing Cloudera JDBC and ODBC Drivers on Clients in CDH.
Note: The latest JDBC driver, corresponding to Hive 0.13, provides substantial performance
improvements for Impala queries that return large result sets. Impala 2.0 and later are compatible
with the Hive 0.13 driver. If you already have an older JDBC driver installed, and are running Impala
2.0 or higher, consider upgrading to the latest Hive JDBC driver for best performance with JDBC
applications.
If you are using JDBC-enabled applications on hosts outside the CDH cluster, you cannot use the CDH install procedure
on the non-CDH hosts. Install the JDBC driver on at least one CDH host using the preceding procedure. Then download
the JAR files to each client machine that will use JDBC with Impala:
commons-logging-X.X.X.jar
hadoop-common.jar
hive-common-X.XX.X-cdhX.X.X.jar
hive-jdbc-X.XX.X-cdhX.X.X.jar
hive-metastore-X.XX.X-cdhX.X.X.jar
hive-service-X.XX.X-cdhX.X.X.jar
httpclient-X.X.X.jar
httpcore-X.X.X.jar
libfb303-X.X.X.jar
libthrift-X.X.X.jar
log4j-X.X.XX.jar
slf4j-api-X.X.X.jar
slf4j-logXjXX-X.X.X.jar
To enable JDBC support for Impala on the system where you run the JDBC application:
1. Download the JAR files listed above to each client machine.
Note: For Maven users, see this sample github page for an example of the dependencies you
could add to a pom file instead of downloading the individual JARs.
2. Store the JAR files in a location of your choosing, ideally a directory already referenced in your CLASSPATH setting.
For example:
• On Linux, you might use a location such as /opt/jars/.
• On Windows, you might use a subdirectory underneath C:\Program Files.
3. To successfully load the Impala JDBC driver, client programs must be able to locate the associated JAR files. This
often means setting the CLASSPATH for the client process to include the JARs. Consult the documentation for
your JDBC client for more details on how to install new JDBC drivers, but some examples of how to set CLASSPATH
variables include:
• On Linux, if you extracted the JARs to /opt/jars/, you might issue the following command to prepend the
JAR files path to an existing classpath:
export CLASSPATH=/opt/jars/*.jar:$CLASSPATH
• On Windows, use the System Properties control panel item to modify the Environment Variables for your
system. Modify the environment variables to include the path to which you extracted the files.
Note: If the existing CLASSPATH on your client machine refers to some older version of the
Hive JARs, ensure that the new JARs are the first ones listed. Either put the new JAR files
earlier in the listings, or delete the other references to Hive JAR files.
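Putting these steps together, a minimal sketch of launching a JDBC client program on Linux looks like the following. MyImpalaJdbcApp is a hypothetical application class, and /opt/jars/ is the directory from the earlier example; the wildcard form of the classpath picks up all the driver JARs in that directory.
$ java -cp "/opt/jars/*:." MyImpalaJdbcApp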
Establishing JDBC Connections
The JDBC driver class depends on which driver you select.
Note: If your JDBC or ODBC application connects to Impala through a load balancer such as haproxy,
be cautious about reusing the connections. If the load balancer has set up connection timeout values,
either check the connection frequently so that it never sits idle longer than the load balancer timeout
value, or check the connection validity before using it and create a new one if the connection has
been closed.
Using the Cloudera JDBC Connector (recommended)
Depending on the level of the JDBC API your application is targeting, you can use the following fully-qualified class
names (FQCNs):
• com.cloudera.impala.jdbc41.Driver
• com.cloudera.impala.jdbc41.DataSource
• com.cloudera.impala.jdbc4.Driver
• com.cloudera.impala.jdbc4.DataSource
• com.cloudera.impala.jdbc3.Driver
• com.cloudera.impala.jdbc3.DataSource
The connection string has the following format:
jdbc:impala://Host:Port[/Schema];Property1=Value;Property2=Value;...
The port value is typically 21050 for Impala.
For full details about the classes and the connection string (especially the property values available for the connection
string), download the appropriate driver documentation for your platform from the Impala JDBC Connector download
page.
Using the Hive JDBC Driver
For example, with the Hive JDBC driver, the class name is org.apache.hive.jdbc.HiveDriver. Once you have
configured Impala to work with JDBC, you can establish connections between the two. To do so for a cluster that does
not use Kerberos authentication, use a connection string of the form jdbc:hive2://host:port/;auth=noSasl.
For example, you might use:
jdbc:hive2://myhost.example.com:21050/;auth=noSasl
To connect to an instance of Impala that requires Kerberos authentication, use a connection string of the form
jdbc:hive2://host:port/;principal=principal_name. The principal must be the same user principal you
used when starting Impala. For example, you might use:
jdbc:hive2://myhost.example.com:21050/;principal=impala/myhost.example.com@H2.EXAMPLE.COM
To connect to an instance of Impala that requires LDAP authentication, use a connection string of the form
jdbc:hive2://host:port/db_name;user=ldap_userid;password=ldap_password. For example, you might
use:
jdbc:hive2://myhost.example.com:21050/test_db;user=fred;password=xyz123
Note:
Prior to CDH 5.7 / Impala 2.5, the Hive JDBC driver did not support connections that use both Kerberos
authentication and SSL encryption. If your cluster is running an older release that has this restriction,
to use both of these security features with Impala through a JDBC application, use the Cloudera JDBC
Connector as the JDBC driver.
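Before wiring the connection string into your own application, you can smoke-test it with a generic JDBC client. This is a hedged sketch that assumes the beeline client shipped with Hive is installed on the host; it reuses the non-Kerberos connection string shown above and runs a trivial query.
$ beeline -u "jdbc:hive2://myhost.example.com:21050/;auth=noSasl" -e "SELECT 1"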
Notes about JDBC and ODBC Interaction with Impala SQL Features
Most Impala SQL features work equivalently through the impala-shell interpreter or through the JDBC or ODBC APIs. The
following are some exceptions to keep in mind when switching between the interactive shell and applications using
the APIs:
• Complex type considerations:
– Queries involving the complex types (ARRAY, STRUCT, and MAP) require notation that might not be available
in all levels of JDBC and ODBC drivers. If you have trouble querying such a table due to the driver level or
inability to edit the queries used by the application, you can create a view that exposes a “flattened” version
of the complex columns and point the application at the view; a sketch of this workaround appears after this list. See Complex Types (CDH 5.5 or higher only) on page 139 for details.
– The complex types available in CDH 5.5 / Impala 2.3 and higher are supported by the JDBC getColumns()
API. Both MAP and ARRAY are reported as the JDBC SQL Type ARRAY, because this is the closest matching Java
SQL type. This behavior is consistent with Hive. STRUCT types are reported as the JDBC SQL Type STRUCT.
To be consistent with Hive's behavior, the TYPE_NAME field is populated with the primitive type name for
scalar types, and with the full toSql() for complex types. The resulting type names are somewhat inconsistent, because nested types are printed differently than top-level types. For example, the following list shows how toSql() for Impala types are translated to TYPE_NAME values:
DECIMAL(10,10) becomes DECIMAL
CHAR(10) becomes CHAR
VARCHAR(10) becomes VARCHAR
ARRAY<DECIMAL(10,10)> becomes ARRAY<DECIMAL(10,10)>
ARRAY<CHAR(10)> becomes ARRAY<CHAR(10)>
ARRAY<VARCHAR(10)> becomes ARRAY<VARCHAR(10)>
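As an illustration of the flattened-view workaround mentioned in the first bullet above, here is a minimal sketch. It assumes a hypothetical orders table whose items column is an ARRAY<STRING>; the view joins the table with its own array column so that JDBC and ODBC clients can query it like an ordinary two-column table.
$ impala-shell -q 'CREATE VIEW flattened_orders AS
  SELECT o.order_id, i.item AS line_item
  FROM orders o, o.items i'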
Kudu Considerations for DML Statements
Currently, Impala INSERT, UPDATE, or other DML statements issued through the JDBC interface against a Kudu table
do not return JDBC error codes for conditions such as duplicate primary key columns. Therefore, for applications that
issue a high volume of DML statements, prefer to use the Kudu Java API directly rather than a JDBC application.
Troubleshooting Impala
Troubleshooting for Impala requires being able to diagnose and debug problems with performance, network connectivity,
out-of-memory conditions, disk space usage, and crash or hang conditions in any of the Impala-related daemons.
Troubleshooting Impala SQL Syntax Issues
In general, if queries issued against Impala fail, you can try running these same queries against Hive.
• If a query fails against both Impala and Hive, it is likely that there is a problem with your query or other elements
of your CDH environment:
– Review the Language Reference to ensure your query is valid.
– Check Impala Reserved Words on page 745 to see if any database, table, column, or other object names in
your query conflict with Impala reserved words. Quote those names with backticks (``) if so.
– Check Impala Built-In Functions on page 391 to confirm whether Impala supports all the built-in functions
being used by your query, and whether argument and return types are the same as you expect.
– Review the contents of the Impala logs for any information that may be useful in identifying the source of
the problem.
• If a query fails against Impala but not Hive, it is likely that there is a problem with your Impala installation.
Troubleshooting Crashes Caused by Memory Resource Limit
Under very high concurrency, Impala could encounter a serious error due to usage of various operating system resources.
Errors similar to the following may be caused by operating system resource exhaustion:
F0629 08:20:02.956413 29088 llvm-codegen.cc:111] LLVM hit fatal error: Unable to allocate
section memory!
terminate called after throwing an instance of
'boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::thread_resource_error> >'
The KRPC implementation in CDH 6.1 greatly reduces thread counts and the chances of hitting a resource limit in CDH
6.1 and higher.
If you still get an error similar to the above in Impala 3.0 and higher, try increasing the max_map_count OS virtual
memory parameter. max_map_count defines the maximum number of memory map areas that a process can use.
Configure each host running an impalad daemon with the following command, which increases max_map_count to 8,000,000 memory map areas:
echo 8000000 > /proc/sys/vm/max_map_count
To make the above settings durable, refer to your OS documentation. For example, on RHEL 6.x:
1. Add the following line to /etc/sysctl.conf:
vm.max_map_count=8000000
2. Run the following command:
sysctl -p
Troubleshooting I/O Capacity Problems
Impala queries are typically I/O-intensive. If there is an I/O problem with storage devices, or with HDFS itself, Impala
queries could show slow response times with no obvious cause on the Impala side. Slow I/O on even a single Impala
daemon could result in an overall slowdown, because queries involving clauses such as ORDER BY, GROUP BY, or JOIN
do not start returning results until all executor Impala daemons have finished their work.
To test whether the Linux I/O system itself is performing as expected, run Linux commands like the following on each host where an Impala daemon is running:
$ sudo sysctl -w vm.drop_caches=3 vm.drop_caches=0
vm.drop_caches = 3
vm.drop_caches = 0
$ sudo dd if=/dev/sda bs=1M of=/dev/null count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.60373 s, 192 MB/s
$ sudo dd if=/dev/sdb bs=1M of=/dev/null count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.51145 s, 195 MB/s
$ sudo dd if=/dev/sdc bs=1M of=/dev/null count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.58096 s, 192 MB/s
$ sudo dd if=/dev/sdd bs=1M of=/dev/null count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.43924 s, 197 MB/s
On modern hardware, a throughput rate of less than 100 MB/s typically indicates a performance issue with the storage
device. Correct the hardware problem before continuing with Impala tuning or benchmarking.
Impala Troubleshooting Quick Reference
The following table lists common problems and potential solutions.
Symptom: Impala takes a long time to start.
Explanation: Impala instances with large numbers of tables, partitions, or data files take longer to start because the metadata for these objects is broadcast to all impalad nodes and cached.
Recommendation: Adjust timeout and synchronicity settings.

Symptom: Joins fail to complete.
Explanation: There may be insufficient memory. During a join, data from the second, third, and so on sets to be joined is loaded into memory. If Impala chooses an inefficient join order or join mechanism, the query could exceed the total memory available.
Recommendation: Start by gathering statistics with the COMPUTE STATS statement for each table involved in the join. Consider specifying the [SHUFFLE] hint so that data from the joined tables is split up between nodes rather than broadcast to each node. If tuning at the SQL level is not sufficient, add more memory to your system or join smaller data sets.

Symptom: Queries return incorrect results.
Explanation: Impala metadata may be outdated after changes are performed in Hive.
Recommendation: Where possible, use the appropriate Impala statement (INSERT, LOAD DATA, CREATE TABLE, ALTER TABLE, COMPUTE STATS, and so on) rather than switching back and forth between Impala and Hive. Impala automatically broadcasts the results of DDL and DML operations to all Impala nodes in the cluster, but does not automatically recognize when such changes are made through Hive. After inserting data, adding a partition, or other operation in Hive, refresh the metadata for the table as described in REFRESH Statement on page 291.

Symptom: Queries are slow to return results.
Explanation: Some impalad instances may not have started.
Recommendation: Ensure Impala is installed on all DataNodes, and start any impalad instances that are not running. Using a browser, connect to the host running the Impala state store with an address of the form http://hostname:port/metrics, replacing hostname and port with the hostname and web server port of your Impala state store host machine (the default port is 25010). The number of impalad instances listed should match the expected number of impalad instances installed in the cluster. There should also be one impalad instance installed on each DataNode.

Symptom: Queries are slow to return results.
Explanation: Impala may not be configured to use native checksumming. Native checksumming uses machine-specific instructions to compute checksums over HDFS data very quickly. Review the Impala logs: if you find instances of "INFO util.NativeCodeLoader: Loaded the native-hadoop" messages, native checksumming is not enabled.
Recommendation: Ensure Impala is configured to use native checksumming as described in Post-Installation Configuration for Impala on page 36.

Symptom: Queries are slow to return results.
Explanation: Impala may not be configured to use data locality tracking.
Recommendation: Test Impala for data locality tracking and make configuration changes as necessary. Information on this process can be found in Post-Installation Configuration for Impala on page 36.

Symptom: Attempts to complete Impala tasks such as executing INSERT-SELECT actions fail. The Impala logs include notes that files could not be opened due to permission denied.
Explanation: This can be the result of permissions issues. For example, you could use the Hive shell as the hive user to create a table. After creating this table, you could attempt to complete some action, such as an INSERT-SELECT on the table. Because the table was created using one user and the INSERT-SELECT is attempted by another, this action may fail due to permissions issues.
Recommendation: In general, ensure the Impala user has sufficient permissions. In the preceding example, ensure the Impala user has sufficient permissions to the table that the Hive user created.

Symptom: Impala fails to start up, with the impalad logs referring to errors connecting to the statestore service and attempts to re-register.
Explanation: A large number of databases, tables, partitions, and so on can require metadata synchronization, particularly on startup, that takes longer than the default timeout for the statestore service.
Recommendation: Configure the statestore timeout value and possibly other settings related to the frequency of statestore updates and metadata loading. See Increasing the Statestore Timeout on page 70 and Scalability Considerations for the Impala Statestore on page 606.
Impala Web User Interface for Debugging
Each of the Impala daemons (impalad, statestored, and catalogd) includes a built-in web server that displays
diagnostic and status information.
impalad Web UI
The impalad Web UI includes information about configuration settings, running and completed queries, and
associated performance and resource usage for queries. In particular, the Details link for each query displays
alternative views of the query including a graphical representation of the plan, and the output of the EXPLAIN,
SUMMARY, and PROFILE statements from impala-shell. Each host that runs the impalad daemon has its own
instance of the Web UI, with details about those queries for which that host served as the coordinator. The impalad
Web UI is primarily used for diagnosing query problems that can be traced to a particular node.
statestored Web UI
The statestored Web UI includes information about memory usage, configuration settings, and ongoing health
checks performed by statestored. Because there is only a single instance of the statestored within any Impala
cluster, you access the Web UI only on the particular host that serves as the Impala StateStore.
catalogd Web UI
The catalogd Web UI includes information about the databases, tables, and other objects managed by Impala, in
addition to the resource usage and configuration settings of the catalogd. Because there is only a single instance
of the catalogd within any Impala cluster, you access the Web UI only on the particular host that serves as the
Impala Catalog Server.
Debug Web UI for impalad
To debug and troubleshoot an impalad using a web-based interface, open the URL
http://impala-server-hostname:25000/ in a browser. (For secure clusters, use the prefix https:// instead
of http://.)
Because each Impala node produces its own set of debug information, you should choose a specific node that you
want to investigate an issue on.
Turning off the Web UI for impalad
To disable Web UI for an impalad:
1. Stop the impalad.
2. Restart the impalad with the --enable_webserver=false startup flag.
Main Page
The main impalad Web UI page at / lists the following information about the impalad:
• The version of the impalad daemon
The Version section also contains other information, such as when Impala was built and what build flags were
used.
• Process start time
• Hardware information
• OS information
• Process information
• CGroup information
Admission Controller Page
The Admission Controller page of the impalad debug Web UI is at /admission under the main impalad Web UI.
Use the /admission page to troubleshoot queued queries and admission control.
The admission page provides the following information about each resource pool to which queries have been submitted
at least once:
• Time since the statestored received the last update
• A warning if this impalad is considered disconnected from the statestored and thus the information on this
page could be stale.
• Pool configuration
• Queued queries submitted to this coordinator, in the order of submission
• Running queries submitted to this coordinator
• Pool stats
– Average of time in queue: An exponential moving average which represents the average time in queue over
the last 10 to 12 queries. If a query is admitted immediately, the wait time of 0 is used in calculating this
average wait time.
• Histogram of the distribution of peak memory used by queries admitted to the pool
Use the histogram to figure out settings for the minimum and maximum query MEM_LIMIT ranges for this pool.
The histogram displays data for all queries admitted to the pool, including the queries that finished, got canceled,
or hit an error.
Click on the pool name to only display information relevant to that pool. You can then refresh the debug page to see
only the information for that specific pool.
Click Reset informational stats for all pools to reset the stats that keep track of historical data, such as Totals stats,
Time in queue (exponential moving average), and the histogram.
The above information is also available as a JSON object from the following HTTP endpoint:
http://impala-server-hostname:port/admission?json
See Admission Control and Query Queuing on page 549 for the description of the properties in admission control.
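For scripted monitoring, you can pull the same admission control information with any HTTP client. A minimal sketch using curl, assuming the default impalad webserver port of 25000:
$ curl -s 'http://impala-server-hostname:25000/admission?json'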
Known Backends Page
The Known backends page of the impalad debug Web UI is at /backends under the main impalad Web UI.
This page lists the following info for each of the impalad nodes in the cluster. Because each impalad daemon knows
about every other impalad daemon through the StateStore, this information should be the same regardless of which
node you select.
• Address of the node: Host name and port
• KRPC address: The KRPC address of the impalad. Use this address when you issue the SHUTDOWN command for
this impalad.
• Whether acting as a coordinator
• Whether acting as an executor
• Quiescing status: Specifies whether the graceful shutdown process has been initiated on this impalad.
• Memory limit for admission: The amount of memory that can be admitted to this backend by the admission
controller.
• Memory reserved: The amount of memory reserved by queries that are active, either currently executing or
finished but not yet closed, on this backend.
The memory reserved for a query that is currently executing is its memory limit, if set. Otherwise, if the query has
no limit or if the query finished executing, the current consumption is used.
• Memory of the queries admitted to this coordinator: The memory submitted to this particular host by the queries
admitted by this coordinator.
Catalog Page
The Catalog page of the impalad debug Web UI is at /catalog under the main impalad Web UI.
This page displays a list of databases and associated tables recognized by this instance of impalad. You can use this
page to locate which database a table is in, check the exact spelling of a database or table name, look for identical
table names in multiple databases. The primary debugging use case would be to check if an impalad instance has
knowledge of a particular table that someone expects to be in a particular database.
Hadoop Configuration
The Hadoop Configuration page of the impalad debug Web UI is at /hadoop-varz under the main impalad Web UI.
This page displays the Hadoop common configurations that Impala is running with.
JMX
The JMX page of the impalad debug Web UI is at /jmx under the main impalad Web UI.
This page displays monitoring information about various JVM subsystems, such as memory pools, thread management,
and runtime.
Java Log Level
The Change log level page of the impalad debug Web UI is at /log_level under the main impalad Web UI.
This page displays the current Java and backend log levels, and it allows you to change the log levels dynamically
without having to restart the impalad.
Logs Page
The INFO logs page of the impalad debug Web UI is at /logs under the main impalad Web UI.
This page shows the last portion of the impalad.INFO log file, including the info, warning, and error logs for the
impalad. You can see the details of the most recent operations, whether the operations succeeded or encountered
errors. This page provides one central place for the log files and saves you from looking around the filesystem for the
log files, which could be in different locations on clusters that use cluster management software.
Memz Page
The Memory Usage page of the impalad debug Web UI is at /memz under the main impalad Web UI.
This page displays the summary and detailed information about memory usage by the impalad.
Metrics Page
The Metrics page of the impalad debug Web UI is at /metrics under the main impalad Web UI.
This page displays the current set of metrics, counters and flags representing various aspects of impalad internal
operations.
Queries Page
The Queries page of the impalad debug Web UI is at /queries under the main impalad Web UI.
This page lists:
• Currently running queries
• Queries that have completed their execution, but have not been closed yet
• Completed queries whose details still reside in memory
The queries are listed in reverse chronological order, with the most recent at the top. You can control the amount of
memory devoted to completed queries by specifying the --query_log_size startup option for impalad.
This page provides:
• How many SQL statements are failing (State value of EXCEPTION)
• How large the result sets are (# rows fetched)
• How long each statement took (Start Time and End Time)
Click the Details link for a query to display the detailed performance characteristics of that query, such as the profile
output.
On the query detail page, in the Profile tab, you have options to export the query profile output to the Thrift, text, or JSON format.
The Queries page also includes the Query Locations section that lists the number of running queries with fragments
on this host.
RPC Services Page
The RPC durations page of the impalad debug Web UI is at /rpcz under the main impalad Web UI.
This page displays information, such as the duration, about the RPC communications of this impalad with other Impala
daemons.
Sessions Page
The Sessions page of the impalad debug Web UI is at /session under the main impalad Web UI.
This page displays information about the sessions currently connected to this impalad instance. For example, sessions
could include connections from the impala-shell command, JDBC or ODBC applications, or the Impala Query UI in
the Hue web interface.
Threadz Page
The Threads page of the impalad debug Web UI is at /threadz under the main impalad Web UI.
This page displays information about the threads used by this instance of impalad, and it shows which categories
they are grouped into. Making use of this information requires substantial knowledge about Impala internals.
Varz Page
The Varz page of the impalad debug Web UI is at /varz under the main impalad Web UI.
This page shows the configuration settings in effect when this instance of impalad communicates with other Hadoop
components such as HDFS and YARN. These settings are collected from a set of configuration files.
The bottom of this page also lists all the command-line settings in effect for this instance of impalad. See Modifying
Impala Startup Options for information about modifying these values.
Prometheus Metrics Page
At /metrics_prometheus under the main impalad Web UI, the metrics are generated in Prometheus exposition format
that Prometheus can consume for event monitoring and alerting. The /metrics_prometheus endpoint is not shown in the Web UI list of pages.
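For a quick manual check of the endpoint (or to preview what a Prometheus scrape would collect), you can fetch it with any HTTP client. A minimal sketch using curl, assuming the default impalad webserver port of 25000:
$ curl -s http://impala-server-hostname:25000/metrics_prometheus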
Debug Web UI for statestored
To debug and troubleshoot the statestored daemon using a web-based interface, open the URL
http://impala-server-hostname:25010/ in a browser. (For secure clusters, use the prefix https:// instead
of http://.)
Turning off the Web UI for statestored
To disable Web UI for the statestored:
1. Stop the statestored.
2. Restart the statestored with the --enable_webserver=false startup flag.
Main Page
The main statestored Web UI page at / lists the following information about the statestored:
• The version of the statestored daemon
• Process start time
• Hardware information
• OS information
• Process information
• CGroup information
Logs Page
The INFO logs page of the debug Web UI is at /logs under the main statestored Web UI.
This page shows the last portion of the statestored.INFO log file, including the info, warning, and error logs for the
statestored. You can refer here to see the details of the most recent operations, whether the operations succeeded
or encountered errors. This page provides one central place for the log files and saves you from looking around the
filesystem for the log files, which could be in different locations on clusters that use cluster management software.
Memz Page
The Memory Usage page of the debug Web UI is at /memz under the main statestored Web UI.
This page displays summary and detailed information about memory usage by the statestored. You can see the
memory limit in effect for the node, and how much of that memory Impala is currently using.
Metrics Page
The Metrics page of the debug Web UI is at /metrics under the main statestored Web UI.
This page displays the current set of metrics: counters and flags representing various aspects of statestored internal
operation.
RPC Services Page
The RPC durations page of the statestored debug Web UI is at /rpcz under the main statestored Web UI.
This page displays information, such as the durations, about the RPC communications of this statestored with other
Impala daemons.
Subscribers Page
The Subscribers page of the debug Web UI is at /subscribers under the main statestored Web UI.
This page displays information about the other Impala daemons that have registered with the statestored to receive
and send updates.
Threadz Page
The Threads page of the debug Web UI is at /threadz under the main statestored Web UI.
This page displays information about the threads used by this instance of statestored, and shows which categories
they are grouped into. Making use of this information requires substantial knowledge about Impala internals.
Topics Page
The Topics page of the debug Web UI is at /topics under the main statestored Web UI.
This page displays information about the topics to which the other Impala daemons have registered to receive updates.
Varz Page
The Varz page of the debug Web UI is at /varz under the main statestored Web UI.
This page shows the configuration settings in effect when this instance of statestored communicates with other
Hadoop components such as HDFS and YARN. These settings are collected from a set of configuration files.
The bottom of this page also lists all the command-line settings in effect for this instance of statestored. See
Modifying Impala Startup Options for information about modifying these values.
Prometheus Metrics Page
At /metrics_prometheus under the main statestored Web UI, the metrics are generated in Prometheus exposition
format that Prometheus can consume for event monitoring and alerting. The /metrics_prometheus endpoint is not shown in
the Web UI list of pages.
Debug Web UI for catalogd
The main page of the debug Web UI is at http://impala-server-hostname:25020/ (non-secure cluster) or
https://impala-server-hostname:25020/ (secure cluster).
Turning off the Web UI for catalogd
To disable Web UI for the catalogd:
1. Stop the catalogd.
2. Restart the catalogd with the --enable_webserver=false startup flag.
Main Page
The main catalogd Web UI page at / lists the following information about the catalogd:
• The version of the catalogd daemon
• Process start time
• Hardware information
• OS information
• Process information
• CGroup information
Catalog Page
The Catalog page of the debug Web UI is at /catalog under the main catalogd Web UI.
This page displays a list of databases and associated tables recognized by this instance of catalogd. You can use this
page to locate which database a table is in, check the exact spelling of a database or table name, look for identical
table names in multiple databases. The catalog information is represented as the underlying Thrift data structures.
JMX
The JMX page of the catalogd debug Web UI is at /jmx under the main catalogd Web UI.
This page displays monitoring information about various JVM subsystems, such as memory pools, thread management,
and runtime.
Java Log Level
The Change log level page of the catalogd debug Web UI is at /log_level under the main catalogd Web UI.
The page displays the current Java and backend log levels and allows you to change the log levels dynamically without
having to restart the catalogd.
Logs Page
The INFO logs page of the debug Web UI is at /logs under the main catalogd Web UI.
This page shows the last portion of the catalogd.INFO log file, including the info, warning, and error logs for the
catalogd daemon. You can refer here to see the details of the most recent operations, whether the operations
succeeded or encountered errors. This page provides one central place for the log files and saves you from looking
around the filesystem for the log files, which could be in different locations on clusters that use cluster management
software.
Memz Page
The Memory Usage page of the debug Web UI is at /memz under the main catalogd Web UI.
This page displays summary and detailed information about memory usage by the catalogd. You can see the memory
limit in effect for the node, and how much of that memory Impala is currently using.
Metrics Page
The Metrics page of the debug Web UI is at /metrics under the main catalogd Web UI.
This page displays the current set of metrics: counters and flags representing various aspects of catalogd internal
operation.
RPC Services Page
The RPC durations page of the catalogd debug Web UI is at /rpcz under the main catalogd Web UI.
This page displays information, such as the durations, about the RPC communications of this catalogd with other
Impala daemons.
Threadz Page
The Threads page of the debug Web UI is at /threadz under the main catalogd Web UI.
This page displays information about the threads used by this instance of catalogd, and shows which categories they
are grouped into. Making use of this information requires substantial knowledge about Impala internals.
Varz Page
The Varz page of the debug Web UI is at /varz under the main catalogd Web UI.
This page shows the configuration settings in effect when this instance of catalogd communicates with other Hadoop
components such as HDFS and YARN. These settings are collected from a set of configuration files.
The bottom of this page also lists all the command-line settings in effect for this instance of catalogd. See Modifying
Impala Startup Options for information about modifying these values.
Prometheus Metrics Page
At /metrics_prometheus under the main catalogd Web UI, the metrics are generated in Prometheus exposition format that Prometheus can consume for event monitoring and alerting. The /metrics_prometheus endpoint is not shown in the Web
UI list of pages.
Breakpad Minidumps for Impala (CDH 5.8 or higher only)
The breakpad project is an open-source framework for crash reporting. In CDH 5.8 / Impala 2.6 and higher, Impala can
use breakpad to record stack information and register values when any of the Impala-related daemons crash due to
an error such as SIGSEGV or unhandled exceptions. The dump files are much smaller than traditional core dump files.
The dump mechanism itself uses very little memory, which improves reliability if the crash occurs while the system is
low on memory.
Important: Because of the internal mechanisms involving Impala memory allocation and Linux
signalling for out-of-memory (OOM) errors, if an Impala-related daemon experiences a crash due to
an OOM condition, it does not generate a minidump for that error.
Enabling or Disabling Minidump Generation
By default, a minidump file is generated when an Impala-related daemon crashes.
To turn off generation of the minidump files, use one of the following options:
• Set the --enable_minidumps configuration setting to false. Restart the corresponding services or daemons.
• Set the --minidump_path configuration setting to an empty string. Restart the corresponding services or daemons.
In CDH 5.9 / Impala 2.7 and higher, you can send a SIGUSR1 signal to any Impala-related daemon to write a Breakpad
minidump. For advanced troubleshooting, you can now produce a minidump without triggering a crash.
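For example, the following sketch writes an on-demand minidump for a running impalad without stopping it. It assumes the pidof utility is available on the host; substitute the actual process ID otherwise.
$ kill -s SIGUSR1 $(pidof impalad)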
Specifying the Location for Minidump Files
By default, all minidump files are written to the following location on the host where a crash occurs:
• Clusters managed by Cloudera Manager: /var/log/impala-minidumps/daemon_name
• Clusters not managed by Cloudera Manager: impala_log_dir/daemon_name/minidumps/daemon_name
The minidump files for impalad, catalogd, and statestored are each written to a separate directory.
To specify a different location, set the minidump_path configuration setting of one or more Impala-related daemons,
and restart the corresponding services or daemons.
If you specify a relative path for this setting, the value is interpreted relative to the default minidump_path directory.
Controlling the Number of Minidump Files
Like any files used for logging or troubleshooting, consider limiting the number of minidump files, or removing unneeded
ones, depending on the amount of free storage space on the hosts in the cluster.
Because the minidump files are only used for problem resolution, you can remove any such files that are not needed
to debug current issues.
To control how many minidump files Impala keeps around at any one time, set the max_minidumps configuration
setting for one or more Impala-related daemons, and restart the corresponding services or daemons. The default
for this setting is 9. A zero or negative value is interpreted as “unlimited”.
Detecting Crash Events
You can see in the Impala log files or in the Cloudera Manager charts for Impala when crash events occur that generate
minidump files. Because each restart begins a new log file, the “crashed” message is always at or near the bottom of
the log file. (There might be another later message if core dumps are also enabled.)
Using the Minidump Files for Problem Resolution
Typically, you provide minidump files to Cloudera Support as part of problem resolution, in the same way that you
might provide a core dump. The Send Diagnostic Data under the Support menu in Cloudera Manager guides you
through the process of selecting a time period and volume of diagnostic data, then collects the data from all hosts and
transmits the relevant information for you.
Figure 6: Send Diagnostic Data choice under Support menu
You might get additional instructions from Cloudera Support about collecting minidumps to better isolate a specific
problem. Because the information in the minidump files is limited to stack traces and register contents, the possibility
of including sensitive information is much lower than with core dump files. If any sensitive information is included in
the minidump, Cloudera Support preserves the confidentiality of that information.
Demonstration of Breakpad Feature
The following example uses the command kill -11 to simulate a SIGSEGV crash for an impalad process on a single
DataNode, then examines the relevant log files and minidump file.
First, as root on a worker node, we kill the impalad process with a SIGSEGV error. The original process ID was 23114.
(Cloudera Manager restarts the process with a new pid, as shown by the second ps command.)
# ps ax | grep impalad
23114 ? Sl 0:18
/opt/cloudera/parcels//lib/impala/sbin-retail/impalad
--flagfile=/var/run/cloudera-scm-agent/process/114-impala-IMPALAD/impala-conf/impalad_flags
31259 pts/0 S+ 0:00 grep impalad
#
# kill -11 23114
#
# ps ax | grep impalad
31374 ? Rl 0:04
/opt/cloudera/parcels//lib/impala/sbin-retail/impalad
--flagfile=/var/run/cloudera-scm-agent/process/114-impala-IMPALAD/impala-conf/impalad_flags
31475 pts/0 S+ 0:00 grep impalad
We locate the log directory underneath /var/log. There is a .INFO, .WARNING, and .ERROR log file for the 23114
process ID. The minidump message is written to the .INFO file and the .ERROR file, but not the .WARNING file. In this
case, a large core file was also produced.
# cd /var/log/impalad
# ls -la | grep 23114
-rw------- 1 impala impala 3539079168 Jun 23 15:20 core.23114
-rw-r--r-- 1 impala impala 99057 Jun 23 15:20 hs_err_pid23114.log
-rw-r--r-- 1 impala impala 351 Jun 23 15:20
impalad.worker_node_123.impala.log.ERROR.20160623-140343.23114
-rw-r--r-- 1 impala impala 29101 Jun 23 15:20
impalad.worker_node_123.impala.log.INFO.20160623-140343.23114
-rw-r--r-- 1 impala impala 228 Jun 23 14:03
impalad.worker_node_123.impala.log.WARNING.20160623-140343.23114
The .INFO log includes the location of the minidump file, followed by a report of a core dump. With the breakpad
minidump feature enabled, now we might disable core dumps or keep fewer of them around.
# cat impalad.worker_node_123.impala.log.INFO.20160623-140343.23114
...
Wrote minidump to
/var/log/impala-minidumps/impalad/0980da2d-a905-01e1-25ff883a-04ee027a.dmp
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000030c0e0b68a, pid=23114, tid=139869541455968
#
# JRE version: Java(TM) SE Runtime Environment (7.0_67-b01) (build 1.7.0_67-b01)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed
oops)
# Problematic frame:
# C [libpthread.so.0+0xb68a] pthread_cond_wait+0xca
#
# Core dump written. Default location: /var/log/impalad/core or core.23114
#
# An error report file with more information is saved as:
# /var/log/impalad/hs_err_pid23114.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
...
# cat impalad.worker_node_123.impala.log.ERROR.20160623-140343.23114
Log file created at: 2016/06/23 14:03:43
Running on machine:.worker_node_123
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0623 14:03:43.911002 23114 logging.cc:118] stderr will be logged to this file.
Wrote minidump to
/var/log/impala-minidumps/impalad/0980da2d-a905-01e1-25ff883a-04ee027a.dmp
The resulting minidump file is much smaller than the corresponding core file, making it much easier to supply diagnostic
information to Cloudera Support. The transmission process for the minidump files is automated through Cloudera
Manager.
# pwd
/var/log/impalad
# cd ../impala-minidumps/impalad
# ls
0980da2d-a905-01e1-25ff883a-04ee027a.dmp
# du -kh *
2.4M 0980da2d-a905-01e1-25ff883a-04ee027a.dmp
Ports Used by Impala
Impala uses the TCP ports listed in the following table. Before deploying Impala, ensure these ports are open on each
system.
Scope / Role | Setting in Cloudera Manager | Default Port | Comment
Impala Daemon | Impala Daemon Frontend Port | 21000 | Used to transmit commands and receive results by impala-shell and version 1.2 of the Cloudera ODBC driver.
Impala Daemon | Impala Daemon Frontend Port | 21050 | Used to transmit commands and receive results by applications, such as Business Intelligence tools, using JDBC, the Beeswax query editor in Hue, and version 2.0 or higher of the Cloudera ODBC driver.
Impala Daemon | Impala Daemon Backend Port | 22000 | Internal use only. Impala daemons use this port to communicate with each other.
Impala Daemon | StateStoreSubscriber Service Port | 23000 | Internal use only. Impala daemons listen on this port for updates from the statestore daemon.
Catalog Daemon | StateStoreSubscriber Service Port | 23020 | Internal use only. The catalog daemon listens on this port for updates from the statestore daemon.
Impala Daemon | Impala Daemon HTTP Server Port | 25000 | Impala web interface for administrators to monitor and troubleshoot.
Impala StateStore Daemon | StateStore HTTP Server Port | 25010 | StateStore web interface for administrators to monitor and troubleshoot.
Impala Catalog Daemon | Catalog HTTP Server Port | 25020 | Catalog service web interface for administrators to monitor and troubleshoot. New in Impala 1.2 and higher.
Impala StateStore Daemon | StateStore Service Port | 24000 | Internal use only. The statestore daemon listens on this port for registration/unregistration requests.
Impala Catalog Daemon | Catalog Service Port | 26000 | Internal use only. The catalog service uses this port to communicate with the Impala daemons. New in Impala 1.2 and higher.
Impala Daemon | KRPC Port | 27000 | Internal use only. Impala daemons use this port for KRPC based communication with each other.
Impala Reserved Words
This topic lists the reserved words in Impala.
A reserved word is one that cannot be used directly as an identifier. If you need to use it as an identifier, you must
quote it with backticks. For example:
• CREATE TABLE select (x INT): fails
• CREATE TABLE `select` (x INT): succeeds
Because different database systems have different sets of reserved words, and the reserved words change from release
to release, carefully consider database, table, and column names to ensure maximum compatibility between products
and versions.
Also consider whether your object names are the same as any Hive keywords, and rename or quote any that conflict, because you might switch between Impala and Hive when doing analytics and ETL. Consult the list of Hive keywords.
To future-proof your code, avoid using additional words that might become reserved if Impala adds features in later releases. This kind of planning can also help you avoid name conflicts if you port SQL from other systems that have different sets of reserved words. The list below also includes future keywords: additional words that you should avoid for table, column, or other object names, even though they are not currently reserved by Impala.
The following is a summary of the process for deciding whether a particular SQL:2016 word is to be reserved in Impala.
• By default, Impala aims to have the same list of reserved words as SQL:2016.
• At the same time, to be compatible with earlier versions of Impala and to avoid breaking existing tables and workloads, Impala built-in function names, such as COUNT and AVG, are removed from the reserved words list, because Impala generally does not need to reserve the names of built-in functions for parsing to work.
• For the remaining SQL:2016 reserved words, if a word is likely to be in use by users of older Impala versions and there is a low chance of Impala needing to reserve that word in the future, then the word is not reserved.
• Otherwise, the word is reserved in Impala.
List of Current Reserved Words
The list below combines words that are reserved in SQL:2016, words that are reserved in Impala 2.12 and lower, words that are reserved in CDH 6.0 / Impala 3.0 and higher, and future keywords:
abs, acos, add, aggregate, all, allocate, alter, analytic, and, anti, any, api_version, are, array, array_agg, array_max_cardinality, as, asc, asensitive, asin, asymmetric, at, atan, atomic, authorization, avg, avro,
backup, begin, begin_frame, begin_partition, between, bigint, binary, blob, block_size, boolean, both, break, browse, bulk, by,
cache, cached, call, called, cardinality, cascade, cascaded, case, cast, ceil, ceiling, change, char, char_length, character, character_length, check, checkpoint, class, classifier, clob, close, close_fn, clustered, coalesce, collate, collect, column, columns, comment, commit, compression, compute, condition, conf, connect, constraint, contains, continue, convert, copy, corr, corresponding, cos, cosh, count, covar_pop, covar_samp, create, cross, cube, cume_dist, current, current_catalog, current_date, current_default_transform_group, current_path, current_role, current_row, current_schema, current_time, current_timestamp, current_transform_group_for_type, current_user, cursor, cycle,
data, database, databases, date, datetime, day, dayofweek, dbcc, deallocate, dec, decfloat, decimal, declare, default, define, delete, delimited, dense_rank, deny, deref, desc, describe, deterministic, disconnect, disk, distinct, distributed, div, double, drop, dump, dynamic,
each, element, else, empty, encoding, end, end-exec, end_frame, end_partition, equals, errlvl, escape, escaped, every, except, exchange, exec, execute, exists, exit, exp, explain, extended, external, extract,
false, fetch, fields, file, filefactor, fileformat, files, filter, finalize_fn, first, first_value, float, floor, following, for, foreign, format, formatted, frame_row, free, freetext, from, full, function, functions, fusion,
get, global, goto, grant, group, grouping, groups, hash, having, hold, holdlock, hour,
identity, if, ignore, ilike, import, in, incremental, index, indicator, init_fn, initial, inner, inout, inpath, insensitive, insert, int, integer, intermediate, intersect, intersection, interval, into, invalidate, iregexp, is,
join, json_array, json_arrayagg, json_exists, json_object, json_objectagg, json_query, json_table, json_table_primitive, json_value,
key, kill, kudu, lag, language, large, last, last_value, lateral, lead, leading, left, less, like, like_regex, limit, lineno, lines, listagg, ln, load, local, localtime, localtimestamp, location, log, log10, lower,
macro, map, match, match_number, match_recognize, matches, max, member, merge, merge_fn, metadata, method, min, minute, mod, modifies, module, month, more, multiset,
national, natural, nchar, nclob, new, no, nocheck, nonclustered, none, normalize, not, nth_value, ntile, null, nullif, nulls, numeric,
occurrences_regex, octet_length, of, off, offset, offsets, old, omit, on, one, only, open, option, or, order, out, outer, over, overlaps, overlay, overwrite,
parameter, parquet, parquetfile, partialscan, partition, partitioned, partitions, pattern, per, percent, percent_rank, percentile_cont, percentile_disc, period, pivot, plan, portion, position, position_regex, power, precedes, preceding, precision, prepare, prepare_fn, preserve, primary, print, proc, procedure, produced, ptf, public, purge,
raiseerror, range, rank, rcfile, read, reads, readtext, real, reconfigure, recover, recursive, reduce, ref, references, referencing, refresh, regexp, regr_avgx, regr_avgy, regr_count, regr_intercept, regr_r2, regr_slope, regr_sxx, regr_sxy, regr_syy, release, rename, repeatable, replace, replication, restore, restrict, result, return, returns, revert, revoke, right, rlike, role, roles, rollback, rollup, row, row_number, rowcount, rows, rule, running,
save, savepoint, schema, schemas, scope, scroll, search, second, securityaudit, seek, select, semi, sensitive, sequencefile, serdeproperties, serialize_fn, session_user, set, setuser, show, shutdown, similar, sin, sinh, skip, smallint, some, sort, specific, specifictype, sql, sqlexception, sqlstate, sqlwarning, sqrt, start, static, statistics, stats, stddev_pop, stddev_samp, stored, straight_join, string, struct, submultiset, subset, substring, substring_regex, succeeds, sum, symbol, symmetric, system, system_time, system_user,
table, tables, tablesample, tan, tanh, tblproperties, terminated, textfile, textsize, then, time, timestamp, timezone_hour, timezone_minute, tinyint, to, top, trailing, tran, transform, translate, translate_regex, translation, treat, trigger, trim, trim_array, true, truncate, try_convert,
uescape, unbounded, uncached, union, unique, uniquejoin, unknown, unnest, unpivot, update, update_fn, updatetext, upper, upsert, use, user, using, utc_tmestamp,
value, value_of, values, var_pop, var_samp, varbinary, varchar, varying, versioning, view, views,
waitfor, when, whenever, where, while, width_bucket, window, with, within, without, writetext, year
Impala Frequently Asked Questions
Here are the categories of frequently asked questions for Apache Impala, the interactive SQL engine for CDH.
Transition to Apache Governance
Does "Apache Impala (incubating)" mean Impala is not production-ready?
No. The “(incubating)” label was only applied to the Apache Impala project while it was transitioning to governance
by the Apache Software Foundation. Impala graduated to a top-level Apache project on November 15, 2017.
Impala has always been Apache-licensed. The software itself is the same production-ready and battle-tested analytic
database that has been supported by Cloudera since Impala 1.0 in 2013.
Why does the Impala version string in CDH 5.10 say Impala 2.7, while the docs refer to Impala 2.8?
The version of Impala that is included in CDH 5.10 is Impala 2.7 plus almost all the patches that went into Impala 2.8.
CDH 5.10 was released very shortly after Apache Impala 2.8, and the version string was not updated in the CDH
packaging. To accurately relate the CDH 5.10 feature set to the corresponding level of Apache Impala, the documentation
refers to Impala 2.8 as the minimum Impala version number for features such as the MT_DOP query option and full
integration of Impala SQL syntax with Apache Kudu.
The full list of relevant commits for the Impala included with CDH 5.10.0, and the upstream Apache Impala project,
are:
• CDH 5.10: https://github.com/cloudera/Impala/commits/cdh5-2.7.0_5.10.0
• Apache Impala 2.8: https://github.com/apache/incubator-impala/commits/branch-2.8.0
Because the Cloudera policy is to keep version numbering consistent across CDH maintenance releases, the Impala
version string in the CDH packaging remains at 2.7 for CDH 5.10.1 and any future 5.10 maintenance releases.
Trying Impala
How do I try Impala out?
The easiest way to try out the core features and functionality of Impala is to download the Cloudera QuickStart VM and start the Impala service through Cloudera Manager, then use impala-shell in a terminal window or the Impala Query UI in the Hue web interface.
To do performance testing and try out the management features for Impala on a cluster, you need to move beyond
the QuickStart VM with its virtualized single-node environment. Ideally, download the Cloudera Manager software to
set up the cluster, then install the Impala software through Cloudera Manager.
Does Cloudera offer a VM for demonstrating Impala?
Cloudera offers a demonstration VM called the QuickStart VM, available in VMWare, VirtualBox, and KVM formats.
For more information, see the Cloudera QuickStart VM. After booting the QuickStart VM, many services are turned
off by default; in the Cloudera Manager UI that appears automatically, turn on Impala and any other components that
you want to try out.
Where can I find Impala documentation?
The core Impala developer and administrator information remains in the associated Impala documentation. Information about Impala release notes, installation, configuration, startup, and security is embedded in the corresponding CDH guides.
• Impala Upgrade Considerations on page 38
• Configuring Impala
• Security for Impala
• CDH Version and Packaging Information
Where can I get more information about Impala?
More product information is available here:
• O'Reilly introductory e-book: Cloudera Impala: Bringing the SQL and Hadoop Worlds Together
• O'Reilly getting started guide for developers: Getting Started with Impala: Interactive SQL for Apache Hadoop
• Blog: Cloudera Impala: Real-Time Queries in Apache Hadoop, For Real
• Webinar: Introduction to Impala
• Product website page: Cloudera Enterprise RTQ
To see the latest release announcements for Impala, see the Cloudera Announcements forum.
How can I ask questions and provide feedback about Impala?
• Join the Impala discussion forum and the Impala mailing list to ask questions and provide feedback.
• Use the Impala Jira project to log bug reports and requests for features.
Where can I get sample data to try?
You can get scripts that produce data files and set up an environment for TPC-DS style benchmark tests from this GitHub
repository. In addition to being useful for experimenting with performance, the tables are suited to experimenting
with many aspects of SQL on Impala: they contain a good mixture of data types, data distributions, partitioning, and
relational data suitable for join queries.
Impala System Requirements
What are the software and hardware requirements for running Impala?
For information on Impala requirements, see Impala Requirements on page 23. Note that there is often a minimum
required level of Cloudera Manager for any given Impala version.
How much memory is required?
Although Impala is not an in-memory database, when dealing with large tables and large result sets, you should expect
to dedicate a substantial portion of physical memory for the impalad daemon. Recommended physical memory for
an Impala node is 128 GB or higher. If practical, devote approximately 80% of physical memory to Impala.
The amount of memory required for an Impala operation depends on several factors:
• The file format of the table. Different file formats represent the same data in more or fewer data files. The
compression and encoding for each file format might require a different amount of temporary memory to
decompress the data for analysis.
• Whether the operation is a SELECT or an INSERT. For example, Parquet tables require relatively little memory
to query, because Impala reads and decompresses data in 8 MB chunks. Inserting into a Parquet table is a more
memory-intensive operation because the data for each data file (potentially hundreds of megabytes, depending
on the value of the PARQUET_FILE_SIZE query option) is stored in memory until encoded, compressed, and
written to disk.
• Whether the table is partitioned or not, and whether a query against a partitioned table can take advantage of
partition pruning.
• Whether the final result set is sorted by the ORDER BY clause. Each Impala node scans and filters a portion of the
total data, and applies the LIMIT to its own portion of the result set. In CDH 5.1 / Impala 1.4 and higher, if the
sort operation requires more memory than is available on any particular host, Impala uses a temporary disk work
area to perform the sort. The intermediate result sets are all sent back to the coordinator node, which does the
final sorting and then applies the LIMIT clause to the final result set.
For example, if you execute the query:
select * from giant_table order by some_column limit 1000;
and your cluster has 50 nodes, then each of those 50 nodes will transmit a maximum of 1000 rows back to the
coordinator node. The coordinator node needs enough memory to sort (LIMIT * cluster_size) rows, although in
the end the final result set is at most LIMIT rows, 1000 in this case.
Likewise, if you execute the query:
select * from giant_table where test_val > 100 order by some_column;
then each node filters out a set of rows matching the WHERE conditions, sorts the results (with no size limit), and
sends the sorted intermediate rows back to the coordinator node. The coordinator node might need substantial
memory to sort the final result set, and so might use a temporary disk work area for that final phase of the query.
• Whether the query contains any join clauses, GROUP BY clauses, analytic functions, or DISTINCT operators. These
operations all require some in-memory work areas that vary depending on the volume and distribution of data.
In Impala 2.0 and later, these kinds of operations utilize temporary disk work areas if memory usage grows too
large to handle. See SQL Operations that Spill to Disk on page 607 for details.
• The size of the result set. When intermediate results are being passed around between nodes, the amount of data
depends on the number of columns returned by the query. For example, it is more memory-efficient to query
only the columns that are actually needed in the result set rather than always issuing SELECT *.
• The mechanism by which work is divided for a join query. You use the COMPUTE STATS statement, and query
hints in the most difficult cases, to help Impala pick the most efficient execution plan. See Performance
Considerations for Join Queries on page 568 for details.
See Hardware Requirements on page 24 for more details and recommendations about Impala hardware prerequisites.
What processor type and speed does Cloudera recommend?
Impala makes use of SSE 4.1 instructions.
What EC2 instances are recommended for Impala?
For large storage capacity and large I/O bandwidth, consider the hs1.8xlarge and cc2.8xlarge instance types.
Impala I/O patterns typically do not benefit enough from SSD storage to make up for the lower overall size. For
performance and security considerations for deploying CDH and its components on AWS, see Cloudera Enterprise
Reference Architecture for AWS Deployments.
Supported and Unsupported Functionality In Impala
What are the main features of Impala?
• A large set of SQL statements, including SELECT and INSERT, with joins, Subqueries in Impala SELECT Statements
on page 312, and Impala Analytic Functions on page 506. Highly compatible with HiveQL, and also including some
vendor extensions. For more information, see Impala SQL Language Reference on page 101.
• Distributed, high-performance queries. See Tuning Impala for Performance on page 565 for information about
Impala performance optimizations and tuning techniques for queries.
• Using Cloudera Manager, you can deploy and manage your Impala services. Cloudera Manager is the best way to
get started with Impala on your cluster.
• Using Hue for queries.
• Appending and inserting data into tables through the INSERT statement. See How Impala Works with Hadoop File
Formats on page 634 for the details about which operations are supported for which file formats.
• ODBC: Impala is certified to run against MicroStrategy and Tableau, with restrictions. For more information, see
Configuring Impala to Work with ODBC on page 723.
• Querying data stored in HDFS and HBase in a single query. See Using Impala to Query HBase Tables on page 684
for details.
• In Impala 2.2.0 and higher, querying data stored in the Amazon Simple Storage Service (S3). See Using Impala with
the Amazon S3 Filesystem on page 692 for details.
• Concurrent client requests. Each Impala daemon can handle multiple concurrent client requests. The effects on
performance depend on your particular hardware and workload.
• Kerberos authentication. For more information, see Impala Security on page 82.
• Partitions. With Impala SQL, you can create partitioned tables with the CREATE TABLE statement, and add and
drop partitions with the ALTER TABLE statement. Impala also takes advantage of the partitioning present in Hive
tables. See Partitioning for Impala Tables on page 625 for details.
What features from relational databases or Hive are not available in Impala?
• Querying streaming data.
• Deleting individual rows. You delete data in bulk by overwriting an entire table or partition, or by dropping a table.
• Indexing (not currently). LZO-compressed text files can be indexed outside of Impala, as described in Using
LZO-Compressed Text Files on page 640.
• Full text search on text fields. The Cloudera Search product is appropriate for this use case.
• Custom Hive Serializer/Deserializer classes (SerDes). Impala supports a set of common native file formats that
have built-in SerDes in CDH. See How Impala Works with Hadoop File Formats on page 634 for details.
• Checkpointing within a query. That is, Impala does not save intermediate results to disk during long-running
queries. Currently, Impala cancels a running query if any host on which that query is executing fails. When one or
more hosts are down, Impala reroutes future queries to only use the available hosts, and Impala detects when
the hosts come back up and begins using them again. Because a query can be submitted through any Impala node,
there is no single point of failure. In the future, we will consider adding additional work allocation features to
Impala, so that a running query would complete even in the presence of host failures.
• Hive indexes.
• Non-Hadoop data stores, such as relational databases.
For the detailed list of features that are different between Impala and HiveQL, see SQL Differences Between Impala
and Hive on page 541.
Does Impala support generic JDBC?
Impala supports the HiveServer2 JDBC driver.
Is Avro supported?
Yes, Avro is supported. Impala has always been able to query Avro tables. You can use the Impala LOAD DATA statement
to load existing Avro data files into a table. Starting with CDH 5.1 / Impala 1.4, you can create Avro tables with Impala.
Currently, you still use the INSERT statement in Hive to copy data from another table into an Avro table. See Using
the Avro File Format with Impala Tables on page 659 for details.
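As a rough sketch of that workflow (the table name and HDFS path are hypothetical, and depending on your Impala version you might instead need to supply the Avro schema through TBLPROPERTIES):
-- Hypothetical table and staging path, shown for illustration only.
CREATE TABLE avro_events (id BIGINT, event_name STRING)
  STORED AS AVRO;
-- Load existing Avro data files that already match the table schema.
LOAD DATA INPATH '/user/impala/staging/avro_events' INTO TABLE avro_events;
-- Copying data from another table into the Avro table is currently done through Hive:
-- INSERT INTO TABLE avro_events SELECT id, event_name FROM some_other_table;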
How do I?
How do I prevent users from seeing the text of SQL queries?
For instructions on making the Impala log files unreadable by unprivileged users, see Securing Impala Data and Log
Files on page 83.
For instructions on password-protecting the web interface to the Impala log files and other internal server information,
see Securing the Impala Web User Interface on page 84.
In CDH 5.4 / Impala 2.2 and higher, you can use the log redaction feature to obfuscate sensitive information in Impala
log files. See http://www.cloudera.com/documentation/enterprise/latest/topics/sg_redaction.html for details.
How do I know how many Impala nodes are in my cluster?
The Impala statestore keeps track of how many impalad nodes are currently available. You can see this information
through the statestore web interface. For example, at the URL http://statestore_host:25010/metrics you
might see lines like the following:
statestore.live-backends:3
statestore.live-backends.list:[host1:22000, host1:26000, host2:22000]
The number of impalad nodes is the number of list items referring to port 22000, in this case two. (Typically, this
number is one less than the number reported by the statestore.live-backends line.) If an impalad node became
unavailable or came back after an outage, the information reported on this page would change appropriately.
Impala Performance
Are results returned as they become available, or all at once when a query completes?
Impala streams results whenever they are available, when possible. Certain SQL operations (aggregation or ORDER
BY) require all of the input to be ready before Impala can return results.
Why does my query run slowly?
There are many possible reasons why a given query could be slow. Use the following checklist to diagnose performance
issues with existing queries, and to avoid such issues when writing new queries, setting up new nodes, creating new
tables, or loading data.
• Immediately after the query finishes, issue a SUMMARY command in impala-shell. You can check which phases
of execution took the longest, and compare estimated values for memory usage and number of rows with the
actual values.
• Immediately after the query finishes, issue a PROFILE command in impala-shell. The numbers in the BytesRead, BytesReadLocal, and BytesReadShortCircuit counters should be identical for a specific node. For example:
- BytesRead: 180.33 MB
- BytesReadLocal: 180.33 MB
- BytesReadShortCircuit: 180.33 MB
If BytesReadLocal is lower than BytesRead, something in your cluster is misconfigured, such as the impalad
daemon not running on all the data nodes. If BytesReadShortCircuit is lower than BytesRead, short-circuit
reads are not enabled properly on that node; see Post-Installation Configuration for Impala on page 36 for
instructions.
• If the table was just created, or this is the first query that accessed the table after an INVALIDATE METADATA
statement or after the impalad daemon was restarted, there might be a one-time delay while the metadata for
the table is loaded and cached. Check whether the slowdown disappears when the query is run again. When doing
performance comparisons, consider issuing a DESCRIBE table_name statement for each table first, to make
sure any timings only measure the actual query time and not the one-time wait to load the table metadata.
• Is the table data in uncompressed text format? Check by issuing a DESCRIBE FORMATTED table_name statement.
A text table is indicated by the line:
InputFormat: org.apache.hadoop.mapred.TextInputFormat
Although uncompressed text is the default format for a CREATE TABLE statement with no STORED AS clauses,
it is also the bulkiest format for disk storage and consequently usually the slowest format for queries. For data
where query performance is crucial, particularly for tables that are frequently queried, consider starting with or
converting to a compact binary file format such as Parquet, Avro, RCFile, or SequenceFile. For details, see How
Impala Works with Hadoop File Formats on page 634.
• If your table has many columns, but the query refers to only a few columns, consider using the Parquet file format.
Its data files are organized with a column-oriented layout that lets queries minimize the amount of I/O needed
to retrieve, filter, and aggregate the values for specific columns. See Using the Parquet File Format with Impala
Tables on page 643 for details.
• If your query involves any joins, are the tables in the query ordered so that the tables or subqueries are ordered
with the one returning the largest number of rows on the left, followed by the smallest (most selective), the second
smallest, and so on? That ordering allows Impala to optimize the way work is distributed among the nodes and
how intermediate results are routed from one node to another. For example, all other things being equal, the
following join order results in an efficient query:
select some_col from
huge_table join big_table join small_table join medium_table
where
huge_table.id = big_table.id
and big_table.id = medium_table.id
and medium_table.id = small_table.id;
See Performance Considerations for Join Queries on page 568 for performance tips for join queries.
• Also for join queries, do you have table statistics for the table, and column statistics for the columns used in the
join clauses? Column statistics let Impala better choose how to distribute the work for the various pieces of a join
query. See Table and Column Statistics on page 575 for details about gathering statistics.
• Does your table consist of many small data files? Impala works most efficiently with data files in the multi-megabyte
range; Parquet, a format optimized for data warehouse-style queries, uses large files (originally 1 GB, now 256
MB in Impala 2.0 and higher) with a block size matching the file size. Use the DESCRIBE FORMATTED table_name
statement in impala-shell to see where the data for a table is located, and use the hadoop fs -ls or hdfs
dfs -ls Unix commands to see the files and their sizes. If you have thousands of small data files, that is a signal
that you should consolidate into a smaller number of large files. Use an INSERT ... SELECT statement to copy
the data to a new table, reorganizing into new data files as part of the process. Prefer to construct large data files
and import them in bulk through the LOAD DATA or CREATE EXTERNAL TABLE statements, rather than issuing
many INSERT ... VALUES statements; each INSERT ... VALUES statement creates a separate tiny data file.
If you have thousands of files all in the same directory, but each one is megabytes in size, consider using a partitioned
table so that each partition contains a smaller number of files. See the following point for more on partitioning.
• If your data is easy to group according to time or geographic region, have you partitioned your table based on the
corresponding columns such as YEAR, MONTH, and/or DAY? Partitioning a table based on certain columns allows
queries that filter based on those same columns to avoid reading the data files for irrelevant years, postal codes,
and so on. (Do not partition down to too fine a level; try to structure the partitions so that there is still sufficient
data in each one to take advantage of the multi-megabyte HDFS block size.) See Partitioning for Impala Tables on
page 625 for details.
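To illustrate the last two points, the following sketch (with hypothetical table and column names) consolidates many small files into larger Parquet files while also partitioning by date columns:
-- Hypothetical tables; adjust the columns and file format to your data.
CREATE TABLE sales_consolidated (id BIGINT, amount DECIMAL(10,2), item STRING)
  PARTITIONED BY (year INT, month INT)
  STORED AS PARQUET;
-- Copying with INSERT ... SELECT reorganizes the data into fewer, larger files,
-- one set of files per (year, month) partition.
INSERT INTO sales_consolidated PARTITION (year, month)
  SELECT id, amount, item, year, month FROM sales_raw;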
Why does my SELECT statement fail?
When a SELECT statement fails, the cause usually falls into one of the following categories:
• A timeout because of a performance, capacity, or network issue affecting one particular node.
• Excessive memory use for a join query, resulting in automatic cancellation of the query.
• A low-level issue affecting how native code is generated on each node to handle particular WHERE clauses in the
query. For example, a machine instruction could be generated that is not supported by the processor of a certain
node. If the error message in the log suggests the cause was an illegal instruction, consider turning off native code
generation temporarily, and trying the query again.
• Malformed input data, such as a text data file with an enormously long line, or with a delimiter that does not
match the character specified in the FIELDS TERMINATED BY clause of the CREATE TABLE statement.
Why does my INSERT statement fail?
When an INSERT statement fails, it is usually the result of exceeding some limit within a Hadoop component, typically
HDFS.
• An INSERT into a partitioned table can be a strenuous operation due to the possibility of opening many files and
associated threads simultaneously in HDFS. Impala 1.1.1 includes some improvements to distribute the work more
efficiently, so that the values for each partition are written by a single node, rather than as a separate data file
from each node.
• Certain expressions in the SELECT part of the INSERT statement can complicate the execution planning and result
in an inefficient INSERT operation. Try to make the column data types of the source and destination tables match
up, for example by doing ALTER TABLE ... REPLACE COLUMNS on the source table if necessary. Try to avoid
CASE expressions in the SELECT portion, because they make the result values harder to predict than transferring
a column unchanged or passing the column through a built-in function.
• Be prepared to raise some limits in the HDFS configuration settings, either temporarily during the INSERT or
permanently if you frequently run such INSERT statements as part of your ETL pipeline.
• The resource usage of an INSERT statement can vary depending on the file format of the destination table.
Inserting into a Parquet table is memory-intensive, because the data for each partition is buffered in memory
until it reaches 1 gigabyte, at which point the data file is written to disk. Impala can distribute the work for an
INSERT more efficiently when statistics are available for the source table that is queried during the INSERT
statement. See Table and Column Statistics on page 575 for details about gathering statistics.
Does Impala performance improve as it is deployed to more hosts in a cluster in much the same way that Hadoop
performance does?
Yes. Impala scales with the number of hosts. It is important to install Impala on all the DataNodes in the cluster, because
otherwise some of the nodes must do remote reads to retrieve data not available for local reads. Data locality is an
important architectural aspect for Impala performance. See this Impala performance blog post for background. Note
that this blog post refers to benchmarks with Impala 1.1.1; Impala has added even more performance features in the
1.2.x series.
Is the HDFS block size reduced to achieve faster query results?
No. Impala does not make any changes to the HDFS or HBase data sets.
The default Parquet block size is relatively large (256 MB in Impala 2.0 and later; 1 GB in earlier releases). You can
control the block size when creating Parquet files using the PARQUET_FILE_SIZE query option.
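For example (a sketch with a hypothetical table name), you could make a particular INSERT write smaller Parquet files:
-- Write 128 MB Parquet files instead of the default size for this session.
SET PARQUET_FILE_SIZE=128m;
INSERT OVERWRITE parquet_table SELECT * FROM text_table;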
Does Impala use caching?
Impala does not cache table data. It does cache some table and file metadata. Although queries might run faster on
subsequent iterations because the data set was cached in the OS buffer cache, Impala does not explicitly control this.
Impala takes advantage of the HDFS caching feature in CDH. You can designate which tables or partitions are cached
through the CACHED and UNCACHED clauses of the CREATE TABLE and ALTER TABLE statements. Impala can also
take advantage of data that is pinned in the HDFS cache through the hdfs cacheadmin command. See Using HDFS
Caching with Impala (CDH 5.3 or higher only) on page 593 for details.
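As an illustrative sketch (the table and cache pool names are hypothetical, and the pool must already exist in HDFS):
-- Pin the table's data files in the HDFS cache.
ALTER TABLE census SET CACHED IN 'pool_name';
-- Stop caching the table's data.
ALTER TABLE census SET UNCACHED;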
Impala Use Cases
What are good use cases for Impala as opposed to Hive or MapReduce?
Impala is well-suited to executing SQL queries for interactive exploratory analytics on large data sets. Hive and
MapReduce are appropriate for very long running, batch-oriented tasks such as ETL.
Is MapReduce required for Impala? Will Impala continue to work as expected if MapReduce is stopped?
Impala does not use MapReduce at all.
Can Impala be used for complex event processing?
For example, in an industrial environment, many agents may generate large amounts of data. Can Impala be used to
analyze this data, checking for notable changes in the environment?
Complex Event Processing (CEP) is usually performed by dedicated stream-processing systems. Impala is not a
stream-processing system, as it most closely resembles a relational database.
Is Impala intended to handle real time queries in low-latency applications or is it for ad hoc queries for the purpose of
data exploration?
Ad-hoc queries are the primary use case for Impala. We anticipate it being used in many other situations where
low-latency is required. Whether Impala is appropriate for any particular use-case depends on the workload, data size
and query volume. See Impala Benefits on page 16 for the primary benefits you can expect when using Impala.
Questions about Impala And Hive
How does Impala compare to Hive and Pig?
Impala is different from Hive and Pig because it uses its own daemons that are spread across the cluster for queries.
Because Impala does not rely on MapReduce, it avoids the startup overhead of MapReduce jobs, allowing Impala to
return results in real time.
Can I do transforms or add new functionality?
Impala adds support for UDFs in Impala 1.2. You can write your own functions in C++, or reuse existing Java-based Hive
UDFs. The UDF support includes scalar functions and user-defined aggregate functions (UDAs). User-defined table
functions (UDTFs) are not currently supported.
Impala does not currently support an extensible serialization-deserialization framework (SerDes), and so adding extra
functionality to Impala is not as straightforward as for Hive or Pig.
Can any Impala query also be executed in Hive?
Yes. There are some minor differences in how some queries are handled, but Impala queries can also be completed
in Hive. Impala SQL is a subset of HiveQL, with some functional limitations such as transforms. For details of the Impala
SQL dialect, see Impala SQL Statements on page 202. For the Impala built-in functions, see Impala Built-In Functions on
page 391. For the detailed list of differences between Impala and HiveQL, see SQL Differences Between Impala and
Hive on page 541.
Can I use Impala to query data already loaded into Hive and HBase?
There are no additional steps to allow Impala to query tables managed by Hive, whether they are stored in HDFS or
HBase. Make sure that Impala is configured to access the Hive metastore correctly and you should be ready to go.
Keep in mind that impalad, by default, runs as the impala user, so you might need to adjust some file permissions
depending on how strict your permissions are currently.
See Using Impala to Query HBase Tables on page 684 for details about querying data in HBase.
Is Hive an Impala requirement?
The Hive metastore service is a requirement. Impala shares the same metastore database as Hive, allowing Impala and
Hive to access the same tables transparently.
Hive itself is optional, and does not need to be installed on the same nodes as Impala. Currently, Impala supports a
wider variety of read (query) operations than write (insert) operations; you use Hive to insert data into tables that use
certain file formats. See How Impala Works with Hadoop File Formats on page 634 for details.
Impala Availability
Is Impala production ready?
Impala has finished its beta release cycle, and the 1.0, 1.1, and 1.2 GA releases are production ready. The 1.1.x series
includes additional security features for authorization, an important requirement for production use in many
organizations. The 1.2.x series includes important performance features, particularly for large join queries. Some
Cloudera customers are already using Impala for large workloads.
The Impala 1.3.0 and higher releases are bundled with corresponding levels of CDH. The number of new features grows
with each release. See New Features in CDH 6.0.0 for a full list.
How do I configure Hadoop high availability (HA) for Impala?
You can set up a proxy server to relay requests back and forth to the Impala servers, for load balancing and high
availability. See Using Impala through a Proxy for High Availability on page 72 for details.
You can enable HDFS HA for the Hive metastore. See the CDH5 High Availability Guide for details.
What happens if there is an error in Impala?
There is not a single point of failure in Impala. All Impala daemons are fully able to handle incoming queries. If a machine
fails however, all queries with fragments running on that machine will fail. Because queries are expected to return
quickly, you can just rerun the query if there is a failure. See Impala Concepts and Architecture on page 18 for details
about the Impala architecture.
The longer answer: Impala must be able to connect to the Hive metastore. Impala aggressively caches metadata so
the metastore host should have minimal load. Impala relies on the HDFS NameNode, and you can configure HA for
HDFS. Impala also has centralized services, known as the statestore and catalog services, that run on one host only.
Impala continues to execute queries if the statestore host is down, but it will not get state updates. For example, if a
host is added to the cluster while the statestore host is down, the existing instances of impalad running on the other
hosts will not find out about this new host. Once the statestore process is restarted, all the information it serves is
automatically reconstructed from all running Impala daemons.
What is the maximum number of rows in a table?
There is no defined maximum. Some customers have used Impala to query a table with over a trillion rows.
Can Impala and MapReduce jobs run on the same cluster without resource contention?
Yes. See Controlling Impala Resource Usage on page 588 for how to control Impala resource usage using the Linux
cgroup mechanism, and Resource Management on page 549 for how to use Impala with the YARN resource management
framework. Impala is designed to run on the DataNode hosts. Any contention depends mostly on the cluster setup
and workload.
For a detailed information about configuring a cluster to share resources between Impala queries and MapReduce
jobs, see https://www.cloudera.com/documentation/enterprise/latest/topics/admin_howto_multitenancy.html and
Configuring Resource Pools and Admission Control on page 554.
Impala Internals
On which hosts does Impala run?
Cloudera strongly recommends running the impalad daemon on each DataNode for good performance. Although this
topology is not a hard requirement, if there are data blocks with no Impala daemons running on any of the hosts
containing replicas of those blocks, queries involving that data could be very inefficient. In that case, the data must be
transmitted from one host to another for processing by “remote reads”, a condition Impala normally tries to avoid.
See Impala Concepts and Architecture on page 18 for details about the Impala architecture. Impala schedules query
fragments on all hosts holding data relevant to the query, if possible.
In cases where some hosts in the cluster have much greater CPU and memory capacity than others, or where some
hosts have extra CPU capacity because some CPU-intensive phases are single-threaded, some users have run multiple
impalad daemons on a single host to take advantage of the extra CPU capacity. This configuration is only practical
for specific workloads that rely heavily on aggregation, and the physical hosts must have sufficient memory to
accommodate the requirements for multiple impalad instances.
How are joins performed in Impala?
By default, Impala automatically determines the most efficient order in which to join tables using a cost-based method,
based on their overall size and number of rows. (This is a new feature in Impala 1.2.2 and higher.) The COMPUTE STATS
statement gathers information about each table that is crucial for efficient join performance. Impala chooses between
two techniques for join queries, known as “broadcast joins” and “partitioned joins”. See Joins in Impala SELECT
Statements on page 296 for syntax details and Performance Considerations for Join Queries on page 568 for performance
considerations.
How does Impala process join queries for large tables?
Impala utilizes multiple strategies to allow joins between tables and result sets of various sizes. When joining a large
table with a small one, the data from the small table is transmitted to each node for intermediate processing. When
joining two large tables, the data from one of the tables is divided into pieces, and each node processes only selected
pieces. See Joins in Impala SELECT Statements on page 296 for details about join processing, Performance Considerations
for Join Queries on page 568 for performance considerations, and Optimizer Hints in Impala on page 387 for how to
fine-tune the join strategy.
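For illustration (hypothetical table names), gathering statistics, and overriding the join strategy with a hint only in difficult cases, might look like this:
-- Gather table and column statistics so the planner can choose between
-- broadcast and partitioned join strategies.
COMPUTE STATS big_sales;
COMPUTE STATS small_dimension;
-- Optional: force a partitioned (shuffle) join with a hint.
SELECT s.id, d.name
FROM big_sales s JOIN /* +SHUFFLE */ small_dimension d
ON s.dim_id = d.id;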
What is Impala's aggregation strategy?
Impala currently only supports in-memory hash aggregation. In Impala 2.0 and higher, if the memory requirements
for a join or aggregation operation exceed the memory limit for a particular host, Impala uses a temporary work area
on disk to help the query complete successfully.
How is Impala metadata managed?
Impala uses two pieces of metadata: the catalog information from the Hive metastore and the file metadata from the
NameNode. Currently, this metadata is lazily populated and cached when an impalad needs it to plan a query.
The REFRESH statement updates the metadata for a particular table after loading new data through Hive. The INVALIDATE METADATA statement (see INVALIDATE METADATA Statement on page 286) refreshes all metadata, so that Impala recognizes new tables or other DDL and DML changes performed through Hive.
In Impala 1.2 and higher, a dedicated catalogd daemon broadcasts metadata changes due to Impala DDL or DML
statements to all nodes, reducing or eliminating the need to use the REFRESH and INVALIDATE METADATA statements.
What load do concurrent queries produce on the NameNode?
The load Impala generates is very similar to MapReduce. Impala contacts the NameNode during the planning phase
to get the file metadata (this is only run on the host the query was sent to). Every impalad will read files as part of
normal processing of the query.
How does Impala achieve its performance improvements?
These are the main factors in the performance of Impala versus that of other Hadoop components and related
technologies.
Impala avoids MapReduce. While MapReduce is a great general parallel processing model with many benefits, it is not
designed to execute SQL. Impala avoids the inefficiencies of MapReduce in these ways:
• Impala does not materialize intermediate results to disk. SQL queries often map to multiple MapReduce jobs with
all intermediate data sets written to disk.
• Impala avoids MapReduce start-up time. For interactive queries, the MapReduce start-up time becomes very
noticeable. Impala runs as a service and essentially has no start-up time.
• Impala can more naturally disperse query plans instead of having to fit them into a pipeline of map and reduce
jobs. This enables Impala to parallelize multiple stages of a query and avoid overheads such as sort and shuffle
when unnecessary.
Impala uses a more efficient execution engine by taking advantage of modern hardware and technologies:
• Impala generates runtime code. Impala uses LLVM to generate assembly code for the query that is being run.
Individual queries do not have to pay the overhead of running on a system that needs to be able to execute
arbitrary queries.
• Impala uses available hardware instructions when possible. Impala uses the supplemental SSE3 (SSSE3) instructions
which can offer tremendous speedups in some cases. (Impala 2.0 and 2.1 required the SSE4.1 instruction set;
Impala 2.2 and higher relax the restriction again so only SSSE3 is required.)
• Impala uses better I/O scheduling. Impala is aware of the disk location of blocks and is able to schedule the order
to process blocks to keep all disks busy.
• Impala is designed for performance. A lot of time has been spent in designing Impala with sound
performance-oriented fundamentals, such as tight inner loops, inlined function calls, minimal branching, better
use of cache, and minimal memory usage.
What happens when the data set exceeds available memory?
Currently, if the memory required to process intermediate results on a node exceeds the amount available to Impala
on that node, the query is cancelled. You can adjust the memory available to Impala on each node, and you can fine-tune
the join strategy to reduce the memory required for the biggest queries. We do plan on supporting external joins and
sorting in the future.
Keep in mind though that the memory usage is not directly based on the input data set size. For aggregations, the
memory usage is the number of rows after grouping. For joins, the memory usage is the combined size of the tables
excluding the biggest table, and Impala can use join strategies that divide up large joined tables among the various
nodes rather than transmitting the entire table to each node.
What are the most memory-intensive operations?
If a query fails with an error indicating “memory limit exceeded”, you might suspect a memory leak. The problem could actually be a query that is structured in a way that causes Impala to allocate more memory than you expect, exceeding the memory allocated for Impala on a particular node. Some examples of query or table structures that are especially memory-intensive are:
• INSERT statements using dynamic partitioning, into a table with many different partitions. (Particularly for tables using Parquet format, where the data for each partition is held in memory until it reaches the full block size before it is written to disk.) Consider breaking up such operations into several different INSERT statements, for example to load data one year at a time rather than for all years at once.
• GROUP BY on a unique or high-cardinality column. Impala allocates some handler structures for each different
value in a GROUP BY query. Having millions of different GROUP BY values could exceed the memory limit.
• Queries involving very wide tables, with thousands of columns, particularly with many STRING columns. Because
Impala allows a STRING value to be up to 32 KB, the intermediate results during such queries could require
substantial memory allocation.
When does Impala hold on to or return memory?
Impala allocates memory using tcmalloc, a memory allocator that is optimized for high concurrency. Once Impala
allocates memory, it keeps that memory reserved to use for future queries. Thus, it is normal for Impala to show high
memory usage when idle. If Impala detects that it is about to exceed its memory limit (defined by the -mem_limit
startup option or the MEM_LIMIT query option), it deallocates memory not needed by the current queries.
When issuing queries through the JDBC or ODBC interfaces, make sure to call the appropriate close method afterwards.
Otherwise, some memory associated with the query is not freed.
SQL
Is there an UPDATE statement?
In CDH 5.10 / Impala 2.8 and higher, Impala has the statements UPDATE, DELETE, and UPSERT. These statements apply
to Kudu tables only.
For non-Kudu tables, you can use the following techniques to achieve the same goals as the familiar UPDATE statement,
in a way that preserves efficient file layouts for subsequent queries:
• Replace the entire contents of a table or partition with updated data that you have already staged in a different
location, either using INSERT OVERWRITE, LOAD DATA, or manual HDFS file operations followed by a REFRESH
statement for the table. Optionally, you can use built-in functions and expressions in the INSERT statement to
transform the copied data in the same way you would normally do in an UPDATE statement, for example to turn
a mixed-case string into all uppercase or all lowercase.
• To update a single row, use an HBase table, and issue an INSERT ... VALUES statement using the same key as
the original row. Because HBase handles duplicate keys by only returning the latest row with a particular key
value, the newly inserted row effectively hides the previous one.
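As a sketch of those two techniques (the table names, columns, and values are hypothetical):
-- Replace an entire partition with transformed data staged in another table.
INSERT OVERWRITE sales PARTITION (year=2017)
  SELECT id, upper(customer_name), amount FROM sales_staging;
-- Simulate a single-row update by re-inserting into an HBase table with the same key.
INSERT INTO hbase_profiles VALUES ('user123', 'new_email@example.com');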
Can Impala do user-defined functions (UDFs)?
Impala 1.2 and higher does support UDFs and UDAs. You can either write native Impala UDFs and UDAs in C++, or reuse
UDFs (but not UDAs) originally written in Java for use with Hive. See User-Defined Functions (UDFs) on page 525 for
details.
Why do I have to use REFRESH and INVALIDATE METADATA, what do they do?
In Impala 1.2 and higher, there is much less need to use the REFRESH and INVALIDATE METADATA statements:
• The new impala-catalog service, represented by the catalogd daemon, broadcasts the results of Impala DDL
statements to all Impala nodes. Thus, if you do a CREATE TABLE statement in Impala while connected to one
node, you do not need to do INVALIDATE METADATA before issuing queries through a different node.
• The catalog service only recognizes changes made through Impala, so you must still issue a REFRESH statement
if you load data through Hive or by manipulating files in HDFS, and you must issue an INVALIDATE METADATA
statement if you create a table, alter a table, add or drop partitions, or do other DDL statements in Hive.
• Because the catalog service broadcasts the results of REFRESH and INVALIDATE METADATA statements to all
nodes, in the cases where you do still need to issue those statements, you can do that on a single node rather
than on every node, and the changes will be automatically recognized across the cluster, making it more convenient
to load balance by issuing queries through arbitrary Impala nodes rather than always using the same coordinator
node.
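A short sketch (with a hypothetical table name) of when each statement applies:
-- After new data files are added to an existing table through Hive or HDFS operations:
REFRESH sales_data;
-- After a table is created or altered, or partitions are added or dropped, through Hive:
INVALIDATE METADATA sales_data;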
Why is space not freed up when I issue DROP TABLE?
Impala deletes data files when you issue a DROP TABLE on an internal table, but not an external one. By default, the
CREATE TABLE statement creates internal tables, where the files are managed by Impala. An external table is created
with a CREATE EXTERNAL TABLE statement, where the files reside in a location outside the control of Impala. Issue
a DESCRIBE FORMATTED statement to check whether a table is internal or external. The keyword MANAGED_TABLE
indicates an internal table, from which Impala can delete the data files. The keyword EXTERNAL_TABLE indicates an
external table, where Impala will leave the data files untouched when you drop the table.
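For example (hypothetical table name), check the Table Type field in the output:
DESCRIBE FORMATTED t1;
-- Table Type: MANAGED_TABLE   -> internal table; DROP TABLE removes the data files
-- Table Type: EXTERNAL_TABLE  -> external table; DROP TABLE leaves the data files in place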
Even when you drop an internal table and the files are removed from their original location, you might not get the
hard drive space back immediately. By default, files that are deleted in HDFS go into a special trashcan directory, from
which they are purged after a period of time (by default, 6 hours). For background information on the trashcan
mechanism, see HDFS Architecture. For information on purging files from the trashcan, see File System Shell.
When Impala deletes files and they are moved to the HDFS trashcan, they go into an HDFS directory owned by the
impala user. If the impala user does not have an HDFS home directory where a trashcan can be created, the files
are not deleted or moved, as a safety measure. If you issue a DROP TABLE statement and find that the table data files
are left in their original location, create an HDFS directory /user/impala, owned and writeable by the impala user.
For example, you might find that /user/impala is owned by the hdfs user, in which case you would switch to the
hdfs user and issue a command such as:
hdfs dfs -chown -R impala /user/impala
Is there a DUAL table?
You might be used to running queries against a single-row table named DUAL to try out expressions, built-in functions,
and UDFs. Impala does not have a DUAL table. To achieve the same result, you can issue a SELECT statement without
any table name:
select 2+2;
select substr('hello',2,1);
select pow(10,6);
Partitioned Tables
How do I load a big CSV file into a partitioned table?
When the data file includes fields such as year and month that correspond to the partition key columns, use a two-stage
process to load it into a partitioned table. First, use the LOAD DATA or CREATE EXTERNAL TABLE
statement to bring the data into an unpartitioned text table. Then use an INSERT ... SELECT statement to copy
the data from the unpartitioned table to a partitioned one. Include a PARTITION clause in the INSERT statement to
specify the partition key columns. The INSERT operation splits up the data into separate data files for each partition.
For examples, see Partitioning for Impala Tables on page 625. For details about loading data into partitioned Parquet
tables, a popular choice for high-volume data, see Loading Data into Parquet Tables on page 644.
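For example, the following sketch illustrates the two-stage process. The table names, column names, and HDFS path
are hypothetical placeholders, and the CSV layout is assumed to put the partition key fields in the middle of each line:
-- Stage 1: bring the CSV data into an unpartitioned text table.
create table staging_logs (log_date string, year int, month int, msg string)
  row format delimited fields terminated by ',';
load data inpath '/user/impala/staging/big_file.csv' into table staging_logs;
-- Stage 2: copy into the partitioned table; the partition key columns come last in the SELECT list.
create table logs (log_date string, msg string)
  partitioned by (year int, month int) stored as parquet;
insert into logs partition (year, month)
  select log_date, msg, year, month from staging_logs;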
Can I do INSERT ... SELECT * into a partitioned table?
When you use the INSERT ... SELECT * syntax to copy data into a partitioned table, the columns corresponding
to the partition key columns must appear last in the columns returned by the SELECT *. You can create the table with
the partition key columns defined last. Or, you can use the CREATE VIEW statement to create a view that reorders
the columns: put the partition key columns last, then do the INSERT ... SELECT * from the view.
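For example, if a hypothetical source table t_source defines the partition key column first, a view can reorder the
columns so that INSERT ... SELECT * works against the partitioned table t_part (all names here are illustrative):
-- The view puts the partition key column year last.
create view v_reordered as select c1, c2, year from t_source;
insert into t_part partition (year) select * from v_reordered;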
HBase
What kinds of Impala queries or data are best suited for HBase?
HBase tables are ideal for queries where you would normally use a key-value store: that is, queries that retrieve a
single row or a few rows by testing the unique row key column with the = or IN operators.
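For example, queries along these lines are a good fit for HBase tables (the table name hbase_customers and the row
key column cust_id are hypothetical):
select * from hbase_customers where cust_id = 'C1000';
select * from hbase_customers where cust_id in ('C1000', 'C1001', 'C1002');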
HBase tables are not suitable for queries that produce large result sets with thousands of rows. HBase tables are also
not suitable for queries that perform full table scans because the WHERE clause does not request specific values from
the unique key column.
Use HBase tables for data that is inserted one row or a few rows at a time, such as by the INSERT ... VALUES syntax.
Loading data piecemeal like this into an HDFS-backed table produces many tiny files, which is a very inefficient layout
for HDFS data files.
If the lack of an UPDATE statement in Impala is a problem for you, you can simulate single-row updates by doing an
INSERT ... VALUES statement using an existing value for the key column. The old row value is hidden; only the new
row value is seen by queries.
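For example, a simulated single-row update might look like the following sketch, where hbase_customers, cust_id,
and the column values are hypothetical:
-- Inserting a row that reuses an existing key value hides the previous row.
insert into hbase_customers (cust_id, status) values ('C1000', 'inactive');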
HBase tables are often wide (containing many columns) and sparse (with most column values NULL). For example, you
might record hundreds of different data points for each user of an online service, such as whether the user had registered
for an online game or enabled particular account features. With Impala and HBase, you could look up all the information
for a specific customer efficiently in a single query. For any given customer, most of these columns might be NULL,
because a typical customer might not make use of most features of an online service.
Appendix: Apache License, Version 2.0
SPDX short identifier: Apache-2.0
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through
9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are
under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of
fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source
code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including
but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as
indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the
Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole,
an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or
additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For
the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to
the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code
control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of
discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated
in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received
by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly
display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims
licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their
Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against
any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated
within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under
this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You meet the following conditions:
1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
2. You must cause any modified files to carry prominent notices stating that You changed the files; and
3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark,
and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part
of the Derivative Works; and
4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute
must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices
that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE
text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along
with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party
notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify
the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or
as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be
construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license
terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as
a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated
in this License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the
Licensor shall be under the terms and conditions of this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement
you may have executed with Licensor regarding such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the
Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing
the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides
its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,
including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or
FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or
redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required
by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable
to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising
as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss
of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even
if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance
of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in
accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any
other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional
liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets
"[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the
appropriate comment syntax for the file format. We also recommend that a file or class name and description of
purpose be included on the same "printed page" as the copyright notice for easier identification within third-party
archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.